Rollercoaster: ups and downs of Google Scholar citations

In the UK there is an advertising disclaimer that “the value of your investments may go down as well as up.” Since papers are our main commodity in science and citations are something of a return, surely the “value” of a published paper only ever increases over time. Doesn’t it?

I think this is true when citations to a paper are tracked in a conventional database (Web of Science, for example). Citations are added and very rarely taken away. With Google Scholar it is a different story. Now, I am a huge Google Scholar fan so this post is not a criticism of the service at all. One of the nice things about GS is that it counts citations from the “grey literature”, i.e. theses, patents, etc. But not so grey as to include blogs and news articles (most of the time). So you get a broader view of the influence of a paper beyond the confines of a conventional database. With this broader view comes volatility, as I’ll show below.

I don’t obsessively check my own page every day – honestly I don’t(!) – but I did happen to check my own page twice within a short space of time and I noticed that my H-index went up by 1 and then decreased by 1. I’m pretty sure I didn’t imagine this and so I began to wonder how stable the citation data in Google Scholar actually is and whether I could track cites automatically.

What goes up (must come down)

Manually checking GS every day is beyond me, and what are computers for anyway? I set up a little routine to grab my data each day and look at the stability of citations (details of how to do this are below if you’re interested).

You can click on the plot to see it in its full glory.

Each line is a plot of citations to a paper over many weeks. The grey line is no citations gained or lost, relative to the start. As the paper accrues citations the line becomes more red and if it loses citations below the starting point it turns blue. They are ranked by the integral of change in citation over time.

The data are retrieved daily so if a paper gains citations and loses an equal number in less than 24 hours, this is not detected.

You can see from the plot that the number of citations to a paper can go down as well as up. For one paper, citations dropped significantly from one day to the next, which undid two months’ worth of increases. This paper is my highest cited work and dropped 10 cites, from 443 to 433.

I’m guessing that running this routine on someone working in a field with a higher citation rate would show more volatility.

The increases in citations have an obvious cause, but what about the decreases? My guess is that they are duplicate citations which get removed when they are merged into a “cluster” (Google’s way of dealing with multiple URLs for the same paper). Another likely cause is a citing item being removed after it is judged not to be a paper, e.g. a blog post.

Please please tell me now

The alert emails from Google Scholar have always puzzled me. I have alerts set up to tell me when my work is cited. I love getting them – who doesn’t want to see who has cited their work? Annoyingly they arrive infrequently and only ever contain one or two new papers. I looked at the frequency of changes in citation number and checked when I received emails from Google Scholar.

Over the same period as the plot above, you can see that citations to my profile happen pretty frequently. Again, if my work was cited at a higher rate, I guess this would be even more frequent. But in this period I only received six or so alert emails. I don’t think GS waits until a citation is stable for a while before emailing, because the alerts tend to come immediately after an update. The alert emails remain a mystery to me. It would be great if they came a bit more often and it would be even better if they told you which of your papers the new items cite!

Summary

Google Scholar is a wonderful service that finds an extra 20% or so of the impact of your work compared to other databases. With this extra information comes volatility and the numbers you see on there probably shouldn’t be treated as absolute.

Methods

To do this I used Christian Kreibich’s python script to retrieve information from Google Scholar. I wrote a little shell script to run scholar.py and set up a daemon to do this every day at the same time. I couldn’t find a way to search by my UserID, so a search for my name brings up some unrelated papers that need to be filtered out. There are restrictions on what you can retrieve, so my script retrieves papers within three different time frames to avoid hitting the limit for paper information retrieval.

The daemon is a plist in ~/Library/LaunchAgents/

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.quantixed.gscrape</string>
    <key>KeepAlive</key>
    <false/>
    <key>RunAtLoad</key>
    <false/>
    <key>Program</key>
    <string>/path/to/the/shell/script/gscrape.sh</string>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>14</integer>
        <key>Minute</key>
        <integer>30</integer>
    </dict>
</dict>
</plist>

And the shell script is something like

#!/bin/bash
cd /path/to/the/shell/script/
/usr/bin/pythonw '/path/to/your/scholar.py-master/scholar.py' -c 500 --author "Joe Bloggs" --after=1999 --before=2007 --csv > a.csv
/usr/bin/pythonw '/path/to/your/scholar.py-master/scholar.py' -c 500 --author "Joe Bloggs" --after=2008 --before=2012 --csv > b.csv
/usr/bin/pythonw '/path/to/your/scholar.py-master/scholar.py' -c 500 --author "Joe Bloggs" --after=2013 --csv > c.csv
OF=all_$(date +%Y%m%d).csv
cat a.csv b.csv c.csv > $OF

To crunch the data I wrote something in Igor which reads in the CSVs and plots out my data. This meant first getting a list of clusterIDs which correspond to my papers in order to filter out other people’s work.
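For anyone who doesn’t use Igor, the crunching step can be sketched in a few lines of Python. This is an illustration rather than the code I used: the separator, column indices and cluster IDs below are assumptions that need adjusting to match your own scholar.py output.

import csv
import glob

# Assumed layout of the daily CSVs from scholar.py (pipe-separated fields);
# adjust these to match your output.
CITES_COL = 4     # assumed column index of the citation count
CLUSTER_COL = 5   # assumed column index of the cluster ID
MY_CLUSTERS = {"1234567890", "9876543210"}  # placeholder cluster IDs of your papers

history = {}  # clusterID -> list of (date, citations)

for path in sorted(glob.glob("all_*.csv")):
    date = path[4:12]  # pull YYYYMMDD out of all_YYYYMMDD.csv
    with open(path) as f:
        for row in csv.reader(f, delimiter="|"):
            if len(row) <= max(CLUSTER_COL, CITES_COL):
                continue
            cluster = row[CLUSTER_COL].strip()
            if cluster not in MY_CLUSTERS:
                continue  # filter out other people's papers
            try:
                cites = int(row[CITES_COL])
            except ValueError:
                continue
            history.setdefault(cluster, []).append((date, cites))

# Change in citations relative to the first day, per paper
for cluster, series in history.items():
    start = series[0][1]
    print(cluster, [c - start for _, c in series])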

I have a surprising number of tracks in my library with Rollercoaster in the title. I will go with indie wannabe act Northern Uproar for the title of this post.

“What goes up (must come down)” is from Graham & Brown’s Super Fresco wallpaper ad from 1984.

“Please please tell me now” is a lyric from Duran Duran’s “Is There Something I Should Know?”.

The Second Arrangement

To validate our analyses, I’ve been using randomisation to show that the results we see would not arise due to chance. For example, the location of pixels in an image can be randomised and the analysis rerun to see if – for example – there is still colocalisation. A recent task meant randomising live cell movies in the time dimension, where two channels were being correlated with one another. In exploring how to do this automatically, I learned a few new things about permutations.

Here is the problem: If we have two channels (fluorophores), we can test for colocalisation or cross-correlation and get a result. Now, how likely is it that this was due to chance? So we want to re-arrange the frames of one channel relative to the other such that frame i of channel 1 is never paired with frame i of channel 2. This is because we want all pairs to be different to the original pairing. It was straightforward to program this, but I became interested in the maths behind it.

The maths: rearranging n objects is known as a permutation, but the problem described above is known as a derangement. The number of permutations of n frames is n!, but we need to exclude cases where the ith member stays in the ith position. It turns out that to do this, you need to use the principle of inclusion and exclusion. If you are interested, the solution boils down to

n!\sum_{k=0}^{n}\frac{(-1)^k}{k!}

This basically means: for n frames there are n! permutations, but you need to alternately subtract and add diminishing numbers of permutations to get to the result. A full description is given in the Wikipedia link. Details of inclusion and exclusion are here.

I had got as far as figuring out that the ratio of permutations to derangements converges to e. However, you can tell that I am not a mathematician, as I used brute force calculation to get there rather than writing out the solution. Anyway, what this means in a computing sense is that a randomly generated permutation has roughly a 1 in e chance (about 37%) of being a derangement, so only a handful of attempts are normally needed to find one.
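A quick way to see this convergence is to compute the inclusion and exclusion formula directly. This is a minimal Python sketch, not my original brute force calculation:

from math import factorial

def derangements(n):
    # Number of permutations of n items with no fixed points,
    # via inclusion and exclusion: n! * sum_{k=0..n} (-1)^k / k!
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

for n in range(1, 11):
    d = derangements(n)
    print(n, d, factorial(n) / d if d else float("inf"))
# The ratio n!/D(n) approaches e (2.718...) as n grows.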

Back to the problem at hand. It occurred to me that not only do we not want frame i of channel 1 paired with frame i of channel 2, but it would actually be preferable to exclude frames i ± 2, let’s say. This is because if two vesicles are in the same location at frame i, they may also be colocalised at frame i-1, for example. This is more complex to write down because frames 1 and 2, and frames n-1 and n, have fewer excluded positions than the rest; for all other frames there are n-5 legal positions. This obviously sets a lower limit on the number of frames capable of being permuted.

This problem can be solved using rook polynomials. You can think of the original positions of the frames as columns on an n x n chessboard. The rows are the frames that need rearranging, with the excluded positions coloured in. The permutations can then be thought of as rooks in a chess game (they can move horizontally or vertically but not diagonally). We need to work out how many arrangements of rooks are possible such that there is one rook per row and no rook can take another.

If we have a 7-frame movie, we have a 7 x 7 board looking like this (left). The “illegal” squares are coloured in. Frame 1 must go in position D, E, F or G, but then frame 2 can only go in E, F or G. If a rook is at E1, then we cannot have a rook at E2. And so on.

To calculate the derangements, we first need the rook polynomial of the illegal (coloured-in) squares:

1 + 29 x + 310 x^2 + 1544 x^3 + 3732 x^4 + 4136 x^5 + 1756 x^6 + 172 x^7

This is a polynomial expansion of this expression:

R_{m,n}(x) = n!x^nL_n^{m-n}(-x^{-1})

where L_n^\alpha(x) is an associated Laguerre polynomial. The solution in this case is 8 possibilities out of 7! = 5040 permutations: take each coefficient r_k of the rook polynomial above, multiply it by (-1)^k (7-k)!, and sum the results. Of course our movies have many more frames and so the randomisation is not so limited. In this example, frame 4 can only go in position A or G.
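For a board this small, the answer of 8 can also be checked by brute force over all 5040 permutations. A throwaway Python check, not something you would do for a real movie:

from itertools import permutations

n, exclusion = 7, 2
legal = sum(1 for p in permutations(range(n))
            if all(abs(new - old) > exclusion for old, new in enumerate(p)))
print(legal, "legal arrangements out of", 5040)  # prints 8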

Why is this important? The randomisation is done like this: the frames get shuffled and then checked to see if any “illegal” positions have been generated. If so, do it again; when no illegal positions are detected, rearrange the movie accordingly. For the simple derangement the rejection rate is low and the computation time is essentially constant, whereas with the i ± 2 restriction it can take much longer (because there will be more rejections). In the case of 7 frames, with the restriction of no frames at i ± 2, the failure rate is 5032/5040 = 99.8%. Depending on how the code is written, this can cause some (potentially lengthy) wait time. Luckily, the failure rate comes down with more frames.
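A minimal sketch of this reject-and-retry approach in Python (the real code is in Igor; here the shuffled index list would simply be applied to the frames of one channel afterwards):

import random

def restricted_shuffle(n, exclusion=2, max_tries=100000):
    # Shuffle the frame indices 0..n-1, rejecting any permutation in which
    # some frame ends up within +/- `exclusion` of its original position.
    frames = list(range(n))
    for _ in range(max_tries):
        random.shuffle(frames)
        if all(abs(new - old) > exclusion for old, new in enumerate(frames)):
            return frames
    raise RuntimeError("no valid permutation found - movie may be too short")

order = restricted_shuffle(300)
# Apply `order` to channel 2 so that frame i is replaced by frame order[i],
# then re-run the correlation analysis on the scrambled pairing.
print(order[:10])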

What about in practice? The numbers involved in directly calculating the permutations and exclusions quickly become too big using non-optimised code on a simple desktop setup (a 12 x 12 board exceeds 20 GB). The numbers and rates don’t mean much on their own; what I wanted to know was whether this slows down my code in a real test. To look at this I ran 100 repetitions of permutations of movies with 10-1000 frames. Whereas the simple derangement problem needed the permutation to be run only once or twice, with the greater restrictions it takes eight or nine attempts before a “correct” solution is found. The code can be written so that this calculation is done on a placeholder wave rather than the real data and the result is applied to the data afterwards. This reduces computation time. For movies of around 300 frames, the total run time of my code (which does quite a few things besides this) is around 3 minutes, and I can live with that.

So, applying this more stringent exclusion will work for long movies and the wait times are not too bad. I learned something about combinatorics along the way. Thanks for reading!

Further notes

The first derangement issue I mentioned is also referred to as the hat-check problem. This refers to people (numbered 1, 2, 3 … n) with corresponding hats (labelled 1, 2, 3 … n): how many ways can the hats be handed back at random such that nobody gets their own hat?

Adding i+1 as an illegal position gives the problème des ménages. This is the problem of how to seat married couples in a man-woman arrangement so that nobody is seated next to their partner. Perhaps i ± 2 should be known as the vesicle problem?

The post title comes from “The Second Arrangement” by Steely Dan. An unreleased track recorded for the Gaucho sessions.

Parallel lines: new paper on modelling mitotic microtubules in 3D

We have a new paper out! You can access it here.

The people

This paper really was a team effort. Faye Nixon and Tom Honnor are joint-first authors. Faye did most of the experimental work in the final months of her PhD and Tom came up with the idea for the mathematical modelling and helped to rewrite our analysis method in R. Other people helped in lots of ways. George did extra segmentation, rendering and movie making. Nick helped during the revisions of the paper. Ali helped to image samples… the list is quite long.

The paper in a nutshell

We used a 3D imaging technique called SBF-SEM to see microtubules in dividing cells, then used computers to describe their organisation.

What’s SBF-SEM?

Serial block face scanning electron microscopy. This method allows us to take an image of a cell and then remove a tiny slice, take another image and so on. We then have a pile of images which covers the entire cell. Next we need to put them back together and make some sense of them.

How do you do that?

We use a computer to track where all the microtubules are in the cell. In dividing cells – in mitosis – the microtubules are in the form of a mitotic spindle. This is a machine that the cell builds to share the chromosomes to the two new cells. It’s very important that this process goes right. If it fails, mistakes can lead to diseases such as cancer. Before we started, it wasn’t known whether SBF-SEM had the power to see microtubules, but we show in this paper that it is possible.

We can see lots of other cool things inside the cell too like chromosomes, kinetochores, mitochondria, membranes. We made many interesting observations in the paper, although the focus was on the microtubules.

So you can see all the microtubules, what’s interesting about that?

The interesting thing is that our resolution is really good, and is at a large scale. This means we can determine the direction of all the microtubules in the spindle and use this for understanding how well the microtubules are organised. Previous work had suggested that proteins whose expression is altered in cancer cause changes in the organisation of spindle microtubules. Our computational methods allowed us to test these ideas for the first time.

Resolution at a large scale, what does that mean?

The spindle is made of thousands of microtubules. With a normal light microscope, we can see the spindle but we can’t tell individual microtubules apart. There are improvements in light microscopy (called super-resolution) but even with those improvements, right in the body of the spindle it is still not possible to resolve individual microtubules. SBF-SEM can do this. It doesn’t have the best resolution available though. A method called Electron Tomography has much higher resolution. However, to image microtubules at this large scale (meaning for one whole spindle), it would take months or years of effort! SBF-SEM takes a few hours. Our resolution is better than light microscopy, worse than electron tomography, but because we can see the whole spindle and image more samples, it has huge benefits.

What mathematical modelling did you do?

Cells are beautiful things but they are far from perfect. The microtubules in a mitotic spindle follow a pattern, but don’t do so exactly. So what we did was to create a “virtual spindle” where each microtubule had been made perfect. It was a bit like “photoshopping” the cell. Instead of straightening the noses of actresses, we corrected the path of every microtubule. How much photoshopping was needed told us how imperfect the microtubule’s direction was. This measure – which was a readout of microtubule “wonkiness” – could be done on thousands of microtubules and tell us whether cancer-associated proteins really cause the microtubules to lose organisation.

The publication process

The paper is published in Journal of Cell Science and it was a great experience. Last November, we put up a preprint on this work and left it up for a few weeks. We got some great feedback and modified the paper a bit before submitting it to a journal. One reviewer gave us a long list of useful comments that we needed to address. However, the other two reviewers didn’t think our paper was a big enough breakthrough for that journal. Our paper was rejected*. This can happen sometimes and it is frustrating as an author because it is difficult for anybody to judge which papers will go on to make an impact and which ones won’t. One of the two reviewers thought that because the resolution of SBF-SEM is lower than electron tomography, our paper was not good enough. The other one thought that because SBF-SEM will not surpass light microscopy as an imaging method (really!**) and because EM cannot be done live (the cells have to be fixed), it was not enough of a breakthrough. As I explained above, the power is that SBF-SEM is between these two methods. Somehow, the referees weren’t convinced. We did some more work, revised the paper, and sent it to J Cell Sci.

J Cell Sci is a great journal which is published by Company of Biologists, a not-for-profit organisation who put a lot of money back into cell biology in the UK. They are preprint friendly, they allow the submission of papers in any format, and most importantly, they have a fast-track*** option. This allowed me to send on the reviews we had received, together with our response to them. They sent the paper back to the reviewer who had given the list of useful comments and they were happy with the changes we had made. It was accepted just 18 days after we sent it in and it was online 8 days later. I’m really pleased with the whole publishing experience with J Cell Sci.

 

* I’m writing about this because we all have papers rejected. There’s no shame in that at all. Moreover, it’s obvious from the dates on the preprint and on the JCS paper that our manuscript was rejected from another journal first.

** Anyone who knows something about microscopy will find this amusing and/or ridiculous.

*** Fast-track is offered by lots of journals nowadays. It allows authors to send in a paper that has been reviewed elsewhere along with the peer review file. How the paper has been revised in light of those comments is assessed by the Editor and one peer reviewer.

Parallel lines is of course the title of the seminal Blondie LP. I have used this title before for a blog post, but it matches the topic so well.

Adventures in Code V: making a map of Igor functions

I’ve generated a lot of code for IgorPro. Keeping track of it all has got easier since I started using GitHub – even so – I have found myself writing something only to discover that I had previously written the same thing. I was thinking that it would be good to make a list of all functions that I’ve written to locate long lost functions.

This question was brought up on the Igor mailing list a while back and there are several solutions – especially if you want to look at dependencies. However, this two-liner works to generate a file called funcfile.txt, which contains a list of functions and the ipf file that they appear in.

grep "^[ \t]*Function" *.ipf | grep -oE '[ \t]+[A-Za-z_0-9]+\(' | tr -d " " | tr -d "(" > output
for i in `cat output`; do grep -ie "$i" *.ipf | grep -w "Function" >> funcfile.txt ; done

Thanks to Thomas Braun on the mailing list for the idea. I have converted it to work on grep (BSD grep) 2.5.1-FreeBSD which runs on macOS. Use the terminal, cd to the directory containing your ipf files and run it. Enjoy!

EDIT: I did a bit more work on this idea and it has now expanded to its own repo. Briefly, funcfile.txt is converted to tsv and then parsed – using Igor – to json. This can be displayed using some d3.js magic.
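Purely as an illustration of the parsing step – the repo does this in Igor, and the file names and exact line format here are assumptions – a rough Python equivalent might be:

import json
import re
from collections import defaultdict

# Each line of funcfile.txt produced by the two-liner looks roughly like
#   MyProcedures.ipf:Function MyFunc(someArg)
# so split on the first colon and pull the function name out with a regex.
functions = defaultdict(list)

with open("funcfile.txt") as f:
    for line in f:
        filename, _, rest = line.partition(":")
        match = re.search(r"function[^\s(]*\s+(\w+)\s*\(", rest, flags=re.I)
        if match:
            functions[filename.strip()].append(match.group(1))

with open("funcfile.json", "w") as out:
    json.dump(functions, out, indent=2)  # ready for some d3.js magic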


Part of a series with code snippets and tips.

Realm of Chaos

Caution: this post is for nerds only.

I watched this numberphile video last night and was fascinated by the point pattern that was created in it. I thought I would quickly program my own version to recreate it and then look at patterns made by more points.

I didn’t realise until afterwards that there is actually a web version of the program used in the video here. It is a bit limited though so my code was still worthwhile.

A fractal triangular pattern can be created by:

  1. Setting three points
  2. Picking a randomly placed seed point
  3. Rolling a die to pick one of the three points and moving halfway towards it
  4. Repeating the last step

If the first three points are randomly placed the pattern is skewed, so I added the ability to generate an equilateral triangle. Here is the result.

and here are the results of a triangle through to a decagon.

All of these are generated with one million points using alpha=0.25. The triangle, pentagon and hexagon make nice patterns but the square and polygons with more than six points make pretty uninteresting patterns.

Watching the creation of the point pattern from a triangular set is quite fun. This is 30000 points with a frame every 10 points.

Here is the code.

Some other notes: this version runs in IgorPro. In my version, the seed is set at the centre of the image rather than at a random location, and I used random allocation of points rather than a six-sided die.
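For anyone without Igor, a rough Python sketch of the same chaos game (regular polygon vertices, seed at the centre, random vertex choice instead of a die) might look like the following – an illustration rather than my actual code:

import math
import random

n_vertices = 3        # change to 4..10 for square through decagon
n_points = 100000

# Vertices of a regular polygon on the unit circle
vertices = [(math.cos(2 * math.pi * i / n_vertices),
             math.sin(2 * math.pi * i / n_vertices)) for i in range(n_vertices)]

x, y = 0.0, 0.0       # seed at the centre
points = []
for _ in range(n_points):
    vx, vy = random.choice(vertices)   # random vertex instead of a die roll
    x, y = (x + vx) / 2, (y + vy) / 2  # move halfway towards the chosen vertex
    points.append((x, y))

# `points` can now be plotted as a scatter (with low alpha) to reveal the pattern.
print(len(points), points[-1])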

The post title is taken from the title track from Bolt Thrower’s “Realm of Chaos”.

Notes To The Future

Previously I wrote about our move to electronic lab notebooks (ELNs). This post contains the technical details to understand how it works for us. You can even replicate our setup if you want to take the plunge.

Why go electronic?

Lots and lots of lab books and folders.

Many reasons: I wanted to be able to quickly find information in our lab books. I wanted lab members to be able to share information more freely. I wanted to protect against loss of a notebook. I think switching to ELNs is inevitable and, in any case, I needed to do something about the paper notebooks: my group had amassed 100 in 10 years.

We took the plunge and went electronic. To recap, I decided to use WordPress as a platform for our ELN.

Getting started

We had a Linux box on which I could install WordPress. This involved installing phpMyAdmin, registering a MySQL database and then starting up WordPress. If that sounds complicated, it really isn’t. I simply found a page on the web with step-by-step instructions for my box. You could run this on an old computer or even on a Raspberry Pi, it just has to be on a local network.

Next, I set myself up as admin and then created a user account for each person in the lab. Users can have different privileges. I set all people in the lab to Author. This means they can make, edit and delete posts. Being an Author is better than the other options (Contributor or Editor) which wouldn’t work for users to make entries, e.g. Contributors cannot upload images. Obviously authors being able to delete posts is not acceptable for an ELN, so I removed this capability with a plugin (see below).

I decided that we would all write in the same ELN. This makes searching the contents much easier for me, the PI. The people in the lab were a bit concerned about this because they were each used to having their own lab book. It would be possible to set up a separate ELN for each person, but this would be too unwieldy for the PI, so I grouped everyone together. However, it doesn’t feel like writing in a communal notebook because each Author of a post is identifiable and so it is possible to look at the ELN of just one user as a “virtual lab book”. To do this easily, you need a plugin (see below).

If we lost the WP installation it would be a disaster, so I setup a backup. This is done locally with a plugin (see below). Additionally, I set up an rsync routine from the box that goes off weekly to our main lab server. Our main lab server uses ZFS and is backed up to a further geographically distinct location. So this is pretty indestructible (if that statement is not tempting fate…). The box has a RAID6 array of disks but in the case of hardware failure plus corruption and complete loss of the array, we would lose one week of entries at most.

Theme

We tried out a few before settling on one that we liked. We might change and tweak this more as we go on.

The one we liked was called gitsta. It looks really nice, like a GitHub page. It is no longer maintained, unfortunately. Many of the other themes we looked at have really big fonts for the posts, which gives a really bloggy look but is not conducive to an ELN.

Two things needed tweaking for gitsta to be just right: I wanted the author name to be visible directly after the title and I didn’t want comments to show up. This meant editing the content.php file. Finally, the style.css file needs changing to have the word gitsta-child in the comments, to allow it to get dependencies from gitsta and to show up in your list of themes to select.

The editing is pretty easy, since there are lots of guides online for doing this. If you just want to download our edited version to try it, you can get it from here (I might make some more changes in the future). If you want to use it, just download it, rename the directory as gitsta-child and then place it in WordPress/wp-content/themes/ of your installation – it should be good to go!

Plugins

As you saw above, I installed a few plugins which are essential for full functionality

  • My Private Site – this plugin locks off the site so that only people with a login can access the site. Our ELN is secure – note that this is not a challenge to try to hack us – it sits inside our internal network and as such is not “on the internet”. Nonetheless, anyone with access to the network who could find the IP could potentially read our ELN. This plugin locks off access to everyone not in our lab.
  • Authors Widget – this plugin allows the addition of a little menu to the sidebar (widget) allowing the selection of posts by one author. This allows us to switch between virtual labbooks for each lab member. Users can bookmark their own Author name so that they only see their labbook if they want.
  • Capability Manager Enhanced – you can edit rights of each level of user or create new levels of user. I used this to remove the ability to delete posts.
  • BackWPup – this allows the local backup of all WP content. It’s highly customisable and is recommended.

Other plugins which are non-essential-but-useful

  • WP Statistics – this is a plugin that allows admin to see how many visits etc the ELN has had that day/week etc. This one works on a local installation like ours. Others will not work because they require the site to be on the internet.
  • WP-Markdown – this allows you to write your posts in md. I like writing in md, although nobody else in my lab uses this function.

Gitsta wants to use gust rather than the native WP dashboard. But gust and md were too complicated for our needs, so I uninstalled gust.

Using the ELN

Lab members/users/authors make “posts” for each lab book entry. This means we have formalised how lab book entries are done. We already had a guide for best practice for labbook entries in our lab manual which translates wonderfully to the ELN. It’s nothing earth-shattering, just that each experiment has a title, aim, methods, results and conclusion (just like we were taught in school!). In a paper notebook this is actually difficult to do because our experiments run for days (sometimes weeks) and many experiments run simultaneously. This means you either have to budget pages in the notebook for each separate experiment, interleave entries (which is not very readable) or write up at the end (which is not best practice). With ELNs you just make one entry for each experiment and update all of them as you go along. Problem solved. Edits are possible and it is possible to see what changes have been made and it is even possible to roll back changes.

Posts are given a title. We have a system in the lab for initials plus numbers for each experiment. This is used for everything associated with that experiment, so the files are easy to find, the films can be located and databases can cross-reference. The ELN also allows us to add categories and tags. So we have wide ranging categories (these are set by admin) and tags which can be more granular. Each post created by an author is identifiable as such, even without the experiment code to the title. So it is possible to filter the view to see posts:

  • by one lab member
  • on Imaging (or whatever topic)
  • by date or in a date range

Of course you can also search the whole ELN, which is the thing I need most of all because it gets difficult to remember who did what and when. Even lab members themselves don’t remember that they did an experiment two or more years previously! So this feature will be very useful in the future.

WordPress allows pictures to be uploaded and links to be added. Inserting images is easy to show examples of how an experiment went. For data that is captured digitally this is a case of uploading the file. For things that are printed out or are a physical thing, i.e. western films or gel doc pictures, we are currently taking a picture and adding these to the post. In theory we can add hard links to data on our server. This is certainly not allowed in many other ELNs for security reasons.

In many ways the ELN is no different to our existing lab books. Our ELN is not on the internet and as such is not accessible from home without VPN to the University. This is analogous to our current set up where the paper lab books have to stay in the lab and are not allowed to be taken home.

Finally, in response to a question on Twitter after the previous ELN post: how do we protect against manipulation? Well previously we followed best practice for paper books. We used hard bound books with numbered pages (ensuring pages couldn’t be removed), Tip-ex was not allowed, edits had to be done in a different colour pen and dated etc. I think the ELN is better in many ways. Posts cannot be deleted, edits are logged and timestamped. User permissions mean I know who has edited what and when. Obviously, as with paper books, if somebody is intent on deception, they can still falsify their own lab records in some way. In my opinion, the way to combat this is regular review of the primary data and also maintaining an environment where people don’t feel like they should deceive.

The post title is taken from “Notes To The Future” by Patti Smith; the version I have was recorded live at St. Mark’s Church, NYC in 2002, from Land (1975-2002). I thought this was appropriate since a lab note book is essentially notes to your future self. ELNs are also the future of taking notes in the lab.

The Soft Bulletin: Electronic Lab Notebooks

We finally took the plunge and adopted electronic lab notebooks (ELNs) for the lab. This short post describes our choice of software. I will write another post about how it’s going, how I set it up and other technical details.

tl;dr we are using WordPress as our ELN.

First, so you can understand my thinking, here is my wishlist of requirements for the perfect ELN.

  1. Easy-to-use. Allow adding pictures and notes easily.
  2. Versioning (ability to check edits and audit changes)
  3. Backup and data security
  4. Ability to export and go elsewhere if required
  5. Free or low cost
  6. Integration with existing lab systems if possible
  7. Open software, future development
  8. Clarity over who owns the software, who owns the data, and where the information is stored
  9. Can be deployed for the entire lab

There are many ELN software solutions available, but actually very few fulfil all of those requirements. So narrowing down the options was quite straightforward in the end. Here is the path I went down.

Evernote

I have used Evernote as my ELN for over a year. I don’t do labwork these days, but I make notes when doing computer programming, data analysis and writing papers. I also use it for personal stuff. I like it a lot, but Evernote is not an ELN solution for a whole lab. First, there is an issue over people using it for work and for personal stuff. How do we archive their lab documents without accessing other data? How do we pay for it? What happens when they leave? These sorts of issues prevent the use of many of the available ELN software packages, for a whole lab. I think many ELN software packages would work well for individuals, but I wanted something to deploy for the whole lab. For example, so that I can easily search and find stuff long after the lab member has left and not have to go into different packages to do this.

OneNote

The next most obvious solution is OneNote from Microsoft. Our University provides free access to this package and so using it would get around any pricing problems. Each lab member could use it with their University identity, separating any problems with work/life. It has some nice features (shared by Evernote) such as photographing documents/whiteboards etc and saving them straight to notes. I know several individuals (not whole labs) using this as their ELN. I’m not a big fan of running Microsoft software on Macs and we are completely Apple native in the lab. Even so, OneNote was a promising solution.

I also looked into several other software packages.

I liked the sound of RSpace, but it wasn’t clear to me who they were, why they wanted to offer a free ELN service, where they would store our data, and what they might want to do with it. Last year, the scare that Evernote were going to snoop on users’ data made me realise that, when it came to our ELNs, we had to host the data ourselves. I didn’t want to trust a company to do this. I also didn’t want to rely on a company to:

  • continue to do what we sign up for, e.g. provide a free software
  • keep updating the software, e.g.  so that macOS updates don’t kill it
  • not sell up to an evil company
  • do something else that I didn’t agree with.

As I saw it, this left one option: self-hosting and not only that, there were only two possibilities.

Use a wiki

This is – in many ways – my preferred solution. Wikis have been going for years and they are widely used. I set one up and made a lab notebook entry. It was great. I could edit it and edits were timestamped. It looked OK (but not amazing). There were possibilities to add tables, links etc. However, I thought that doing the code to make an entry would be a challenge for some people in the lab. I know that wikis are everywhere and that editing them is simple, but I kept thinking of the project student that comes to the lab for a short project. They need to read papers to figure out their project, they have to learn to clone/run gels/image cells/whatever AND then they also have to learn to write in a wiki? Just to keep a log of what they are doing? For just a short stay? I could see this meaning that the ELN gets neglected and things didn’t get documented.

I know other labs are using a wiki as an ELN and they do it successfully. It is possible, but I don’t think it would work for us. I also needed to entice people in the lab to convert them from using paper lab notebooks. This meant something that looked nice.

Use WordPress

This option I did not take seriously at first. A colleague told me two years ago that WordPress would be the best platform for an ELN, and I smiled politely. I write this blog on a wordpress dot com platform, but somehow didn’t consider it as an ELN option. After looking for alternatives that we could self-host, it slowly dawned on me that WordPress (a self-hosted installation) actually meets all of the requirements for an ELN.

  1. It’s easy-to-use. My father, who is in his 70s, edits a website using WordPress as a platform. So any person working in the lab should be able to do it.
  2. Versioning. You can see edits and roll back changes if required. Not as granular as wiki but still good.
  3. Backup and data security. I will cover our exact specification in a future post. Our ELN is internal and can’t be accessed from outside the University. We have backup and it is pretty secure. Obviously, self-hosting means that if we have a technical problem, we have to fix it. Although I could move it to new hardware very quickly.
  4. Ability to export and go elsewhere if required. It is simple to pack up an xml and move to another platform. The ubiquity of WordPress means that this will always be the case.
  5. Free or low cost. WordPress is free and you can have as many users as you like! The hardware has a cost, but we have that hardware anyway.
  6. Integration with existing lab systems if possible. We use naming conventions for people’s lab book entries and experiments. Moving to WordPress makes this more formal. Direct links to the primary data on our lab server are possible (not necessarily true of other ELN software).
  7. Open software, future development. Again WordPress is ubiquitous and so there are options for themes and plugins to help make it a good ELN. We can also do some development if needed. There is a large community, meaning tweaking the installation is easy to do.
  8. Clarity over who owns the software, who owns the data, and where the information is stored. It’s installed on our machines and so we don’t have to worry about this.
  9. It can be deployed for the whole lab. Details in the follow-up post.

It also looks good and has a more up-to-date feel to it than a wiki. A screenshot of an innocuous lab notebook entry is shown to the right. I’ve blurred out some details of our more exciting experiments.

It’s early days. I started by getting the newer people in the lab to convert. Anyone who had only a few months left in the lab was excused from using the new system. I’m happy with the way it looks and how it works. We’ll see how it works out.

The main benefits for me are readability and being able to look at what people are doing. I’m looking forward to being able to search back through the entries, as this can be a serious timesuck with paper lab notebooks.

Edit 2017-04-26T07:28:43Z After posting this yesterday a few other suggestions came through that you might want to consider.

Labfolder: I had actually looked at this and it seems good, but at 10 euros per user per month I thought it was too expensive. I get that good software solutions have a cost and am not against paying for good software. I’d prefer a one-off cost (well, of course I’d prefer free!).

Mary Elting alerted me to Shawn Douglas’s lektor-based ELN. Again this ticks all of the boxes I mentioned above.

Manuel Théry suggested ELab. Again, I hadn’t seen this and it looks like it meets the criteria.

The Soft Bulletin is an occasional series of posts about software choices in research. The name comes from The Flaming Lips LP of the same name.

 

The Soft Bulletin: PDF organisation

I recently asked on Twitter for any recommendations for software to organise my PDFs. I got several replies, but nothing really fitted the bill. This is a brief summary.

My situation

I have quite a lot of books, textbooks, cheat sheets, manuals, protocols etc. in PDF format and I need a way to organise them. I don’t need to reference this content, I just need to search it and access it quickly – ideally across several devices.

Note: I don’t collect PDFs of research articles. I have a hundred or so articles that were difficult to get hold of, and I keep those, but I’m pretty complacent about my access to scholarly literature.

I currently use Papers2 for storing my PDFs. It’s OK, but there are some bugs in it. Papers3 came out a few years ago, but I didn’t do the upgrade because there are issues with sync across multiple computers. Now it doesn’t look like Papers will be supported in the future. For example, I heard on Twitter that there is no ETA for a fix to an issue with Papers3 on Sierra. Future proofing – I’ve come to realise – is important to me: I am pretty loyal to software and don’t like to change to something else, but I do like new features and innovation.

I don’t need a solution for referencing. I am resigned to using EndNote for that.

Ideally I just want something like iTunes to organise my PDFs, but I don’t want to use iTunes! Perhaps my requirements are too particular and what I want just isn’t available.

The suggestions

Thanks to everyone who made suggestions. Together with other solutions they were (in no particular order):

Zotero

www.zotero.org

I downloaded this and gave it a brief try. PDF import worked well and the UI looked OK. I hit a stumbling block with the sync capabilities. I currently sync my computers with Unison and this is complicated (but not impossible) to do for Zotero. They want you to use cloud syncing – which I would probably be OK with. I need to test out which cloud service is best to use. There is a webDAV option which my University supports and I think this would work for me. I think this software is the most likely candidate for me to switch to.

Mendeley

www.mendeley.com

This software got the most recommendations. I have to admit that the Elsevier connection is a huge turn-off for me. Although the irony of using it to organise my almost exclusively Elsevier-free content would be quite nice. I know that most of this type of software has been bought out by the publishing giants (Papers by Springer, EndNote by Thomson Reuters/Clarivate), but I don’t like this and I don’t have to support it if I don’t want to. I didn’t look into sync capabilities here.

Bookends

www.sonnysoftware.com

People rave about this software package for Mac. I like the fact that it has a separate lineage to the other packages. It is very expensive and it is primarily a referencing package. Right now, I’m just looking for something to organise my PDFs and this seems to be overkill.

Evernote

www.evernote.com

I use Evernote as a lab notebook and it is possible to use it to store PDFs. You can make a NoteBook for them, add a Note for each one and attach the PDF. The major plus here is that I already use it (and pay for it). The big negative is that I would prefer a separate standalone package to organise my PDFs. I know, difficult to please aren’t I?

Finder and Spotlight

This is the D.I.Y. option.

I have to say that this is the most appealing in many ways. If you just name PDFs systematically and store them in a folder hierarchy that you organise and tag – it would work. Sync would work with my current solution. Searching with Spotlight would work just as well as any other program. I would not need another program! At some point in the past I organised my PDFs like this. I moved to storing them in Papers so that it would save them in a hierarchical structure for me. This is what I mean by an iTunes-like organiser. An app to name, tag and file-away the PDFs would be ideal. I don’t want to go back to this if I can help it.

ReadCube

www.readcube.com

Like Mendeley, this is an option that I did not seriously entertain. I think this is too far away from what I want. As I see it, this software is designed as a web extension and paper recommendation service, which is not what I’m looking for.

Papers3

papersapp.com

As mentioned above, the lack of updates to this software and problems with sync mean that I am looking for something else. I really liked Papers2 and would be happy to continue using this if various things like import and editing were improved. I guess the option here is to stay with Papers2 and put up with the little things that annoy me. At some point though there will be a macOS update which breaks it and then I will be stuck.

Endnote

endnote.com

I use Endnote for referencing. I hate Endnote with a passion. But I can use it. I know how to write styles etc. and edit term lists because I’ve used it since something like v3. At some point in the past I began to store papers in Endnote. I stopped doing this and moved to Papers2. I have to admit it’s OK at doing this, although the way it organises the PDFs on disk is a bit strange IMO. I don’t like storing books and other content in my library though so this is not a good solution.

iBooks

Here is a curveball. I use iBooks and Kindle app for reading books in mobi/epub/pdf format. Actually, iBooks works quite well for PDFs and has the ability to sync with other devices. I have a feeling this could work, although some of the PDFs I have are quite bulky and I’d need to figure out a way for them to stay in the cloud and not reside on mobile devices. It’s definitely designed for reading books and not for pulling up the PDF in Preview and quickly finding a specific thing. For this reason I don’t think it would work.

Note that there are other apps for this task. Also, if you search for “PDF” in the App Store, there are plenty of other programs aimed at people outside academia. Maybe one of those would be OK.

So what did I do?

I doubt anyone has the precise requirements that I have and so you’re probably not interested in what I decided. However, the simplest thing to do was to import the next batch of PDFs into Papers2 and wait to see if something better comes along. I will try Zotero a bit more when I get some time and see if this is the solution for me.

The post title is taken from The Flaming Lips’ 1999 album “The Soft Bulletin”.

Bateman Writes: 1994

BBC 6Music recently went back in time to 1994. This made me wonder what albums released that year were my favourites. As previously described on this blog, I have this information readily available. So I quickly crunched the numbers. I focused on full-length albums and, using play density (sum of all plays divided by number of album tracks) as a metric, I plotted out the Top 20.
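Play density itself is trivial to compute; as a toy illustration in Python with made-up play counts for a ten-track album:

# Play density = total plays of all tracks on the album / number of tracks.
plays = [31, 28, 30, 12, 25, 27, 9, 22, 26, 24]  # made-up play counts
play_density = sum(plays) / len(plays)
print(round(play_density, 1))  # 23.4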

1994

There you have it. Scorn’s epic Evanescence has the highest play density of any album released in 1994 in my iTunes library. By some distance. If you haven’t heard it, this is an amazing record that broke new ground and spawned numerous musical genres. I think that record, One Last Laugh In A Place of Dying… and Ro Sham Bo would all be high on my all-time favourite list. A good year for music then as far as I’m concerned.

Other observations: I was amazed that Definitely Maybe was up there, since I am not a big fan of Oasis. Likewise for Dummy by Portishead. Note that Oxford’s Angels and Superdeformed[…] are bootleg records.

Bubbling under: this was the top 20, but there were some great records bubbling under in the 20s and 30s. Here are the best 5.

  • Heatmiser – Cop and Speeder
  • Circle – Meronia
  • Credit to the Nation – Take Dis
  • Kyuss – Welcome to Sky Valley
  • Drive Like Jehu – Yank Crime

I heard tracks from some of these bands on 6Music, but many were missing. Maybe there is something for you to investigate.

Part of a series obsessively looking at music in an obsessive manner.

Meeting in the Aisle

Lab meetings: love them or loathe them, they’re an important part of lab-life. There’s many different formats and ways to do a lab meeting. Sometimes it feels like we’ve tried them all! I’m going to describe our current format and then discuss some other things to try.

Our current lab meeting format is:

  • Weekly. For one hour (Wednesdays at 9am)
  • One person each week talks about their progress. It rotates around.
  • At the start, we talk about general lab issues.
  • Then, last week’s data presenter does a 5 minute, one slide Journal club on a paper of their choice.
  • We organise the rota and table any issues using our general lab Trello board.

Currently, we meet in one of the pods in our building. A pod is a sound-proofed booth that seats 8 people on two sofa style seats. It has a table and an additional 2 people can cram in if needed. Previously we used a meeting room, with the presenter stood at the front using PowerPoint with a projector. One week the meeting room was unavailable and so we used a pod instead. It is a lot more informal and the suggestions and discussions flowed as a result. So we have kept the meeting in the pod, using a laptop to present data.

In addition to this, each person in my lab meets with me for 30 min on a Monday morning to go through raw data and troubleshooting. They also present a more formal talk to the centre once every 6-9 months. I mention this to give some context. Our lab meetings are something between “my cloning hasn’t worked” and a polished presentation.

I’m happy with the current arrangement, but we’ve tried many alternatives. Here is a brief list of things you can consider.

 

Two presenters

In my opinion this is a bad idea. We went through a period of doing this so that lab presentations were more frequent, or because we were also doing journal clubs too (I forget which). What happens is that one person has a lot of data and gets lots of discussion, and then we either run out of time or the other person feels bad because they don’t have as much to talk about. Accidentally, you have created unnecessary competition amongst lab members, which is not good. Just go for one presenter. The presenter feels like it is their day to get as much as they can out of the meeting, and the next week the focus will move to someone else.

Round-the-table

This is where you go round and people say what they have done since the last meeting. Depending on the size of the group, this probably takes 2 hours or “as long as it takes” which cuts further into the working day. If the meeting is too frequent, lab members can soon get into a groove of saying “nothing worked” each time and it’s difficult to keep track of who is struggling. Not only is it easy for people to hide, the meeting can also become dominated by someone with interesting data. The format also doesn’t develop any presentation/explanation skills. My preference is to keep the focus on one person.

Rotating data talk and journal clubs

It is really common, especially if you have a small group to do data presentation one week and then journal club the next week. My feelings on Journal Clubs are: if they are done properly, they can be really useful and constructive. Too often they regress into the complete trashing of a paper. As fun as this is, it doesn’t teach trainees the right skills. I’d love it if people in the lab were on top of the literature, but forcing people to delve deeply into one paper is not very effective in promoting this behaviour. I think that it’s more important to use the lab meeting time to go through lab data rather than talk about someone else’s work. Some labs have it set up where the presenter can pick data or paper, which means people who are struggling with their project can hide behind presenting papers. I’m not a fan. We currently do a 5-minute journal club to briefly cover a paper and say why they thought it was good. This takes up minimal time and people can read more deeply if they want. I got this tip from another lab. I recently heard of a lab who spend one meeting a month going through one paper per lab member. We might try this in the future. We also have a list on our General lab Trello board for suggesting cool papers that people think others should read.

Banning powerpoint, western films on the table

At some point I got fed up with seeing a full-on talk from lab members each week, with an introduction and summary (and even acknowledgements!). Partly because it was very repetitive, partly because it inhibited discussions and also I felt people were spending too much time preparing their talk. Moving to the pod (see above) kind of solved this naturally. In the past, we did a total back-to-basics: “PowerPoint is now banned bring your lab book and let’s see the raw data”. This was a good shock to the system. However, people started printing out diagrams… these were made in PowerPoint … and before I knew it, PowerPoint was back! Now, there is value in lab members giving a proper talk in lab meeting. Everyone needs to learn to do it and it can quickly get people used to presenting. Not everyone is great at it though and what lab members need from a lab meeting – I believe – is feedback on their project and injection of new ideas. A formal talk from someone struggling to do a good job or overcome with nervousness doesn’t help anyone. I prefer to keep things informal. Lots of interruptions, questions and enthusiasm from the audience.

Joint lab meetings

When my group was starting and I just had two people, we joined in with another lab in their lab meetings. This worked well until my group became too large for the arrangement to work. What was good was that the other PI was more experienced and liked to do a “blood on the floor” style of lab meeting. This is not really my style, but we had a “good cop, bad cop” thing going on which was useful. For a while. If the lab ethos is too different it can cause friction, and if the other PI has any bad habits, things can quickly unravel. There are also issues around collaboration and overlapping projects which can make joint lab meetings difficult. So, this can be useful if you can find the right lab to partner with, but proceed with caution.

Themed lab meetings 

No, not turning up dressed as someone from The Rocky Horror Picture Show… In my lab we work in two different areas. For a few years we segregated the lab meetings by theme. This seemed like a great idea initially, but in the end I moved away from it because I worried it set up an artificial divide. People from the other theme started to ask if they could skip the meeting and work in the lab instead. There were also different numbers of people working on the two themes. I tried to rotate the presenters fairly, but there was resentment that people presented more often on one theme than the other. I know some dual-PI labs who do this successfully, but they have far more people. It is not recommended for a regular one-PI lab with fewer than 10 people. Besides, most labs just work in one area anyway.

Skype and remote lab meetings

For about one year, we had a student join our lab meetings via skype. She was working at another university and it was important for her to be involved in these meetings. It worked OK and she could even present her data when it was her turn. We used the lab dropbox folder for sharing slides, papers and data with her. We still use this folder now for that purpose. I know PIs who skype in to lab meetings when they are away, so that the lab meeting always goes ahead at the same time each week. I have never done this and don’t think it would work for our lab.

Fun stuff – breaking the routine

OK, depending on your definition of fun… One thing we do is check on the state of people’s lab books. Without warning, I ask lab members to bring their lab books along to the lab meeting, swap with a random person, and then explain what that person did in the lab on a random date. It gets the message across and also brings up issues people are having with recording their data. We also occasionally do fun stuff such as quizzes, but tend to do these outside of the lab meeting. I’ve also used the lab meeting to teach people how to do things in a software package or to give some other demo. This breaks things up a bit and can freshen up the lab meeting routine. Something else to consider to keep it fun: a cookie schedule. We don’t have one, but people randomly bring in some food if they have been away somewhere or have cooked a delicacy from their home country.

State of the lab address

Once a year, normally in January when no-one wants to do the first lab meeting of the New Year, I do a state of the lab address. I go through the goals and objectives of the lab. Things that I feel are going well, areas where we could have done better. Successes from last year. The aim is to set the scene for the year ahead.

People in the lab can get a bit deep into their project and having some kind of overview is actually really helpful for them (or so they tell me!). Invite them along if you are giving a seminar or use a lab meeting to try out a seminar you are going to give so that they can see the big picture.

Ideas session

It doesn’t happen often that a presenter has nothing to present; the gaps between presentations are long enough to make this rare. However, sometimes the person scheduled to talk has just given a bigger talk to the whole centre (and I forgot to check). When this has happened, we have switched to a forward-looking lab meeting to plan out ideas. Again, this can break up the routine.

Time

I think 1 hour is enough. Any longer and it can start to drag out. I try to make it every week. Occasionally it gets cancelled when my schedule doesn’t allow it. But if the schedule gets too ad hoc, it sends the wrong message to the lab members.

Wednesday morning works well for us, but we’ve tried Tuesday mornings, Wednesday afternoons etc. I’m happy to set this by the demands from experiments etc. For example, most people in my lab like to image cells Thursday and Friday so those days are off limits. I also ask that everyone comes on time, and try to lead by example. I know a lab where they instigated a 1 Euro fine for lateness, including the PI. This is used as a cookie fund.

No lab meeting at all!

During my PhD we never had a regular lab meeting. Well, I can remember a few occasions where we tried to get it going but it didn’t stick. In my postdoc lab we similarly failed to do it regularly. I didn’t mind at the time and was happy to spend the time working in the lab instead. However, I can see that many issues in the lab would probably have been solved by regular meetings. So I’m pro-lab meeting.

And finally…

Maybe this should have been at the beginning… but what exactly is the point of a lab meeting?

Presenter – Feedback on their project, injection of new ideas, a check on whether this is the right route to go down, etc. They also improve their presentation skills, and explaining their project to others can help their own understanding.

Other lab people – Update on the presenter’s project, a feeling for what is expected, ideas for their own project. Have your say and learn to ask questions constructively.

PI – Update on project, give feedback, oversee the tone and standard.

Everyone – lab cohesion, a chance to address issues around the lab, catch up on the latest papers and data.

If none of the above suggestions sound good to you, maybe think about what you are trying to get out of your lab meetings and design a format that helps you achieve this.

The post title is taken from Meeting in the Aisle by Radiohead, B-side on the Karma Police single.