Tag Archives: graphs

The Great Curve: Citation distributions

This post follows on from a previous post on citation distributions and the wrongness of Impact Factor.

Stephen Curry had previously made the call that journals should “show us the data” that underlie the much-maligned Journal Impact Factor (JIF). However, this call made me wonder what “showing us the data” would look like and how journals might do it.

What citation distribution should we look at? The JIF looks at citations in a year to articles published in the preceding 2 years. This captures a period in a paper’s life, but it misses “slow burner” papers and also underestimates the impact of papers that just keep generating citations long after publication. I wrote a quick bit of code that would look at a decade’s worth of papers at one journal to see what happened to them as yearly cohorts over that decade. I picked EMBO J to look at since they have actually published their own citation distribution, and also they appear willing to engage with more transparency around scientific publication. Note that, when they published their distribution, it considered citations to papers via a JIF-style window over 5 years.
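As a reminder, the JIF for a given year (2015, say) is calculated as:

\textrm{JIF}_{2015}=\frac{\textrm{citations in 2015 to items published in 2013 and 2014}}{\textrm{citable items published in 2013 and 2014}}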

I pulled 4082 papers with a publication date of 2004-2014 from Web of Science (the search was limited to Articles) along with data on citations received per year. I generated histograms to look at the distribution of citations for each year. Papers published in 2004 are in the top row and papers from 2014 are in the bottom row. The first column shows citations in the same year as publication, the next column shows the following year, and so on. The number of papers is on the y-axis and the number of citations on the x-axis. Sorry for the lack of labelling! My excuse is that my code made a plot with “subwindows”, which I’m not too familiar with.

allPlot
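For anyone wanting to do something similar, the binning itself is only a couple of lines of Igor. This is a minimal sketch rather than the actual code I used; it assumes the citations to one publication-year cohort in one citation year are already in a 1D wave:

Function CitHisto(citW)
	Wave citW	// citations to one cohort of papers in one citation year
	Make/O/N=41 citHist	// 41 bins of width 1, covering 0-40 citations
	Histogram/B={0, 1, 41} citW, citHist
	Display citHist	// in the real thing, each histogram sits in a subwindow of one plot
	ModifyGraph mode=5	// draw as bars
End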

What is interesting is that the distribution changes over time:

  • In the year of publication, most papers are not cited at all. This is expected: there is a lag before papers that cite the work can themselves be published, and papers that come out later in the year have even less chance of picking up a citation before the year ends.
  • The following year most papers are picking up citations: the distribution moves rightwards.
  • Over the next few years the distribution relaxes back leftwards as the citations die away.
  • The distributions are always skewed. Few papers get loads of citations, most get very few.

Although I truncated the x-axis at 40 citations, there are a handful of papers that are picking up >40 cites per year up to 10 years after publication – clearly these are very useful papers!

To summarise these distributions I generated the median (and the mean – I know, I know) number of citations for each publication year-citation year combination and made plots.

citedist

The mean is shown on the left and median on the right. The layout is the same as in the multi-histogram plot above.

Follow along a row and you can again see how the cohort of papers attracts citations, peaks and then dies away. You can also see that some years were better than others in terms of citations: 2004 and 2005 were good years, whereas 2007 was not so good. It is very difficult, if not impossible, to judge how 2013 and 2014 papers will fare into the future.

What was the point of all this? Well, I think showing the citation data that underlie the JIF is a good start. However, citation data are more nuanced than the JIF allows for. So being able to choose how we look at the citations is important to understand how a journal performs. Having some kind of widget that allows one to select the year(s) of papers to look at and the year(s) that the citations came from would be perfect, but this is beyond me. Otherwise, journals would probably elect to show us a distribution for a golden year (like 2004 in this case), or pick a window for comparison that looked highly favourable.

Finally, I think journals are unlikely to provide this kind of analysis. They should, if only because it is a chance for a journal to show that it publishes many papers that are really useful to the community. Anyway, maybe they don’t have to… What this quick analysis shows is that the data can be (fairly) easily harvested and displayed. We could crowdsource this analysis using standardised code.

Below is the code that I used – it’s a bit rough and would need some work before it could be used generally. It also uses a 2D filtering method that was posted on IgorExchange by John Weeks.
cdcode

The post title is taken from “The Great Curve” by Talking Heads from their classic LP Remain in Light.

At a Crawl: Analysis of Cell Migration in IgorPro

In the lab we have been doing quite a bit of analysis of cell migration in 2D. Typically RPE1 cells migrating on fibronectin-coated glass. There are quite a few tools out there to track cell movements and to analyse their migration. Naturally, none of these did quite what we wanted and none fitted nicely into our analysis workflow. This meant writing something from scratch in IgorPro. You can access the code from my GitHub pages.

We’ve previously published a paper doing this kind of analysis, but this code is all-new, faster and more efficient.

The CellMigration repo contains an ipf with three functions that will import tracks into Igor and analyse them. We use manual tracking in ImageJ/FIJI to obtain the co-ordinates for analysis; this is the only non-Igor part, and I figured it was not worth porting it too. Instructions are given in the repo and are hopefully self-explanatory. Here’s a screenshot of a typical experiment.

CMScreenShot

Acknowledgements: Colour palette is taken from the SRON stylesheet. The excellent igorutils (by yamad) gave me the idea for a structure and jtigor helped me with referencing StrConstants.

The post title is taken from the track “At a Crawl” by The Melvins from their Ozma LP

Waiting to Happen: Publication lag times in Cell Biology Journals

My interest in publication lag times continues. Previous posts have looked at how long it takes my lab to publish our work, how often trainees publish, and at the very long lag times at Oncogene. I recently read a blog post on automated calculation of publication lag times for Bioinformatics journals. I thought it would be great to do this for Cell Biology journals too. Hopefully people will find it useful and can use this list when thinking about where to send their paper.

What is publication lag time?

If you are reading this, you probably know how science publication works. Feel free to skip. Otherwise, it goes something like this. After writing up your work for publication, you submit it to a journal. Assuming that this journal will eventually publish the paper (there is usually a period of submitting, getting rejected, resubmitting to a different journal and so on), they receive the paper on a certain date. They send it out to review, collate the reviews and send back a decision; you (almost always) revise your paper further and then send it back. This can happen several times. At some point it gets accepted on a certain date. The journal then prepares the paper for publication in a scheduled issue on a specific date (they can also post papers online immediately, without formatting). All of these steps add significant delays. It typically takes 9 months to publish a paper in the biomedical sciences. In 2015 this sounds very silly, when world-wide dissemination of information is as simple as a few clicks on a trackpad. The bigger problem is that we rely on papers as a currency to get jobs or funding, so these delays can be more than just a frustration: they can affect your ability to actually do more science.

The good news is that it is very straightforward to parse the received, accepted and published dates from PubMed. So we can easily calculate the publication lags for cell biology journals. If you don’t work in cell biology, just follow the instructions below to make your own list.

The bad news is that the deposition of the date information in PubMed depends on the journal. The extra bad news is that three of the major cell biology journals do not deposit their data: J Cell Biol, Mol Biol Cell and J Cell Sci. My original plan was to compare these three journals with Traffic, Nat Cell Biol and Dev Cell. Instead, I extended the list to include other journals which take non-cell biology papers (and deposit their data).

LagTimes1

A summary of the last ten years

Three sets of box plots here show the publication lags for eight journals that take cell biology papers. The journals are Cell, Cell Stem Cell, Current Biology, Developmental Cell, EMBO Journal, Nature Cell Biology, Nature Methods and Traffic (see note at the end about eLife). They are shown in alphabetical order. The box plots show the median and the IQR; whiskers show the 10th and 90th percentiles. The three plots show the time from Received-to-Published (Rec-Pub), and then a breakdown of this time into Received-to-Accepted (Rec-Acc) and Accepted-to-Published (Acc-Pub). The colours are just to make it easier to tell the journals apart and don’t have any significance.

You can see from these plots that the journals differ widely in the time it takes to publish a paper there. Current Biology is very fast, whereas Cell Stem Cell is relatively slow. The time it takes the journals to move papers from acceptance to publication is pretty constant, apart from Traffic, where it takes an average of ~3 months to get something into print. Remember that the paper is often online for this period, so this is not necessarily a bad thing. I was not surprised that Current Biology was the fastest. At this journal, a presubmission inquiry is required and the referees are often lined up in advance. The staff are keen to publish rapidly, hence the name, Current Biology. I was amazed at Nature Cell Biology having such a short Received-to-Accepted time. The delay from Received-to-Accepted comes from multiple rounds of revision and from doing extra experimental work. Anecdotally, it seems that the review at Nature Cell Biol should be just as lengthy as at Dev Cell or EMBO J. I wonder if the received date is accurate… it is possible to massage this date by first rejecting the paper but allowing a resubmission, and then using the resubmission date as the received date [Edit: see below]. One way to legitimately limit this delay is to only allow a certain time for revisions and to allow only one round of corrections. This is what happens at J Cell Biol; unfortunately we don’t have these data to see how effective this is.

lagtimes2

How has the lag time changed over the last ten years?

Have the slow journals always been slow? When did they become slow? Again three plots are shown (side-by-side) depicting the Rec-Pub time and then the Rec-Acc and Acc-Pub times. Now the intensity of red or blue shows the data for each year (2014 is the most intense colour). Again you can see that the dataset is not complete: date information is missing for Traffic for many years, for example.

Interestingly, the publication lag has been pretty constant for some journals but not others. Cell Stem Cell and Dev Cell (but not the mothership – Cell) have seen increases, as have Nature Cell Biology and Nature Methods. On the whole, Acc-Pub times are stable, except for Nature Methods, which is the only journal in the list to see an increase over the time period. This just leaves us with the task of drawing up a ranked list from the fastest to the slowest journal. Then we can see which of these journals is likely to delay dissemination of our work the most.

The median times (in days) for 2013 are below. The journals are ranked in order of fastest to slowest for Received-to-Published. I had to use 2013 because EMBO J is missing data for 2014.

Journal Rec-Pub Rec-Acc Acc-Pub
Curr Biol 159 99.5 56
Nat Methods 192 125 68
Cell 195 169 35
EMBO J 203 142 61
Nature Cell Biol 237 180 59
Traffic 244 161 86
Dev Cell 247 204 43
Cell Stem Cell 284 205 66

You’ll see that only Cell Stem Cell is over the threshold where it would be faster to conceive and give birth to a human being than to publish a paper there (on average). If the additional time wasted in submitting your manuscript to other journals is factored in, it is likely that most papers are at least on a par with the median gestation time.

If you are wondering why eLife is missing… as a new journal it didn’t have ten years’ worth of data to analyse. It did have a reasonably complete set for 2013 (but Rec-Acc only). The median time was 89 days, beating Current Biology by 10.5 days.

Methods

Please check out Neil Saunders’ post on how to do this. I did a PubMed search for (journal1[ta] OR journal2[ta] OR ...) AND journal article[pt] to make sure I didn’t get any reviews or letters etc. I limited the search from 2003 onwards to make sure I had 10 years of data for the journals that deposited it. I downloaded the file as xml and I used Ruby/Nokogiri to parse the file to csv. Installing Nokogiri is reasonably straightforward, but the documentation is pretty impenetrable. The ruby script I used was from Neil’s post (step 3) with a few lines added:


#!/usr/bin/ruby

require 'nokogiri'

# read the PubMed XML file given as the first argument
f = File.open(ARGV.first)
doc = Nokogiri::XML(f)
f.close

# one csv line per article: journal, PMID, then received, accepted and issue dates
doc.xpath("//PubmedArticle").each do |a|
r = ["", "", "", "", "", "", "", "", "", "", ""]
r[0] = a.xpath("MedlineCitation/Article/Journal/ISOAbbreviation").text
r[1] = a.xpath("MedlineCitation/PMID").text
r[2] = a.xpath("PubmedData/History/PubMedPubDate[@PubStatus='received']/Year").text
r[3] = a.xpath("PubmedData/History/PubMedPubDate[@PubStatus='received']/Month").text
r[4] = a.xpath("PubmedData/History/PubMedPubDate[@PubStatus='received']/Day").text
r[5] = a.xpath("PubmedData/History/PubMedPubDate[@PubStatus='accepted']/Year").text
r[6] = a.xpath("PubmedData/History/PubMedPubDate[@PubStatus='accepted']/Month").text
r[7] = a.xpath("PubmedData/History/PubMedPubDate[@PubStatus='accepted']/Day").text
# note: the element name is PubDate and XPath is case-sensitive
r[8] = a.xpath("MedlineCitation/Article/Journal/JournalIssue/PubDate/Year").text
r[9] = a.xpath("MedlineCitation/Article/Journal/JournalIssue/PubDate/Month").text
r[10] = a.xpath("MedlineCitation/Article/Journal/JournalIssue/PubDate/Day").text
puts r.join(",")
end

and then executed as described. The csv could then be imported into IgorPro and processed. Neil’s post describes a workflow for R, or you could use Excel or whatever at this point. As he notes, quite a few records are missing the date information and some of it is wrong, i.e. published before it was accepted. These need to be cleaned up. The other problem is that the month is sometimes an integer and sometimes a three-letter code. He uses lubridate in R to get around this, a loop-replace in Igor is easy to construct and even Excel can handle this with an IF statement, e.g. IF(LEN(G2)=3,MONTH(1&LEFT(G2,3)),G2) if the month is in G2. Good luck!
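For completeness, the Igor version of that loop-replace can be as simple as calling a helper like this on each month entry (a sketch, not the exact code I used):

Function MonthNumber(mStr)
	String mStr	// e.g. "7" or "Jul"
	if(strlen(mStr) == 3)
		// three-letter code: its position in the list gives the month number
		return WhichListItem(mStr, "Jan;Feb;Mar;Apr;May;Jun;Jul;Aug;Sep;Oct;Nov;Dec") + 1
	else
		return str2num(mStr)
	endif
End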

Edit 9/3/15 @ 17:17 several people (including Deborah Sweet and Bernd Pulverer from Cell Press/Cell Stem Cell and EMBO, respectively) have confirmed via Twitter that some journals use the date of resubmission as the submitted date. Cell Stem Cell and EMBO journals use the real dates. There is no way to tell from the deposited data whether a journal does this or not. Stuart Cantrill from Nature Chemistry pointed out that his journal does declare that they sometimes reset the clock. I’m not sure about other journals. My own feeling is that – for full transparency – journals should 1) record the actual dates of submission, acceptance and publication, and 2) deposit them in PubMed and add them to the paper. As pointed out by Jim Woodgett, scientists want the actual dates on their paper, partly because they are the real dates, but also to claim priority in certain cases. There is a conflict here, because journals might appear inefficient if they have long publication lag times. I think this should be an incentive for Editors to simplify revisions by giving clear guidance and limiting successive revision cycles. (This Edit was corrected 10/3/15 @ 11:04).

The post title is taken from “Waiting to Happen” by Super Furry Animals from the “Something 4 The Weekend” single.

My Favorite Things

I realised recently that I’ve maintained a consistent iTunes library for ~10 years. For most of that time I’ve been listening exclusively to iTunes, rather than to music in other formats. So the library is a useful source of information about my tastes in music. It should be possible to see who my favourite artists are, which bands need more investigation, or just to generate some interesting statistics based on my favourite music.

Play count is the central statistic here as it tells me how often I’ve listened to a certain track. It’s the equivalent of a +1/upvote/fave/like or maybe even a citation. Play count increases by one if you listen to a track all the way to the end. So if a track starts and you don’t want to hear it and you skip on to the next song, there’s no +1. There’s a caveat here in that the time a track has been in the library influences the play count to a certain extent – but that’s for another post*. The second indicator for liking a track or artist is the fact that it’s in the library. This may sound obvious, but what I mean is that artists with lots of tracks in the library are more likely to be favourite artists compared to a band with just one or two tracks in there. A caveat here is that some artists do not have long careers for a variety of reasons, which can limit the number of tracks actually available to load into the library. Check the methods at the foot of the post if you want to do the same.

What’s the most popular year? Firstly, I looked at the most popular year in the library. This question was the focus of an earlier post that found that 1971 was the best year in music. The play distribution per year can be plotted together with a summary of how many tracks and how many plays in total from each year are in the library. There’s a bias towards 90s music, which probably reflects my age, but could also be caused by my habit of collecting CD singles, a format which peaked in this decade. The average number of plays is actually pretty constant for all years (median of ~4); the mean is perhaps slightly higher for late-2000s music.

Favourite styles of music: I also looked at Genre. Which styles of music are my favourite? I plotted the total number of tracks versus the total number of plays for each Genre in the library. Size of the marker reflects the median number of plays per track for that genre. Most Genres obey a rule where total plays is a function of total tracks, but there are exceptions. Crossover, Hip-hop/Rap and Power-pop are highlighted as those with an above average number of plays. I’m not lacking in Power-pop with a few thousand tracks, but I should probably get my hands on more Crossover or Hip-Hop/Rap.

Nov14

Using citation statistics to find my favourite artists: Next, I looked at who my favourite artists are. It could be argued that I should know who my favourite artists are! But tastes can change over a 10 year period and I was interested in an unbiased view of my favourite artists rather than who I think they are. A plot of Total Tracks vs Mean plays per track is reasonably informative. The artists with the highest plays per track are those with only one track in the library, e.g. Harvey Danger with Flagpole Sitta. So this statistic is pretty unreliable. Equally, I’ve got lots of tracks by Manic Street Preachers but evidently I don’t play them that often. I realised that the problem of identifying favourite artists based on these two pieces of information (plays and number of tracks) is pretty similar to assessing scientists using citation metrics (citations and number of papers). Hirsch proposed the h-index to meld these two bits of information into a single metric. It’s easily computed and I already had an Igor procedure to calculate it en masse, so I ran it on the library information.
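In case you want to try this on your own library, the h-index calculation itself is short. Here is a minimal sketch (not my batch procedure) that takes a wave of play counts, one point per track:

Function HIndex(countW)
	Wave countW	// play counts (or citations), one point per track (or paper)
	Duplicate/FREE countW, sortedW
	Sort/R sortedW, sortedW	// rank from most played to least played
	Variable i, h = 0
	for(i = 0; i < numpnts(sortedW); i += 1)
		if(sortedW[i] >= i + 1)
			h = i + 1	// the (i+1)th ranked track has at least i+1 plays
		else
			break
		endif
	endfor
	return h
End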

Before doing this, I consolidated multiple versions of the same track into one. I knew that I had several versions of the same track, especially as I have multiple versions of some albums (e.g. Pet Sounds = 3 copies = mono + stereo + a capella). The top offending track was “Baby’s Coming Back” by Jellyfish, with 11 copies! Anyway, these were consolidated before running the h-index calculation.

The top artist was Elliott Smith with an h-index of 32. This means he has 32 tracks that have been listened to at least 32 times each. I was amazed that Muse had the second highest h-index (I don’t consider myself a huge fan of their music) until I remembered a period where their albums were on an iPod Nano used during exercise. Amusingly (and narcissistically) my own music – the artist names are redacted – scored quite highly, with two out of three bands in the top 100, which are shown here. These artists with high h-indices are the most consistently played in the library and probably constitute my favourite artists, but is the ranking correct?

The procedure also calculates the g-index for every artist. The g-index is similar to the h-index but takes into account very highly played tracks (very highly cited papers) above the h threshold. For example, The Smiths have h=26. This could be 26 tracks that have been listened to exactly 26 times each, or they could have been listened to 90 times each. The h-index cannot reveal this, but the g-index gets at it: g is the largest number such that the top g tracks have been played at least g² times in total. The Smiths have g=35. To find the artists that are most-played-of-the-consistently-most-played, I subtracted h from g and plotted the Top 50. This ranked list, I think, most closely represents my favourite artists, according to my listening habits over the last ten years.

Nov14-2
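The g-index follows the same pattern; again, a sketch rather than the procedure I actually ran:

Function GIndex(countW)
	Wave countW	// play counts, one point per track
	Duplicate/FREE countW, sortedW
	Sort/R sortedW, sortedW	// rank from most played to least played
	Variable i, total = 0, g = 0
	for(i = 0; i < numpnts(sortedW); i += 1)
		total += sortedW[i]
		if(total >= (i + 1)^2)
			g = i + 1	// the top g tracks have at least g^2 plays in total
		endif
	endfor
	return g
End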

Track length: Finally, I looked at the track length. I have a range of track lengths in the library, from “You Suffer” by Napalm Death (iTunes has this at 4 s, but Wikipedia says it is 1.36 s), through to epic tracks like “Blue Room” by The Orb. Most tracks are in the 3-4 min range. Plays per track indicates that this track length is optimal with most of the highly played tracks being within this window. The super-long tracks are rarely listened to, probably because of their length. Short tracks also have higher than average plays, probably because they are less likely to be skipped, due to their length.

These were the first things that sprang to mind for iTunes analysis. As I said at the top, there’s lots of information in the library to dig through, but I think this is enough for one post. And not a pie-chart in sight!

Methods: the library is in xml format and can be read/parsed this way. More easily, you can just select the whole library and copy-paste it into TextEdit and then load this into a data analysis package. In this case, IgorPro (as always). Make sure that the interesting fields are shown in the full library view (Music>Songs). To do everything in this post you need artist, track, album, genre, length, year and play count. At the time of writing, I had 21326 tracks in the library. For the “H-index” analysis, I consolidated multiple versions of the same track, giving 18684 tracks. This is possible by concatenating artist and the first ten characters of the track title (separated by a unique character) and adding the play counts for these concatenated versions. The artist could then be deconvolved (using the unique character) and used for the H-calculation. It’s not very elegant, but seemed to work well. The H-index and G-index calculations were automated (previously sort-of-described here), as was most of the plot generation. The inspiration for the colour coding is from the 2013 Feltron Report.
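The consolidation key was built along these lines; a sketch, with “|” standing in for the unique separator (any character that never appears in an artist name would do):

Function/S TrackKey(artistStr, titleStr)
	String artistStr, titleStr
	// artist plus the first ten characters of the track title, separated by a unique character
	return artistStr + "|" + titleStr[0, min(9, strlen(titleStr) - 1)]
End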

* there’s an interesting post here about modelling the ideal playlist. I worked through the ideas in that post but found that it doesn’t scale well to large libraries, especially if they’ve been going for a long time, i.e. mine.

The post title is taken from John Coltrane’s cover version of My Favorite Things from the album of the same name. Excuse the US English spelling.

Belly Button Window

A bit of navel gazing for this post. Since I moved the blog to wordpress.com in the summer, it has accrued 5000 views. Time to analyse what people are reading…

blogstats

The most popular post on the blog (by a long way) is “Strange Things”, a post about the eLife impact factor (2824 views). The next most popular is a post about a Twitter H-index, with 498 views. The Strange Things post has accounted for ~50% of views since it went live (bottom plot) and this fraction seems to be creeping up. More new content is needed to change this situation.

I enjoy putting blog posts together and love the discussion that follows from my posts. It’s also been nice when people have told me that they read my blog and enjoy my posts. One thing I didn’t expect was the way that people can take away very different messages from the same post. I don’t know why I found this surprising, since this often happens with our scientific papers! Actually, in the same way as our papers, the most popular posts are not the ones that I would say are the best.

Wet Wet Wet: I have thought about deleting the Strange Things post, since it isn’t really what I want this blog to be about. An analogy here is the Scottish pop-soul outfit Wet Wet Wet who released a dreadful cover of The Troggs’ “Love is All Around” in 1994. In the end, the band deleted the single in the hope of redemption, or so they said. Given that the song had been at number one for 15 weeks, the damage was already done. I think the same applies here, so the post will stay.

Directing Traffic: Most people coming to the blog are clicking on links on Twitter. A smaller number come via other blogs which feature links to my posts. A very small number come to the blog via a Google search. Google has changed the way it formats the clicks and so most of the time it is not possible to know what people were searching for. For those that I can see, the only search term is… yes, you’ve guessed it: “elife impact factor”.

Methods: WordPress stats are available for blog owners via URL formatting. All you need is your API key and (obviously) your blog address.

Instructions are found at http://stats.wordpress.com/csv.php

A basic URL format would be: http://stats.wordpress.com/csv.php?api_key=yourapikey&blog_uri=yourblogaddress replacing yourapikey with your API key (this can be retrieved at https://apikey.wordpress.com) and yourblogaddress with your blog address e.g. quantixed.wordpress.com

Various options are available from the instructions page to get the stats in which you are interested. For example, the following can be appended to the second URL to get a breakdown of views by post title for the past year:

&table=postviews&days=365&limit=-1

The format can be csv, json or xml, depending on what you want to do next with the information.
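So, putting the pieces together, a breakdown of views by post title for the past year would come from a URL along the lines of: http://stats.wordpress.com/csv.php?api_key=yourapikey&blog_uri=yourblogaddress&table=postviews&days=365&limit=-1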

The title is from “Belly Button Window” by Jimi Hendrix, a posthumous release on the Cry of Love LP.

What The World Is Waiting For

The transition for scientific journals from print to online has been slow and painful. And it is not yet complete. This week I got an RSS alert to a “new” paper in Oncogene. When I downloaded it, something was familiar… very familiar… I’d read it almost a year ago! Sure enough, the AOP (ahead of print or advance online publication) date for this paper was September 2013 and here it was in the August 2014 issue being “published”.

I wondered why a journal would do this. It is possible that delaying actual publication would artificially boost the Impact Factor of a journal because there is a delay before citations roll in and citations also peak after two years. So if a journal delays actual publication, then the Impact Factor assessment window captures a “hotter” period when papers are more likely to generate more citations*. Richard Sever (@cshperspectives) jumped in to point out a less nefarious explanation – the journal obviously has a backlog of papers but is not allowed to just print more papers to catch up, due to page budgets.

There followed a long discussion about this… which you’re welcome to read. I was away giving a talk and missed all the fun, but if I may summarise on behalf of everybody: isn’t it silly that we still have pages – actual pages, made of paper – and that this restricts publication?

I wondered how Oncogene got to this position. I retrieved the data for AOP and actual publication for the last five years of papers at Oncogene (excluding reviews) from PubMed, using oncogene[ta] NOT review[pt] as the search term. The field DP has the date published (the “issue date” on which the paper appears in print) and PHST has several interesting dates including [aheadofprint]. These could be parsed and imported into IgorPro as 1D waves. The lag time from AOP to print could then be calculated. I got 2916 papers from the search and was able to get data for 2441 papers.
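Once the two dates are parsed out, the lag itself is a one-liner in Igor; a minimal sketch (the parsing is left out):

Function LagInDays(y0, m0, d0, y1, m1, d1)
	Variable y0, m0, d0, y1, m1, d1	// AOP date and print issue date
	// date2secs gives seconds since 1904, so the difference converts directly to days
	return (date2secs(y1, m1, d1) - date2secs(y0, m0, d0)) / (60 * 60 * 24)
End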

OncogeneLagTime

You can see for this journal that the lag time has been stable at around 300 days (~10 months) for issues published since 2013. So a paper AOP in Feb 2012 had to wait over 10 months to make it into print. This followed a linear period of lag time growth from mid-2010.

I have no links to Oncogene and don’t particularly want to single them out. I’m sure similar lags are happening at other print journals. Actually, my only interaction with Oncogene was that they sent this paper of ours out to review in 2011 (it got two not-negative-but-admittedly-not-glowing reviews) and then they rejected it because they didn’t like the cell line we used. I always thought this was a bizarre decision: why couldn’t they just decide that before sending it to review and wasting our time? Now, I wonder whether they were simply not keen to add to the increasing backlog of papers at their journal. Whatever the reason, it has put me off submitting other papers there.

I know that there are good arguments for continuing print versions of journals, but from a scientist’s perspective the first publication is publication. Any subsequent versions are simply redundant and confusing.

*Edit: Alexis Verger (@Alexis_Verger) pointed me to a paper which describes that, for neuroscience journals, the lag time has increased over time. Moreover, the authors suggest that this is for the purpose of maximising Journal Impact Factor.

The post title comes from the double A-side Fools Gold/What The World Is Waiting For by The Stone Roses.

Vitamin K

Note: this is not a serious blog post.

Neil Hall’s think piece in Genome Biology on the Kardashian index (K-index) caused an online storm recently, spawning hashtags and outrage in not-so-equal measure. Despite all the vitriol that headed Neil’s way, very little of it concerned his use of Microsoft Excel to make his plot of Twitter followers vs total citations! Looking at the plot with the ellipse around a bunch of the points and also at the equations, I thought it might be worth double-checking Neil’s calculations.

In case you don’t know what this is about: the K-index is the ratio of actual Twitter followers (F_{a}) to the number of Twitter followers you are predicted to have (F_{c}) based on the total number of citations to your papers (C) from the equation:

F_{c}=43.3C^{0.32}

So the K-index is:

K-index=\frac{F_{a}}{F_{c}}

He argues that if a scientist has a K-index >5 then they are more famous for their twitterings than for their science. This was the most controversial aspect of the piece. It wasn’t clear whether he meant that highly cited scientists should get tweeting or that top-tweeters should try to generate some more citations (not as easy as it sounds). The equation for F_{c} was a bit suspect, derived from some kind of fit through some of the points. Anyway, it seemed to me that the ellipse containing the Kardashians didn’t look right.

I generated the data for F_{c} and for a line to show the threshold at which one becomes a Kardashian (k) in IgorPro as follows:

Make /o /N=100000 fc	// point number (x) stands in for total citations, C
fc = 43.3*(x^0.32)	// predicted number of followers, Fc
Duplicate fc k //yes, this does look rude
k *= 5	// the K-index = 5 threshold
display fc, k //and again!

K-index

This plot could be resized and overlaid on Neil’s Excel chart from Genome Biology. I kept the points but deleted the rest and then made this graph.

The Kardashians are in the peach zone. You’ll notice one poor chap is classed as a Kardashian by Neil, yet he is innocent! Clearly below the line, i.e. K-index <5.

Two confessions:

  1. My K-index today is 1.97 according to Twitter and Google Scholar.
  2. Embarrassingly, I didn’t know who the business person who gave her name to the K-index was until reading Neil’s article and the ensuing discussion. So I did learn something from this!

The post title is taken from “Vitamin K” by Gruff Rhys from the Hotel Shampoo album.

Round and Round

I thought I’d share a procedure for rotating a 2D set of coordinates about the origin. Why would you want to do this? Well, we’ve been looking at cell migration in 2D – tracking nuclear position over time. Cells migrate at random and I previously blogged about ways to visualise these tracks more clearly. Part of this earlier procedure was to set the start of each track at (0,0). This gives a random hairball of tracks moving away from the origin. Wouldn’t it be a good idea to orient all the tracks so that the endpoint lies on the same axis? This would simplify the view and allow one to assess how ‘directional’ the cell tracks are. To rotate a set of coordinates, you need to use a rotation matrix. This allows you to convert the x,y coordinates to their new position x’,y’. This rotation is counter-clockwise.

x' = x \cos \theta - y \sin \theta\,

y' = x \sin \theta + y \cos \theta\,

However, we need to find theta first. To do this we need to find the angle between two lines, using this formula.

\cos \theta = \frac {\mathbf a \cdot \mathbf b}{\left \Vert {\mathbf a} \right \Vert \cdot \left \Vert {\mathbf b} \right \Vert}

The maths is kept to a minimum here. If you are interested, look at the code at the bottom.

before

The two lines (a and b) are formed by the x-axis (origin to some point on the x-axis, i.e. y=0) and by a line running from the origin to the last coordinate in the series. This calculation can be done for each track, with the theta for each track being used to rotate that whole track (x,y changed to x’,y’ for each point).

Here is an example of just a few tracks from an experiment. Typically we have hundreds of tracks for each experimental group and the code will blast through them all very quickly (<1 s).

after

After rotation, the tracks are now aligned so that the last point is on the x-axis at y=0. This allows us to see how ‘directional’ the tracks are. With the end points aligned, it is easy to see how convoluted a path each cell took to get there.

The code to do this is up on Igor Exchange code snippets. A picture of the code is below (markup for code in WordPress is not very clear). See the code snippet if you want to use it.

rotator

The weakness of this method is that acos (arccos) only gives results from 0 to Pi (0 to 180°). There is a correction in the procedure, but everything needs editing if you want to rotate the co-ordinates to some other plane. Feedback welcome.

Edit Jim Prouty and A.G. have suggested two modifications to the code. The first is to use complex waves rather than 2D real waves, and then use the native Igor functions r2polar or p2rect. The second suggestion is to use Matrix operations! As is often the case with Igor there are several ways of doing things. The method described here is long-winded compared to a MatrixOp, and if the waves were huge these solutions would be much, much faster. As it is, our migration movies typically have 60 points and, as mentioned, rotator() blasts through them very quickly. More complex coordinate sets would need something more sophisticated.
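For readers who don’t want to decipher the screenshot, here is a stripped-down sketch of the rotation step. It is not the posted rotator() code (nor the complex-wave or MatrixOp versions suggested above); it uses atan2 to get theta directly, which also avoids the acos correction:

Function RotateTrack(xW, yW)
	Wave xW, yW	// one track, already offset so that the first point is at (0,0)
	Variable theta = atan2(yW[numpnts(yW) - 1], xW[numpnts(xW) - 1])	// angle of the final point
	Duplicate/FREE xW, oldX
	// rotate by -theta so that the final point lands on the positive x-axis (y = 0)
	xW = oldX * cos(theta) + yW * sin(theta)
	yW = -oldX * sin(theta) + yW * cos(theta)
End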

The post title is taken from “Round & Round” by New Order from their Technique LP.

Sure To Fall

What does the life cycle of a scientific paper look like?

It stands to reason that after a paper is published, people download and read the paper and then if it generates sufficient interest, it will begin to be cited. At some point these citations will peak and the interest will die away as the work gets superseded or the field moves on. So each paper has a useful lifespan. When does the average paper start to accumulate citations, when do they peak and when do they die away?

Citation behaviours are known to be very field-specific. So to narrow things down, I focussed on cell biology and on one area, “clathrin-mediated endocytosis”, in particular. It’s an area that I’ve published in – of course this stuff is driven by self-interest. I downloaded data from Web of Science for the 1000 papers on this topic that had accumulated the most citations. Reviews were excluded, as I assume their citation patterns are different from those of the primary literature. The idea was just to take a large sample of papers on a topic. The data are pretty good, but there are some errors (see below).

Number-crunching (feel free to skip this bit): I imported the data into IgorPro, making a 1D wave for each record (paper). I deleted the last point, corresponding to cites in 2014 (the year is not complete). I aligned all records so that the year of publication was 0. Next, the citations were normalised to the maximum number achieved in the peak year. This allows us to look at the lifecycle in a sensible way. Next I took out records for papers less than 6 years old, as I reasoned these would not have completed their lifecycle and could contaminate the analysis (it turned out to make little difference). The lifecycles were plotted and averaged. I also wrote a quick function to pull out the peak year for citations post hoc.
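The normalisation step, for a single record, is about as simple as Igor code gets; a sketch assuming the wave already starts at the publication year:

Function NormaliseToPeak(citW)
	Wave citW	// citations per year for one paper, point 0 = publication year
	WaveStats/Q citW	// V_max is the number of citations in the peak year
	if(V_max > 0)
		citW /= V_max	// normalise to the peak year
	endif
End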

So what did it show?

Citations to a paper go up and go down, as expected (top left). When cumulative citations are plotted, most of the articles have an initial burst and then level off. The exceptions are ~8 articles that continue to rise linearly (top right). On average, a paper generates its peak citations three years after publication (box plot). The fall after this peak period is pretty linear and it’s apparently all over somewhere >15 years after publication (bottom left). To look at the decline in more detail, I aligned the papers so that year 0 was the year of peak citations. The average paper loses almost 40% of its peak citations in the following year and then declines steadily (bottom right).

Edit: The dreaded Impact Factor calculation takes the citations to articles published in the preceding 2 years and divides by the number of citable items in that period. This means that each paper only contributes to the Impact Factor in years 1 and 2. This is before the average paper reaches its peak citation period. Thanks to David Stephens (@david_s_bristol) for pointing this out. The alternative 5 year Impact Factor gets around this limitation.

Perhaps lifecycle is the wrong term: papers in this dataset don’t actually ‘die’, i.e. go to 0 citations. There is always a chance that a paper will pick up the odd citation. Papers published 15 years ago are still clocking 20% of their peak citations. Looking at papers cited at lower rates would be informative here.

Two other weaknesses that affect precision are that 1) a year is a long time and 2) publication is subject to long lag times. The analysis would be improved by categorising the records based on the month-year when the paper was published and the month-year when each citation comes in. Papers published in January of one year probably have a different peak than those published in December of the same year, but this is lost when looking at the year alone. Secondly, due to publication lag, it is impossible to know when the peak period of influence for a paper truly is.
MisCytes

Problems in the dataset: some reviews remained despite being supposedly excluded, i.e. they are not properly tagged in the database. Also, some records have citations from years before the article was published! The numbers of citations are small enough not to worry about for this analysis, but it makes you wonder how accurate the whole dataset is. I’ve written before about how complete citation data may or may not be. These sorts of errors are a concern for all of us who are judged on these metrics for hiring and promotion decisions.

The post title is taken from ‘Sure To Fall’ by The Beatles, recorded during The Decca Sessions.

Very Best Years

What was the best year in music?

OK, I have to be upfront and say that I thought the answer to this would be 1991. Why? Just a hunch. Nevermind, Loveless, Spiderland, Laughing Stock… it was a pretty good year. I thought it would be fun to find out if there really was a golden year in music. It turns out that it wasn’t 1991.

There are many ways to look at this question, but I figured that a good place to start was to find which year had the highest density of great LPs. But how do we define a great LP? Music critics are notorious for getting it wrong, so I’m a big fan of rateyourmusic.com (RYM), which democratises the grading process for music by crowdsourcing opinion. It allows people to rate LPs in their collection; these ratings are aggregated via a slightly opaque system and the albums are ranked into charts. I scraped the data for the Top 1000 LPs of All-Time*. Crunching the numbers was straightforward. So what did it show?

Looking at the Top 1000, 1971 and 1972 are the two years with the highest representation. Looking at the Top 500 LPs, 1971 is the year with the most records. Looking at the Top 100, the late 60s feature prominently.

To look at this in detail, I plotted the rank versus year. This showed that there was a gap in the early 80s where not many Top 1000 LPs were released. This could be seen in the other plots, but it’s clearer on the bubble plot. Also, the cluster of high ranking LPs released in the 1960s is obvious.

The plot is colour-coded to show the rank, while the size of the bubbles indicates the rating. Note that rating doesn’t correlate with rank (RYM also factors in the number of ratings and user loyalty to determine this). To take the ranking into account, I calculated an “integrated score” for all albums released in a given year: each album scores 1001 minus its rank, and summing these scores for all the albums released in a given year gives that year’s integrated score.
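Computing this is straightforward. A sketch, assuming waves holding the rank and release year of each LP in the Top 1000 (the wave names are just placeholders):

Function ScoreByYear(rankW, yearW)
	Wave rankW, yearW	// rank (1-1000) and release year for each LP
	Make/O/N=70 yearScore	// one point per year, covering 1950 onwards
	yearScore = 0
	SetScale/P x 1950, 1, "year", yearScore
	Variable i
	for(i = 0; i < numpnts(rankW); i += 1)
		yearScore[x2pnt(yearScore, yearW[i])] += 1001 - rankW[i]	// each LP contributes 1001 minus its rank
	endfor
End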

The integrated score per year is shown on a background of scores for each decade. Again, the 1970s rule and 1971 is the peak. The shape of this profile will not surprise music fans. The first bump in the late 50s coincides with rock n roll, influential jazz records and the birth of the LP as a serious format. The 60s see a rapid increase in the density of great albums per year, hitting a peak in 1971. The decline that follows is halted by a spike in 1977: punk. There’s a relative dearth of highly rated LPs in the early 80s and things really tail off in the early 2000s. The lack of highly rated LPs in these later years is probably best explained by fewer ratings, due to the young age of these LPs. Diversification of music styles, tastes and the way that music is consumed is also likely to play a role. The highest ranked LP on the list is Radiohead’s OK Computer (1997), which was released in a non-peak year. Note that 1991 does not stand out particularly. In fact, in the 1990s, 1994 stands out as the best year for music.

Finally, RYM has a nice classification system for music so I calculated the integrated score for these genres and sub-genres (cowpunk, anyone?). Rock (my definition) is by far the highest scoring and Singer-Songwriter is the highest scoring genre/sub-genre.

So there you have it. 1971 was the best year in music according to this analysis. Now… where’s my copy of Tago Mago.

 

* I did this mid-April. I doubt it’s changed much. This was an exercise to learn how to scrape and I also don’t think I broke the terms of service of RYM. If I did, I’ll take this post down.

The title of this post comes from ‘Very Best Years’ by The Grays from their LP ‘Ro Sham Bo’. It was released in 1994…