This is part-tip, part-adventures in code. I found out recently that it is possible to comment out multiple lines of code in Igor and thought I’d put this tip up here.
Multi-line commenting in programming is useful for two reasons:
- writing comments (instructions, guidance) that last more than one line
- the ability to temporarily remove a block of code while testing
In each computer language there is the ability to comment out at least one line of code.
In Igor this is “//”, which comments out the whole line, but no more.
This is the same as in ImageJ macro language.
Now, commenting out whole sections in FIJI/ImageJ is easy: insert “/*” where you want the comment to start, and then “*/” where it ends, multiple lines later.
I didn’t think this syntax was available in Igor, and it isn’t really. I was manually adding “//” for each line I wanted to remove, which was annoying. It turns out that you can use Edit > Commentize to add “//” to the start of all selected lines. The keyboard shortcut in IP7 is Cmd-/. You can reverse the process with Edit > Decommentize or Cmd-\.
There is actually another way. Igor can conditionally compile code. This is useful if for example you write for Igor 7 and Igor 6. You can get compilation of IP7 commands only if the user is running IP7 for example. This same logic can be used to comment out code as follows.
The condition if 0 is never satisfied, so the code does not compile. The equivalent statement for IP7-specific compilation is “#if igorversion()>=7”.
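In practice it looks something like this (a minimal sketch; the #if/#endif directives wrap whatever code you want to disable or make version-specific):

```igor
#if 0
	// anything in here never compiles - effectively a block comment
	Print "you will never see this"
#endif

#if igorversion() >= 7
	// IP7-only commands go here
#endif
```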
So there you have it, two ways to comment out code in Igor. These tips were from IgorExchange.
If you want to read more about commenting in different languages and the origins of comments, read here.
This post is part of a series of tips.
I don’t often write about music at quantixed but I recently caught Survivor’s “Eye of The Tiger” on the radio and thought it deserved a quick post.
Surely everyone knows this song: a kind of catchall motivational tune. It is loved by people in gyms with beach-unready bodies and by presidential hopefuls without permission to use it.
Written specifically for Rocky III after Sylvester Stallone was refused permission by Queen to use “Another One Bites The Dust”, it has that 1980s middle-of-the-road hard-rock-but-not-heavy-metal feel to it. The kind of track that must be filed under “guilty pleasure”. Possibly you love this song. Maybe you get ready to meet your opponents whilst listening to it? If this is you, please don’t read on.
I find it difficult listening to this track because of the timing of the intro. Not sure what I mean?
Here is a waveform of one channel for the intro. Two of the opening phrases are shown underlined. A phrase in this case is: dun, dun-dun-dun, dun-dun-dun, dun-dun-durrrr. Can you see the problem with the second of those two phrases?
Still don’t see it? In the second phrase the second of the dun-dun-duns comes in late.
I’ve overlaid the waveform again to compare phrase 1 with phrase 2.
The difference is an eighth note (quaver) and it drives me nuts. I think it’s intentional because, well, the whole band plays the same thing. I don’t think it’s a tape splice error, because the track sounds live and surely someone must have noticed. Finally, they play these phrases again in the outro and at that point the timing is correct. No, it’s intentional. But why?
From this page Jim Peterik of Survivor says:
I started doing that now-famous dead string guitar riff and started slashing those chords to the punches we saw on the screen, and the whole song took shape in the next three days.
So my best guess is that the notes were written to match the on-screen action!
The video on YouTube is only at 220 million views (at the time of writing). Give it a listen, if my description of dun-dun-duns was not illustrative enough for you.
- The waveform is taken from the Eye of The Tiger album version of the song. I read that the version in the movie is actually the demo version.
- I loaded it into Igor using SoundLoadWave. I made an average of the stereo channels using MatrixOp and then downsampled the wave from 44.1 kHz so it was easier to move around.
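For anyone curious, the commands were along these lines (a sketch only: the path and the wave names sound0/sound1/mono are placeholders, and the downsampling factor is arbitrary; check Igor Help for the exact SoundLoadWave output names):

```igor
SoundLoadWave "path:to:track.wav"       // loads the audio file; creates one wave per channel
MatrixOp/O mono = (sound0 + sound1) / 2 // average the two stereo channels
Resample/DOWN=10 mono                   // downsample from 44.1 kHz for easier handling
```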
A very occasional series on music. The name Bateman Writes refers to the obsessive writings of the character Patrick Bateman in Bret Easton Ellis’s novel American Psycho. This serial killer had a penchant for middle-of-the-road rock act Huey Lewis & The News.
Molecular Biology of The Cell, the official journal of the American Society for Cell Biology, recently joined a number of other periodicals in issuing guidelines for manuscripts, concerning statistics and reproducibility. I discussed these guidelines with the lab and we felt that there are two areas where we can improve:
- blind analysis
- power calculations
A post about power analysis is brewing, this post is about a solution for blind analysis.
For anyone who doesn’t know, blind analysis means that the person doing the analysis is blind to (does not know) the experimental conditions. It is a way of reducing bias, intentional or otherwise, in the analysis of experimental data. Most of our analysis workflows are blinded because a computer does the analysis in an automated way, so there is no way for a human to bias the result. Typically, a bunch of movies are captured, fed into a program using identical settings, and the answer gets spat out. Nothing gets excluded, or the exclusion criteria are agreed beforehand. Whether the human operating the computer is blind to the experimental conditions or not doesn’t matter.
For analysis that has a manual component we do not normally blind the analyser. Instead we look for ways to make the analysis free of bias. An example is using a non-experimental channel in the microscope image to locate a cellular structure. This means the analysis is done without “seeing” any kind of “result”, which might bias the analysis.
Sometimes, we do analysis which is almost completely manual and this is where we can improve by using blinding. Two objections raised to blinding are practical ones:
- it is difficult/slow to get someone else to do the analysis of your data (we’ve tried it and it doesn’t work well!)
- the analyser “knows” the result anyway, in the case of conditions where there is a strong effect
There’s not much we can do about the second one. But the solution to the first is to enable people to blindly analyse their own data if it is needed.
I wrote* a macro in ImageJ called BlindAnalysis.ijm which renames the files in a random fashion** and makes a tsv log of the associations. The analyser can simply analyse blind_0001.tif, blind_0002.tif and so on, then reassociate the results to the real files using this tsv.
The picture shows the macro in action. A folder containing 10 TIFFs is processed into a new folder called BLIND. The files are stripped of labels (look at the original TIFF, left and the blind version, right) and saved with blinded names. The log file keeps track of associations.
I hope this simple macro is useful to others. Feedback welcome either on this post or on GitHub.
* actually, I found an old macro on the web called Shuffler written by Christophe Leterrier. This needed a bit of editing and also had several options that weren’t needed, so it was more deleting than writing.
** it also strips out the label of the file. If you only rename files, the label remains in ImageJ so the analyser is not blind to the condition. Originally I was working on a bash script to do the renaming, but when I realised that I needed to strip out the labels, I needed to find an all-ImageJ solution.
Edit @ 2016-10-11T06:05:48.422Z I have updated the macro with the help of some useful suggestions.
The post title is taken from “Blind To The Truth”, a 22-second-long track from Napalm Death’s 2nd LP “From Enslavement To Obliteration”.
This is a quick post about the punch card feature on GitHub. This is available from Graphs within each repo and is also directly accessible via the API.
I was looking at the punch card for two of my projects: one is work related and the other, more of a kind of hobby. The punch cards were different (the work one had way more commits, 99, than the hobby, 22). There was an interesting pattern to them. Here they are overlaid. Green is the work repo. Purple is the hobby.
It says something about my working day. There are times when I don’t do any committing, i.e. weekends during the day and most early evenings. What was interesting was that I was pretty stringent about doing hobby stuff only at set times: first thing over a coffee, over lunch, or in the evenings.
As self analysis goes, this is pretty lightweight compared to this terrifying post by Stephen Wolfram.
The post title is taken from “Calendars and Clocks” from The Coral’s debut LP.
A couple of recent projects have meant that I had to get to grips more seriously with R and with MATLAB. Regular readers will know that I am a die-hard IgorPro user. Trying to tackle a new IDE is a frustrating experience, as anyone who has tried to speak a foreign language will know. The speed with which you can do stuff (or get your point across) is very slow. Not only that, but… if you could just revert to your mother tongue it would be so much easier…
What I needed was something like a Babel Fish. As I’m sure you’ll know, this fish is the creation of Douglas Adams. It allows instant translation of any language. The only downside is that you have to insert the fish into your ear.
The closest thing to the Babel Fish in computing is the cheat sheet. These sheets are typically a huge list of basic commands that you’ll need as you get going. I found a nice page of cheat sheets allowing easy interchange between R, MATLAB and Python. There was no Igor version. Luckily, a user on IgorExchange had taken the R and MATLAB page and added some Igor commands. This was good, but it was a bit rough and incomplete. I took this version, formatted it for GitHub-flavored Markdown, and made some edits.
The repo is here. I hope it’s useful for others. I learned a lot putting it together. If you are an experienced user of R, MATLAB or IGOR (or better still can speak one or more of these languages), please fork and make edits or suggest changes via GitHub issues, or by leaving a comment on this page if you are not into GitHub. Thanks!
Here is a little snapshot to whet your appetite. Bon appetit!
The post title is taken from “The International Language of Screaming” by Super Furry Animals from their Radiator LP. Released as a single, the flip-side had a version called NoK which featured the backing track to the single. Gruff sings the Welsh alphabet with no letter K.
Ten years ago today I became a PI. Well, that’s not quite true. On that day, I took up my appointment as a Lecturer at University of Liverpool, but technically I was not a PI. I had no lab space (it was under construction), I had no people, and I also had no money for research. I arrived for work. I was shown to a windowless office that I would share with another recent recruit, and told to get on with it. With what I should be getting on with, I was not quite sure.
So is this a cause for celebration? The slow start to my career as an independent scientist makes it a bit difficult to know when I should throw the party. I could mark the occasion of my lab finally becoming ready for habitation. This happened sometime in March 2007. Perhaps it should be when I did the first experiment in my new lab (April 2007). Or it could be when I received notification of my first grant award (Summer 2007), or when I hired the first person, a technician, in October 2007. It wasn’t until December of 2007 when my first postdoc arrived that the lab really got up-and-running. This was when I felt like I was actually a PI.
In retrospect, I am amazed I survived this cold start to my independent career: effectively taking a year-long involuntary break from research. But I was one of the fortunate ones. I was hired at the same time as 6 other PIs-to-be. Over time it was clear that without good support some of us were going to fail. Sure enough, after 18 months, one switched to a career in grant administration in another country. Another left for a less independent position. One more effectively gave up on the PI dream and switched to full-time teaching. But there was success. Two of the other recruits landed grants early and were in business as soon as our labs were renovated. I also managed to get some money. The other person didn’t get a grant until years later, but somehow survived and is still running a group. So of 7 potential group leaders, only 4 ended up running a research group and the success of our groups has been mixed: problems with personnel, renewing funding…
Having a Plan B is probably a good idea. It’s well publicised that the conversion rate from PhD student to Professor is around 0.45% (from a 2010 report in the UK). It’s important to make new students aware of this. Maybe a one-in-200 chance sounds reasonable if they are full of confidence… but they need to realise that even if they persevere down the academic route, they might indeed get a “group leader job” and it still might not work out.
I had no Plan B.
I think there were many things that the University could have done differently to ensure more success among us new starters. The obvious thing would be to give a decent startup package. Recruiting as many people as possible with the money available gets lots of people, but gives them no resources. This isn’t a recipe for success.
Also, hiring seven people with completely unconnected research interests was not a smart thing to do. With nothing in common, any help we could give each other was limited. Moreover, only a few of us had genuine research links to established faculty. This made life even more difficult. Going over this in more detail is probably not appropriate here… I am grateful that I got hired, even if things were not ideal. Anyway, I survived this early phase and my lab began to grow…
Reasons to be cheerful
I have been very fortunate to have had some great people working in my group. The best thing about being a group leader is working with smart people. Seeing each develop as a scientist and progress in their careers… this is undoubtedly the highlight.
With talented people onboard, the group really got going and we began making discoveries. My top three papers which gave me most pleasure were not necessarily our biggest hitters. These are, in chronological order:
- Our first paper from the lab is special because it signified that we were “open for business”. This came in 2009. Fiona Hood and I showed (somewhat controversially) that two clathrin isoforms behaved similarly in cells depleted of endogenous clathrin.
- Dan Booth and Fiona worked together to find the spindle clathrin complex and show that it was a microtubule crosslinker. This paper was the main thing I was aiming to do when I set up my group.
- Anna Willox and I worked on one of my favourite papers showing that there are four interaction sites on the clathrin N-terminal domain. I love this paper because it was a side project for Anna. We made a prediction based on symmetry, and a large dollop of guesswork, which turned out to be right. Very satisfying.
Of course there were many more papers and I’m proud of them all. But these three stand out.
I’m also thankful that I’ve been able to keep the lab afloat financially. Thanks to Cancer Research UK, who funded my lab right at the start and still do today. Also thanks to Wellcome, BBSRC, MRC, North West Cancer Research who all funded important projects in my lab.
The other highlight has been interacting with other groups. There have been some great collaborations; most productively with Ian Prior in Liverpool and Richard Bayliss in Leeds, as well as other stuff which didn’t generate any papers but was still a lot of fun. Moving from Liverpool to Warwick in 2013 opened up so many new possibilities which I am continuing to enjoy immensely.
“The move” was the most significant event in the history of the Royle Lab. It was precipitated by many circumstances, most of which are not appropriate to discuss here. However, the main driver was “being told to get on with it” right at the start. Feeling completely free to do whatever I wanted to do was absolutely fantastic and was one of the best things about my former University. Sometimes though, the best things are also the worst. I gradually began to realise that this freedom came because nobody really cared what I was doing or if my career was a success or not. I also needed more interactions with more cell biologists and this meant moving. Ironically, after I left, the University recruited a number of promising early career cell biologists all of whom I would have enjoyed working alongside.
If you have read this far, I am impressed!
Posts like this should probably end with some pithy advice. Except there’s none I have to offer to people just starting out. Ten years is a long time and a lot has changed. What worked back then probably doesn’t work now. Many of the mistakes I made, maybe you could dodge some of those, but you will make others. That’s OK, we’re all just making it up as we go along.
So, ten years of the Royle Lab (sort of). It’s been fun. I have the best job in the world and there’s lots to celebrate. But this post explains why I won’t be celebrating today.
The post title comes from “Ten Years Gone” by Led Zeppelin from their Double LP “Physical Graffiti”.
I’m currently writing two manuscripts that each have a substantial data modelling component. Some of our previous papers have included computer code, but it was straightforward enough to have the code as a supplementary file or in a GitHub repo and leave it at that. Now with more substantial computation in the manuscript, I was wondering how best to describe it. How much detail is required?
How much explanation should be in the main text, how much is in supplementary information and how much is simply via commenting in the code itself?
I asked for recommendations for excellent cell biology papers that had a modelling component, where the computation was well described.
I got many replies and I’ve collated this list of papers below so that I can refer to them and in case it is useful for anyone who is also looking for inspiration. I’ve added the journal names only so that you can see what journals are interested in publishing cell biology with a computational component. Here they are, in no particular order:
- This paper on modelling kinetochore-microtubule attachment in pombe. Published in JCB there is also a GitHub repo for the software, kt_simul written in Python. The authors used commenting and also put a PDF of the heavy detail on GitHub.
- Modelling of signalling networks here in PLoS Comput Biol.
- This paper using Voronoi tessellations to examine tissue packing of cells in EMBO J.
- Two papers, this one in JCB featuring modelling of DNA repair and this one in Curr Biol on photoreceptors in flies.
- Cell movements via depletion of chemoattractants in PLoS Biol.
- Protein liquid droplets as organising centres for biochemical reactions is a hot topic. This paper in Cell was recommended.
- Final tip was to look at PLoS Comput Biol for inspiration, searching for cell biology topics. Papers like this one on Smoldyn 2.1.
Thanks to Hadrien Mary, Robert Insall, Joachim Goedhart, Stephen Floor, Jon Humphries, Luis Escudero, and Neil Saunders for the suggestions.
The post title is taken from “The Arcane Model” by The Delgados from their album Peloton.
Statistical hypothesis testing, commonly referred to as “statistics”, is a topic of consternation among cell biologists.
This is a short practical guide I put together for my lab. Hopefully it will be useful to others. Note that statistical hypothesis testing is a huge topic and one post cannot hope to cover everything that you need to know.
What statistical test should I do?
To figure out what statistical test you need to do, look at the table below. But before that, you need to ask yourself a few things.
- What are you comparing?
- What is n?
- What will the test tell you? What is your hypothesis?
- What will the p value (or other summary statistic) mean?
If you are not sure about any of these things, whichever test you do is unlikely to tell you much.
The most important question is: what type of data do you have? This will help you pick the right test.
- Measurement – most data you analyse in cell biology will be in this category. Examples are: number of spots per cell, mean GFP intensity per cell, diameter of nucleus, speed of cell migration…
- Normally-distributed – this means it follows a “bell-shaped curve” otherwise called “Gaussian distribution”.
- Not normally-distributed – data that doesn’t fit a normal distribution: skewed data, or better described by other types of curve.
- Binomial – this is data where there are two possible outcomes. A good example here in cell biology would be a mitotic index measurement (the proportion of cells in mitosis). A cell is either in mitosis or it is not.
- Other – maybe you have ranked or scored data. This is not very common in cell biology. A typical example here would be a scoring chart for a behavioural effect with agreed criteria (0 = normal, 5 = epileptic seizures). For a cell biology experiment, you might have a scoring system for a phenotype, e.g. fragmented Golgi (0 = is not fragmented, 5 = is totally dispersed). These arbitrary systems are a not a good idea. Especially, if the person scoring is unblinded to the experimental procedure. Try to come up with an unbiased measurement procedure.
|What do you want to do?||Measurement (normally distributed)||Measurement (not normal)||Binomial|
|Describe one group||Mean, SD||Median, IQR||Proportion|
|Compare one group to a value||One-sample t-test||Wilcoxon test||Chi-square|
|Compare two unpaired groups||Unpaired t-test||Wilcoxon-Mann-Whitney two-sample rank test||Fisher’s exact test|
|Compare two paired groups||Paired t-test||Wilcoxon signed rank test||McNemar’s test|
|Compare three or more unmatched groups||One-way ANOVA||Kruskal-Wallis test||Chi-square test|
|Compare three or more matched groups||Repeated-measures ANOVA||Friedman test||Cochran’s Q test|
|Quantify association between two variables||Pearson correlation||Spearman correlation|
|Predict value from another measured variable||Simple linear regression||Nonparametric regression||Simple logistic regression|
|Predict value from several measured or binomial variables||Multiple linear (or nonlinear) regression||Multiple logistic regression|
Modified from Table 37.1 (p. 298) in Intuitive Biostatistics by Harvey Motulsky, 1995 OUP.
What do “paired/unpaired” and “matched/unmatched” mean?
Most of the data you will get in cell biology is unpaired or unmatched. Individual cells are measured and you have say, 20 cells in the control group and 18 different cells in the test group. These are unpaired (or unmatched in the case of more than one test group) because the cells are different in each group. If you had the same cell in two (or more) groups, the data would be paired (or matched). An example of a paired dataset would be where you have 10 cells that you treat with a drug. You take a measurement from each of them before treatment and a measurement after. So you have paired measurements: one for cell A before treatment, one after; one for cell B before and after, and so on.
How to do some of these tests in IgorPRO
The examples below assume that you have values in waves called data0, data1, data2, … Substitute your actual wave names.
Is it normally distributed?
The simplest way is to plot them and see. You can plot out your data using Analysis>Histogram… or Analysis>Packages>Percentiles and BoxPlot… Another possibility is to look at the skewness or kurtosis of the dataset (you can do this with WaveStats, see below).
However, if you only have a small number of measurements, or you want to be sure, you can do a test. There are several tests you can do (Kolmogorov-Smirnov, Jarque-Bera, Shapiro-Wilk). The easiest to do and most intuitive (in Igor) is Shapiro-Wilk.
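A sketch of the command (I’m assuming the operation name StatsShapiroWilkTest and the usual /T=1 table flag here; check Igor Help if your version differs):

```igor
StatsShapiroWilkTest/T=1 data0  // p < 0.05 suggests the data are not normally distributed
```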
If p < 0.05 then the data are not normally distributed. Statistical tests on normally distributed data are called parametric, while those on non-normally distributed data are non-parametric.
Describe one group
To get the mean and SD (and lots of other statistics from your data):
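The WaveStats operation does this:

```igor
WaveStats data0  // prints mean, SD and many other statistics to the history
```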
To get the median and IQR:
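StatsQuantiles handles this:

```igor
StatsQuantiles/T=1 data0  // median, quartiles, IQR etc. into a table
```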
The mean and sd are also stored as variables (V_avg, V_sdev). StatsQuantiles calculates V_median, V_Q25, V_Q75, V_IQR, etc. Note that you can just get the median by typing Print StatsMedian(data0) or – in Igor7 – Print median(data0). There is often more than one way to do something in Igor.
Compare one group to a value
It is unlikely that you will need to do this. In cell biology, most of the time we do not have hypothetical values for comparison, we have experimental values from appropriate controls. If you need to do this:
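A sketch, comparing data0 to a hypothetical value of 100 (the /MEAN flag sets the comparison value; check Igor Help for StatsTTest to confirm the flags for your version):

```igor
StatsTTest/T=1/MEAN=100 data0  // one-sample t-test against a value of 100
```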
Compare two unpaired groups
Use this for normally distributed data where you have test versus control, with no other groups. For paired data, use the additional flag /PAIR.
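For example (a sketch; /TAIL=4 requests a two-tailed test, and /PAIR would make it a paired test — see Igor Help for StatsTTest):

```igor
StatsTTest/T=1/TAIL=4 data0,data1  // unpaired two-tailed t-test
```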
For the non-parametric equivalent, note that if n is large the computation takes a long time; use the additional flag /APRX=2. If the data are paired, use the additional flag /WSRT.
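A sketch of the non-parametric version (flags as described above; check Igor Help for StatsWilcoxonRankTest):

```igor
StatsWilcoxonRankTest/T=1/TAIL=4/APRX=2 data0,data1  // Wilcoxon-Mann-Whitney two-sample rank test
```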
For binomial data, your waves will have two points, where point 0 corresponds to one outcome and point 1 to the other. Note that you can compare to expected values here; for example, a genetic cross experiment can be compared to expected Mendelian frequencies. To do Fisher’s exact test, you need a 2D wave representing a contingency table. McNemar’s test for paired binomial data is not available in Igor.
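A sketch for the chi-square case, comparing observed counts in data0 to expected counts in data1 (I’m assuming StatsChiTest here; see Igor Help for the contingency-table and Fisher’s exact options):

```igor
StatsChiTest/T=1/Q data0,data1  // chi-square test of observed vs expected counts
```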
If you have more than two groups, do not do multiple versions of these tests, use the correct method from the table.
Compare three or more unmatched groups
For normally-distributed data, you need to do a 1-way ANOVA followed by a post-hoc test. The ANOVA will tell you if there are any differences among the groups and whether it is worth investigating further; the post-hoc test tells you which groups are different. There are several tests available, e.g. Dunnett’s is useful where you have one control value and a bunch of test conditions. We tend to use Tukey’s post-hoc comparison (the /NK flag also does the Newman-Keuls test).
StatsANOVA1Test/T=1/Q/W/BF data0,data1,data2,data3
StatsTukeyTest/T=1/Q/NK data0,data1,data2,data3
The non-parametric equivalent is Kruskal-Wallis followed by a multiple comparison test. Dunn-Holland-Wolfe method is used.
StatsKWTest/T=1/Q data0,data1,data2,data3
StatsNPMCTest/T=1/DHW/Q data0,data1,data2,data3
Compare three or more matched groups
It’s unlikely that this kind of data will be obtained in a typical cell biology experiment.
The StatsFriedmanTest and StatsCochranTest operations cover these.
Quantify association between two variables
Straightforward command for two waves or one 2D wave. Waves (or columns) must be of the same length.
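A sketch for Pearson correlation (I’m assuming the StatsLinearCorrelationTest operation here; for the Spearman version check Igor Help, where I believe a rank-correlation operation is available):

```igor
StatsLinearCorrelationTest/T=1/Q data0,data1  // Pearson correlation of two waves of equal length
```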
At this point, you probably want to plot out the data and use Igor’s fitting functions. The best way to get started is with the example experiment, or just display your data and Analysis>Curve Fitting…
Hazard and survival data
In the lab we have, in the past, done survival/hazard analysis. This is a bit more complex and we used SPSS and would do so again as Igor does not provide these functions.
Notes for use
The good news is that all of this is a lot more intuitive in Igor 7! There is a new Menu item called Statistics, where most of these functions have a dialog with more information. In Igor 6.3 you are stuck with the command line. Igor 7 will be out soon (July 2016).
- Note that there are further options to most of these commands. If you need to see them:
- check the manual or Igor Help
- or type ShowHelpTopic “StatsMedian” in the Command Window (put whatever command you want help with between the quotes).
- Extra options are specified by “flags”, these are things like “/Q” that come after the command. For example, /Q means “quiet” i.e. don’t print the output into the history window.
- You should always either print the results to the history or put them into a table so that we can check them. Note that the table gets overwritten if you do the same test with different data, so printing in this case is a good idea.
- The defaults in Igor are set up OK for our needs. For example, Igor does two-tailed comparison, alpha = 0.05, Welch’s correction, etc.
- Most operations can handle waves of different length (or have flags set to handle this case).
- If you are used to doing statistical tests in Excel, you might be wondering about tails and equal variances. The flags are set in the examples to do two-tailed analysis and unequal variances are handled by Welch’s correction.
- There’s a school of thought that says it is safest to always use non-parametric tests. However, these tests are not as powerful, and so it is best to use parametric tests (t test, ANOVA) when you can.
Part of a series on the future of cell biology in quantitative terms.
This post follows on from “Getting Started“.
In the lab we use IgorPRO for pretty much everything. We have many analysis routines that run in Igor, we have scripts for processing microscope metadata etc, and we use it for generating all figures for our papers. Even so, people in the lab engage with it to varying extents. The main battle is that the use of Excel is pretty ubiquitous.
I am currently working on getting more people in the lab started with using Igor. I’ve found that everyone is keen to learn. The approach so far has been workshops to go through the basics. This post accompanies the first workshop, which is coupled to the first few pages of the Manual. If you’re interested in using Igor read on… otherwise you can skip to the part where I explain why I don’t want people in the lab to use Excel.
IgorPro is very powerful and the learning curve is steep, but the investment is worth it.
These are some of the things that Igor can do:
- Publication-quality graphics
- High-speed data display
- Ability to handle large data sets
- Curve-fitting, Fourier transforms, smoothing, statistics, and other data analysis
- Waveform arithmetic
- Matrix math
- Image display and processing
- Combination graphical and command-line user interface
- Automation and data processing via a built-in programming environment
- Extensibility through modules written in the C and C++ languages
You can even play games in it!
The first thing to learn is about the objects in the Igor environment and how they work. There are four basic objects that all Igor users will encounter straight away.
All data is stored as waveforms (or waves for short). Waves can be displayed in graphs or tables. Graphs and tables can be placed in a Layout. This is basically how you make a figure.
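To see this in action, try the following commands (the wave name myWave is arbitrary):

```igor
Make/O/N=100 myWave = sin(x / 8)  // a wave: 100 points of data
Display myWave                    // a graph showing the wave
Edit myWave                       // a table showing the same wave
NewLayout                         // a layout, to which graphs and tables can be appended
```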
The next things to check out are the command window (which displays the history), the data browser and the procedure window.
- Tables are not spreadsheets! Most important thing to understand. Tables are just a way of displaying a wave. They may look like a spreadsheet, but they are not.
- Igor is case insensitive.
- Spaces. Igor can handle spaces in names of objects, but IMO they are best avoided.
- Igor is 0-based, not 1-based.
- Logical naming and logical thought – beginners struggle with this and it’s difficult to get this right when you are working on a project, but consistent naming of objects makes life easier.
- Programming versus not programming – you can get a long way without programming but at some point it will be necessary and it will save you a lot of time.
Pretty soon, you will go beyond the four basic objects and encounter other things. These include: Numeric and string variables, Data folders, Notebooks, Control panels, 3D plots – a.k.a. gizmo, Procedures.
Why don’t we use Excel?
- Excel can’t make high-quality graphics for publication.
  - We do that in Igor, so any effort spent on figures in Excel is a waste of time.
- Excel is error-prone.
  - It is too easy for mistakes to be introduced.
  - It is not auditable: tough or impossible to find mistakes. Igor, by contrast, has a history window that allows us to see exactly what has happened.
  - Most people don’t know how to use it properly.
- Excel is not good for biological data – the transcription factor Oct4 gets converted to a date.
- Excel is limited to 1048576 rows and 16384 columns.
- Related: a useful link describing some spreadsheet crimes of data entry.
But we do use Excel a lot…
- Excel is useful for quick calculations and for preparing simple charts to show at lab meeting.
- In the same way, PowerPoint is OK for rough figures for lab meeting.
- But neither is publication-quality.
- We do use Excel for Tracking Tables, Databases(!) etc.
The transition is tough, but worth it
Writing formulae in Excel is straightforward, and the first thing you will find is that achieving the same thing in Igor is more complicated. For example, working out the mean of each row in an array (A1:Y20) in Excel would mean typing =AVERAGE(A1:Y1) in cell Z1 and copying this cell down to Z20. Done. In Igor there are several ways to do this, which can itself be unnerving. One way is to use the Waves Average panel, but you need to know how it works to get it to do what you want.
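One scripted route, assuming the array lives in a single 2D wave (the wave names here are mine, and the gnoise line just makes example data), is MatrixOp:

```igor
// Example data: a 2D wave of 20 rows x 25 columns, filled with Gaussian noise
Make/O/N=(20,25) mat = gnoise(1)
// Mean of each row: sum each row, divide by the number of columns
MatrixOp/O rowMeans = sumRows(mat) / numCols(mat)
```

The result, rowMeans, is a 20-point column wave: one mean per row, the equivalent of the copied-down Excel column.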
But before you turn back, thinking “I’ll just do this in Excel and then import it”… imagine you now want to subtract a baseline value from the data, scale it, and then average. Imagine that your data are sampled at different intervals. How would you do that? Dealing with even those simple cases in Excel is difficult-to-impossible. In Igor, it’s straightforward.
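The baseline and scaling steps, for instance, are one-liners in Igor’s waveform arithmetic, and the sampling interval lives in the wave’s scaling rather than in a separate time column. A sketch with made-up numbers:

```igor
// Example trace: 100 points sitting on an offset of ~10
Make/O/N=100 data = 10 + gnoise(1)
// Baseline from the first 10 points, then subtract and scale the whole wave
Variable baseline = mean(data, 0, 9)
data -= baseline
data *= 1.5                        // arbitrary scale factor
// Sampling interval is part of the wave: points 10 ms apart, starting at 0
SetScale/P x, 0, 0.01, "s", data
```

Because the x-scaling travels with the wave, waves sampled at different intervals can still be compared and averaged in their real units.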
Resources for learning more Igor:
- Igor Help – fantastic resource containing the manual and more. Access via Help or by typing ShowHelpTopic “thing I want to search for”.
- Igor Manual – This PDF is available online or in Applications/Igor Pro/Manual. This used to be distributed as a hard copy… it is now ~3000 pages.
- Guided Tour of IgorPro – this is a great way to start and will form the basis of the workshops.
- Demos – Igor comes packed with Demos for most things from simple to advanced applications.
- IgorExchange – Lots of code snippets and a forum to ask for advice or search for past answers.
- Igor Tips – I’ve honestly never used these, but you can turn on tips in Igor, which reveal help on mouseover.
- Igor mailing list – topics discussed here are pretty advanced.
- Introduction to IgorPRO from Payam Minoofar is good. A faster start to learning to program than reading the manual.
- Hands-on experience!
Part of a series on the future of cell biology in quantitative terms.
Something that has driven me nuts for a while is the bug in FIJI/ImageJ when making montages of image stacks. This post is about a solution to this problem.
What’s a montage?
You have a stack of images and you want to array them in m rows by n columns. This is useful for showing a gallery of each frame in a movie or to separate the channels in a multichannel image.
What’s the bug/feature in ImageJ?
If you select Image>Stacks>Make Montage… you can specify how you want to layout your montage. You can specify a “border” for this. Let’s say we have a stack of 12 images that are 300 x 300 pixels. Let’s arrange them into 3 rows and 4 columns with 0 border.
So far so good. We have an image that is 1200 x 900. But it looks a bit rubbish, we need some grouting (white pixel space between the images). We don’t need a border, but let’s ignore that for the moment. So the only way to do this in ImageJ is to specify a border of 8 pixels.
Looks a lot better. OK, there’s a border around the outside, which is no use, but it looks good. But wait a minute! Check out the size of the image (1204 x 904). This is only 4 pixels bigger in x and y, yet we added all that grouting. What’s going on?
The montage is not pixel perfect.
So the first image is not 300 x 300 any more. It is 288 x 288. It seems that, rather than enlarging the canvas to make room for the border, ImageJ shrinks the frames to fit. Hmmm, maybe we can live with losing some data… but what’s this?
The next image in the row is not even square! It’s 292 x 288. How much this annoys you will depend on how much you like things being correct… The way I see it, this is science, if we don’t look after the details, who will? If I start with 300 x 300 images, it’s not too much to ask to end up with 300 x 300 images, is it? I needed to fix this.
I searched for a while for a solution. It had clearly bothered other people in the past, but I guess people just found their own workaround.
ImageJ solution for multichannel array
So for a multichannel image, where the grayscale images are arrayed next to the merge, I wrote something in ImageJ to handle this. These macros are available here. There is a macro for doing the separation and arraying. Then there is a macro to combine these into a bigger figure.
For the exact case described above, where large stacks need to be tiled out into an m x n array, I have to admit I struggled to write something for ImageJ and instead wrote something for IgorPRO. Specifying 3 rows, 4 columns and a grout of 8 pixels gives the correct 1224 x 916 TIFF (4 x 300 + 3 x 8 = 1224 wide, 3 x 300 + 2 x 8 = 916 high), with each frame shown in full and square. The code is available here; it works for 8-bit greyscale and RGB images.
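For the record, this is not the linked code itself, just a minimal sketch (function and wave names are mine) of the pixel arithmetic that makes the tiling come out right, for an 8-bit greyscale stack held in a 3D wave:

```igor
// Tile a 3D wave "stk" into an nRows x nCols montage with "grout" pixels between tiles
Function MakeMontage(stk, nRows, nCols, grout)
	Wave stk
	Variable nRows, nCols, grout
	Variable w = DimSize(stk, 0), h = DimSize(stk, 1)
	// Grout goes between tiles only, so (n - 1) strips per dimension
	Variable outW = nCols * w + (nCols - 1) * grout
	Variable outH = nRows * h + (nRows - 1) * grout
	Make/O/B/U/N=(outW, outH) montage = 255	// white background fills the grout
	Variable i, x0, y0
	For(i = 0; i < DimSize(stk, 2); i += 1)
		x0 = mod(i, nCols) * (w + grout)
		y0 = floor(i / nCols) * (h + grout)
		// Copy frame i into its tile, pixel for pixel: no scaling, no shrinkage
		montage[x0, x0 + w - 1][y0, y0 + h - 1] = stk[p - x0][q - y0][i]
	EndFor
End
```

Because each frame is copied verbatim, a 300 x 300 input tile is guaranteed to be a 300 x 300 output tile.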
I might update the code at some point to make sure it can handle all data types, and to allow labelling, adding a scale bar, and so on.
The post title is taken from “Everything In Its Right Place” by Radiohead from album Kid A.