A practical guide for fluorescent confocal microscopy

by Dirk Bucher

This page is not intended to give a complete description of how a confocal microscope works or of all its possible uses. Rather, I will only attempt to give some practical advice for getting the best possible results for the kinds of things we use the confocal microscope for.
Download a standardized scan sheet here (pdf)



A brief description of the principles of confocal microscopy

The confocal microscope gets its name from the arrangement of the light path. In a confocal microscope, the illumination and detection light paths share a common focal plane, which is achieved by two pinholes that are equidistant to the specimen (see figure). Commonly, Krypton/Argon and Helium/Neon mixed-gas lasers are used, which give you a range of distinct wavelengths (see below). This light is sent through a pinhole and reflected by a beamsplitter to the objective and specimen. The beamsplitter is a dichroic filter that acts as a mirror for the excitation wavelengths and is transparent to all other wavelengths. Therefore, the emitted light from the specimen (which has a wavelength spectrum above the excitation wavelength) can pass through the beamsplitter to the detection pinhole and the detector. (Actually, the beamsplitter has since been replaced by an acousto-optical device, but for understanding the principle this doesn't matter here.) As a consequence of the pinhole arrangement, light arriving at the detector comes predominantly from a narrow focal plane, which improves the z-resolution significantly compared to conventional microscopy. At the high end, it is possible to achieve axial resolution in the submicron range.

In the following, I will try to go through the process of preparing and scanning a fluorescent specimen, explaining the technical features of the confocal microscope to the extent you need to know them in order to set the scanning parameters in a sensible way. For further, somewhat more detailed information about the principles of confocal microscopy, here are some websites that do a fairly good job:

http://www.science.uwaterloo.ca/physics/research/confocal/intro.html
http://www.cs.ubc.ca/spider/ladic/confocal.html
http://swehsc.pharmacy.arizona.edu/exppath/micro/confocal.html
http://www.mih.unibas.ch/Booklet/Booklet96/Chapter1/Chapter1.html
http://www.physics.emory.edu/~weeks/confocal/

...and if you're really keen, there is always "the bible": Pawley JB, ed (1995) Handbook of Biological Confocal Microscopy, 2nd Edition. New York, London: Plenum Press. There are multiple copies of this book floating around at Brandeis. Ask Ed or me.
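To put the "submicron axial resolution" claim above in perspective, here is a common rule-of-thumb formula for confocal axial resolution, sketched in Python. Treat it as order-of-magnitude only: the exact value depends on the pinhole size and on which definition of resolution you use, and the numbers plugged in (543 nm excitation, oil immersion, 1.4-NA objective) are just an illustrative example.

```python
# Sketch: rule-of-thumb confocal axial resolution,
#   dz ~ 1.4 * lambda * n / NA^2
# (wide-pinhole approximation; order-of-magnitude only).
def axial_resolution_nm(wavelength_nm, refractive_index, na):
    """Approximate axial (z) resolution in nm."""
    return 1.4 * wavelength_nm * refractive_index / na**2

# Example: 543 nm line, oil immersion (n ~ 1.518), NA 1.4 objective.
dz = axial_resolution_nm(543, 1.518, 1.4)
print(f"~{dz:.0f} nm")  # ~589 nm, i.e. submicron
```

With a low-NA air objective the same formula quickly climbs to several microns, which is why the text below keeps steering you toward the high-NA oil objectives.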



Selecting a fluorophore

I'm assuming that anyone who reads this knows what a fluorophore is, so I'm only going to talk about how to select the right one for specific applications. Remember that the lasers at the confocal microscope have distinct wavelengths. At Brandeis, the options are: 458, 476, 488, 514, 543, and 633 nm. This means, for example, that it is not a good idea to use Texas Red/Alexa594, since its excitation maximum is far away from both the 543 and 633 nm laser lines.

Single labeling

There are obviously no constraints in terms of separating signals when using single labels. However, a couple of things have to be considered. The maximum optical resolution is limited to about half the wavelength of the light. In theory, this means that fluorophores with short-wavelength (blue light) excitation are better. On the other hand, autofluorescence from endogenous fluorophores in biological tissue decreases with increasing wavelength, so "the redder the better". But keep in mind that fluorophores excited by red light (like Cy5 or Alexa633) emit in the far-red part of the spectrum and are barely or not at all visible to the human eye. That means you can only see your staining under the confocal microscope. For a lot of things it's just nicer to be able to eyeball your preps under a normal epifluorescence microscope first to see if the staining is any good. In addition, fluorophores with green excitation spectra are generally brighter than ones with blue or red excitation spectra. So I would always recommend using fluorophores that are excited by green light (e.g., tetramethylrhodamine [TRITC], Alexa568, Cy3).

Multiple labeling

The good news is that for multiple stains we are no longer limited by the kind of beamsplitters and emission filters we have. I will not go into details here about the technology, but the beamsplitter has been replaced by an acousto-optical device and the emission filters by prism optics.
What this means is that we don't have to worry about the beamsplitter at all (so forget about it), and that we can freely (with 5 nm resolution) select the emission range that we want to "grab" with any of the three detectors. This basically means that separation of signals is now only a matter of the spectral properties of the fluorophores used. The image on the left shows an example of a combination of three fluorophores. As you can see, both the excitation and the emission spectra partly overlap. There are basically three things to consider when you try to get good signal separation into three channels (or two, which is less problematic):

1) You want a good combination of fluorophores. Molecular Probes offers downloadable fluorescence spectra for all of their products (which include more or less everything but the cyanine dyes [Cy2, 3, and 5]): http://www.probes.com/servlets/spectra/. You can also get a file with spectra from here that allows you to plot combinations yourself.

2) You want to set the emission filter ranges so that you get the best possible separation. Keep in mind that signal separation depends not only on the spectral properties but also on the relative intensities of the signals. The spectra above show normalized intensities. In a real situation the relative intensity values at a given wavelength may be completely different between two fluorophores. In general, you will have more channel bleedthrough with lower-NA objectives. For example, I doubt that you can get good separation for the combination above with any of the air objectives.

3) If all else fails, there is always the possibility of achieving signal separation at the level of the excitation rather than the emission. The idea is to use only one of the laser lines at a time and get a significantly reduced response from the fluorophore whose excitation spectrum is further away from this wavelength. Our confocal lets you do that without having to do sequential scans.
It can switch back and forth between scanning frames with different settings. Here's a fairly old reference (out of date in terms of fluorophores, but still useful for general principles): Brelje TC, Wessendorf MW, Sorenson RL (1993) Multicolor laser scanning confocal immunofluorescence microscopy: practical application and limitations. Methods Cell Biol 38:97-181.
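The laser-line argument above can be made concrete with a few lines of Python. This is an illustrative sketch only: the excitation maxima are approximate values of the kind you would read off vendor spectra, and the selection simply picks the nearest line; it ignores how steeply excitation efficiency falls off with distance from the maximum.

```python
# Illustrative sketch: match fluorophores to the nearest available laser line.
# Excitation maxima below are approximate textbook values, used here only
# for illustration.
LASER_LINES = [458, 476, 488, 514, 543, 633]  # nm, the lines available here

EX_MAX = {  # approximate excitation maxima in nm
    "FITC/Alexa488": 495,
    "TRITC/Cy3/Alexa568": 550,
    "Texas Red/Alexa594": 595,
    "Cy5/Alexa633": 650,
}

def nearest_line(ex_max_nm):
    """Return the available laser line closest to an excitation maximum."""
    return min(LASER_LINES, key=lambda line: abs(line - ex_max_nm))

for name, ex in EX_MAX.items():
    line = nearest_line(ex)
    print(f"{name}: ex max {ex} nm -> line {line} nm (off by {abs(line - ex)} nm)")
```

Run like this, Texas Red/Alexa594 lands 38 nm from its nearest line (633 nm), while TRITC sits only 7 nm from the 543 nm line, which is exactly the point made above.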


Mounting the specimen
One thing that people tend to forget is that the most important (and hardest) part of doing microscopy is mounting the specimen. You can have the best stain ever because you spent weeks optimizing some protocol, and it's all useless if you screw up during mounting. Two things are important:

1) You want to preserve the three-dimensional structure of the tissue as well as possible. This is especially true if you want to do morphometrics from 3D reconstructions, but even for simple maximum projections you want the tissue to look as close as possible to what it's like in situ. To achieve that, you want to make sure that you don't do any mechanical damage (see below), and you want to choose the right mounting medium. Potentially, any histochemical step can screw up the structure of the specimen. At least in the case of arthropod wholemount preparations, though, the only real problem seems to be the mounting medium. Mounting in glycerol is a bad idea: it creates torsions and all kinds of anisometric shrinkage artifacts. The agent of choice, in my opinion, is methyl salicylate (wintergreen oil). If you dehydrate carefully (30, 50, 70, 90, 100% EtOH) and then let the tissue sit in 50:50 EtOH:methyl salicylate for an hour or two before mounting in pure methyl salicylate, all you get is a little isometric shrinkage (see Bucher et al (2000) Correction methods for three-dimensional reconstructions from confocal images: I. Tissue shrinking and axial scaling. J Neurosci Methods 100:135-143).

2) You want to make sure that you can use the best possible objective. Higher-resolution objectives have fairly short working distances, and you don't want to end up unable to use one of them just because your cover slip is a mile away from the tissue. In theory, most objectives have their spherical corrections optimized for a 170-micron cover slip, and if you deviate from that you will lose signal. But that is only important if you have a really, really weak signal.
Usually you want a thinner cover slip to gain some working distance. Most companies offer cover slips in different thickness ranges. I recommend ESCO No. 0, which reliably seems to be 100 microns thick. It's nice to have a range of different ones around, because you can also use them to keep the cover slip from touching the specimen (see figure).

Mounting the STG: Here's what I do. I leave the nerves (stn, dvn, avn, agn) fairly long and pin the ganglion into a thinly coated Sylgard dish. After fixing, but before dehydrating (in buffer), I cut the Sylgard around the ganglion with a scalpel but leave everything in place. After dehydration, I take the cut-out piece of Sylgard out of the 100% EtOH and transfer it to a small glass vial filled with the methyl salicylate/EtOH mixture. I leave that in a vaseline-sealed dry glass container (with silica gel on the bottom) to keep the EtOH from drawing water. After 1-2 h, I transfer the Sylgard piece to a Sylgard-coated glass dish and pin it down. I put a drop of pure methyl salicylate on top. Now I cut the nerves proximal to the pins that hold them. Then I carefully place a shard of cover slip underneath the ganglion and use that as a "gurney" to transfer the ganglion to the slide without letting it dry out and without coiling any nerves. I usually glue two cover slips onto a microscope slide with nail polish from the top. The STG goes into the gap between them. I carefully position everything the way I want it, then fill up with mounting medium, remove the cover slip shard, put a cover slip on top, and seal everything with nail polish. Make sure it's clear nail polish, otherwise the color will dissolve into the medium. Using a 100-micron cover slip even lets me use the 100x oil objective we have.



Setting the gain for a scan

It's one thing to have been shown which knobs to turn to get an image, and another to have some understanding of what the knobs are actually doing and how to adjust a number of free parameters during scanning. Basically, the image you will get in a given "optical situation" (i.e., with a given prep and a given objective) depends on three parameters: laser intensity, detector gain, and pinhole size.

1. Laser intensity: Just to clarify, there are two ways of regulating it. The knob on the control panel under the keys regulates the voltage that is applied to the gas. Don't touch that; Ed has set it to a standard and safe value. In the software, you find sliders from 0-100% for every line. These control an acousto-optical tunable filter (AOTF) that attenuates the intensity of the laser in a wavelength-specific way. In theory, you want the laser pretty damn bright, but of course you don't want to bleach your fluorophores too much.

2. Detector gain: The detectors (photomultipliers) amplify the light signal. Again, it would be nice to have them as sensitive as possible. The problem is that they are subject to thermal noise. That means that at too high a voltage, they count photons that are not really there and the image gets noisy. As a rule of thumb, don't set them to more than 500.

3. The pinhole: The pinhole diameter determines the "confocality" of your image. The smaller the pinhole, the narrower the focal plane and the better your axial (z-) resolution. However, the smaller the pinhole, the less signal goes through. I recommend leaving the pinhole alone. By default, the scan software sets it to a specific value for each objective. It's called "1 airy" and refers to the "Airy disc", the diameter of the first maximum of the theoretical point spread function (never mind; if you really want to know, refer to the Pawley book).

In summary, setting the parameters is a tradeoff between different things.
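For the curious, the "1 airy" setting can be made concrete. A minimal sketch using the standard Airy disk formula, with an assumed 1.4-NA oil objective as the example; the actual pinhole diameter is this value scaled by the magnification between specimen and pinhole plane, which varies by instrument.

```python
# Sketch: the "1 airy" pinhole corresponds to the Airy disk diameter.
# Projected into the specimen plane, d = 1.22 * lambda / NA (the diameter
# of the first dark ring of the point spread function).
def airy_disk_diameter_nm(wavelength_nm, na):
    """Airy disk diameter in the specimen plane, in nm."""
    return 1.22 * wavelength_nm / na

# Example: 543 nm excitation with a 1.4-NA oil objective.
d = airy_disk_diameter_nm(543, 1.4)
print(f"~{d:.0f} nm")  # ~473 nm
```

Closing the pinhole below this value buys very little extra resolution but throws away a lot of signal, which is why leaving it at "1 airy" is a sensible default.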
Here is my default way of doing it:
1. Make sure the pinhole size is at "1 airy"
2. Set the detector gain to 500.
3. Use the laser intensity to get a bright enough image. If you use the "glowOver" colormap in the scan software, the brightest value (255)* shows in blue. Set the laser intensity so that blue just starts to show up in the brighter parts of your staining.

*The default setting for pixel depth (how many gray values there are to code for intensity) is 8-bit (0-255). This is not always sufficient for the intensity differences found in the kinds of stainings we produce, and you may find that you have to overamplify the bright parts in order to see the faint parts. There is the option to use 12-bit coding (0-4095). However, I don't recommend it: your image stacks get really large, and not all image software can read 12-bit.
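The size penalty is easy to work out. A sketch of the arithmetic, assuming 12-bit data are stored as 2 bytes per pixel (the usual case) and a hypothetical 100-slice stack:

```python
# Sketch: why 12-bit stacks get large. Assumes 12-bit values are stored
# as 2 bytes per pixel, versus 1 byte per pixel for 8-bit.
def stack_megabytes(width, height, n_slices, bytes_per_pixel, n_channels=1):
    """Uncompressed stack size in MB (1 MB = 1024*1024 bytes)."""
    return width * height * n_slices * n_channels * bytes_per_pixel / 1024**2

eight = stack_megabytes(1024, 1024, 100, 1)   # 8-bit
twelve = stack_megabytes(1024, 1024, 100, 2)  # 12-bit stored as 16-bit
print(f"8-bit: {eight:.0f} MB, 12-bit: {twelve:.0f} MB")  # 100 MB vs 200 MB
```

Multiply by the number of channels, and a triple-labeled 12-bit stack at 1024 x 1024 quickly outgrows what the analysis software of the day can handle comfortably.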



Image resolution and dimensions, part 1

Let's define the terminology first. The actual optical resolution is not trivial to determine exactly. It is defined as the smallest distance two point sources of light can have and still be resolved as separate in the image. This does NOT mean that structures smaller than that distance cannot be seen; it just means that we can't determine their actual size anymore. In the following, I'm mainly referring to "high-end" imaging with high-resolution (large-NA) oil objectives. You may be using air objectives for overviews and large structures, and for the convenience of easy digitization, but you don't really need to worry too much about resolution with those. It's gonna be crappy. In the physicist's paradise, the optical resolution depends only on the wavelength, the refractive indices of the media in the light path, and the opening angle of the objective. I'll spare you the equations because they are of little use here. In real microscopy, the optical resolution also depends on many things that degrade the image. So all you gotta know is a couple of rules of thumb for digitizing in an appropriate manner, which brings us to the image resolution.

Just to make this clear: I may be switching back and forth between the terms "pixel" and "voxel". For our purposes, the distinction is not that important. Pixels are the "image atoms" in 2D. The confocal microscope produces a stack of 2D images and keeps the step size information (the distance between two consecutive 2D images). If you think about this in a different way, you add an axial dimension to every pixel, which is what is called a "voxel" (I guess a conflation of "volume" and "pixel"). For 3D visualization algorithms, the voxel z-dimension is important, but for scanning it doesn't matter whether you "think in" step size or voxels.

1) xy image resolution

As a rule of thumb, the absolute maximum optical resolution you can achieve is half the wavelength of the excitation light (i.e., ~229-317 nm for the laser lines we use [458-633 nm]). You are never going to reach that, but you still want to make sure that your image resolution always exceeds the optical resolution. The image resolution is simply given by the pixel grid you are using to digitize the image and the field of view of the objective. The default setting on our microscope is: field of view [microns] = 15000 / magnification. The figure has a list of what that means for the different objectives we have. The standard digitizing grid is 512 x 512 pixels. This is good enough for many things (see the voxel size values in the figure). However, you will gain a little when you switch to a 1024 x 1024 grid. Keep in mind, though, that this makes your image stacks 4 times larger!

Note: Leica made the size of the field of view a little too large for accuracy. With a digitizing field of view that covers most of the objective's field of view, the edges suffer a bit from spherical aberrations (and maybe from the larger scanning mirror angles). You can avoid that either by using a smaller field of view (using the zoom function), or by making sure that tiles in successive scans really overlap by ~25%.
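The field-of-view rule above translates directly into pixel sizes. A quick sketch, using the 15000/magnification default; the magnifications below are just example values, so check the figure for the actual objectives we have.

```python
# Sketch of the rule above: field of view [um] = 15000 / magnification
# (the default on this microscope); pixel size = FOV / digitizing grid.
def pixel_size_um(magnification, grid=512, fov_constant=15000):
    """Pixel size in microns for a square digitizing grid."""
    fov_um = fov_constant / magnification
    return fov_um / grid

for mag in (10, 40, 63, 100):  # example magnifications
    print(f"{mag}x: FOV {15000 / mag:.0f} um, "
          f"pixel {pixel_size_um(mag) * 1000:.0f} nm at 512, "
          f"{pixel_size_um(mag, grid=1024) * 1000:.0f} nm at 1024")
```

At 100x, for example, the 512 grid gives ~293 nm pixels, which is right around the theoretical resolution limit; switching to 1024 (~146 nm pixels) is what guarantees that the image resolution exceeds the optical resolution.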



Image resolution and dimensions, part 2


The optical z-resolution is a lot worse than the lateral resolution, even in confocal microscopy. There is a mountain of literature about point spread functions, deconvolution, etc.; again, this is just a practical guide. In my experience, getting just below 1 micron z-resolution is about what you can hope for. The image z-resolution is approximately set by the step size of the microscope stage (see below for why this is only approximate). To be on the safe side, you can aim for 1x to 2x the xy-voxel size with standard settings. If you don't care too much about z-resolution and want to use fewer slices, you can always open the pinhole wider so you don't lose information (and intensity) between slices. Concerning z-steps, you have to keep two things in mind: axial scaling and chromatic shift.

a) Axial scaling: Light refraction depends on the specific properties of the medium, i.e., the refractive index determines how light is refracted at an interface between two different media. You can have an objective with all kinds of corrections, but the objective doesn't "know" what mounting medium you use. Therefore, the z-steps of the microscope stage are not identical to the steps of the focal plane in the specimen. Usually that means that your actual z-step is bigger than the microscope tells you, and therefore the axial dimension in uncorrected images is smaller than the real one. For low-NA objectives, it is easy to calculate the scaling using simple geometric optics (Snell's law of refraction, etc.). Here's a pdf that explains it. For higher-NA objectives this doesn't work, because you have to take all rays into account, and not just the peripheral ones. At some point we did some wave-optics calculations (and when I say "we" I mean someone I was working with). Here are the results for a range of NAs. As you can see, the effect is not that bad for oil objectives, but serious for air objectives.
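For the low-NA (paraxial) case, the correction is just the ratio of refractive indices: the focal plane moves by n_specimen / n_immersion microns for every micron of stage travel. A sketch, using an approximate refractive index for methyl salicylate (n ~ 1.536) and standard immersion values; these numbers are illustrative, not calibrated for our setup.

```python
# Sketch of the low-NA (paraxial) axial scaling correction:
# actual z-step = nominal z-step * n_specimen / n_immersion.
# Refractive indices are approximate (methyl salicylate n ~ 1.536).
def corrected_z_step(nominal_step_um, n_specimen, n_immersion):
    """Actual focal-plane step in the specimen, in microns."""
    return nominal_step_um * n_specimen / n_immersion

air = corrected_z_step(1.0, 1.536, 1.000)  # air objective (n = 1.000)
oil = corrected_z_step(1.0, 1.536, 1.518)  # oil immersion (n = 1.518)
print(f"air: {air:.2f} um per nominal 1 um, oil: {oil:.2f} um")
```

Consistent with the wave-optics results mentioned above, the mismatch is ~1% for oil immersion but >50% for an air objective, so uncorrected stacks taken with air objectives are badly compressed along z.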
b) Chromatic shift: The refractive index differs for different wavelengths. The objectives are mostly corrected for chromatic aberrations within their own light path, but it seems that the cover slips in particular are not that good at handling far-red wavelengths. Luckily, for geometric reasons, mismatches between wavelengths in the cover slip only produce shifts, not scaling errors. This means that if you do high-resolution scans with different color channels, the channels can have an offset of up to a couple of microns, which is bad news if you are looking for colocalization in fine structures. You can test this by coupling different secondary markers to the same staining and looking for offsets between the color channels.
