STED-like Patterned Excitation Microscopy

Many biological and medical problems could be advanced if living cells could be imaged at high resolution. Unfortunately, the high resolution imaging techniques currently available have limitations. Electron microscopy, for example, offers enormous improvements in resolution, but requires the sample to be cut into thin slices and placed in vacuum. Near-field scanning optical microscopy can be used on samples under physiological conditions and gives a significant resolution improvement, but is restricted to studying surfaces. It is therefore desirable to improve the resolution of far-field techniques. In this study, the theories behind two of the most commonly used 'diffraction-limit beating' far-field techniques, saturated patterned excitation microscopy (SPEM) and stimulated emission depletion microscopy (STED), are derived. The resolution improvements these techniques have given in experiment are also summarised. An alternative method of introducing non-linearities into patterned excitation microscopy, using a technique based on STED, is then investigated.

Example System: Secretory Vesicle Fusion and Exocytosis

While high resolution imaging of living samples would of course be useful in many areas of biology, the specific example of secretory vesicles will be investigated in this study. Two commonly studied types of secretory vesicle are synaptic vesicles and large dense core granules.

Synaptic vesicles store neurotransmitter and are located in the pre-synaptic terminal of neurons. They release their contents into the synaptic cleft by fusing with the membrane, a process regulated by voltage-dependent calcium channels. Vesicles at the synapse can be divided into three groups: the readily releasable pool, the recycling pool and the reserve pool (Rizzoli et al, 2005)3). Synaptic vesicles in the readily releasable pool are docked to the cell membrane and are the first group to be released. The recycling pool is larger than the readily releasable pool but takes longer to be mobilised; once the readily releasable pool is exhausted, the recycling pool is cycled. The reserve pool contains the majority of vesicles in the nerve terminal. Its vesicles have been shown experimentally to be released under intense stimulation, but are thought not to be released under normal conditions (Rizzoli et al, 2005)3).

Figure 1: The synaptic vesicle cycle. Image taken from http://www.wormbook.org/chapters/www_synapticfunction/synapticfig1.jpg

Synaptic vesicles follow a few key steps to release their contents (Südhof, 2004)4). Firstly, they are trafficked to the synapse using kinesin motors. They are then loaded with neurotransmitter in a process controlled by electrochemical gradients created by proton pumps. Next they dock to the cell membrane close to their release site before being primed. Priming prepares the synaptic vesicles so that they can respond quickly when triggered to release. The synaptic vesicles fuse with the membrane and release their contents into the synaptic cleft when triggered by a calcium influx. There is debate as to whether all vesicles undergo complete fusion followed by endocytosis from the cell membrane. An alternative hypothesis is that some vesicles can follow a 'kiss-and-run' process, in which the fusion pore opens only just wide enough to let the neurotransmitter out before closing again, and the vesicle never fully fuses. Imaging vesicle behaviour at high resolution as vesicles bind to the membrane and secrete would be useful in studying this.
Another area of open debate concerns synapses that release more than one vesicle at once. It is unknown whether the vesicles fuse with each other before fusing with the membrane, or whether they bind to the membrane separately but release at the same time (He et al, 2009)7).

Synaptic vesicles are approximately 40nm in size (Qu et al, 2009)2) and can move at around 2nm/ms (Westphal et al, 2008)8). Therefore, in order to track single vesicle motion, resolution of the order of 40nm would be required. Images would also need to be taken at a high enough frame rate to follow the vesicles as they move. Vesicles could be tracked if the distance they move between images is of the same order as their size. Therefore, if they have a diameter of 40nm and move at 2nm/ms, images must be taken every 20ms (50 images per second).

Large dense core granules store peptide hormones and are much larger and slower than synaptic vesicles, and therefore easier to image. They are around 70-200nm in size. Unlike synaptic vesicles they are located all over the cell, often distant from the active zone, and there are usually fewer large dense core vesicles in a cell than synaptic vesicles. Neuropeptides tend to have longer lasting effects than neurotransmitters and often correspond to specific behaviours; around 100 are known5). Large dense core granule motion has been studied using confocal microscopy (Barg et al, 2002)6). However, the maximum resolution achievable using confocal microscopy is around 200-250nm, so if two vesicles move close to one another they can no longer be resolved individually. It would therefore be useful to study them at a resolution smaller than their size, to be able to track each vesicle more precisely. Consequently, even if synaptic vesicles proved too small and fast to image using the new method, it would still be useful to apply the technique to large dense core granules. The fastest that large dense core vesicles were seen to move was 1.5nm/ms (Barg et al, 2002)6). Therefore, if the vesicles could be studied with a frame rate of roughly 10 images per second and a resolution of around 100nm, it would be possible to track them more accurately than in previous studies.

Abbe's Diffraction Limit and the OTF

Abbe's diffraction limit defines the maximum resolution that can be achieved using conventional light microscopy techniques. It can be derived simply by calculating the maximum spatial frequency of an object which will diffract light into the microscope aperture. Any periodic object can be represented as an infinite series of sine and cosine terms:

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left[ a_n \cos(nx) + b_n \sin(nx) \right]

where the a_n and b_n coefficients are related to the overlaps between the function f(x) and the sine and cosine waves in the decomposition. When illuminated by plane parallel light, each cosine and sine term will diffract light at an angle related to the spatial frequency of that term; the sinusoidal terms with higher spatial frequencies diffract light at greater angles. Due to the finite size of the microscope aperture, not all of these diffracted frequencies will enter the microscope. The microscope therefore cuts off the high frequency terms from the sum:

f(x) = \frac{a_0}{2} + \sum_{n=1}^{N} \left[ a_n \cos(nx) + b_n \sin(nx) \right], \quad N > 0

To find the maximum spatial frequency resolved by the microscope, it is easiest to study the simplest possible object: one that can be represented by a single sine pattern.
The spatial frequency of this sine pattern can then be varied to find the limit at which the diffracted light no longer enters the microscope. If the object is illuminated by plane parallel, coherent, monochromatic light, the maximum of the resulting diffraction will be at an angle

\vartheta = \sin^{-1}(s\lambda) \approx s\lambda

where s is the spatial frequency of the object and λ is the wavelength of the illuminating light.

Figure 2: A sinusoidal object will diffract plane parallel light into two beams at an angle ϑ which depends on the spatial frequency s and the wavelength of light λ.

The maximum angle at which light will enter the microscope is defined by the geometry of the microscope system:

\vartheta_{max} = \tan^{-1}(a/R) \approx a/R

where a is the radius of the microscope aperture and R is the aperture-object distance. Therefore the maximum resolvable spatial frequency is

s_{max} = \frac{a}{\lambda R}

and so the minimum detectable period, d, is

d_{min} = \frac{\lambda R}{a}

The corresponding minimum detectable angle at the microscope is given by

\alpha = \frac{d_{min}}{R} = \frac{\lambda}{a}

Figure 3: The smallest resolvable angle in the microscope is given by the wavelength of light divided by the radius of the microscope aperture. The focal length of the lens, f, should of course equal the object to microscope distance for the object to be in focus (i.e. f = R).

It is also common to look at the radius of the smallest detectable spot, r, rather than the smallest detectable 1D spatial separation, d:

r = \frac{d_{min}}{2} = \frac{\lambda R}{2a}

The numerical aperture of the lens is NA = n \sin\theta = n \sin(\tan^{-1}(a/R)) \approx na/R, and so (taking n ≈ 1 in this geometry)

r = \frac{\lambda}{2\,NA}

This is Abbe's diffraction limit.

The diffraction limit defines the minimum possible length over which the point spread function must be non-zero. The point spread function (PSF) of a microscope defines how images are blurred by the microscope (figure 4). It is defined as the response of the imaging system to a point source. Imaging any object or set of objects through a microscope is equivalent to convolving the emitted image with the PSF:

D(r) = \int_{-\infty}^{\infty} O(u)\,\rho(r-u)\,du = O(r) \otimes \rho(r)

where D(r) is the image in the microscope, O(r) is the image emitted by the object and ρ(r) is the PSF of the microscope. ⊗ is used as shorthand for a convolution. The highest numerical aperture given by modern microscopes is around 1.4, so for green light (wavelength ~550nm) the maximum resolution is around 200-250nm.

Figure 4: The image in the microscope is given by the convolution of the object image with the point spread function. The object image is smeared by the PSF, and so minimising the size of the PSF gives higher resolution. This corresponds to extending the region of frequency space covered by the optical transfer function. Image from: http://upload.wikimedia.org/wikipedia/commons/c/c2/Convolution_Illustrated_eng.png

The Fourier transform of the PSF is the optical transfer function (OTF). This defines the range of spatial frequencies which the microscope can resolve; the diffraction limit defines the frequency beyond which the OTF is zero valued. In frequency space:

D(k)^F = O(k)^F\, OTF(k)

where F has been used to signify a Fourier transform and the convolution has become a multiplication according to the convolution theorem.
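A minimal numerical sketch of these two results (Abbe's limit and image formation through the OTF) is given below in Python/NumPy. The wavelength, numerical aperture, object and idealised hat-shaped OTF are illustrative assumptions; the sinc/hat pair is chosen to match the PSF used in the model later in this report.

```python
import numpy as np

# Abbe's limit, d = lambda / (2 NA), with the illustrative values quoted in the text.
wavelength = 550e-9                        # green light (m)
NA = 1.4                                   # high-end oil-immersion objective
print("Abbe limit ~", wavelength / (2 * NA), "m")     # ~2e-7 m, i.e. the ~200-250nm above

# Image formation via the convolution theorem: D(k)^F = O(k)^F * OTF(k).
N = 4096
L = 64 * np.pi                             # domain length chosen so both cosines fit exactly
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N) # angular spatial frequencies
obj = 1 + np.cos(0.5 * x) + np.cos(3.0 * x)    # k = 0.5 resolvable, k = 3 not (for k_max = 1)

otf = (np.abs(k) <= 1.0).astype(float)     # idealised "hat" OTF with cut-off k_max = 1
image = np.fft.ifft(np.fft.fft(obj) * otf).real
print("k = 3 component removed:", np.allclose(image, 1 + np.cos(0.5 * x)))
```

The object component above the cut-off is simply absent from the image, which is the loss of information that the techniques below try to recover.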
Structured Illumination to achieve a two-fold increase in resolution (PEM)

Several papers have been published demonstrating that illumination with patterned light can be used to double the resolution limit. These include laterally modulated excitation microscopy (LMEM) (Heintzmann and Cremer, 1999)21), structured illumination microscopy (Gustafsson, 2000)16) and harmonic excitation light microscopy (HELM) (Frohn, Knapp and Stemmer, 2000)18). All of these methods are based on using patterned light to shift Fourier components into the observable region of frequency space and hence (as suggested in Heintzmann, 2003)9)) they will collectively be referred to as patterned excitation microscopy (PEM) in this study. It has also been shown that patterned light can be used to obtain optical sectioning (Neil, Juskaitis and Wilson, 1997)12).

The two-fold increase in resolution can easily be derived mathematically. At low excitation intensity, the emitted object image (before being smeared by the microscope PSF) is simply a linear multiplication of the excitation illumination, I(r), and the spatial dependence of the object, S(r):

O(r) = S(r)\,I(r)

D(r) = (S(r)\,I(r)) \otimes \rho(r)

and so, via the convolution theorem,

D(k)^F = (S(k)^F \otimes I(k)^F)\, OTF(k)

In PEM, the excitation illumination is a periodic pattern instead of just a simple laser spot, i.e. it is of the form of a cosine curve with a constant term added to ensure there are no negative illumination values:

I(r) = I_0 \left[ 1 + \cos(k_0 r + \Phi) \right]

If this is substituted into the previous equation and the Fourier transform is taken, it can be seen that new information has been shifted into the resolvable region of frequency space:

D(k)^F = I_0 \left[ S(k)^F + 0.5\, S(k+k_0)^F e^{-i\Phi} + 0.5\, S(k-k_0)^F e^{i\Phi} \right] OTF(k)

The S(k)^F term relates to the image seen in conventional microscopy. In the second and third terms there has been a frequency shift of ±k_0. Multiplying by the OTF, which cuts off all frequencies greater than those allowed by the diffraction limit, leads to three resolvable regions:

|k| \le k_{max}, \quad |k + k_0| \le k_{max}, \quad |k - k_0| \le k_{max}

Figure 5: The resolvable region of frequency space is doubled in one dimension as a result of using patterned excitation light. The image on the left represents the region of frequency space that is resolvable through confocal microscopy. The image on the right shows the extra regions which are resolvable using PEM, and hence the 1D doubling of resolution. Diagram taken from Gustafsson, 200016).

k_0 is also constrained by the diffraction limit, since this defines the sharpest illumination pattern which can be emitted and focused from a laser: |k_0| ≤ k_max. Therefore the resolvable region of the frequency spectrum has doubled:

|k| \le 2 k_{max}

The two shifted regions of frequency space need to be separated from the original image and shifted back to their proper positions in frequency space. To achieve this, three images must be taken with three different values of the phase offset Φ: {Φ_1, Φ_2, Φ_3}. Inverting the matrix resulting from the three images then separates the components:

\begin{bmatrix} D_1(k) \\ D_2(k) \\ D_3(k) \end{bmatrix}
= I_0
\begin{bmatrix}
1 & 0.5\,e^{-i\Phi_1} & 0.5\,e^{i\Phi_1} \\
1 & 0.5\,e^{-i\Phi_2} & 0.5\,e^{i\Phi_2} \\
1 & 0.5\,e^{-i\Phi_3} & 0.5\,e^{i\Phi_3}
\end{bmatrix}
\begin{bmatrix} S(k)\,OTF(k) \\ S(k+k_0)\,OTF(k) \\ S(k-k_0)\,OTF(k) \end{bmatrix}

The resulting three components can then be shifted to give S(k)OTF(k), S(k)OTF(k-k_0) and S(k)OTF(k+k_0). Due to the linearity of the Fourier transform, this could also be performed in real space, where the shifts would be performed by multiplying by e^{\mp i k_0 r}.

To achieve a two-dimensional resolution increase, this process must be repeated for at least three different pattern angles to cover the entire region of frequency space. Therefore, a minimum of nine images must be taken, which limits the technique to studying biological features that do not move significantly, relative to the resolution, within the time required to obtain them.
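The phase-stepping separation can be sketched numerically as follows (Python/NumPy; the synthetic object, pattern frequency and sinc PSF are assumptions for illustration, not the report's Mathematica implementation). Three phase-shifted images are simulated and the 3×3 mixing matrix above is inverted to recover the unshifted and ±k_0-shifted components, which would then be moved back by ∓k_0 before recombining.

```python
import numpy as np

# Sketch of the three-phase separation step in linear PEM (synthetic 1D example).
x = np.linspace(-100, 100, 8001)
dx = x[1] - x[0]
S = 1 + np.cos(1.6 * x)              # object frequency above the confocal cut-off k_max = 1
psf = np.sinc(x / np.pi) / np.pi     # sinc PSF -> unit-height hat OTF with cut-off |k| = 1
k0 = 0.9                             # pattern frequency, itself limited by |k0| <= k_max
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]

images = []                          # one recorded image per phase offset
for phi in phases:
    I = 1 + np.cos(k0 * x + phi)
    D = np.convolve(S * I, psf, mode="same") * dx
    images.append(np.fft.fft(D))

# Mixing matrix relating the images to [S(k), S(k+k0), S(k-k0)] (each times the OTF)
M = np.array([[1.0, 0.5 * np.exp(-1j * p), 0.5 * np.exp(1j * p)] for p in phases])
components = np.linalg.solve(M, np.array(images))
# components[1] and components[2] carry object frequencies (here around 1.6) that
# the conventional image, components[0], cannot contain.
```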
Figure 6: Resolution can be increased in 2D (in the focal plane) by changing the angle at which the pattern is projected onto the object. However, this increases the time required for image capture. Diagram taken from Gustafsson, 200016).

A 2D periodic pattern can also be used (e.g. Frohn, Knapp and Stemmer, 2000)18) to achieve a maximum theoretical two-fold resolution improvement in both dimensions. In this case five images must be taken to separate out the original image, the two shifted k_x frequency regions and the two shifted k_y frequency regions. To properly cover the entire frequency space, this would need to be repeated at two different angles 45 degrees apart.

Inclusion of non-linearities to achieve a theoretically infinite increase in resolution (SPEM)

In the previous derivation it was presumed that, at low excitation intensity I(r), the emission from the object, O(r), is proportional to the excitation intensity multiplied by the spatial dependence of the object, S(r):

O(r) = S(r)\,I(r)

Recent studies (Heintzmann, Jovin and Cremer, 2002)11); Gustafsson, 2005)10)) have looked at situations where this approximation no longer holds. In this case, the non-linear dependence of the emission on the excitation intensity can be expanded as a Taylor series:

O(r) = O(S(r), I(r)) = S(r)\left[ c_0 + c_1 I(r) + c_2 I^2(r) + c_3 I^3(r) + \cdots \right] = S(r)\,Em(r)

and the image in the microscope is then given by this convolved with the PSF:

D(r) = (S(r)\,Em(r)) \otimes \rho(r)

Em(r) is the spatially dependent emitability,

Em(r) = \sum_{n=1}^{N} c_n I(r)^n

where constant offsets (terms with no dependence on I(r)) have been left out. In Fourier space:

D(k)^F = (S(k)^F \otimes Em(k)^F)\, OTF(k)

Due to the linearity of the Fourier transform, each term in Em(k)^F can be convolved with S(k)^F individually. This leads to terms like:

1) S(k)^F \otimes I(k)^F
2) S(k)^F \otimes [\,I(k)^F \otimes I(k)^F\,]
...
n) S(k)^F \otimes [\,I(k)^F \otimes I(k)^F \otimes \cdots\,]

where n is the order of the non-linearity.

Figure 7: The resolvable region of frequency space can be increased further by introducing non-linearities in the emission. The graph on the left shows the resolvable region from confocal microscopy. The central graph shows the 1D increase in resolution from SPEM. The graph on the right shows the 2D resolution increase in the focal plane that could be achieved by varying the angle of the excitation pattern. The black region in each graph is the region resolvable by confocal microscopy, dark grey regions are those resolvable by linear patterned excitation microscopy and light grey regions show the extra resolution increase as a result of including non-linearities. Diagram taken from Gustafsson, 200510).

These terms shift further information into the resolvable region of frequency space. If the cosine illumination pattern is used for I(r), it can be seen that the mth term will contain components like S(k ± m k_0)^F OTF(k). If the non-linearity can be described by a polynomial of order n, the resolvable region of frequency space is extended to (n+1) k_max.

Heintzmann et al looked at saturation of the first excited state to produce the non-linearity. In this case the polynomial has infinite order, and so the maximum resolution is theoretically infinite. In practice, however, resolution will be restricted by signal to noise ratios and the time required for image capture.
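The mechanism can be illustrated numerically: a non-linear emission response applied to a pure cosine illumination generates harmonics at m·k_0, and it is these harmonics that carry the extra frequency shifts. The saturating response used in the sketch below is an assumed, illustrative functional form, not a model of a specific fluorophore.

```python
import numpy as np

# Harmonic content of the effective emission pattern for a linear and a
# saturating (assumed, illustrative) response to I(x) = 1 + cos(k0 x).
x = np.linspace(0, 200 * np.pi, 2**16, endpoint=False)
k0 = 1.0
I = 1 + np.cos(k0 * x)

responses = {"linear": I, "saturating": I / (1 + I)}
for label, Em in responses.items():
    spectrum = np.abs(np.fft.rfft(Em - Em.mean()))
    k = 2 * np.pi * np.fft.rfftfreq(x.size, d=x[1] - x[0])
    harmonics = k[spectrum > 0.01 * spectrum.max()]
    print(label, "-> components near k =", np.round(harmonics, 2))
# The linear case contains only k0; the saturating case also contains 2*k0, 3*k0, ...
```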
Heintzmann et al call this method saturated patterned excitation microscopy (SPEM). Heintzmann et al11) use the approximation that the essential characteristics of the emission-absorption relationship are not lost by excluding the triplet state from the calculations: the triplet state only slightly modifies the general form of the two-state equation, and so the two-state equation can be used:

O(r) \propto \frac{k_f\,\sigma P_{ex}}{\frac{1}{\tau} + \sigma P_{ex}}

where k_f is the radiative rate constant, σ is the absorption cross section, τ is the fluorescence lifetime and P_ex(r) is the photon flux, which is proportional to I(r). Therefore, when σP_ex ≪ 1/τ the emission depends linearly on I, while when σP_ex ≫ 1/τ there is a plateau.

As with the linear patterned light case, several images are required to separate the shifted components from the original image. If m orders of intensity are studied, s = (2m+1) images with varying phase are required to separate out the components. As with the linear emission case, this process essentially involves inverting the matrix M defined by the equations:

D_j^F(k) = \sum_{l=-m}^{m} M_{jl}\, C_l^F(k), \qquad M_{jl} = c'_l \exp(2\pi i\, j l / s), \quad l \in \{-m, \ldots, m\}, \; j \in \{0, \ldots, s-1\}

This also needs to be repeated with the excitation pattern at different angles to increase resolution in 2D in the focal plane. The technique has been extended to include the use of a two-dimensional saturation pattern (Heintzmann, 2003)9). Saturated patterned excitation microscopy has been used to achieve resolution of less than 50nm (Gustafsson, 2005)10), (Rego et al, 2011)1).

Stimulated Emission Depletion Microscopy

Stimulated emission depletion microscopy (STED) is another method that takes advantage of a non-linear dependence of emission on excitation intensity to achieve resolution beyond the diffraction limit (proposed: Hell and Wichmann, 199414); first experimental evidence: Klar and Hell, 199917)). In this case, the resolution barrier is broken by quenching the excited molecules around the edge of the focal spot through stimulated emission using a second laser. Only the molecules at the very centre of the central maximum then emit, and so the spot size is reduced.

The 1994 Hell and Wichmann paper14) describes how the resolution can be improved in one dimension for a typical focal spot; they also performed simulations to calculate the potential resolution gain. A fluorophore with two electronic states, S0 and S1, was studied (figure 8).

Figure 8: The energy levels required for stimulated emission depletion microscopy. S0 and S1 correspond to the ground and first excited electronic states of the fluorophore. L0 and L3 are the ground and an excited vibrational state of S0, while L2 and L1 are the ground and an excited vibrational state of S1. Taken from Hell and Wichmann, 199414).

L0 and L3 are the ground vibrational state and an excited vibrational state of S0 respectively. Likewise, L2 and L1 are the ground and an excited vibrational state of S1 respectively. In the simulation, a standard PSF related to the first order Bessel function J1 was used:

h_{exc}(v) = const \cdot \left| \frac{2 J_1(v)}{v} \right|^2

where v is the optical unit in the focal plane:

v = \frac{2\pi r\, NA}{\lambda_{exc}}

NA is the numerical aperture, r is the radial distance from the optical axis in the focal plane and λ_exc is the wavelength of the excitation light.
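For reference, this diffraction-limited excitation PSF can be evaluated directly; in the sketch below the numerical aperture and wavelength are illustrative assumptions.

```python
import numpy as np
from scipy.special import j1

# The diffraction-limited excitation PSF used in the simulation,
# h_exc(v) = |2 J1(v) / v|^2, with v the optical unit in the focal plane.
NA = 1.4
lam_exc = 500e-9                         # excitation wavelength (m), assumed
r = np.linspace(1e-12, 1e-6, 4000)       # radial distance in the focal plane (m)
v = 2 * np.pi * r * NA / lam_exc         # optical unit
h_exc = (2 * j1(v) / v) ** 2             # normalised so that h_exc -> 1 at the centre

# Full width at half maximum of the diffraction-limited excitation spot
r_half = r[np.argmin(np.abs(h_exc - 0.5))]
print("FWHM of the excitation spot ~", 2 * r_half * 1e9, "nm")
```

This is the spot that the STED beams described next act to sharpen.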
Figure 9: The setup suggested by Hell and Wichmann for stimulated emission depletion microscopy. The STED beam is split into two and focused either side of the excitation focal spot to reduce its size. The graph on the right shows the resulting intensity distributions at the focal plane. Taken from Hell and Wichmann, 199414).

An additional beam of light (the STED beam) is added to inhibit fluorescence from the outer regions of the PSF. The setup proposed by Hell and Wichmann is shown in figure 9. The STED beam, which is emitted from a separate laser, is split in two. If these two beams are focused with small offsets ±Δv either side of the excitation focal spot, the STED PSFs, h_STED(v ± Δv), overlap with h_exc(v). The STED beam acts at a slightly longer wavelength than the excitation beam to induce the transition L2 → L3 by stimulated emission. This depletes the excited state in the outer regions of the focal spot before the fluorescence emission is recorded. Therefore, only fluorescence from the central region of the central maximum is detected, and so resolution is improved beyond the diffraction limit. The STED beam does not cause significant re-excitation (L3 → L2) of the fluorophores, since vibrational relaxation out of L3 is such a rapid process that the fluorophores are not in L3 for long enough.

The system can be represented through four coupled differential equations:

\frac{dn_0}{dt} = h_{exc}\,\sigma_{01}(n_1 - n_0) + \frac{1}{\tau_{vib}}\, n_3

\frac{dn_1}{dt} = h_{exc}\,\sigma_{01}(n_0 - n_1) - \frac{1}{\tau_{vib}}\, n_1

\frac{dn_2}{dt} = \frac{1}{\tau_{vib}}\, n_1 + h_{STED}\,\sigma_{23}(n_3 - n_2) - \left(\frac{1}{\tau_{fluor}} + Q\right) n_2

\frac{dn_3}{dt} = h_{STED}\,\sigma_{23}(n_2 - n_3) + \left(\frac{1}{\tau_{fluor}} + Q\right) n_2 - \frac{1}{\tau_{vib}}\, n_3

along with \sum_i n_i = 1 and n_0(t=0) = 1, where n_i(v, t) are the spatially and temporally dependent population probabilities of the levels L_i, i ∈ {0,1,2,3}. τ_fluor and τ_vib are the average fluorescence and vibrational lifetimes respectively, and σ_ij represents the cross-section for the transition L_i → L_j, i, j ∈ {0,1,2,3}. Hence h_exc σ_01 is the rate constant for absorption and h_STED σ_23 is the rate constant for stimulated emission. Typical values for the σ_ij range from 10^-16 to 10^-17 cm². The vibrational relaxations are around three orders of magnitude faster than spontaneous emission: τ_vib is of order 1-5ps, while τ_fluor is of order 2ns and Q is around 10^8 s^-1.

Both the excitation laser and the STED laser should be pulsed, to allow excitation over the entire focal spot, followed by depletion at the outside of the focal spot, before measurements are taken. The excitation pulse should arrive first and should excite the fluorophores as quickly as possible to allow rapid imaging. As soon as the excitation pulse stops, the STED pulse should arrive to deplete the outside of the spot. The STED pulse must be strong enough to deplete the outer region of the spot in a period much shorter than the lifetime of L2. However, it should act for a period greater than the lifetime of L3, since this is the time required for fluorophores in L3 to vibrationally relax (to L0) and hence determines the minimum time in which L2 can be depleted.
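These rate equations can be integrated numerically; the sketch below (Python/SciPy) uses the order-of-magnitude lifetimes and cross-sections quoted above, with the photon fluxes as assumptions, and reports the population of the fluorescent level L2 after a nanosecond of constant illumination.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the four-level rate equations at a single point in the focal plane.
# Lifetimes and cross-sections are the order-of-magnitude values from the text;
# the photon fluxes h_exc and h_sted are illustrative assumptions.
tau_vib, tau_fluor, Q = 2e-12, 2e-9, 1e8        # s, s, s^-1
sigma01 = sigma23 = 1e-16                       # cm^2
h_exc = 1e25                                    # excitation photon flux (cm^-2 s^-1)
h_sted = 1e26                                   # STED photon flux; set to 0 for no depletion

def rates(t, n):
    n0, n1, n2, n3 = n
    dn0 = h_exc * sigma01 * (n1 - n0) + n3 / tau_vib
    dn1 = h_exc * sigma01 * (n0 - n1) - n1 / tau_vib
    dn2 = n1 / tau_vib + h_sted * sigma23 * (n3 - n2) - (1 / tau_fluor + Q) * n2
    dn3 = h_sted * sigma23 * (n2 - n3) + (1 / tau_fluor + Q) * n2 - n3 / tau_vib
    return [dn0, dn1, dn2, dn3]

sol = solve_ivp(rates, (0, 1e-9), [1.0, 0.0, 0.0, 0.0], method="LSODA", max_step=1e-12)
print("population of the fluorescent level L2 after 1 ns:", sol.y[2, -1])
```

Switching h_sted between zero and a large value reproduces the suppression of L2 that the STED pulse relies on.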
Figure 10: The intensity of the central maximum as a function of the full width half maximum of the effective PSF after STED, as calculated by Hell and Wichmann. Graph taken from Hell and Wichmann, 199414).

Hell and Wichmann's simulations of this system found a possible resolution improvement of 4.5 times the diffraction limit in the focal plane. The limit is set by the fact that the STED focal spots are themselves related to Bessel functions, and hence their overlap decreases the intensity of the central maximum. Figure 10 shows the intensity of the central maximum as a function of the full width half maximum of the PSF after STED; the maximum falls to zero at a FWHM of 0.68. Below this, the STED pulse depletes the entire central region. However, it is suggested that if a rectangular STED beam profile could be used, the resolution improvement could in principle be unlimited. This is only a 1D improvement, and so several STED beams would need to surround the central maximum at different angles to give improved resolution in all directions in the focal plane. The resolution along the optical axis is not discussed.

Resolution beyond the diffraction limit using STED was first achieved experimentally by Klar and Hell in 199917). They used a simplified version of the system described above, with only one STED beam (figure 11). This leads to a skewed PSF, but still gives a one-dimensional resolution improvement of roughly 1.3 compared to confocal microscopy.

Figure 11: The first experimental demonstration of the use of STED to achieve sub-diffraction-limit resolution. Top: the PSF resulting from confocal microscopy. Bottom: the sharpened PSF after the STED beam was added in the configuration shown in the top left inset. Figure taken from Klar and Hell, 199917).

The STED technique was then used by Klar et al13) to give a more significant resolution improvement of 6 times the diffraction barrier along the optical axis and 2 times in the radial direction. They used a STED beam with a PSF that surrounded the illumination PSF, roughly in the shape of a hollow sphere. They also took advantage of the non-linear relationship between the intensity I_STED and the population of the fluorescent state to achieve sharpening of the excitation PSF beyond that given by a simple subtraction of the STED PSF.

Figure 12: Left: the excitation PSF with the optical axis aligned to z. Right: the STED PSF aligned in the same way. The effective PSF was made sharper than that which would result from a simple subtraction of the STED PSF from the excitation PSF by exploiting the non-linearities resulting from saturation. Picture taken from Klar et al, 200013).

The non-linear relationship can be derived simply by looking at L2, the fluorescent state, and L3, the vibrationally excited ground state (figure 8). Presuming all of the excited molecules have vibrationally relaxed to L2, the four-level equations stated earlier simplify to:

\frac{dn_2}{dt} = h_{STED}\,\sigma_{23}(n_3 - n_2) - \left(\frac{1}{\tau_{fluor}} + Q\right) n_2

\frac{dn_3}{dt} = h_{STED}\,\sigma_{23}(n_2 - n_3) + \left(\frac{1}{\tau_{fluor}} + Q\right) n_2 - \frac{1}{\tau_{vib}}\, n_3

Assuming that the focal intensity is low enough that the vibrational relaxation occurs far more rapidly than the stimulated emission or absorption terms, n_3 ≈ 0 can be approximated. This gives, to a good approximation,

n_2(r, t) \propto e^{-h_{STED}(r)\,\sigma_{23}\, t} \propto e^{-I_{STED}(r)\,\sigma_{23}\, t/\hbar\omega}

For higher intensities, n_2(r, t) is governed by the vibrational term alone, since this determines the rate at which fluorophores relax out of L3. It can be seen from this that there is a highly non-linear relationship between n_2 and I_STED.

A suggested replacement for Abbe's diffraction limit that includes the effect of STED has been derived (Westphal and Hell, 2005)15):

\Delta r = \frac{\lambda}{2 n \sin\alpha} \;\rightarrow\; \approx \frac{\lambda}{2 n \sin\alpha \sqrt{1 + \zeta}}, \qquad \zeta = \frac{I_{STED}^{max}}{I_{sat}}

where I_sat is the saturation intensity of the specific fluorophore used and I_STED^max is the intensity of the STED beam at its highest maximum. For ζ = 0 the classical limit is roughly recovered; as ζ increases, the spot width decreases continually according to an inverse square root law.
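A quick evaluation of this scaling law (with an assumed wavelength and numerical aperture) shows how strongly the spot shrinks with the saturation factor:

```python
import numpy as np

# The Westphal & Hell scaling of the spot size with the saturation factor
# zeta = I_STED_max / I_sat. The wavelength and NA are assumed values.
lam = 650e-9                   # depletion-compatible emission wavelength (assumed)
n_sin_alpha = 1.4              # NA = n sin(alpha)
for zeta in (0, 10, 100, 1000):
    dr = lam / (2 * n_sin_alpha * np.sqrt(1 + zeta))
    print(f"zeta = {zeta:4d}:  spot size ~ {dr * 1e9:6.1f} nm")
```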
Since these initial studies, STED has achieved progressively higher resolution, including 5.8nm resolution in the focal plane (Rittweger et al, 2009)19). STED has also been used to study dynamic systems (Westphal et al, 2007)20): Westphal et al imaged 36nm beads at a video rate of 80 frames per second, locating the particles to within 20nm. STED has also been used to study synaptic vesicle movement (Westphal et al, 2008)8): fluorescently labelled synaptic vesicles were imaged at a rate of 28 frames per second with a 62nm focal spot in a 2.5μm by 1.8μm field of view. Using a focal spot of this size allowed the 40nm vesicles to be detected (roughly as one pixel) while still allowing an imaging rate fast enough to track their motion.

STED-like behaviour as an alternative source of non-linearity in PEM

In the previous Heintzmann and Gustafsson studies, saturation of the excitation was used as the source of non-linearity. In this study I hope to investigate whether sharpening the maxima of the excitation pattern through depletion could be used as an alternative way of achieving a non-linearity in the emission. This would involve illuminating the object with a second pattern to cause stimulated emission at the edges of the emission peaks in the excitation pattern (figure 13).

Figure 13: The widths of the peaks in the illumination pattern are decreased as a result of the STED beam. The full width half maxima of the effective illumination pattern after STED decrease following the inverse square root law derived by Westphal and Hell15).

The full width half maxima of the illumination pattern would be decreased according to the formula derived by Westphal and Hell15):

\Delta r \;\rightarrow\; \frac{\Delta r}{\sqrt{1 + I_{STED}^{max}/I_{sat}}}

The coefficients c_n corresponding to each power of the illumination pattern in the emitability, Em(r) = \sum_{n=1}^{N} c_n I(r)^n, would be found by fitting to the effective illumination pattern seen after STED. The limits to this method will be determined by the time required for the sample to be imaged, the laser intensity required and the possibility of overheating the sample.

To investigate the feasibility of this approach to resolution improvement, a microscope model was built in Mathematica to compare the new method with confocal microscopy. The first step in building this model was to program the resolution for confocal microscopy. This was done in both one and two dimensions. The focal plane is studied using a confocal microscope, and so a two-dimensional system would be the physically relevant one; however, the integrations in Mathematica took a very long time to run, and so for the initial calculations the one-dimensional methods were used. Calculations were performed in both positional and frequency space to check that the same result was given. In real space this involved implementing the equation

D(r) = O(r) \otimes \rho(r) = \int O(R)\,\rho(r - R)\, dR

where D(r) is the image in the microscope, O(r) is the emission from the object and ρ(r) is the PSF of the microscope. In frequency space the convolution is replaced by a multiplication. This would work for any input object pattern and PSF. However, to show the resulting increase in resolution clearly, a simple object pattern of O(z) = 1 + cos(k_0 z) was used. A sinc function was used for the PSF, since the Fourier transform of a sinc function is a single square pulse (hat function): frequencies lower than the cut-off would therefore be expected to be resolved completely, while frequencies higher than this would not be seen at all.
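The report's model was written in Mathematica; a rough Python/NumPy cross-check of the same 1D confocal calculation is sketched below (the grid, normalisation and probed k_0 values are assumptions).

```python
import numpy as np

# 1D confocal model: convolve O(z) = 1 + cos(k0 z) with a sinc PSF whose OTF is
# a unit-height hat function with cut-off |k| = 1.
z = np.linspace(-100, 100, 8001)
dz = z[1] - z[0]
psf = np.sinc(z / np.pi) / np.pi          # sin(z) / (pi z)

for k0 in (0.5, 0.9, 1.1, 2.0):
    obj = 1 + np.cos(k0 * z)
    img = np.convolve(obj, psf, mode="same") * dz
    # amplitude of the k0 component that survives in the image
    amp = 2 * abs(np.mean(img * np.exp(-1j * k0 * z)))
    print(f"k0 = {k0}: amplitude in image ~ {amp:.2f}")   # ~1 below the cut-off, ~0 above
```

Scanning k_0 in this way gives the amplitude-versus-wavenumber behaviour discussed next.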
This behaviour can be seen in figure 15, which shows the amplitude of each frequency component in the image for increasing values of the wavenumber k.

Figure 15: The amplitude of each frequency component in the microscope image plotted against the wavenumber k, for a PSF of Sinc[z], using conventional microscopy.

As expected, the amplitude is 1 for values of k less than 1 and zero for frequencies above this, since the OTF corresponding to this PSF is a hat function of the same width. In two dimensions, since the OTF is radially symmetric, this corresponds to a cylindrical hat function (figure 16). For a real microscope with a circular aperture the PSF would not be a sinc function, and the OTF would be related to a first order Bessel function instead of a hat function.

Figure 16: The visible region of k-space in two dimensions for confocal microscopy. For a sinc function PSF this is in the shape of a cylindrical hat function.

The next step was to implement standard patterned excitation in the script, to show the doubling of resolution. Again this was done in both real and frequency space to check for consistent results; the mathematics for implementing this is given in the PEM section. A simple pattern of I(z) = 1 + cos(k_0 z + φ) was used, with three different values of φ (0, 2π/3, 4π/3) to obtain the three images required to separate out the different k-shifted components. The amplitude of each frequency component in the microscope image was again plotted for increasing values of the wavenumber k, to compare with confocal microscopy (figure 17).

Figure 17: The amplitude of each frequency component in the microscope image plotted against the wavenumber k, for a PSF of Sinc[z], using conventional microscopy and PEM. For PEM, as well as the confocal hat function between ±k_max, two extra hat functions have been added to the OTF, extending from 0 to ±2k_max. The resolvable region of frequency space has doubled.

The noisiness of the PEM points around the expected hat function shape could be present because the integrals were performed from -10 to 10 instead of over all space. If these limits were increased, the results would be more accurate, but the calculations would take longer to run. The confocal microscopy integrations had limits of ±100, and it can be seen from the graph that there is less noise in them.

Since the illumination pattern varies in one particular direction, the resolution increase is one-dimensional (figure 18, top). In order to increase resolution over the entire focal plane, images must be taken at various pattern angles (figure 18, bottom).

Figure 18: Top) The resolution increase resulting from the inclusion of a pattern which varies along the x axis. Bottom) The resolution increase resulting from taking sets of images with patterns in several different orientations.

The final step was to include non-linearities in the emission from the object; the mathematics for this is given in the SPEM section. It was implemented up to order two in the illumination pattern:

D(r) = \left( c_1 S(r) I(r) + c_2 S(r) I(r)^2 \right) \otimes \rho(r)

As an initial demonstration that the method works, the contrast as a function of k was found for an image in which the two intensity coefficients, c_1 and c_2, were set equal.
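A numerical sketch of this second-order case is given below (Python/NumPy, with an assumed object and with c_1 = c_2 = 1 as in the demonstration just described): the recorded images now contain components at 0, ±k_0 and ±2k_0, so five phase-stepped images are simulated and separated.

```python
import numpy as np

# Second-order non-linear PEM: emitability c1*I + c2*I**2 with c1 = c2 = 1.
x = np.linspace(-100, 100, 8001)
dx = x[1] - x[0]
S = 1 + np.cos(2.5 * x)               # object detail beyond even the linear-PEM limit
psf = np.sinc(x / np.pi) / np.pi      # hat OTF with cut-off |k| = 1
k0 = 0.9
phases = 2 * np.pi * np.arange(5) / 5

images = []
for phi in phases:
    I = 1 + np.cos(k0 * x + phi)
    Em = I + I**2                     # c1 = c2 = 1
    images.append(np.fft.fft(np.convolve(S * Em, psf, mode="same") * dx))

# Each order m of exp(i m (k0 x + phi)) picks up a phase factor exp(i m phi);
# inverting this dependence over the five images separates the orders m = -2 ... 2.
orders = np.arange(-2, 3)
M = np.exp(1j * np.outer(phases, orders))
components = np.linalg.solve(M, np.array(images))
# The outermost rows (orders -2 and +2) carry the +-2*k0-shifted information,
# extending the covered frequency range to roughly 2*k0 + k_max ~ 3*k_max.
```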
Equal coefficients are unrealistic, but this case shows the three-fold increase in resolution resulting from a non-linearity of order two. If the coefficients are equal, the microscope image in frequency space becomes

D(k) = 2\,S(k) + \tfrac{3}{2}\,S(k \pm k_0) + \tfrac{1}{4}\,S(k \pm 2k_0)

and so five images were now required to separate out the various k components. Five values of φ were used (2πj/5, j ∈ {0, …, 4}).

Figure 19: A graph showing the increase in resolution resulting from the inclusion of a 2nd order non-linear relationship between the emission and the excitation intensity. The green curve shows the range of frequencies covered by confocal microscopy. The red curve shows the doubling of resolution resulting from standard, linear patterned excitation microscopy. The blue curve shows that the image will contain frequency components at three times the diffraction limit for 2nd order non-linear patterned excitation microscopy.

The pattern again varies only along one direction (figure 20, top), and so several sets of images at different orientations will be required to increase resolution across the focal plane (figure 20, bottom).

Figure 20: Top) The resolution increase resulting from the inclusion of non-linearities in the emission when an excitation pattern varying along the x-axis is used. Bottom) The resolution increase when several sets of images with patterns in different orientations are used.

The next step in analysing this technique's feasibility is to investigate whether images can be taken quickly enough to follow vesicle motion. To do this, a more physically realistic PSF should be put into the model. The coefficients in the emitability should then be calculated from data relating to a real fluorophore. From this, the intensity required to obtain a strong enough signal to noise ratio from each extra frequency region, in the time allowed between images, could be calculated.

To study large dense core vesicles, a resolution of 100nm is required, and so three times the diffraction limit should allow them to be imaged. The image should therefore contain components from second order in the excitation pattern with a strong enough signal to be viewed above the noise. This corresponds to the case shown in figure 20. Five images are required for a one-dimensional resolution increase. A two-dimensional resolution increase could be given fairly well by repeating at 6 different pattern orientations, and very well by repeating at 12 orientations. Therefore between 30 and 60 microscope images are required for each reconstructed image. To track the large dense core vesicles, this must be performed in around 100ms, and so each image must be taken in 2-3ms. Since the lifetime of a fluorescence state is of order 2ns, this seems very feasible. However, more research must be done to find the intensity of excitation light required to obtain a strong enough signal to noise ratio, and the effects that this intensity might have on the cell.

For synaptic vesicles, a resolution of 40nm with a frame rate of around 50 images per second is required. This means that 5 times the resolution given by confocal microscopy must be achieved, and therefore components in the image coming from the 4th order in the excitation pattern must be resolvable above the noise. Consequently, for each orientation, 9 images are required. To give the resolution over the focal plane fairly evenly, 24 orientations are required (the image-budget arithmetic for both cases is sketched below).
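A back-of-envelope check of these image budgets, using only the frame times, images per orientation and orientation counts quoted above (the helper function itself is just illustrative arithmetic):

```python
# Back-of-envelope image budgets using the numbers quoted in the text.
def time_per_raw_image(frame_time_ms, images_per_orientation, orientations):
    total_images = images_per_orientation * orientations
    return total_images, frame_time_ms / total_images

# Large dense core vesicles: ~10 reconstructed frames/s, 2nd order (5 phases),
# 6-12 pattern orientations.
for n_orient in (6, 12):
    n, t = time_per_raw_image(100.0, 5, n_orient)
    print(f"dense core, {n_orient} orientations: {n} images, {t:.1f} ms each")

# Synaptic vesicles: ~50 reconstructed frames/s, 4th order (9 phases), 24 orientations.
n, t = time_per_raw_image(20.0, 9, 24)
print(f"synaptic: {n} images, {t:.3f} ms each")   # 216 images, ~0.09 ms per image
```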
In total, then, 216 images must be taken in 20ms, and so an image must be taken roughly every 0.1ms. This is still significantly longer than the time required for the fluorophore to be excited and then emit. However, it may be difficult to alter the orientation of the illumination pattern this quickly. Furthermore, the S(k ± 4k_0) components are likely to be very faint in the image, and so a very high excitation intensity may be required, which could overheat the cell.

References

1) E.Rego, L.Shao, J.Macklin, L.Winoto, G.Johansson, N.Kamps-Hughes, M.Davidson, M.Gustafsson, "Nonlinear structured-illumination microscopy with a photoswitchable protein reveals cellular structures at 50-nm resolution", PNAS, published online before print December 12, 2011, doi: 10.1073/pnas.1107547108.
2) L.Qu, Y.Akbergenova, Y.Hu, T.Schikorski, "Synapse-to-synapse variation in mean synaptic vesicle size and its relationship with synaptic morphology and function", The Journal of Comparative Neurology, Vol 514 (4), 343-352, Mar 2009.
3) S.Rizzoli, W.Betz, "Synaptic vesicle pools", Nature Reviews Neuroscience, Vol 6 (1), 57-69, Jan 2005.
4) T.Südhof, "The synaptic vesicle cycle", Annual Review of Neuroscience, Vol 27, 509-547, July 2004.
5) http://www.neuropeptides.nl/
6) S.Barg, C.Olofsson, J.Schriever-Abeln, A.Wendt, S.Gebre-Medhin, E.Renstrom, P.Rorsman, "Delay between Fusion Pore Opening and Peptide Release from Large Dense-Core Vesicles in Neuroendocrine Cells", Neuron, Vol 33, 287-299, Jan 2002.
7) L.He, L.Xue, J.Xu, B.McNeil, L.Bai, E.Melicoff, R.Adachi, L.Wu, "Compound Vesicle Fusion Increases Quantal Size and Potentiates Synaptic Transmission", Nature, Vol 459, 93-97, May 2009.
8) V.Westphal, S.Rizzoli, M.Lauterbach, D.Kamin, R.Jahn, S.Hell, "Video-Rate Far-Field Optical Nanoscopy Dissects Synaptic Vesicle Movement", Science, Vol 320, 246-249, April 2008.
9) R.Heintzmann, "Saturated patterned excitation microscopy with two-dimensional excitation patterns", Micron, Vol 34, 283-291, 2003.
10) M.Gustafsson, "Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution", PNAS, Vol 102, 13081-13086, Sep 2005.
11) R.Heintzmann, T.Jovin, C.Cremer, "Saturated patterned excitation microscopy - a concept for optical resolution improvement", J. Opt. Soc. Am. A, Vol 19 (8), 1599-1609, 2002.
12) M.Neil, R.Juskaitis, T.Wilson, "Method of obtaining optical sectioning by using structured light in a conventional microscope", Optics Letters, Vol 22, 1905-1907, Dec 1997.
13) T.Klar, S.Jakobs, M.Dyba, A.Egner, S.Hell, "Fluorescence microscopy with diffraction resolution barrier broken by stimulated emission", PNAS, Vol 97, 8206-8210, July 2000.
14) S.Hell, J.Wichmann, "Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy", Optics Letters, Vol 19, 780-782, June 1994.
15) V.Westphal, S.Hell, "Nanoscale Resolution in the Focal Plane of an Optical Microscope", Physical Review Letters, Vol 94, 143903, April 2005.
16) M.Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy", Journal of Microscopy, Vol 198, 82-87, 2000.
17) T.Klar, S.Hell, "Subdiffraction resolution in far-field fluorescence microscopy", Optics Letters, Vol 24, 954-956, July 1999.
18) J.Frohn, H.Knapp, A.Stemmer, "True optical resolution beyond the Rayleigh limit achieved by standing wave illumination", PNAS, Vol 97, 7232-7236, 2000.
19) E.Rittweger, K.Han, S.Irvine, C.Eggeling, S.Hell, "STED microscopy reveals crystal colour centres with nanometric resolution", Nature Photonics, Vol 3 (3), 144-147, 2009.
20) V.Westphal, M.Lauterbach, A.Di Nicola, S.Hell, "Dynamic far-field fluorescence nanoscopy", New Journal of Physics, Vol 9 (12), 435, 2007.
21) R.Heintzmann, C.Cremer, "Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating", in I.J.Bigio et al (eds), Optical Biopsies and Microscopic Techniques III, Proc. SPIE, 1999.