Logistics
Each project will be carried out by a team of 5±1 students from the 2.710 roster. Given our enrollment, we expect 3-4 teams to emerge. Please read through the 7 suggested project descriptions and determine your project preferences. We will finalize the teaming arrangements in lecture 15. Each team will work with a mentor who will assist in collecting data, clarifying key concepts, and organizing the presentations. The mentors will also suggest a few simple simulations for each project to give you a feel for what these advanced topics can accomplish and what the difficulties/limitations are.
The projects will be presented in lecture 24. Each presentation should be designed to last 30 minutes, with approximately 10 additional minutes for questions. It is up to the team to arrange one, two, or more speakers to represent the team's work; if you all decide to go on stage, please be prepared to manage the 30-minute overall team time among 4 to 6 speakers, and switch swiftly between speakers to avoid delays. To further cut delays, we also ask that you bring your presentation on a CD-ROM and use a common computer, which we will provide.
The presentations will be attended by the class, the instructor team, and guest faculty. Only faculty will assign grades, but everyone present will vote for the best presentation. The winners will receive a grade bonus and an invitation to dinner at a local restaurant with Optics professionals.
* If you are enrolled in 2.71, you are welcome to participate in the projects on a voluntary basis. You cannot receive credit for this work, however, unless you switch to 2.710. This must be done before the drop date, and it will require some additional paperwork since the add date has already passed. You would also have to do some additional homework problems that were assigned in 2.710 but not in 2.71.
1. Volume Holography - 1.a) Imaging or 1.b) Data Storage
2. Maskless Lithography
3. Computerized Tomography
4. Heterodyne Interferometry
5. Synthetic Aperture Radar (SAR)
6. Cubic Phase Mask (CPM)
7. Negative Index of Refraction
Project Selections
1) Volume Holography

A "thick" or "volume" hologram is an optical element whose properties vary significantly in the longitudinal (axial) direction; i.e., it violates the thin-transparency assumption. The distinguishing characteristic of a volume hologram, as opposed to other thick optical elements, is that the volume hologram is recorded optically as the interference pattern between two mutually coherent optical beams throughout the volume of a thick photosensitive medium. Typical thicknesses range between 30μm and 1cm. As in traditional holography, one of the two recording beams typically carries no information, e.g. it may be a plane or spherical wave. This is called the "reference" beam. The second beam, known as the "object" or "signal" beam, carries the information to be stored in the hologram. The thick nature of volume holograms causes qualitative differences in the way they behave compared to thin holograms and diffractive optical elements. For example, to obtain any significant diffraction from a volume hologram, one has to illuminate it with a replica of the reference beam, a condition known as "Bragg matching." If the illumination beam is different from the reference that recorded the volume hologram, then in most cases the illumination beam simply propagates through the volume hologram without creating any significant diffraction. This phenomenon is called "Bragg selectivity." (There are some exceptions where diffraction is obtained from a volume hologram even though the illumination is not a replica of the reference beam; these cases are known as "Bragg degeneracies.") Also, the pseudoscopic reconstruction and higher diffraction orders are absent from the diffraction pattern created by a volume hologram.
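The Bragg selectivity described above can be illustrated with a short numerical sketch. This is a weak-grating (Born approximation) model of a symmetric transmission hologram, in which the diffraction efficiency falls off roughly as the square of a sinc of the accumulated longitudinal wave-vector mismatch; the wavelength, thickness, and angles below are illustrative choices, not values from the text.

```python
import numpy as np

lam = 0.5e-6              # vacuum wavelength (illustrative)
L = 100e-6                # hologram thickness
theta_B = np.radians(15)  # recording half-angle (Bragg angle)
k = 2 * np.pi / lam
Kx = 2 * k * np.sin(theta_B)  # grating vector (transverse, symmetric geometry)

def efficiency(dtheta):
    """Relative diffraction efficiency for a probe detuned by dtheta (Born approx.)."""
    kpx = k * np.sin(theta_B + dtheta)
    kpz = k * np.cos(theta_B + dtheta)
    kdx = kpx - Kx                   # transverse momentum is conserved
    kdz = np.sqrt(k**2 - kdx**2)     # propagating diffracted wave
    dk = kpz - kdz                   # longitudinal Bragg mismatch
    return np.sinc(dk * L / (2 * np.pi))**2  # np.sinc(x) = sin(pi x)/(pi x)

print(efficiency(0.0))    # Bragg matched: full relative efficiency
print(efficiency(20e-3))  # 20 mrad detuning: strongly suppressed
```

A 100 μm-thick hologram already rejects probe beams detuned by tens of milliradians, which is the selectivity that both the imaging and the data-storage projects exploit.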
The goal of this project is to explain to the class the qualitative differences between volume holography and traditional (thin) holography, and describe some applications of the unique properties of volume holograms to either imaging or data storage.
1.a) Volume Holographic Imaging
In volume holographic imaging, a volume holographic element is used as a lens, and the Bragg selectivity and degeneracy properties of volume holography are exploited to acquire images that contain 3D and spectral information. 3D information means that the image, unlike that of a regular lens, is formed as if the object were sliced in the longitudinal (axial) direction and the slices obtained independently for each different axial location. (With a regular imaging lens, portions of an object that are at different axial locations form images that are defocused.) For reflective objects, the volume holographic lens then acquires the height of the object h(x,y), whereas for transparent objects the volume holographic lens acquires the optical density n(x,y,z). Spectral information means that if the object is polychromatic, then the imaging instrument separates the colors, as in a rainbow, so that the colors can be measured independently at the image plane. These advanced properties make volume holographic lenses promising for numerous advanced imaging applications in military surveillance, homeland security, biomedical diagnosis, and manufacturing inspection, among others.
Resources
Barbastathis, George, and David J. Brady. "Multidimensional Tomographic Imaging Using Volume Holography." Proceedings of the IEEE 87, no. 12 (December 1999): 2098-2120.
Barbastathis, George, Michal Balberg, and David J. Brady. "Confocal Microscopy with a Volume Holographic Filter." Optics Letters 24, no. 12 (June 15, 1999): 811-813.
Liu, Wenhai, Demetri Psaltis, and George Barbastathis. "Real-time Spectral Imaging in Three Spatial Dimensions." Optics Letters 27, no. 10 (May 15, 2002): 854-856.
Sinha, Arnab, and George Barbastathis. "Volume Holographic Telescope." Optics Letters 27, no. 19 (October 1, 2002): 1690-1692.
1.b) Volume Holographic Data Storage
In volume holographic storage, a volume holographic element is formed by multiple exposures on a thick holographic material. Each exposure is carried out with a different reference beam and a "page" of information, e.g. a frame of an analog film or a 2D array of digitally coded data. After the exposure sequence is complete, if we replicate one of the reference beams, then the corresponding stored page will be Bragg matched and will be diffracted onto a camera, where it is detected; all the remaining pages are Bragg mismatched and do not diffract (i.e., they are "silent"). Thus, it has been demonstrated that thousands of pages of data can be "multiplexed" in a relatively small volume hologram, such as a mm-thick disk with the same form factor as a DVD, or a 10 cubic cm "sugarcube." Volume holographic storage faces many challenges, primarily in terms of materials and recording costs; if it materializes, it will be a strong candidate to replace media such as CDs and DVDs in consumer applications that require much higher storage densities, e.g. computer gaming.
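The multiplexed readout described above can be mimicked with a toy linear-algebra sketch (not a wave-optics simulation): each Bragg-matched reference beam is represented by an orthonormal vector, the multiple exposures superpose as reference-page outer products, and readout with one reference retrieves only its own page while the others stay silent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pages, page_size = 4, 16

# Each "page" is a random binary data pattern.
pages = rng.integers(0, 2, size=(n_pages, page_size)).astype(float)

# Orthonormal "reference beams": in this toy model each distinct
# Bragg-matched reference angle is a distinct orthonormal vector.
refs = np.eye(n_pages)

# Multiple exposures superpose: the hologram is the sum of
# reference-page outer products.
hologram = sum(np.outer(refs[m], pages[m]) for m in range(n_pages))

# Readout with reference k: only page k is "Bragg matched"; the
# orthogonal references contribute nothing (they are "silent").
recovered = refs[2] @ hologram
assert np.allclose(recovered, pages[2])
```

Crosstalk in a real hologram comes from the references being only approximately orthogonal (finite Bragg selectivity), which is what limits the number of pages in practice.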
Resources
Demetri's Sci. Am. Paper
Hesselink's Science paper
George's book chapter
2) Maskless Lithography

Semiconductor lithography uses optical exposure of photoresist to write very small patterns (such as electronic circuits, MEMS actuators, etc.) on silicon substrates. The desired pattern is first written on a mask, which is then optically projected on the substrate. The mask itself is very expensive and time-consuming to make (typical costs for professional masks used by Intel, for example, are $1M and 1 week per mask). This makes economic sense for high production volumes, but it can be prohibitively expensive for specialized "boutique" elements such as military electronics, research devices, etc.
An alternative method for lithography is "maskless," where no mask is used, but instead an optical beam is modulated in real time to create the desired pattern directly on the resist. One way to do that is to use an array of Fresnel zone plates in combination with a MEMS-type light modulator, such as the Texas Instruments DMD or the Silicon Light Machines GLV. The job of the zone plate is to focus a beamlet of light onto a "pixel" on the resist, whereas the modulator turns pixels on and off depending on the desired pattern. Because the pixel spacing equals the spacing between the zone plates, the substrate must be raster-scanned to complete the exposure. This method is called Zone Plate Array Lithography (ZPAL) and it was invented by Prof. Henry I. Smith of MIT EECS.
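A rough feel for ZPAL's numbers comes from the zone-plate geometry itself: the n-th zone radius satisfies r_n² = nλf + n²λ²/4, the numerical aperture is set by the outermost radius, and the focused spot (hence the writable "pixel") scales like 0.61λ/NA. The wavelength, focal length, and zone count below are illustrative, not the actual ZPAL design values.

```python
import numpy as np

lam = 0.4e-6   # exposure wavelength (illustrative)
f = 100e-6     # zone-plate focal length (illustrative)
N = 50         # number of zones

n = np.arange(1, N + 1)
r = np.sqrt(n * lam * f + (n * lam / 2)**2)  # Fresnel zone radii

NA = np.sin(np.arctan(r[-1] / f))  # numerical aperture from outermost radius
spot = 0.61 * lam / NA             # Rayleigh-type focused spot size

print(NA)    # grows with the number of zones
print(spot)  # sub-micron pixel for these parameters
```

The outermost zones are the narrowest, so fabricating many zones (for high NA and fine pixels) is what pushes the limits of the zone-plate fabrication process.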
The goal of the project is to overview the properties and advantages of ZPAL, estimate its limitations in terms of zone plate spacing and numerical aperture, and discuss it in the context of other mask-based and maskless lithography techniques.
Resources
Smith, Henry I. "A Proposal for Maskless, Zone-plate-array Nanolithography." J. Vac. Sci. Technol. B 14, no. 6 (Nov/Dec 1996): 4318-4322.
Menon, Rajesh, D. J. D. Carter, Dario Gil, and Henry I. Smith. "Zone-Plate-Array Lithography (ZPAL): Simulations for System Design."
Gil, Dario, Rajesh Menon, Xudong Tang, Henry I. Smith, and D. J. D. Carter. "Parallel Maskless Optical Lithography for Prototyping, Low-volume Production, and Research." J. Vac. Sci. Technol. B 20, no. 6 (Nov/Dec 2002): 2597-2601.
3) Computerized Tomography

Computerized tomography is the mathematical basis of a number of imaging techniques which capture a 3D object as a "density map" of the form ρ(x,y,z). In its original conception, a semi-transparent 3D object, such as a human body, is exposed to a bundle of collimated X-ray radiation, and the radiation is measured after going through the body. Since X-rays have a wavelength so small that they essentially propagate through the body in straight lines (without diffraction), the only observable change is absorption, which varies in different parts of the body, e.g. bone, tissue, etc. The effect of absorption is cumulative, so that the logarithm of the observed intensity after the bundle passes through the body is proportional to the line integral of the density ρ(x,y,z) of the body along the line where each X-ray strikes the body. The process is repeated from several angles, and the collected X-ray images are then used to "reconstruct" the density ρ(x,y,z) from its line integrals.
The goal of the project is to describe to the class two mathematical techniques that are used in the computerized tomography reconstructions: the Radon transform (which maps a density function ρ(x,y,z) to its line integrals) and the Fourier-slice theorem (which relates the Fourier transforms of the measurements to the Fourier transform of the density function itself). The mathematical formulation is common to a number of different hardware implementations: X-ray tomography, as described above, and also positron-emission tomography (PET), magnetic-resonance imaging (MRI), and coherence images obtained with the rotational shear interferometer (RSI) all function according to the Radon transform and Fourier-slice theorem.
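The Fourier-slice theorem is easy to verify numerically for the axis-aligned projections: summing a density map along one axis and taking the 1D FFT must reproduce the corresponding axis of the map's 2D FFT. A minimal numpy check on a random toy "density map" (projections at other angles would need an interpolating rotation, omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = rng.random((64, 64))  # toy 2D density map rho(x, y)

# Radon projection at angle 0: line integrals along one axis
proj0 = rho.sum(axis=0)
# Fourier-slice theorem: the FFT of the projection equals a central
# slice (through the origin) of the 2D FFT of the density map.
assert np.allclose(np.fft.fft(proj0), np.fft.fft2(rho)[0, :])

# Same check for the perpendicular (90-degree) projection
proj90 = rho.sum(axis=1)
assert np.allclose(np.fft.fft(proj90), np.fft.fft2(rho)[:, 0])
```

Repeating this for many angles is precisely why tomographic measurements sample the 2D (or 3D) Fourier space of the object slice by slice, from which ρ can be reconstructed by an inverse transform.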
Resources
Born, M., and E. Wolf. Principles of Optics. 7th ed. Cambridge: Cambridge University Press, 1997, section 4.11 (p. 217). ISBN: 9780521639217.
4) Heterodyne Interferometry

The heterodyne method is a form of interferometry that is particularly resilient to the noise and vibration problems that limit the practical use of traditional interferometers. It is used for ultra-accurate distance measurements, and typically yields resolutions of the order of 1-10nm using visible light. Heterodyne interferometry is implemented with a Michelson interferometer in which the optical frequency in one arm is slightly different from that in the other arm, typically by a few MHz. Since the frequency of the light itself is much higher, of the order of 10^15 Hz, the light beams from the two arms of the interferometer form a low-frequency beat signal whose phase delay can be detected very accurately. This forms the basis of the distance measurement.
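The phase-based distance measurement can be sketched numerically. Only the MHz beat signal is simulated below (the ~10^15 Hz optical carriers cannot be sampled directly); a mirror displacement ΔL shifts the beat phase by 4πΔL/λ (the factor 4π because the light travels to the mirror and back), and IQ demodulation over whole beat periods recovers it. All numbers are illustrative.

```python
import numpy as np

lam = 633e-9  # HeNe wavelength (illustrative)
df = 1e6      # heterodyne beat frequency, a few MHz
fs = 100e6    # detector sampling rate
N = 1000      # exactly 10 beat periods at these rates

dL = 37e-9                   # true mirror displacement to be recovered
phi = 4 * np.pi * dL / lam   # round trip doubles the optical path change

t = np.arange(N) / fs
beat = 1.0 + np.cos(2 * np.pi * df * t + phi)  # detected beat intensity

# IQ demodulation: averaging over whole beat periods isolates 0.5*exp(i*phi)
z = np.mean(beat * np.exp(-2j * np.pi * df * t))
dL_est = np.angle(z) * lam / (4 * np.pi)

print(dL_est)  # recovers the nm-scale displacement
```

Because only the phase of a ~MHz electronic signal must be measured, the scheme is far less sensitive to intensity noise than a DC fringe measurement, which is the practical appeal of heterodyning.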
The goal of the project is to overview the two principal methods of constructing the two frequencies used in heterodyne interferometers, namely the Zeeman effect and acousto-optic modulators, and overview for the class the operation and performance characteristics of commercial realizations of these instruments.
Resources
Heterodyning
5) Synthetic Aperture Radar (SAR) Imaging
SAR is motivated by the "cartographer's problem": a plane is flying over a terrain with the purpose of accurately mapping the topography of the terrain (obviously this has significant military applications as well, e.g. surveillance and reconnaissance). Since the angle subtended by the plane towards any feature on the terrain is very small (the distance is on the order of a few miles), we expect this form of imaging to yield very poor resolution. However, the flight of the plane over a small target beneath could be exploited to yield a much larger effective aperture, if only we could integrate the images from different angles. SAR accomplishes exactly that by coherently integrating microwave-frequency signals obtained from the ground throughout relatively long flight distances. It can accomplish spectacular resolution on the ground, typically better than 3-5cm, using microwave frequencies of comparable wavelengths (mm up to a few cm).
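The aperture-synthesis argument above can be quantified with the standard stripmap scaling: a real antenna of size D at range R resolves roughly λR/D on the ground, but the strip it illuminates has that same length, and coherently integrating over it as a synthetic aperture of length L = λR/D gives δ ≈ λR/(2L) = D/2, independent of range. The numbers below are illustrative, not from the text.

```python
def real_aperture_res(lam, R, D):
    """Ground resolution of a real antenna of size D at range R (diffraction limit)."""
    return lam * R / D

def sar_res(lam, R, L):
    """Azimuth resolution after coherently integrating over a synthetic aperture L."""
    return lam * R / (2 * L)

lam, R, D = 0.03, 10e3, 0.3  # 3 cm wavelength, 10 km range, 30 cm antenna
L = real_aperture_res(lam, R, D)  # illuminated strip = usable synthetic aperture

print(real_aperture_res(lam, R, D))  # ~1000 m: hopeless for mapping
print(sar_res(lam, R, L))            # ~D/2 = 0.15 m, independent of range
```

Counterintuitively, a smaller real antenna illuminates a longer strip and therefore yields a finer synthetic-aperture resolution; the trade-offs this creates are part of what the ambiguity function describes.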
The goal of the project is to describe SAR to the class in simple terms, and explain the concept of the "ambiguity function" which is used to describe the trade-off between resolution and field of view in SAR. Some practical limitations (e.g. absorption of microwaves from humidity in the atmosphere) and accomplishments of modern SAR systems (e.g. imaging of military targets through foliage) should also be described.
Resources
Skolnik, M. I. Introduction to Radar Systems. 3rd ed. New York: McGraw Hill, 2000. ISBN: 9780072909807.
6) Extended Depth-of-field Imaging using Cubic Phase Masks
Anyone who has attempted to take close-up pictures of friends against pretty landscapes gets an intuitive feel for the "depth of field" problem in imaging: one can have the person's face or the landscape in perfect focus, but not both. This limitation of optical systems originates from the quadratic phase function in the impulse response of Fresnel propagation, which the lens's own quadratic phase function cancels for objects in focus. For objects out of focus, the residual quadratic phase leads to defocus, or blur, which becomes more severe as the distance from the object to the focal plane increases. So this appears to be a fundamental limitation of optical imaging, yet a few years ago a research group at the University of Colorado managed to circumvent it with a clever trick: they deliberately distorted the quadratic phase of the lens by adding a cubic phase (this can be implemented relatively easily with a specially manufactured refractive element with a cubic surface profile). As one would expect, the result of the cubic distortion is defocus throughout the imaged depth, i.e. objects in and out of focus now all become blurred. However, the cubic phase has the special property that the amount of defocus is independent of depth, i.e. both the person and the landscape behind him/her are blurred by approximately the same kernel. So if one could undo the defocus due to the cubic phase, one would retrieve an image that is sharp throughout a much extended depth, compared to that given by a traditional lens. It turns out that deblurring can be implemented very easily with digital processing of the blurred images: the blurred image is convolved with a so-called "deconvolution" kernel devised to undo the blur, i.e. sharpen the image.
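The deblurring step can be sketched with a generic 1D example. The Gaussian kernel below is a stand-in for the (depth-invariant) CPM point-spread function, not the actual CPM PSF; the point is only that once the blur kernel is known and fixed, a regularized (Wiener-type) deconvolution in the Fourier domain largely undoes it.

```python
import numpy as np

n = 256
x = np.zeros(n)
x[[40, 120, 200]] = [1.0, 0.6, 0.8]  # sharp "object": three point features

# Known, depth-invariant blur kernel (Gaussian stand-in for the CPM PSF)
t = np.arange(n) - n // 2
h = np.exp(-(t / 3.0)**2)
h /= h.sum()
h = np.roll(h, -n // 2)              # center the kernel at index 0

H = np.fft.fft(h)
y = np.real(np.fft.ifft(np.fft.fft(x) * H))  # blurred image

# Wiener-type deconvolution with a small regularizer eps
eps = 1e-6
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H)**2 + eps)))

assert np.argmax(x_hat) == 40                             # strongest feature restored in place
assert np.linalg.norm(x_hat - x) < np.linalg.norm(y - x)  # deblurred image is closer to the object
```

The regularizer eps prevents division by the near-zero high-frequency values of H; in a real system its value is set by the noise level, which is why CPM designs aim for an MTF with no zeros over the working band.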
The process of deliberately blurring images with the cubic phase mask and then recovering sharp images with extended depth has come to be known as "Cubic Phase Mask" (CPM) imaging, and it is being commercialized by CDM Optics, a start-up company based in Boulder, Colorado. (The company is named after the initials of the founders, not the cubic phase mask!) More generally, there is an entire class of distortions that can achieve the same result of extending the depth of field with a varying degree of efficacy, and the cubic phase is a special case.
The goals of the project are (i) to overview the operation of CPM imaging for the class, and to perform a few simple numerical simulations of blur and deblur with artificial objects to illustrate the extended depth of field; and (ii) to describe the potential of this property for other types of imaging, e.g. microscopy for biomedical and bioengineering applications.
Resources
Dowski, Edward R., Jr., and W. Thomas Cathey. "Extended Depth of Field through Wave-front Coding." Applied Optics 34, no. 11 (April 10, 1995): 1859-1866.
Cathey, W. Thomas, and Edward R. Dowski. "New Paradigm for Imaging Systems." Applied Optics 41, no. 29 (October 10, 2002): 6080-6092.
7) Negative Index of Refraction, Left-handed Materials, and the "Perfect Lens"
"Normal" refractive materials such as water and glass have refractive index n>1, and we are used to describing Snell's law of refraction based on that assumption. What would happen if a material could have negative index, e.g. n = -1? This may appear to be an "academic" question, and indeed at first let us take it to be just that. The first surprising conclusion is that one would then be able to focus light from a point source with unit magnification by a simple flat interface between air and the fictitious material with n = -1 (other combinations would still focus, but with different magnifications). Draw it yourselves to see how it works!
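Here is that ray construction carried out numerically rather than on paper: with Snell's law n₁ sin θ₁ = n₂ sin θ₂ and n₂ = -1, every refracted ray is the mirror image of the incident ray about the interface normal, so all rays from a point source a height d above the flat interface re-cross the axis at a depth d inside the material.

```python
import numpy as np

d = 2.0            # source height above the flat interface (z = 0)
n1, n2 = 1.0, -1.0

for theta1 in np.radians([5, 15, 30, 45]):
    x_hit = d * np.tan(theta1)         # where this ray meets the interface
    sin_t2 = n1 * np.sin(theta1) / n2  # Snell's law with n2 = -1
    theta2 = np.arcsin(sin_t2)         # refracted angle (negative: mirrored)
    # Refracted ray: starts at (x_hit, 0), direction (sin theta2, cos theta2)
    t_axis = -x_hit / np.sin(theta2)   # ray parameter where it crosses x = 0
    z_focus = t_axis * np.cos(theta2)
    assert abs(z_focus - d) < 1e-9     # every ray refocuses at depth d
```

Unlike an ordinary lens, the crossing depth is exactly d for every launch angle, i.e. the flat interface focuses without spherical aberration, which is the geometric seed of Pendry's "perfect lens" argument.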
It turns out that negative n is actually possible, and it has already been implemented experimentally in the microwave regime. It is important to remember that n is a "phenomenological" parameter; in other words, it emerges from the solution of Maxwell's equations as the square root of the product of the relative dielectric permittivity ε and the magnetic permeability μ of the medium where light is propagating. In traditional materials, the quantities ε and μ themselves are positive, and as a result Maxwell's equations show that the fields E, B and the wave-vector k form a right-handed triad, as we saw in class. On the other hand, it can be shown that if ε and μ could both become negative, then E, B, k would form a left-handed triad, which phenomenologically is equivalent to negative n. In certain metals, such as Ag, ε can indeed become negative at certain frequencies; negative μ, and hence left-handedness, can be achieved in artificial, or engineered, materials, also known as "meta-materials," where one deliberately introduces current loops in the opposite direction of the right-handed magnetic field. Taking the effective negative n for granted, numerous interesting effects have been shown in simulation, e.g. ultra-fine (sub-wavelength) resolution in imaging systems, giant dispersion, and directional radiation from antennas. For example, the "perfect lens," a term coined by Pendry of Imperial College in London for a slab of negative index acting as a lens, can image sources with sub-wavelength spacing with contrast at least a factor of 5 better than the equivalent near-field system with positive n. These observations, together with recent experimental demonstrations in the microwave regime, have spawned tremendous interest in left-handed materials.

The goal of this project is to describe to the class the principles and properties of left-handedness and to comment on the promise as well as the practical limitations of realizing these materials in future optical systems.
Resources
Smith, David R., and Norman Kroll. "Negative Refractive Index in Left-Handed Materials." Physical Review Letters 85, no. 14 (October 2, 2000): 2933-2936.
Pendry, J. B. "Negative refraction makes a Perfect Lens." Physical Review Letters 85, no. 18 (October 30, 2000): 3966-3969.
Shelby, R. A., D. R. Smith, S. C. Nemat-Nasser, and S. Schultz. "Microwave Transmission through a Two-dimensional, Isotropic, Left-handed Metamaterial." Applied Physics Letters 78, no. 4 (January 22, 2001): 489-491.