Tuesday, August 24, 2010

Nice old optical technique

Traditional black and white photography uses chemicals that are sensitive to light, particularly silver halides. As these are sensitive to a broad range of wavelengths covering the whole visible spectrum, they react under any light, and so cannot distinguish between colours. Colour film (as opposed to digital) works by having different layers that are sensitive to different wavelengths - red, green and blue - so each layer reacts to a different part of the spectrum and the full-colour image can be reconstructed.

Early colour photography, however, had to use a different technique, as such sensitive dyes were not available until 1907 and did not become common until the 1920s and 30s. There are famous examples of colour photos from before this though, such as Sergei Mikhailovich Prokudin-Gorskii's amazing photos of the Russian Empire taken between 1909 and 1912. His technique was to take three photos in relatively quick succession, each through a red, green or blue filter; the three pictures were later recombined to make these images. You can see the time lapse in a few of them - for example the rippling of the waves in picture 14, movement in the doorway and to the left of the door, and (my favourite) the man on the right scratching his nose in 19. These are great pictures, which were technically quite difficult to achieve at the time.


Wednesday, July 21, 2010

Cool flexible OLED display

Ok, so it has been a while since the last post. This is very interesting though:

Samsung are on the verge of releasing a new plastic OLED display. The driving electronics will of course be on a separate board, but the entirety of the screen itself is made of thin plastic films and organic materials, and is very flexible. It's so flexible, in fact, that you can smack it with a hammer and it won't break... look at this:


Tuesday, April 27, 2010

The European Extremely Large Telescope; Adaptive Optics and Resolution Power

Europe has recently announced the construction of the world's largest optical telescope, the imaginatively named European Extremely Large Telescope (E-ELT)... and it is extremely large - though not as large as the proposed Overwhelmingly Large Telescope, which was scaled down to this project.

The primary mirror (main mirror that collects the light) will be 42m across. Using current technology, it is not really possible (or necessary in fact) to make a single mirror that large, and so this immense mirror will actually be made of many smaller hexagonal elements which will be shaped and connected together to make a single large mirror. The arrangement of the mirror elements will look something like this when complete.

But why have such a large mirror? What are the advantages? Well, one advantage of course is that it collects more light than a smaller mirror. That light is then condensed (via a couple of other mirrors) onto the detector. More light is good: it lets us see more distant and darker objects, because we collect more of the photons coming from them.

Another important issue is the resolving power of the mirror, which is set by what is known as the diffraction limit. If we have two objects next to one another - two lines, say, or two point sources of light like stars - they will subtend an angle at any receiver. For demonstration, I will choose the eye.

How close can those two objects be before the eye can no longer make them out as separate objects? The answer depends on two main factors: the wavelength of the light we are detecting the objects with, and the diameter of the aperture. The aperture for the eye is the pupil; for a single lens it is the lens itself; for this telescope it is the primary mirror; and for more complex optical structures like cameras and binoculars it depends on the structure itself, but roughly speaking it is the largest hole through which light can get all the way to the detector. There is a simple formula, the Rayleigh criterion, that tells us the resolution limit of an aperture: the smallest resolvable angle (in radians) is about 1.22 times the wavelength divided by the aperture diameter.

So, if we consider green light with a wavelength of about 510nm, and the diameter of this mirror, the smallest angle that the telescope can resolve will be just 0.00000086 degrees, or 0.003 arcseconds (an arcminute is 1/60th of a degree, and an arcsecond is 1/60th of an arcminute).

To give an example of this: if we built this telescope in London, it could resolve a 2cm marble at a distance of around 1400km - that's about the distance to Rome.
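As a back-of-envelope check of those numbers (a rough sketch using the Rayleigh criterion; the small-angle approximation is assumed throughout):

```python
import math

# Rayleigh criterion: smallest resolvable angle for a circular aperture.
def rayleigh_limit(wavelength_m, aperture_m):
    return 1.22 * wavelength_m / aperture_m

theta = rayleigh_limit(510e-9, 42.0)     # green light, 42 m primary mirror
theta_deg = math.degrees(theta)
theta_arcsec = theta_deg * 3600          # 3600 arcseconds per degree

# Distance at which a 2 cm marble subtends this angle (small-angle approximation)
distance_km = 0.02 / theta / 1000

print(f"limit: {theta_deg:.8f} degrees = {theta_arcsec:.4f} arcseconds")
print(f"a 2 cm marble is resolvable out to about {distance_km:.0f} km")
```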

So now we have an idea of why they are making this so big. Why are they building it in Chile?

There are plenty of reasons for this, and most of them are quite simple. The site has around 350 cloudless nights per year - clouds get in the way, so no clouds is good. The site is very dry, which means less water vapour to absorb the light you want to capture. The site is also high up and atmospherically stable. Again, the atmosphere can absorb lots of the light you want to capture, and the stability means that the stars will "twinkle" less.

Why do stars twinkle, and how do we get around this problem?

The air is not perfectly still, as we know - there is the wind we are familiar with on the ground, and winds high up too - and the atmosphere itself is not perfectly smooth, but fluctuates a little. Because of this there are small refractive index changes throughout the atmosphere. These redirect the light a little - sometimes spreading it, sometimes condensing it or sending it off to one side - and do so very randomly. It is this effect that causes the twinkle. From the perspective of optics, we can say that it distorts the "wavefront": a series of nice parallel lines charting the peak of each wave becomes increasingly bent and twisted as the light passes through the air.
So how can we get rid of this effect? We don't know in advance what the distortions will look like, and they change all the time, and very rapidly. There are two main approaches currently in use. One is to take lots and lots of short-exposure pictures and then select the clean ones (a technique known as lucky imaging). The other is to use adaptive optics.

Adaptive Optics Summary

In adaptive optics, the idea is to create a wavefront whose undistorted shape we know beforehand, measure the distortions to that wavefront, and then use a deformable ("bendy") mirror to reverse the effects of the distortion. In telescopes, this is mostly done by creating a laser guide star high in the atmosphere: a laser is used to excite sodium atoms in the upper atmosphere, creating an artificial star whose light is distorted in the same way as light from the real stars. The details of the wavefront are collected on a wavefront sensor, such as a Shack-Hartmann sensor, and sent to a computer, which calculates the deformation required to reverse the distortion. The deformable mirror is then adjusted, up to a thousand times a second, to remove the distortions.
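The feedback idea can be sketched in a few lines of toy code. This is only an illustration of the closed loop (made-up numbers, one "actuator" per array element, a frozen distortion), not how a real AO controller works:

```python
import random

# Toy adaptive-optics loop: measure the wavefront error, nudge the
# deformable mirror towards the opposite shape, and repeat.
n_actuators = 8
distortion = [random.uniform(-1.0, 1.0) for _ in range(n_actuators)]  # "atmosphere"
mirror = [0.0] * n_actuators                                          # flat to start
gain = 0.5   # apply only part of the correction each cycle, for stability

for cycle in range(20):
    residual = [d + m for d, m in zip(distortion, mirror)]  # what the sensor sees
    mirror = [m - gain * r for m, r in zip(mirror, residual)]

rms = (sum((d + m) ** 2 for d, m in zip(distortion, mirror)) / n_actuators) ** 0.5
print(f"residual wavefront error after 20 cycles: {rms:.6f}")
```

Each pass through the loop halves the remaining error, so after a couple of dozen cycles the residual is essentially zero - which is why a real system running at hundreds of updates per second can keep up with the atmosphere.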

The following image is a negative of this system in use. On the left is a star with an AO system turned off, and on the right, with it on.

As we can see, the star goes from a fuzzy blob to a distinct point of light.

Adaptive optics will be used in the E-ELT.

The Uses of the E-ELT

I will finish off by mentioning some of the applications of the E-ELT. Of course it will be used to provide spectacular images, of a level of detail unsurpassed by anything we have so far, but it should also allow us in principle to analyse the spectra of planets around nearby stars - to tell us the chemical and mineral details of the surfaces and atmospheres of those objects. It will also allow us to see some of the earliest objects formed in the universe, as well as telling us much about fundamental physics - from the values of the physical constants early in the universe, to the nature of dark matter and dark energy. With first light (the first capture of light by the telescope) estimated around 2016, this is certainly one to watch. Of course, no post about the E-ELT would be complete without a picture of what it will look like. I have also highlighted a little man (or woman!) in the picture, to give a sense of scale!

Quickfire Question: How do Fiber Optics work?

When you stick something in water - something like a pencil or a ruler is best, since they are straight - you can see the object appear to bend at the surface of the water. This is due to the difference in refractive index between the water and the air.

All materials have a refractive index, because of the way that they interact with light. The vacuum of free space has a refractive index "n" of exactly 1, and all normal materials (negative refractive index is something I can cover another time!) have a refractive index higher than 1. To give a few examples: for air, n is about 1.0003; for water, 1.330; for most ordinary glass, 1.51; and for diamond, 2.417.

In a previous post, I mentioned Snell's Law. This simple law relates the angles of incidence and refraction to the refractive indices of the two materials: n1 sin(θ1) = n2 sin(θ2).

For light passing from a low refractive index to a high one, Snell's law gives an answer at any angle of incidence - the light gets through. But what about the other way? If we try to apply Snell's law going from a high index to a low one, we find that beyond a certain angle the formula can't produce a result: it would require the sine of the refraction angle to be greater than 1. Beyond this very particular angle, known as the critical angle, light can no longer escape from the high index material into the low one, and instead reflects from the surface.
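We can find the critical angle directly from Snell's law: it is the incidence angle at which the refracted ray would just graze the surface, i.e. sin(θc) = n2/n1. A quick sketch, using the index values quoted above:

```python
import math

def critical_angle_deg(n_inside, n_outside):
    """Angle of incidence beyond which light cannot escape (requires n_inside > n_outside)."""
    return math.degrees(math.asin(n_outside / n_inside))

# Light trying to leave water or glass into air:
print(f"water to air: {critical_angle_deg(1.330, 1.0003):.1f} degrees")
print(f"glass to air: {critical_angle_deg(1.51, 1.0003):.1f} degrees")
```

Hit the water's surface from below at anything shallower than about 49 degrees from the vertical and the light simply cannot get out - which is exactly the mirror effect described next.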

This reflection is known as Total Internal Reflection. You can see total internal reflection when swimming underwater in a pool - look at the water's surface at a shallow angle, and it looks like a mirror.

This principle of total internal reflection is used in fiber optics to keep the light inside the fiber. A simple fiber optic is made of two materials - a core, with a high refractive index and a cladding with a low refractive index. Because of the TIR effect, light continuously reflects from the boundary, and is carried along the fiber.

There are a number of different sorts of optical fibre. Multimode fibres have cores that are wide compared to the wavelength of light, and as a result light can bounce along at a range of different angles (modes): some light may pass straight along the core, and some may bounce many times off the boundary. Because these paths have different lengths, a pulse of light spreads out as it travels. When the core is much narrower, we may have a single-mode (monomode) fibre, where the light can only pass one way through the core (the mathematics of this is more complicated), and so the pulse does not spread out (not due to reflections, anyway!). The kind of fibre described above is known as a step index fibre, because the refractive index jumps immediately from high in the core to low in the cladding. One may also have a graded index fibre, where the refractive index drops gradually towards the edge.
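For a step index fibre, the core and cladding indices also tell us how steeply a ray can enter the fibre from air and still be totally internally reflected - the fibre's numerical aperture. A small sketch with illustrative index values (not the spec of any particular fibre):

```python
import math

# Numerical aperture of a step-index fibre: NA = sqrt(n_core^2 - n_cladding^2).
# Rays entering from air within the acceptance cone asin(NA) will undergo
# total internal reflection at the core-cladding boundary.
n_core, n_cladding = 1.48, 1.46   # illustrative values only

na = math.sqrt(n_core**2 - n_cladding**2)
acceptance_half_angle = math.degrees(math.asin(na))

print(f"NA = {na:.3f}")
print(f"acceptance half-angle = {acceptance_half_angle:.1f} degrees")
```

Notice that even a tiny index step gives a usable acceptance cone - the two indices only need to differ by a percent or so.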

Fiber optics are used in a broad range of applications, from telecommunications to lighting and sensing, and they are commonly used for imaging in surgery. There are many other issues, complexities and types of fibre which build on the basic background introduced here.

Friday, April 16, 2010

Light and the Age of the Universe: George Gamow

In my discussions on the Cosmic Microwave Background, I realised I had made a horrible omission: a guy called George Gamow. While I do not wish to take any of the well deserved credit from the winners of the Nobel Prizes, Gamow is one of those names that was sadly lost in history. He had actually predicted the CMB, or something like it, back in 1948. Here is a nice little article about it.


Tuesday, April 13, 2010

Quickfire Question: How do LEDs work?

LEDs, or Light Emitting Diodes, are very common devices used in a wide variety of applications, from street signage and power indicators to the transmitters in remote controls and even LED torches. They are very efficient devices which, much like sodium lamps, convert most of the electrical energy passing through them into light, with very little lost as heat. But how do they work?

All LEDs are made from semiconductor materials - materials with conducting properties somewhere between insulators (like glass) and conductors (like metals). Semiconductors can be carefully constructed to perform a variety of jobs: diodes, which only allow current to pass in one direction; transistors, which either pass or block current depending on the voltage at a "gate"; solar cells; and much more elaborate structures, ranging from logic circuits all the way up to computer chips.

Like the previously mentioned diodes, LEDs only allow current to pass through in one direction, and when the current passes through, light is emitted. There are a couple of ways that LEDs can be constructed, I will concentrate on the simplest.

Semiconductors can be "doped" with other materials, which can either donate electrons (n-type semiconductors) or accept them (p-type semiconductors). The former have extra electrons, which flow through the semiconductor from the negative towards the positive terminal; the latter have "holes", which behave like positively charged electrons and flow from the positive towards the negative terminal. Where the two meet, an electron and a hole can recombine and release energy in the form of light.

By varying the dopants, we can manipulate the wavelength of light that is emitted. Other methods of varying the colour are more elaborate, and involve the use of quantum tunneling, different sorts of junctions, and even adding additional materials. As we can see, the spectrum of conventional LEDs tends to be very pure:

producing a very limited spread of colour in each LED. However, the addition of other chemicals, such as phosphors, can "down convert" high frequency light such as blue and re-emit it over a broader spread of wavelengths. This is a common method of producing white LEDs.
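The emitted wavelength is set mainly by the semiconductor's band gap: a recombining electron-hole pair releases roughly that much energy as a single photon, so wavelength = hc / E_gap. A rough sketch using approximate textbook band gaps (real LEDs use alloys tuned to shift these values):

```python
# Convert a semiconductor band gap into an approximate emission wavelength.
H = 6.626e-34   # Planck's constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # one electronvolt in joules

def emission_wavelength_nm(bandgap_ev):
    return H * C / (bandgap_ev * EV) * 1e9

print(f"GaAs (~1.42 eV): {emission_wavelength_nm(1.42):.0f} nm (infrared)")
print(f"GaP  (~2.26 eV): {emission_wavelength_nm(2.26):.0f} nm (green)")
print(f"GaN  (~3.4 eV):  {emission_wavelength_nm(3.4):.0f} nm (near ultraviolet)")
```

A handy shortcut hiding in there: wavelength in nanometres is roughly 1240 divided by the band gap in electronvolts.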

More recently another type of LED has been developed, known as the Organic LED, or OLED. In place of inorganic semiconductors based on elements such as indium and gallium, OLEDs use carbon based chemicals (hence "organic") that emit light. The semiconductor properties of these materials are similar, though the emission is somewhat different, having a much broader spectrum. Some of the details and issues surrounding OLEDs will be covered in a later post.

Monday, April 12, 2010

Basic Optics: The principles of imaging - lenses and pinholes

We are all familiar with imaging - everything we see results from the imaging of the world on to our retina. Cameras image the world onto a film or a CCD, usually through a lens. Projectors display images on a screen. But how and why does imaging work?

Imagine light either bouncing off, or being emitted from, an object. That light passes through a hole, and then on to a screen. How do we know whether an image will form? For a large hole, like the one in the following picture, the light from any point on the object on the right hand side (I have chosen a picture of Darwin) may land on several points on the screen. As a result, the image will appear bright (because plenty of light gets through the hole) but blurry (because the light from a single point can hit a large area of the screen).

The more we shrink the hole, the more the light from each point of the object is restricted on the screen - but the less light gets through. So the image becomes much more sharply defined, but also much darker.

Finally, if we introduce a lens into the larger hole, the light is bent so that (if the object and image are in the right places) all the light passing through the hole will land at the same point on the screen, and so we now have a bright object in good focus.

For a pinhole, it does not matter where the object and screen are - the image will always be in focus - but for a lens it does. There is a simple formula which tells us where the object and image will be, depending on the focal length of the lens. The focal length is the distance at which an object at infinity is focussed. So, for example, when you use a magnifying glass to focus the sun onto a piece of paper, it is the distance from the paper at which the spot is smallest and hottest. The formula relating the object and image positions is the thin lens equation: 1/S1 + 1/S2 = 1/f.

S1 and S2 are the object and image distances, and f is the focal length. It doesn't matter which way round they go, though the magnification will depend on the particular object and image distances.
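As a quick illustration (made-up distances, not any particular lens), we can plug some object distances into the thin lens formula 1/S1 + 1/S2 = 1/f and watch the image position and magnification change:

```python
# Thin lens equation: 1/S1 + 1/S2 = 1/f, rearranged for the image distance.
def image_distance(f, s1):
    """Image distance S2 for focal length f and object distance S1 (metres)."""
    return 1.0 / (1.0 / f - 1.0 / s1)

f = 0.10   # a 10 cm focal length lens
for s1 in (0.5, 0.3, 0.15):
    s2 = image_distance(f, s1)
    print(f"object at {s1*100:.0f} cm -> image at {s2*100:.1f} cm, "
          f"magnification {s2/s1:.2f}")
```

Note the symmetry: an object at 15 cm images to 30 cm, and an object at 30 cm images to 15 cm - which is exactly the "doesn't matter which way round" point above, with the magnification inverting between the two cases.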

This is a very simplified formula though, and relies on a number of assumptions. It depends on what is known as the paraxial approximation: all the rays of light must pass fairly close to the optical axis - a straight line passing out from the centre of the lens, perpendicular to it. If rays pass close to the edge of the lens, or at a steep angle to it, the image may be distorted, causing a number of optical aberrations (spherical aberration, coma, field curvature). The formula also ignores the fact that the refractive index differs for different wavelengths of light. In the same way that light passing through a prism or a raindrop is bent and split into different colours, light of different colours passing through a lens may be focussed in different places. This is called chromatic aberration, and may often be seen towards the edges of lenses or pictures.

Wednesday, March 3, 2010

Light and the Age of the Universe - The Discovery and Analysis of the CMB

Discovery of the Cosmic Microwave Background

The Cosmic Microwave Background was discovered pretty much by accident by Arno Penzias and Robert Wilson, who were working for Bell Laboratories, looking for signals from radio waves reflected from balloons. In the course of their experiments, they had to eliminate all noise sources such as radio broadcasts, and even a "white dielectric substance" left on the inside of the detector horn by a family of pigeons who had taken nest there. Once they had got rid of and accounted for every bit of noise they could, they noticed that there was a constant microwave hiss, from every direction, day and night - they had discovered the Cosmic Microwave Background.

The antenna where they made this discovery is now a national monument in the US:

At first they did not know what they had found, but when a friend told them about a then-unpublished paper by Jim Peebles discussing the possibility of finding a signal like theirs, and what it would mean, they began to realise the significance of their discovery. The papers by Peebles and his colleagues and the paper by Penzias and Wilson were published together in Astrophysical Journal Letters. Penzias and Wilson won the 1978 Nobel Prize for their work.

The Cosmic Background Explorer

It was thought from early on after the discovery, that there would be small anisotropies (differences depending on direction) in the CMB, but ground based measurements were not good enough to measure them. It was not until the COsmic Background Explorer (COBE) was launched in 1989 that these anisotropies were first observed.

These fluctuations were very small - about one part in 100,000 of the average temperature. The resolution was still relatively low, however, and so there was still much detail to be found. One additional important piece of evidence came out of COBE: the match between the black body curve predicted by the Big Bang model and the experimentally measured curve. The two matched precisely:

These results earned another Nobel Prize, but this time for the principal investigators on the COBE project; George Smoot and John Mather. The CMB wasn't the only thing that COBE was analysing however, and there were other important experiments and discoveries made. A good outline of the COBE satellite's other results can be found here.

The Wilkinson Microwave Anisotropy Probe (WMAP)

The next satellite to look at the CMB was WMAP, this time dedicated to the analysis of the CMB. After the success of COBE, WMAP was designed not only to view the CMB at higher resolution and sensitivity, but also to look at other features, such as its polarization, in order to give a better understanding of the early universe. There are a number of interesting results from WMAP, which will continue to operate until (currently) September 2010, and more details can be found here.

A brief summary of some of the WMAP results
  • The universe is 13.73 billion years old (the most accurate figure we have)
  • The universe is very flat (Euclidean)
  • Around 23% of the universe is dark matter.
  • The anisotropies appear to be random (though there are some hints of deviations from simple randomness which could give further clues into the early nature of the universe)

The Future Exploration of the CMB

This article has provided only a brief outline of the discovery and analysis of the Cosmic Microwave Background. There are a number of features that have not been discussed, such as the doppler shift, polarization and so on, and there is still much work to be done in understanding the details of the CMB. Although WMAP only has a few months of life left, the European Planck observatory started taking measurements in 2009 and is expected to begin releasing results in 2012.

Light and the Age of the Universe - the Cosmic Microwave Background

Our main window to understanding the universe is light and the electromagnetic spectrum. Trapped here on earth, there is very little of the universe that we can actually touch and test with our own hands, but light provides an amazing tool. The Cosmic Microwave Background is perhaps one of the best methods we have of finding the age of the universe.

All objects that are in thermal equilibrium - that is, the matter and EM radiation in the objects are at the same temperature - have what is known as a black body spectrum: EM radiation with properties that are a function of the temperature of that object only. That spectrum might be modified a little by atomic absorption and emission lines, but the fundamental black body spectrum remains. The spectrum looks like the curves on this graph:

Each curve represents a black body emitter at a particular temperature, shown in kelvin (roughly the temperature in Celsius plus 273, where 0 is absolute zero). The Sun - indeed every star - has a black body spectrum. In the case of the Sun, the surface (and hence black body) temperature is about 6000K, so its spectrum is not so dissimilar from the 5000K curve. You can see that the black body spectrum continues beyond the visible - indeed, the infrared part is responsible for the heat we feel from the sun. The Earth has a black body spectrum of about 278K (around 5 Celsius), which peaks in the infrared.
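The position of each peak follows Wien's displacement law: the peak wavelength is inversely proportional to the temperature. A quick sketch of the two examples above:

```python
# Wien's displacement law: peak wavelength = b / T.
WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength_nm(temperature_k):
    return WIEN_B / temperature_k * 1e9

print(f"Sun   (~6000 K): peaks at {peak_wavelength_nm(6000):.0f} nm (visible)")
print(f"Earth (~278 K):  peaks at {peak_wavelength_nm(278):.0f} nm (far infrared)")
```

The Sun's curve peaks in the middle of the visible band, while the Earth's peaks at around ten micrometres - some twenty times longer than anything our eyes can see.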

So what does this have to do with the age of the universe? When the universe was a mere 400,000 years old, about 13.7 billion years ago, everything was very much closer together (though space was expanding rapidly), and so the universe was much hotter than it is now. It was so hot, in fact, that there were no atoms - just a sea, or plasma, of hydrogen and helium nuclei (and a little lithium), electrons, electromagnetic radiation and other subatomic particles (earlier still there weren't even nuclei, but that's earlier than we are interested in here). The universe was too hot for the electrons to bind to the nuclei, so photons were constantly being absorbed and re-emitted by the various charged particles around, and the universe was in a state of equilibrium between matter and radiation. This means it had a black body spectrum.

Eventually, as the universe expanded and cooled, the electrons no longer had enough energy to keep escaping the nuclei, and they finally bound to them, forming hydrogen and helium atoms. There was still substantial interaction between matter and radiation, particularly in the form of scattering, such as Compton scattering and Thomson scattering. The universe continued to cool as it expanded, and eventually reached a temperature of around 4000K, at which point the scattering dropped off. The radiation became decoupled from the matter, the universe became transparent, and the shape of the black body spectrum remained imprinted on the light, which carried on travelling through the universe.

In the intervening billions of years, space itself has continued to stretch. Imagine drawing a wave on a balloon and blowing the balloon up: you will see the wavelength become longer and longer. The same thing happens to the radiation - as the very space of the universe expands, photons that initially had a short wavelength are stretched out longer and longer; so long, in fact, that the black body spectrum which corresponded to 4000K now corresponds to a temperature of just 2.725K - barely above absolute zero.
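Since a black body's peak wavelength scales as 1/T, the ratio of those two temperatures tells us directly how much the wavelengths have been stretched. A one-line estimate, taking the 4000K figure above at face value:

```python
# Stretch factor of space since the radiation decoupled: wavelengths
# scale as 1/T, so the factor is just the ratio of the temperatures.
T_decoupling = 4000.0   # kelvin, as quoted above
T_today = 2.725         # kelvin, the CMB temperature now

stretch = T_decoupling / T_today
print(f"wavelengths have been stretched by a factor of about {stretch:.0f}")
```

(Astronomers usually quote a stretch factor of about 1100, using a decoupling temperature nearer 3000K; either way, the light has been stretched over a thousandfold.)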

This temperature is the same in all directions, though there are tiny fluctuations, which resulted from small variations in the otherwise very uniform distribution of matter and energy in the early universe, as we can see in the Wilkinson Microwave Anisotropy Probe (WMAP) satellite image below. It is these tiny imperfections that seeded the collapse of the primordial matter into the stars and galaxies of the universe today.

By knowing the rate at which the universe is expanding - which we can measure from the red shift of distant stars, galaxies and quasars - and these initial temperatures (which we know from studying hydrogen and helium in the lab), we can deduce the age of the universe, in much the same way as we can work out how long a cup of water has been standing on a table if we know it was boiling when it was put there...

...13.7 billion years old.

Monday, March 1, 2010

Quickfire Question: How do incandescent (filament) bulbs work?

We are all familiar with incandescent bulbs, which until relatively recently were the most popular sort of bulb.

A voltage is placed across a metal filament held in an inert gas like argon, neon or nitrogen; the inert gas does not react with the hot filament, allowing the bulb to live longer. The filament has a high resistance to the current flowing through it, and this heats the filament, causing its atoms to vibrate. As the atoms vibrate, they radiate energy in the form of light.

An important point that can be made here, is that all vibrating atoms and molecules will radiate somewhere on the electromagnetic spectrum. The hotter they are, the faster they vibrate, and thus the higher the energy (and frequency) the photons are that they emit.

A problem with this sort of lamp is that it is very inefficient: because the light is generated by heating, large amounts of energy are lost as unwanted heat. Also, as the filament is heated it slowly evaporates over time and eventually breaks, leading to a relatively short lifetime. It is mainly for these reasons that there has been a move to energy saving bulbs, which both last longer and produce the same amount of light for less energy input.
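We can put a rough number on that inefficiency by integrating the black body (Planck) spectrum numerically and asking what fraction of the radiated power falls in the visible band. The 2700K filament temperature is a typical figure for tungsten bulbs, not something from this post, and the integration is a crude midpoint sum rather than anything precise:

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(lam, t):
    """Black body spectral radiance at wavelength lam (m), temperature t (K)."""
    x = H * C / (lam * K * t)
    if x > 700:          # avoid overflow; the contribution here is negligible
        return 0.0
    return (2 * H * C**2 / lam**5) / math.expm1(x)

def band_power(t, lo, hi, steps=2000):
    """Midpoint-rule integral of the Planck spectrum between lo and hi (m)."""
    dl = (hi - lo) / steps
    return sum(planck(lo + (i + 0.5) * dl, t) for i in range(steps)) * dl

def visible_fraction(t):
    total = band_power(t, 100e-9, 100e-6)     # effectively the whole spectrum
    visible = band_power(t, 380e-9, 750e-9)   # roughly the visible band
    return visible / total

print(f"filament at 2700 K: {visible_fraction(2700):.1%} of power is visible")
print(f"sun at 6000 K:      {visible_fraction(6000):.1%} of power is visible")
```

A filament glowing at a few thousand kelvin radiates the great majority of its power in the infrared - felt as heat, not seen as light - which is exactly the inefficiency described above.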

Sunday, February 28, 2010

Quickfire Question: Why are street lamps amber?

We are familiar with the colour of many street lamps, the amber glow of the sodium vapour:

But how do these lamps work, why are they the colour that they are, and why do we use them?

(1) How do Sodium Lamps Work

Inside the tube, there is a small amount of sodium metal, along with a little neon and argon gas. A voltage is applied across the tube, and this excites the outer electrons of the neon and argon atoms; this warms the sodium, and is also responsible for the faint red glow that you can see before the lamp turns on fully. Eventually the sodium vaporizes, and the discharge excites its outer electrons too. When those electrons spontaneously drop back to their ground state, they emit light at a very particular wavelength - the amber light that we see.

(2) Why are Sodium Lamps that colour?

Only the outermost electron is excited by the electrical discharge, and only with enough energy to jump up one energy level, so when it drops back to the ground state it can only emit one very pure colour of light. The wavelength of sodium light is 589.3nm (actually there is a little more going on, and there are two lines very close to one another, at 589.0 and 589.6nm). This is known as monochromatic light, and it is also the reason it is hard to make out the colours of objects lit by a sodium lamp alone.
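We can work out how big that single energy jump is from the wavelength, using E = hc/wavelength - a quick sketch:

```python
# Energy of a sodium-lamp photon: E = hc / wavelength.
H = 6.626e-34   # Planck's constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

wavelength = 589.3e-9
energy_ev = H * C / wavelength / EV
print(f"a 589.3 nm photon carries about {energy_ev:.2f} eV")
```

So the gap between sodium's ground state and that first excited level is a little over 2 electronvolts - and because every photon comes from the same jump, every photon has the same colour.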

(3) Why do we use sodium lamps?

There are a number of reasons, and I will outline a few. Sodium lamps are very efficient, because most of the energy is turned straight into light, unlike incandescent filament lamps, which turn a lot of the energy into heat. The sodium wavelength is also pretty close to the peak response of the human eye:

This graph (the eye's luminous efficiency curve) shows how strongly the eye responds (vertical axis) to different wavelengths of light (horizontal axis). We can see the sodium line is pretty close to the peak. That means that a lower level of light is needed to see things clearly.

They also contain no mercury or other dangerous metals, and so are easily disposed of, keeping the cost down even further.

Saturday, February 27, 2010

The Science of Optics: Polarization of light, Water, Insects and 3D Cinema.

As mentioned earlier, light is a transverse wave, but as it travels in 3D space (rather than on a surface, like water waves) it can oscillate in any direction perpendicular to the direction of motion. The particular polarization depends on the relationship between the electric and magnetic fields, but the simplest is linear polarization - the light oscillates in a single flat plane. When light interacts with materials, it is often useful to relate the polarization to the material or to the angle of reflection. Here we will consider reflection from a mirror.

The light travels in the direction of the black line, and bounces off the mirror (the grey shape) - so the light takes a path that stays in the plane of the blue shape. The blue wave oscillates perpendicularly to the blue shape and is known as perpendicular or s-polarization. The red wave oscillates in the plane of the blue shape and is known as parallel or p-polarization. The light can also oscillate in any direction perpendicular to the direction of travel, and so some component of the light can be perpendicular, and some can be parallel.

This is very important in reflections, because different amounts of the perpendicular and parallel components can be reflected from a surface. The glare reflecting from water at a shallow angle, for example, is almost all s-polarized (horizontally polarized) light. That means if we take a polarizing sheet or polarized glasses (remember, it has to be linear polarization - the 3D glasses from cinemas are usually circularly polarized, so those won't work) and hold them in front of the reflection, we can cut out almost all of the reflected light and see into the water.

This image shows two photos of a puddle - one without a polarizer and one with a polarizer. The polarizer removes all of the light reflected from the surface, and so the reflection of the building disappears.
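We can check how lopsided the reflection really is using the Fresnel equations, which give the reflected fraction of each polarization at a boundary. Here air to water (n = 1.33); the 80 degree incidence angle is just an example of a shallow glance, not a special value:

```python
import math

def fresnel(n1, n2, incidence_deg):
    """Reflected power fractions (Rs, Rp) at a boundary from medium n1 to n2."""
    ti = math.radians(incidence_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)      # refraction angle, via Snell's law
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n1 * math.cos(tt) - n2 * math.cos(ti)) / (n1 * math.cos(tt) + n2 * math.cos(ti))
    return rs**2, rp**2

rs, rp = fresnel(1.0, 1.33, 80.0)   # 80 degrees from vertical: a shallow glance
print(f"s-polarized reflectance: {rs:.0%}")   # strongly reflected
print(f"p-polarized reflectance: {rp:.0%}")   # mostly transmitted

brewster = math.degrees(math.atan(1.33))      # the angle at which Rp drops to zero
print(f"Brewster's angle for water: {brewster:.1f} degrees")
```

At shallow angles the s (horizontal) component dominates the glare, which is why polarized sunglasses are made to block horizontally polarized light; and at Brewster's angle (about 53 degrees for water) the p component is not reflected at all.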

Interestingly, when locusts are swarming, they avoid areas of ground where there are large amounts of horizontally polarized light, because that means the light is reflected from water, meaning they avoid lakes and only land where there is food. You can read more about that here.

Some scattered light is also polarized, particularly light that is Rayleigh scattered. Rayleigh scattering occurs when the object that the light scatters from is very much smaller than the wavelength of light. Rayleigh scattering is stronger for shorter (bluer) wavelengths of light than for longer (redder) wavelengths. A good example of this is the scattering of sunlight that makes the sky blue.

As sunlight passes through the atmosphere, more of the blue light is scattered than the longer wavelengths, and so the sky appears blue. Just like the reflection from the water, this light is also partially polarized (though not totally, because multiple scattering events mix the polarizations up a bit). The polarization of the sky is in a direction tangential to a circle drawn around the sun.

As a result of this, insects which can detect the polarization of the light can tell where the sun is in the sky, even on cloudy days, and without being able to see the sun or shadows. Since this polarization follows the sun as it moves through the sky, this allows insects like bees to find the same patch of flowers even as the day goes on.

Circular Polarization and 3D Cinemas

So far I have described linear polarization, but light can also be circularly polarized. If we imagine some light traveling in the x direction, oscillating at an angle between the y and z direction, we can project its components in the y and z direction like this:

As we can see, they are in phase. This means they are doing the same thing i.e. they are both maximum at the same time, zero at the same time and minimum at the same time. But what happens if they are out of phase?

When we add them together, we can see that the electric field now rotates around the x-direction. This is known as circularly polarized light. The light can either spin clockwise as it moves, or counter-clockwise. Just like with the linear polarizer, we can have polarizers that let through only one circular polarization of light and block the other, and this is the technique that some 3D cinemas use - One lens blocks light that is clockwise polarized, and one lens blocks light that is counterclockwise polarized. This means that different images can be sent to each eye, and then your brain can make a 3D image from these.
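We can sketch this numerically: take two perpendicular components of equal amplitude, a quarter of a cycle out of phase, and watch the direction of the total field as time advances (a toy calculation, not tied to any particular light source):

```python
import math

# Two perpendicular field components of equal amplitude, 90 degrees
# (a quarter of a cycle) out of phase: Ey = cos(wt), Ez = sin(wt).
# The tip of the total field then traces a circle around the x axis.
for wt_deg in (0, 45, 90, 135, 180):
    wt = math.radians(wt_deg)
    ey, ez = math.cos(wt), math.sin(wt)
    angle = math.degrees(math.atan2(ez, ey))   # direction of the field
    magnitude = math.hypot(ey, ez)             # stays at 1: it just rotates
    print(f"wt={wt_deg:3d} deg  field angle={angle:6.1f} deg  |E|={magnitude:.3f}")
```

The magnitude stays fixed while the direction sweeps round - exactly the rotating field described above.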

Circular polarization is used rather than linear polarization, because if one image was projected using horizontally polarized light, and the other using vertically polarized light, the glasses would have to be perfectly oriented all the time, or you could keep picking up a bit of the wrong image in your eyes, making you see a double image (like you see if you take the glasses off). Circularly polarized light is not affected in this way. Note that only some 3D cinemas use this technique - others have switching glasses, that very rapidly block and unblock the eye, allowing your eye to see alternate images.

Optics and Life: Strange Sight - The world in Ultra Violet

We are all familiar with rainbows, showing us the full spectrum of colour that we can see - red, orange, yellow, green, blue, indigo and violet - but the electromagnetic spectrum continues beyond both sides of the rainbow. Red has the longest wavelength we can see (around 700nm), and at longer wavelengths still we have infra-red (which is pretty much responsible for the radiated heat you feel from a fire or the Sun), microwaves and, longest of all, radio waves. Beyond violet we have ultra violet (UV), x-rays and gamma rays.

As you can see, the actual bit of the electromagnetic spectrum that we can see is very narrow. What would it be like if we could see beyond our limited range?

Well as a matter of fact, many organisms can. Indeed it is often an essential part of their lives. Pollinating insects such as bees can see into the ultra violet, and it is for visibility to bees that flowers have co-evolved their colours (along with the insects' ability to discern them). But you might ask - if bees can see into the UV, then what do flowers look like to them? Well, often they are very different indeed. Here are a few examples:

This is the common dandelion - on the left is the normal visible light image, and on the right is the UV image. This is colour shifted so we can see it, but it nevertheless shows us that there is a strong two-tone pattern, with the bright part in the middle of the flower, telling the bees where the nectar is.

This one is an evening primrose. Again yellow to us, but the insects can see lines, almost like the landing lights on a runway, pointing to the pollen and nectar in the centre.

UV photography does require special equipment. Firstly, you need to be able to cut out the visible light using filters, and then you need detectors that are capable of imaging the UV light, and you also need lenses that can focus the light. More information can be found here:


Other organisms, including some fish, can see into the infra-red. This is particularly useful in murky water, where longer wavelengths can penetrate relatively well, and so the fish can see further.

Some animals - again, many bees and other insects - can even distinguish different polarizations of light. This is particularly useful, as it allows them to tell what direction they are traveling in, and possibly even to see predators underwater.

Wednesday, February 10, 2010


We are all pretty familiar with lasers these days, from laser pens to the sorts of lasers that evil masterminds use to cut British secret agents in half (a famous example being James Bond in Goldfinger), but what are they exactly?

Laser is actually an acronym, and it stands for Light Amplification by Stimulated Emission of Radiation. Let's go through the terms. Light... well we know about that. Amplification - makes things brighter - easy so far. Radiation - another word for the electromagnetic spectrum, which includes light, so again, no problem. But what about Stimulated Emission? What does that mean? To explain this, I will start off with spontaneous emission in atoms. This also works with molecules, but atoms are a little easier to explain.

Normally atoms rest in their ground state - all the electrons in the atom sit as low as they can be. The energy levels in atoms are limited, so if we imagine them as shelves, only two electrons can go on the bottom shelf, eight on the next, eighteen on the next and so on, with as many electrons in the atom as there are protons. Now when an atom becomes excited, either by being heated or by absorbing a photon, one or more of these electrons can jump on to a higher shelf (I will refer to the difference in energy between the lower and upper level as the energy gap). Left on its own, the atom may only stay in this state for a limited length of time, after which the electron will drop down again to the ground state, emitting a photon along the way. The energy of the photon is the same as the energy gap. The important thing here though is that length of time - it is only an average, and is quite random. The electron spontaneously decays to the ground state, without any influence, and the emission of the photon in this case is called spontaneous emission.

Now imagine that we have our atom in its excited state, just like before, only this time, the atom is hit by some electromagnetic radiation of the same energy as this energy gap. This can jiggle the atom and force the electron to decay back to its ground state, so now we have two photons which can then go on to hit a couple more excited atoms, and we have four photons and so on. This kind of forced decay of electrons into their ground state is called stimulated emission. The stimulated emission is usually seeded by some spontaneous emission occurring in the laser.

There are limits to this of course; for one we need to have lots of excited atoms, and once we run out of excited atoms then we can't have any more stimulated emission. To get the excited atoms in the first place we have to dump lots of energy into the lasing material, and we do this in a technique known as pumping. There are lots of ways to pump a laser, but they all amount to pretty much the same thing - dumping lots of energy into the laser material, so that when photons pass through the cavity, they can stimulate the emission of more photons. The lasing medium itself has to be something that we can excite relatively easily, and there are lots of different materials such as ruby, special glasses (like Yttrium Aluminium Garnet), dye lasers and even gas lasers like Helium Neon (HeNe) lasers and Argon Ion lasers. Probably the most familiar sorts of lasers to us now are semiconductor lasers, used everywhere from DVD players to laser pointers.
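To get a feel for this balance between pumping, stimulated emission and running out of excited atoms, here is a toy rate-equation sketch (all of the numbers - pump rate, gain, loss - are made up purely for illustration; this is a cartoon of a laser, not a design for one):

```python
# A toy rate-equation sketch of lasing. N = number of excited atoms,
# phi = number of photons in the cavity. All parameters are arbitrary,
# chosen only to show the behaviour, not to model any real laser.
pump, gain, spont, loss = 1000.0, 0.01, 0.01, 0.5
N, phi, dt = 0.0, 1.0, 0.001   # seed the cavity with one spontaneous photon

for _ in range(20000):
    dN   = pump - gain * N * phi - spont * N   # pumping vs emission
    dphi = gain * N * phi - loss * phi         # stimulated gain vs mirror loss
    N   += dN * dt
    phi += dphi * dt

# In steady state the stimulated gain must balance the cavity loss,
# which pins the excited-atom population at N = loss / gain.
print(f"Steady-state excited atoms: {N:.1f} (expected {loss / gain:.1f})")
print(f"Photons in cavity: {phi:.0f}")
```

Notice that pumping harder does not change the steady-state number of excited atoms - it just means more photons circulating in the cavity.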

A special feature of this light is that it is coherent. If we remember the previous post about the wave properties of light, coherent means that all the photons of one frequency are oscillating in step with one another, so they always constructively interfere. This is unlike light from a normal fluorescent or incandescent bulb, which emits incoherent light (by spontaneous emission). It is this coherence, and the tightly collimated beam it allows, that makes lasers so dangerous, and why you have to be extremely careful when using a 4W laser like an Argon Ion laser (you have to wear special glasses, and even stray reflections can burn your skin), even though the actual power it emits is far less than that of a 60W bulb.

Laser Cavities

To make the laser more useful and to form a stronger pulse or beam, we can take our lasing material and stick mirrors on the ends. One of the mirrors reflects all the light, and one of them reflects almost all of the light - usually something like 99%. Now the light can bounce backwards and forwards in the cavity. These mirrors are often specially shaped to make the beam as stable as possible. You do have to be careful here though: if the amount of energy in the cavity gets too high - the light gets too bright - then you can start to get odd effects happening in the cavity, such as self-focusing, which can easily blow a hole in the lasing material. For glass lasers of course this will break the laser and the lasing material will need replacing, and this presents a problem for high powered lasers.

Hopefully this has given you an idea about the basics of lasers. I haven't delved into any of the mathematics here, but you may be unsurprised to know that Einstein was a pivotal figure here once again, as he was the one who determined that stimulated emission had to occur.

Tuesday, February 9, 2010

The Science of Optics: What is a Photon?

Scientists gradually developed a better understanding of how light works - how it propagates and what effects it has - but one of the biggest mysteries has remained: what is it exactly? Is it a wave? Is it a particle? It turns out that it is in some ways neither, and in some ways both, and now I will explain how we know this.

The Wavelike properties of light

The wavelike properties of light are just like water waves - although they are not attached to a surface like water. They are transverse waves, which means the oscillation occurs at right angles to the direction the wave travels in, like in water, where the wave oscillates up and down, but the wave appears to travel along the surface. This is unlike sound waves which are longitudinal waves. In these sorts of waves, the oscillation occurs in the direction of the wave movement, like along a slinky spring.

How do we know that light is a wave? Well to test this, we can use a property known as interference. If we have two sources of waves that are correlated with one another - that is they are both of the same frequency and start off in phase, then there are a number of experiments that we can do. The simplest of these experiments is the Young's Double Slit experiment.

In this we have two slits that act as the sources and we shine light through them. The light is then projected from there on to a screen and we observe the pattern on the screen. Where the path length from each slit is the same, the light arrives in phase and we get constructive interference - the two waves add up and we get a bright line. Where the path lengths differ by half a wavelength (with water waves, this would be where a wave peak meets a wave trough) we get destructive interference and we get a dark line. If we know a few facts like the distance from the screen to the slits, the distance between the slits and the distance between the bright fringes, then we can do a bit of trigonometry and work out the wavelength. A more detailed explanation can be found on wikipedia's page on the Double-Slit experiment.
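With the small-angle approximation, the bright fringes are evenly spaced by wavelength times screen distance divided by slit separation, so the trigonometry boils down to one line (the wavelength and distances here are made-up but plausible numbers for a bench-top experiment):

```python
# Fringe spacing on the screen: dy = lambda * L / d  (small-angle approx.)
# Illustrative numbers: green laser, slits 0.2 mm apart, screen 1 m away.
wavelength  = 532e-9   # m
slit_sep    = 0.2e-3   # m
screen_dist = 1.0      # m

fringe_spacing = wavelength * screen_dist / slit_sep
print(f"Bright fringes every {fringe_spacing * 1e3:.2f} mm")

# Rearranged, measuring the fringe spacing gives back the wavelength:
measured = fringe_spacing * slit_sep / screen_dist
print(f"Recovered wavelength: {measured * 1e9:.0f} nm")
```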

Interference can only occur where we have waves, so we know that light is a wave.

The Particulate nature of light

If light is a wave, then as the wave expands out into space, the energy will be spread ever thinner along with it, because energy is conserved. The wavelength however will not change - it is the amplitude that shrinks. For a water wave, the energy of the wave is related to its amplitude - the bigger the amplitude, the bigger the wave and the more energy. Just look out at sea - you need a pretty big wave with a lot of energy to throw an oil tanker around, but yachts can be thrown around by much smaller waves.

If we shine light onto a metal, then under certain conditions, electrons are thrown off the metal. This is because the electrons absorb the light, and if they have enough energy to escape the surface, they can jump off, in a sense like boiling a kettle where if the water has enough energy, it can leave the liquid and become a gas. This is called the photoelectric effect. Investigations into the conditions of the photoelectric effect threw up a rather strange phenomenon, and eventually it was Einstein who explained it.

It was noticed that if we shine really intense light on to a metal, and then measure the energy of the electrons coming off it, the energy of these electrons was the same as if we shone much dimmer light of the same frequency. The intensity is like the amplitude I mentioned earlier, so obviously this is pretty weird. Imagine our yacht going over a wave: for a really big wave, our unfortunate captain might be flung overboard, while for a little wave he can stay on the deck sipping his tea and enjoying the cruise. For electrons though, this isn't what happened at all. Regardless of the amplitude, the electrons were all flung from the surface with the same energy. What did change was the number of electrons that were thrown from the surface.
When the frequency of the light was changed, it was found that the higher the frequency, the more energy the electrons had when they were flung off. If the frequency was below a certain value, then no electrons would be thrown off at all, regardless of how bright the light was.

Einstein was the first to explain this in 1905, in his Nobel Prize winning paper "On a Heuristic Viewpoint Concerning the Production and Transformation of Light". The idea he presented was that light came in discrete packets that he called "light quanta". The amplitude of light related to the number of these quanta, but the energy of each quantum was related to its frequency. An electron could absorb one of these quanta, and if that gave it enough energy to escape the attraction of the surface then it could, with the remaining energy going into the kinetic energy of the electron.

This explains the results perfectly - increased amplitude, or numbers of these quanta results in more electrons of the same energy being thrown off the surface, and increased frequency results in the electrons having more energy.
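Einstein's relation is simple enough to check with a few lines of arithmetic (a sketch; the work function used for sodium is an approximate textbook value, and the wavelengths are just examples):

```python
# Einstein's photoelectric relation: kinetic energy of the ejected electron
#   E_k = h*f - phi   (phi = work function of the metal)
h = 6.626e-34         # Planck's constant, J s
e = 1.602e-19         # J per eV
work_function = 2.28  # eV, approximate value for sodium

def electron_energy_eV(wavelength_nm):
    f = 3e8 / (wavelength_nm * 1e-9)     # frequency of the light
    ek = h * f / e - work_function       # Einstein's equation, in eV
    return max(ek, 0.0)  # below threshold, no electrons come off at all

for wl in (650, 400, 250):   # red, violet, ultra violet
    print(f"{wl} nm light -> electrons with {electron_energy_eV(wl):.2f} eV")
```

Red light, however bright, is below sodium's threshold frequency and ejects nothing; violet and UV light eject electrons whose energy grows with frequency - exactly the observations described above.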

So we now know that light consists of quanta, or particles.

But hold on, we already said that light was a wave, and we demonstrated it with experiments. Now we are saying that light is a particle, and also we demonstrated it with experiments. Scientifically we can alter the theory, but we can't just toss out facts we don't like, so what is going on here? Enter Quantum Mechanics.

In Quantum Mechanics, things may have both wavelike and particulate properties. It doesn't really make sense to ask whether something is a wave or a particle. Those after all are just descriptions of things that we are familiar with, that have certain sets of properties. Quantum Mechanics shows us that all things have properties that are both wavelike and particle-like. I'll stick with photons here, since that's what I'm talking about, and it's what this blog is all about, but the best way to describe this is that photons propagate in a manner that is wavelike (they spread out over space), but interact in a way that is particle-like (they interact discretely). This of course raises loads more interesting questions, like "well where is the photon if it is spread out all over the place but only interacts once, and in one place?", but for now, this will have to do. If you do want to confuse people though, if someone happens to ask "what is a photon?", a good concise description would be "a quantized excitation of the electromagnetic field" - I will get on to the electromagnetic field another time.

A much more in-depth explanation of the photon and its history can be found in W. E. Lamb's paper "Anti-photon", Applied Physics B, vol. 60, pp. 77-84 (1995).

Friday, January 15, 2010

The Ancient History of Optics

People have puzzled and pondered over how we see for many thousands of years, though a more complete understanding of optics was not truly available to us until more recent centuries, through the work of scientists like Newton, Young, Maxwell and Dirac. Some of the earliest known writings on optics date back to the Ancient Greeks, and the majority of the earliest ideas were largely speculation. In the fifth century BC for example, the Greek philosopher Empedocles hypothesized that the eye contained fire, which shone out from the eye illuminating objects so that they could be seen. Of course this raises many questions, such as "why can't we see at night then?" and "why don't objects appear incredibly bright when several people look at them?" - for the former, Empedocles thought that there may be some interaction between the rays from the eyes and rays from a source like the Sun or a candle. Unfortunately the concept of Occam's Razor (entia non sunt multiplicanda praeter necessitatem - entities must not be multiplied beyond necessity) wasn't devised until the 14th century, or Empedocles' contemporaries might have suggested doing away with the eye-beams and just keeping the rays from bright objects like the Sun. Still, it was relatively early days, and there were many great thinkers to come who would apply their minds to the problem of light and vision.

To backtrack a little: while there was no theoretical understanding of light and optics, people had still made practical use of optical components and of light itself. One of the earliest known lenses for example is the Nimrud lens, shown here - a piece of shaped glass some 3000 years old, found in the remains of the ancient city of Nimrud, which lies within modern-day Iraq. This lens was discovered by Austen Henry Layard in the mid 19th century, and may have been used as a magnifying lens, either for looking at objects or for starting fires. Many similar lenses exist throughout the ancient Greek, Roman, Babylonian and Egyptian cultures, though these tended to be carved from crystal, or to be glass spheres filled with water.

Back on to the theory. A couple of hundred years after Empedocles speculated on how vision worked, Euclid wrote Optics, one of the first texts to study the geometry of optical systems in mathematical detail. Ptolemy, this time a Roman citizen (though possibly of Greek ancestry) who lived in the first and second centuries AD, extended this work further. His writings are considered some of the most important on optics before Newton, although they survive only as translations into Arabic - the originals having been lost. In them, he introduces many of the important properties of optical systems, talking about light, refraction, reflection and colour (all things I will get on to later).

Not all of these early studies were limited to the Greek and Roman empires though; much important early work was also carried out by Arabic scholars such as Ibn Sahl (10th century AD), who discovered the law of refraction (now known as Snell's law), and Ibn Al-Haytham (10th-11th century AD), who did away with Empedocles' rays from the eyes and more carefully defined what the rays of light were.

Things lay relatively quiet on the optics front then, until the Renaissance, when scientists such as Johannes Kepler and Willebrord Snellius began investigating the mathematical and physical behaviour of light. That though, will have to wait until later.

Let there be Light

Hi and thanks for taking a look at my blog. I'm an optical physicist: I studied optics for several years as a lowly undergraduate and Masters student at Imperial College London, before completing a PhD in Organic LED technology, and I now work in Biomedical Sensors. The aim of this blog is to be a combination of updates on the latest news in optics, covering a wide variety of areas (just whatever catches my eye at the time), and explanations of the various principles of optics. It's intended for a lay audience, but I may get a bit technical at times - this is a good opportunity for you to learn!

I hope after reading some of these articles, you can learn something new, and understand a bit more about the Light side of Science.