Atoms From Space: The Use and Abuse of Satellite Imagery


MALE SPEAKER: Our guest today
is Phil Christensen. He’s a professor of geology at
Arizona State University, and my former boss. He runs a research center that
does daily operations for several instruments on board
Mars spacecraft, and he’s just sort of an all-around
smart guy. So he’s here to talk to us today
about remote sensing. PHILLIP CHRISTENSEN:
Greetings. It’s a pleasure to be here. As Michael said, I’m at ASU. I’m a geologist initially, then
became a geophysicist, got interested in Mars. And, unfortunately, to date have had to become interested in remote sensing,
because that’s the only way we have of studying Mars. So I’ve spent most of my last
20 years or so building a variety of instruments. I thought I’d start off by sort
of summarizing the last 20 years of my life– that’s sort of a frightening
statement to say these four machines summarize what you’ve
done for the last 20 years. But these are instruments about yea big, mostly infrared, and we send all of them to Mars. So what I’d like to do today is
talk a little bit about just a quick introduction to sort of
the physics of remote sensing so we’re all on the same point,
and then show a few examples, and then just talk
about some applications. I’m reasonably well-aware of
what you guys are doing in terms of remote sensing
with visible, and Google Earth, et cetera. What I want to try to do
today, at least at an introductory level, is discuss
some of the other aspects of remote sensing: other
wavelengths, the importance of time, et cetera, to
see what other information one could get. And if nothing more, just sort
of set the stage a little bit to get you thinking about, OK,
maybe we could do that for the earth, or maybe there might be
something useful there, or maybe there might be an end
result that could be incorporated into the things
that you’re doing. Pardon me, but not knowing the
audience really well, I thought I’d just give a little
bit of an overview of what I think remote sensing is,
and some of the key physics behind it. And just to summarize, in the
simplest case, remote sensing is just using images. I mean, everyone in this room
has been doing remote sensing since you were born. Your eyes are probably the
single best remote sensing instruments ever designed. Your ears aren’t bad either. And so in the simplest case,
we are collecting visible light, spatial patterns,
temporal variability, and using that information to
construct a three-dimensional model of the world around us. The next level is to make full use of the electromagnetic spectrum. Our eyes evolved to focus on
the light put out from the sun, but there’s a whole
spectrum of electromagnetic energy that goes
beyond visible. I want to talk a little
bit about that. And then to me what remote
sensing really is about is extracting quantitative
information from all of that data. Not just the patterns that
we see, but quantitative, compositional, physical
information about the nature of the surface, whatever
planet it’s on. Just going back to high school
real quick, I’ll remind everybody: visible light is at about a half a micron. Today I’m mostly going to be talking about near infrared and thermal infrared, which have
wavelengths of about 10 microns, but obviously you can
do everything from gamma ray and x-ray remote sensing, all
the way out into radar and microwave. So there’s this– whatever that is– nine orders of magnitude of
variation in the wavelength of light, and to say that we use,
sort of, from 0.3 to 0.7 microns out of that nine orders
of magnitude, it’s a pretty limited piece of the
electromagnetic spectrum that most of us use. And I’ll be talking a little bit
today about both reflected and emitted light. Again, most visible light
sensors use the sun as a source, and they look at the
reflected or scattered light that comes off of surfaces. We can have other sources
of energy. We can illuminate the scene with
microwaves or radar, and we can certainly look at just
energy that’s emitted. Every object which is
not at absolute zero is emitting energy. All of you are emitting a lot
of energy right now, and we can sense that and make use of
that emitted part of the spectrum as well. I think I’ll get through the
word slides here pretty quick, and then get down to looking
at some images. Just again, as a quick review,
remote sensing is typically broken up into two
broad categories. One is active remote sensing,
where you’re actually providing a source. That source traditionally has
been radar or lidar, so you are illuminating the scene. That has some real advantages
because you can control the wavelength. You can control the illumination
angle. You can control how much
power you put out. You can control both the
observing and illumination geometries. You can work when it’s
night, you can work when it’s cloud covered. So active systems have
a lot of benefits. Obviously they take
a lot more energy. You have to actually illuminate the scene yourself. The other part of remote sensing
is the passive side, and this has traditionally been
one of the largest parts of remote sensing. I think you can break it up,
again, into three basic categories where the physics
of what’s going on is fundamentally different. You can look at very high-energy
interactions, and that would include things like
gamma rays and x-rays. Here on the earth, we’re
shielded by our atmosphere. Not very many gamma rays and
x-rays get to the ground. But on airless or nearly airless bodies like Mars,
there’s a lot of cosmic rays, a lot of high-energy
sources that get to the surface, excite atoms at the nuclear level, and generate gamma rays and x-rays, which
we then observe. We can tell, literally, what
elements are on the surface by looking at the spectrum of
gamma rays that come off. So there, the interaction is
going on at the nuclear level. What I’ll call moderate
energy is ultraviolet and visible photons. Those have enough energy to
excite or interact with the electrons that are surrounding
the nucleus. So slightly lower energy, but
there, as we’ll see in a minute, we’re actually
exciting electrons to different energy levels. And then the final one is sort
of the low energy, infrared, and even out into the microwave.
Here the photon barely has enough
energy to cause these bound atoms within a structure
to vibrate. So each one of those then, based
on the energy of the electromagnetic wave that’s
hitting the material, interacts at a different level. And each one of these– x-ray,
UV vis, infrared– is providing a unique piece of
information about the atomic structure of the material that
it’s interacting with. And I’ll spend a couple minutes
talking about that. OK, some basic physics
to remind everyone. Light, of course, is an
oscillating electromagnetic wave that has a wavelength,
a velocity– the velocity of light– and a frequency. And those are very
closely related. Velocity equals wavelength
times frequency, OK? And frequency, as we’ll see
in a minute, is important. How the light is going to interact with the material depends a lot on its frequency. I think one of the most useful
equations to keep in mind is E = hν. The energy that a photon or a
wave contains is directly proportional to its frequency. So that gets back to what I was
just saying in the last point, that short wave,
high-frequency photons carry a lot of energy. Long wave, low-frequency
photons carry much less energy. And therefore, they interact at the atomic level very differently.
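To make E = hν concrete, here is a minimal Python sketch; the example wavelengths are illustrative assumptions chosen only for comparison, not values from the talk.

```python
# Minimal sketch: photon energy E = h * nu, with nu = c / wavelength.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy in joules of a single photon at the given wavelength."""
    nu = c / wavelength_m   # frequency, Hz
    return h * nu           # E = h * nu

# Illustrative wavelengths (assumed for comparison only):
for name, wl in [("X-ray", 1e-9), ("visible", 0.5e-6), ("thermal IR", 10e-6)]:
    print(f"{name:10s} {wl:.1e} m -> {photon_energy(wl):.2e} J")
```

Short-wavelength photons come out carrying orders of magnitude more energy than thermal-infrared ones, which is exactly why the three regimes interact with matter so differently.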
The other thing to keep in mind is that electromagnetic waves, as the name implies, are
oscillating electric and magnetic fields. And so if light is an
oscillating electric field, and I interact with something
that has charged particles, either electrons or nuclei,
that electric field can interact with those
charged particles. And it’s the nature of that
interaction, then, that causes light to be absorbed. It’s the reason why
things have color. It’s the reason why water heats
up in a microwave. It’s the process by which light
interacts with material. It’s the fact that I’ve got
oscillating electric fields interacting with charged
particles, and those charged particles are going to move when
they’re bathed by that electric field. You convert the energy in the
wave into the kinetic motions of the material. OK, just a couple more
points here. Similarly, at the same time that
these electric fields are oscillating, the atoms in any
crystal structure are also oscillating. They’re bound in there by the
forces that are holding those atoms together, but they’re
certainly not balls on sticks like you see in a
chemistry class. A much better model would
be balls on springs. So I’ve always been trying to
find somebody who would make a
molecule where you have these little masses, and they are
bound together by springs. All those masses are free
to move around, OK? And just like in any system
of masses and springs, the motions of that system are
going to have resonant frequencies. They are going to vary with
the mass of the atoms, and they’re going to vary with the
strength of the bonds. So in any given crystal
structure, you’ve got these masses and they’re held together
by these bonds. Every one of those pairs of
atoms has a specific resonant or harmonic frequency that it’s
going to naturally want to vibrate at, OK? And that’s then sort of the key
to how light is going to interact with this material. Because what happens then is
if I have this oscillating electromagnetic wave that’s
coming along, if the frequency of this oscillation– say, some frequency, nu 2– matches this
harmonic resonance frequency within the material, then this
electric field will excite these charged particles, cause
them to move, and in the process, reduce or subtract
energy from that wave so that when the wave comes out
the other side, it’s reduced in amplitude. So this is just a fancy way of
saying, this light is absorbed by this material. But the thing that makes it
important here is if I have a wave that’s at some other
frequency, that’s different from this resonant frequency
of the material, then that wave tends to pass through
the material more or less unchanged. There isn’t a strong
interaction, and so the light can come out. So in this particular process,
then, you can imagine that if I took a spectrum of white
light and illuminated a surface, the waves that come
out the other side– the spectrum that comes
out the other side– I’m going to be missing those
frequencies that correspond to these natural vibrational frequencies within the crystal. Let me come back to that. So that’s two pieces of
basic background. The third piece: I want to focus now in particular on infrared spectroscopy. I want to talk about
the emitted part of remote sensing. And to do that, we need to talk about the source function. What energy is being emitted
by a material? I’m sure you’re all, at least
at some level, familiar with the concept of a black
body curve. Planck, back in the early
1900’s, derived this equation from first principles of physics
using the quantum nature of the energy structure
of materials, and was able to predict, then, what
we observe. That is that the amount of
energy emitted by material varies with wavelength. So here, I’m showing wavelength
in microns and temperatures– 0 degrees centigrade, 20
degrees centigrade. So this blue curve is basically
a spectrum of the light that you’re giving off. All of you are emitting. All of you are at about 20
degrees C, maybe 30 degrees C. And you are emitting energy,
and the energy that you are emitting is peaked at about
10 or 12 microns. If I were to heat you up to
6,000 degrees C, what would your peak wavelength be if
you were 6,000 degrees C? AUDIENCE: Somewhere
in the visible. PHILLIP CHRISTENSEN: Somewhere
in the visible. The surface of the sun is 6,000
degrees, and these exact same curves hold just as well
for those temperatures as our temperatures. And so if you plug in 6,000
degrees C, two things happen. One is, there’s a lot more
energy emitted, and the wavelength at which it peaks
shifts to shorter and shorter wavelengths. So the sun is peaked down at
about half a micron, which is exactly why our eyes have
evolved to be most sensitive to light at about a half a micron.
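A minimal Python sketch of the Planck function recovers both of those peaks numerically; the temperatures are converted to kelvin, and the wavelength grid is an assumption for illustration.

```python
import numpy as np

# Minimal sketch: Planck spectral radiance B(lambda, T) in SI units.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def planck(wavelength_m, T_kelvin):
    """Spectral radiance of an ideal blackbody, W / (m^2 sr m)."""
    return 2 * h * c**2 / wavelength_m**5 / np.expm1(
        h * c / (wavelength_m * k * T_kelvin))

wl = np.linspace(0.1e-6, 50e-6, 100_000)  # 0.1 to 50 microns
for T in (293.0, 6273.0):                 # about 20 C and about 6,000 C
    peak = wl[np.argmax(planck(wl, T))]
    print(f"T = {T:6.0f} K: peak near {peak * 1e6:.2f} microns")
# Agrees with Wien's displacement law: lambda_max ~ 2898 micron-kelvin / T.
```

Running this gives a peak near 10 microns for a room-temperature body and near half a micron for a solar-temperature surface, matching the numbers in the talk.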
As we’ll see in a minute, the world is a whole lot prettier place in the infrared than it is in the visible– at least the natural world is; rocks and minerals are. As a geologist, I would be much better suited with infrared eyes, able to look at the world, than with these crummy visible ones that I’m stuck with. But the fortunate thing is we
can build detectors that allow us to see in the infrared. OK, so materials are
emitting energy. And in an ideal world, they’re
doing it as predicted by Planck from his famous equation,
and the energy coming off of an ideal
surface would look something like that. But in Planck’s world, things
weren’t bound together. There were no bound atoms.
Everything was just free to move around. It was not bound to
anything else. When I start creating a crystal
structure where I have atoms bound together, then we
have these harmonic motions, these preferred frequencies, and
now the light that’s being emitted no longer follows
perfectly Planck’s function. We start to see these absorption
bands which are due to the vibrational motions
of the material. The upper curve shows– the black line is the idealized
Planck function. The red curve is a measured
spectrum of a mineral, in this particular case, just a nice,
simple quartz crystal. And then, typically what
spectroscopists do is they take the ratio of those two,
create something called the emissivity. The beauty of the emissivity
is that it’s independent of temperature. If I just measure the straight
radiance, then those curves are moving up and down with
temperature, the shape of those curves is changing
slightly. But the ratio of the energy
emitted from a natural surface to an ideal surface stays the same. So this emissivity doesn’t change whether I’m measuring it at 20 below zero or at 500 degrees centigrade. This emissivity of quartz won’t change.
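In code, that ratio is essentially a one-liner; here is a hedged sketch, where the measured spectrum is a hypothetical stand-in for real instrument data.

```python
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, T_kelvin):
    """Ideal blackbody spectral radiance, W / (m^2 sr m)."""
    return 2 * h * c**2 / wavelength_m**5 / np.expm1(
        h * c / (wavelength_m * k * T_kelvin))

def emissivity(measured_radiance, wavelength_m, T_kelvin):
    """Measured emitted spectrum divided by the Planck curve at the same
    temperature. The temperature dependence in the denominator cancels the
    one in the measurement, ideally leaving a spectrum independent of T."""
    return measured_radiance / planck(wavelength_m, T_kelvin)
```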
OK, so to quickly come back to our vibrational model, what’s happening is– if I can explain this sort
of quickly and simply– deep inside the material, I’ve
got all these motions, and it’s generating this
Planck-like distribution of energy. As that energy tries
to escape from that crystal, it’s being absorbed– preferentially absorbed– at frequencies or wavelengths
that correspond to these harmonic frequencies. So this particular frequency
that corresponds to a wavelength of say, 20 microns,
that’s one of these harmonic frequencies. So the energy that’s trying
to get out is preferentially absorbed. The atom likes that energy. It vibrates at that energy. It takes the energy out of the
wave. And so what we see coming out of the surface,
there’s a deficit of energy at that particular frequency. So the point of all this is that
these infrared spectra, which we can quantify remarkably
well, you can get right down to the basic physics
of vibrational modes and all this other kind of
stuff, there’s a beautiful quantitative basis to all this. But for remote sensing, what we
really care about is that because every known substance
on the planet has a unique crystal structure, it will
therefore have unique vibrational modes, and it will
therefore produce a unique infrared spectrum. So I can make a list of minerals
that mean something to a geologist: quartz, clay,
olivine, calcite– this is what they make concrete out of. Each one of these minerals,
then, has a very unique and diagnostic infrared spectrum
that’s based on its internal crystal structure. It turns out that a lot of
what we know about the chemistry and the crystal
structure of materials has come over the last hundred
years from studying their infrared spectra. And it turns out, you can get
incredibly detailed with this. It’s one thing to say, this
mineral is sort of different from that one. But these are examples of
minerals that I don’t expect you to know, but these are
classes of minerals which are very, very similar, but if I
start replacing, say, the iron atoms with magnesium atoms, that
changes the fundamental frequencies because the mass
of those atoms is changing, the frequency of the system is
changing, and these absorption bands shift around. So it’s a very diagnostic tool
down to very precise levels of what the crystal structure
of a material is. And the beauty of an infrared
spectrum from a remote sensing point of view is I can measure
from orbit around Mars the light that’s given off, measure
the spectrum, and determine what the composition
of the surface is. I don’t have to touch it. I don’t have to put it
into an instrument. I don’t have to do anything
to that surface. I can just measure its infrared
spectrum, and get a very unique identification of
what mineral’s on the surface. And we’ve done that. And I’ll show some examples
a little bit. Although I’ve spent the last 25 years studying Mars, I think I only have one slide
in here on Mars. So for better or worse, if you
wanted to hear about Mars, you’re going to be
disappointed. OK, the other thing that you get from an infrared measurement, which I think is extremely interesting, is this idea of temperature. The spectral information is telling us composition, but the total magnitude of the energy emitted is telling us temperature. The hotter something is,
the more it emits. OK, that’s fine, but we can take
that information and use it in a very quantitative way. You can develop very
sophisticated models that predict the temperature of a
surface as a function of time of day, based on the physical
properties of that surface: its grain size, its
conductivity, its sub-surface layering, whether it’s
a metallic material. You can put all that stuff into
a model and predict how the temperature will vary
with the time of day. So for example, large, rocky
materials will hold their heat really well at night. They’ve stored a lot of heat
during the day, and then that heat comes back out at night. So rocky materials will
stay warm at night. Very fine-grained sand and dust materials are not able to conduct heat into the interior, are not able to store any heat, so at night that upper layer cools off very rapidly and there’s no stored heat to come back out and warm the surface.
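One quantity that captures this behavior is thermal inertia, I = sqrt(k·ρ·c). A minimal sketch, with rough, illustrative material properties that are assumptions rather than measurements:

```python
import math

# Minimal sketch: thermal inertia I = sqrt(k * rho * c) controls how
# strongly a surface damps the day-night temperature swing.
def thermal_inertia(k, rho, c):
    """k: conductivity W/(m K); rho: density kg/m^3; c: heat capacity J/(kg K)."""
    return math.sqrt(k * rho * c)

rock = thermal_inertia(2.0, 2700.0, 800.0)   # solid bedrock: high inertia, warm nights
dust = thermal_inertia(0.02, 1300.0, 800.0)  # fine dust: low inertia, cold nights
print(f"rock ~ {rock:.0f}, dust ~ {dust:.0f}  (J m^-2 K^-1 s^-1/2)")
```

High-inertia bedrock stores daytime heat and releases it at night; low-inertia dust cannot, so its surface temperature swings far more over a day.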
So we get these diurnal curves that vary significantly depending on whether a material is rocky or fine-grained. So for example, here’s a place
on Mars that here, it’s very warm at night, and here,
it’s relatively cool during the day. We can go into these thermal
models, measure those temperatures, and say, OK,
this is actually bedrock. This is pure, solid
rock sitting on the surface of Mars. This stuff that was relatively
cold at night and heated up during the day, that’s, say, millimeter-sized gravel. So you can actually get very
quantitative about this, and be able to pin down the particle
size of surfaces. I can tell whether it’s one-millimeter gravel or two-millimeter-diameter gravel,
based on these very precise temperature measurements. OK, so from the infrared, then,
we get composition. We also get some indication
of the physical nature. And then finally, I wanted
to come back. So that was the vibrational
spectroscopy, the low energy end. If you remember back, I was
talking about the moderate energy, UV-visible light. That’s where the photons have
enough energy to actually interact with the electrons. And if you think, if you
remember back to energy level diagram of hydrogen or sodium,
the electrons in the cloud around an atom are at a ground
state, they can be excited to an excited state. If that energy to go from a
ground state to the next level up is equal to the energy of a
photon, then what happens? That photon excites
the electron. Oftentimes, there’ll be a collision with another atom. Instead of the electron just
jumping back down again and giving off a photon, before
it can do that, another atom will collide. That energy that was stored in
the higher energy state is converted to kinetic energy
of the atoms. And so again, if I took a flashlight and shined it in– if I had light whose energy
corresponds to one of these energy states, it’ll excite the
electron, and what comes out the other side will be– I won’t see that photon coming
out the other side. This is the basic concept of
why objects have color. This is what’s going on in
dyes and going on in the things that you see
with your eyes. It’s photons exciting electrons
and being removed from the beam. OK, and just some
quick examples. This is down in the
visible light. So this is half-micron, one
micron, two microns. Our eyes cut off here
at about 0.7 microns. These absorption bands are due
to electrons being excited within atoms within
the material. So a lot of remote sensing, near
infrared remote sensing that’s done, has to do with
these electronic transitions that are going on, where photons
are being absorbed and used to excite electrons. The problem with electronic
spectroscopy is there’s only a very small number of elements
where this process actually occurs at these kinds
of wavelengths. And specifically, it’s the
transition metals. I know I was crummy in chemistry
in high school, I can never remember what the
transition metals are, but iron is one. Copper is a good example. So basically, this kind of
spectroscopy works in nature in iron-bearing minerals. OK, well if you think about
driving around in the deserts, most of the color that you see
is yellows, and oranges, and browns, and butterscotches,
and tans. Almost all of that is
due to iron-bearing minerals, or iron stains. And those stains, those
iron-bearing minerals, have colors because of these
electronic transitions. If a mineral doesn’t have iron
in it, like quartz, for example, that’s why it’s
colorless, because there are no elements in there that can
interact with the photons and have energy levels at the
appropriate levels to absorb the visible light. So it works great for
iron-bearing minerals, but the world isn’t necessarily
covered with iron-bearing minerals. All right. So that’s the end of
the physics lesson. Let’s look at some
real examples. And again, what I’m going to try
to focus on for the next little bit is things that
you’re not used to. I mean, things that don’t have
to do with looking at the world in visible light. This is a Landsat image. Landsat is a space-borne
imager. It has seven bands. You can use bands one, two,
and three, and construct a natural, normal light image
that you would recognize. This is an image of a delta, I
think in the Amazon region, constructed from some of the other bands, using wavelengths that your eye doesn’t see. But I would still say, this is
pretty close to what humans have been doing all along,
looking at the world. OK, so that’s sort of what the
world would look like. I want to show another
example. This is Phoenix. This is a more or less
normal Landsat scene. It’s not quite red,
green, blue. It’s green, red,
near infrared. I think many of you are
familiar with this. This is a false color image. We’ve taken each of the
wavelengths and shifted it down one. So green is displayed in the blue gun of this projection, red in the green gun, and near infrared in the red gun. As a result, vegetation, which
is very bright, very reflective, in near
infrared light– because it interacts
really well, the electrons are easily excited– vegetation shows
up bright red. One of the things, though, that
you can do if you want to emphasize certain things
and bring out certain compositional information, you
can actually make ratios of those bands. So instead of just making a
color image, I can take two bands and ratio them, and so
this is an example of taking one of the bands where vegetation is very reflective, and one where it’s not, and making a ratio of the two. And so here, any vegetation is very clearly identified in this ratio image.
You also see other things that show up, and my point is not
say, you see certain patterns and certain information
if you look at just a, quote, “normal”
color image. You see other information and
other patterns if you start ratioing the images or in some
other way trying to extract information from them. So for example, clear
differences, say, right here between urbanized and
non-urbanized actually can show up better in some of these
ratios than they do in the original color image. This is an example of what I
said about what the world would look like if you
had infrared eyes. I’ve taken this set
of spectra– and you can fly a spectrometer,
but they’re big, and expensive, and complicated,
and they’re hard to get to work. I can also build a camera, an
infrared camera, that’s relatively straightforward and
only has three filters. If I put one of those filters at 10 microns, one at 9, and one at 8, took an image, and then displayed the emitted energy at 10 microns as red, 9 as green, and 8 as blue– then, for example, if you look at
quartz, it’s emitting a lot in what then becomes the red
band, not much in green, and not much in blue. So in a false color infrared
image, that quartz would look red. Similarly, if I take a mineral
that’s bright at this wavelength and is absorbing at
these two other wavelengths, and display it, it’s going to look blue.
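A hedged sketch of how such a three-band composite might be assembled; the band arrays and the percentile stretch are assumptions for illustration, not the instrument’s actual processing.

```python
import numpy as np

# Minimal sketch: display emitted radiance at 10, 9, and 8 microns as the
# red, green, and blue channels of a false color image. band10/band9/band8
# are hypothetical 2-D arrays of calibrated radiance.
def false_color_ir(band10, band9, band8):
    def stretch(band):                      # linear stretch to 0..1
        lo, hi = np.percentile(band, (2, 98))
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return np.dstack([stretch(band10), stretch(band9), stretch(band8)])
```

With this mapping, quartz, which emits strongly at 10 microns, comes out red, and a mineral absorbing there but bright elsewhere comes out blue, as described above.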
So if I flew an infrared camera, a three-band infrared camera, across the desert,
instead of– you fly along, you look out the
window, the world is brown and gray and tan– in the infrared, this is
literally what the world would look like if you had
infrared eyes. And to a geologist, or someone
interested in the composition, the nature, the makeup of
the surface, this is what you would see. And this is basically a geologic
map of this mountain range without having to
do anything at all. I mean, geologists spend years
doing detailed mapping to try to find all these different mineral
deposits, and rock outcrops, and different
types of material. This infrared image is
essentially giving you that information instantaneously. I can see quartz-rich sands, and
I can see volcanic rocks, and I can see gypsum-bearing
salts, et cetera. It’s a tremendously powerful
tool for mapping the composition of the
earth’s surface. This is an example of the
surface temperature in Scottsdale, Arizona. Bright is warm, and in this
case, pretty darn warm. We flew this in August. Those
surface temperatures are probably 140 degrees
Fahrenheit. Dark is cool, and it’s pretty darn cool. The cool temperatures are
probably about 80 degrees Fahrenheit. So where would you want
to have your house? Next to the 140 degree baking
hot asphalt and dirt, or next to the 80 degree, nice cool
lake and golf course? Well, OK. But the point is, these infrared
images actually provide a tremendous amount of
information about temperature which feeds into all sorts of
other important things. In Phoenix, we’re trying to
actually use these to measure lake temperatures, and swimming
pool temperatures, and calculate evaporation
rates, and how that’s affecting the humidity. And there are all kinds of environmental impacts of all of these crazy golf courses and all
these lakes that show up extremely well just by looking
at the temperature effect of those things. And I’ll just touch on this. This is a really detailed
spectrum, again, from about 20 microns– sorry about the units– down
to about 5 microns. So it’s an infrared spectrum,
extremely high resolution, looking at gases in the
atmosphere– in this case, the atmosphere of Mars. Even down at the level of that box– you can build spectrometers that have that
kind of resolution. This is how we’re monitoring
trace gases in atmospheres. So there’s a tremendous amount
of information that you can get about the composition
of the atmospheric gas. When I was talking with Michael
a little bit, one of the topics that came up about
what can you do with remote sensing from a practical point
of view, a lot of the time, you’re dealing with a scene that
was taken looking through the atmosphere, and you’re
trying to separate the atmosphere from the surface. If you’re just trying to make a
beautiful map of the ground, you want to remove
the atmosphere. If you’re an atmospheric
scientist, you want to remove the surface. So I used to call this
atmospheric correction, but the atmospheric guys got
annoyed about that. So now I try to say, we’re
separating the two. There are basically
two ways to go. You can do very sophisticated,
radiative transfer modeling, and there are packages that do
that, where you put up a radiosonde, and you measure
the temperature, and the pressure, and the water vapor,
and all the stuff, and it goes into an incredibly complicated model. And you try to remove that
signal from the ground. Or you can do scene-based
approximations, which in the real world is the only practical way to go. And there’s a bunch of
ways to do that. And I’ll just show a
quick illustration. This is an infrared scene, one
of these beautiful, three-band infrared color scenes. And this particular scanner is
a line scanner that looks out in both directions. So looking straight down,
we’re looking through a certain amount of atmosphere. Off to the side of the image,
we’re looking through almost twice as much atmosphere. And as a result, you
see this band of yellowing down the side. This is an example of this
type of problem. A really simple way around that
is to say, I’m going to assume that, statistically, the
ground is sort of randomly distributed. So what I’ve done here is I’ve
just taken a column average of the entire scene, and those
three lines, then, represent– a single line, that’s the
average of all the lines in the image. You can see the effect of
the atmosphere with this absorption that’s going
on over on the edge. I could try to model that, or
I could be really simplistic and say, I’m just going to
take this signature and subtract it from every
line in the image. So this is just a simple
scene-based example, and you can see what just something
even that simple has done. This material, which is yellow
in the original data, suddenly looks like it’s supposed to do
in the scene-corrected data. So there’s a lot of very simple
ways that people have devised over the years in a
practical sense to separate the tremendous amount
of information that’s in these images. OK, let me just quickly show– I’m not a radar expert, but I
didn’t want to talk about remote sensing and not
mention radar. There’s a tremendous amount of
information in radar as well. Here what you’re seeing is all
of this bright material is basically man made, metal-rich
material. This is a scene from a radar
experiment that was flown in the shuttle, again,
over Scottsdale. And you see natural desert,
watered fields, but this incredible signature from
the metal in the city. With radar, you can transmit
these electromagnetic waves. You can transmit them this way
and that way, and which way you transmit them has a huge
effect on what happens. So for example, the main
difference between those two scenes is whether or not the
wave was transmitted vertically or horizontally. In the first case,
it’s vertical. The electric field in the wave was oscillating vertically, and so where I had
electric power lines, there was very little interaction. In the second case, the wave was
oscillating horizontally. Where I have electric power
lines, I get a very strong interaction. So one of the beauties
of radar is you can actually tune– you can design your experiment
so that you can pick up different things. And this is just an example of
three different radar images combined into a color image
with a fantastic amount of information in there. OK. Let me just quickly go over
a couple of applications. This one I showed
an example of. This is, again, an infrared
multi-spectral image of some mountain range in the Southwest.
Remarkably, very few people use this type of
information, even today, to try to do compositional
mapping of the world. These are the types of instruments that we’ve flown to Mars. We have better maps of Mars than we do of the earth when it comes to looking for rock types and minerals on the planet. One of the things you can do, in addition to making pretty images, is to actually look
information and try to classify the scene. So for example, this is a
Landsat image of Phoenix. This is taking that spectral
information and classifying the scene– I’m going to zoom in on–
this is the airport. So you can make this– again, it’s an automated tool. You define what the rules are:
what each one of your classes consists of, what type of spectral signature it has. And so we’re looking at cultivated land, grass, vegetation, commercial, compacted soils, water, asphalt, concrete.
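A minimal sketch of one simple way to do this– a nearest-mean, minimum-distance classifier. The class means would come from hand-drawn training regions; the array shapes and class list are assumptions for illustration.

```python
import numpy as np

# Minimal sketch: assign each pixel to the class whose mean spectrum is
# closest in Euclidean distance. cube is (rows, cols, bands);
# class_means is (n_classes, bands), one row per class such as
# water, asphalt, or grass.
def classify(cube, class_means):
    pixels = cube.reshape(-1, cube.shape[-1])                    # (N, bands)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(cube.shape[:2])              # class map
```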
So you can make these classification images of a city, where you’ve extracted a lot of information about that city and done a lot more than just make a pretty picture. I’ve actually got information
that I can use to study, for example, the city over
time, see how desert’s being converted to agriculture,
agriculture into urban use, et cetera. OK, the other thing that you can
do with remote sensing of cities in particular is look
at temporal changes. This is an example of Beijing
from a very early 1978, relatively low-resolution
version of a Landsat instrument. This sort of ugly gray color
here is most of the urbanized part of Beijing. In this false color image,
fields and vegetation are showing up red. And it doesn’t take much to sort
of note the difference between 1978 and 2004. In that same area two years
ago, there’s virtually no agriculture or vegetation
left in this region. AUDIENCE: –seasonal difference
there, too. PHILLIP CHRISTENSEN: In this
particular case, there is a seasonal difference. In an ideal world, you certainly
wouldn’t want that. But one of the problems– I was talking earlier– it’s remarkably hard to go back
very far, even from the satellite data. Oftentimes, we only had
one image a year that was even acquired. But you’re right. And that certainly complicates
the whole situation. And particularly in China,
where their cities are evolving quickly, it’s
remarkable how much change is occurring. Shanghai was a relatively
small city surrounded by vegetation 30 years ago. Today, it’s grown
dramatically. I think I have one more
example, Hong Kong. You can use remote sensing in
the time domain as well to really look for temporal
changes in the earth’s surface. You hear a lot about things like
deforestation and rain forest, but actually some of the
most dramatic changes are occurring around cities
as they are growing spectacularly. From the night time temperature,
you can study things like urban
heat islands. I’m assuming you’ve
heard of this. This is a night time temperature
image of Phoenix, and a couple things
are apparent. First of all, where are the
warmest parts of the city? Well, they’re the roads and the buildings; that huge bright blob up there is the airport. It’s pretty easy to see
where the man made sources of heat are. What’s interesting, though, and
it sort of surprised a lot of people is how warm the
natural surfaces are as well. So sometimes, if you converted
this mountain into houses, you might, in that particular
case, actually lower the temperature. So studying heat island
developments in cities is complicated and these types
of infrared data are extremely useful. This, for example, is the
Gila River, which doesn’t flow at all. There’s no water in
that river at all. It’s all underground, but
there’s enough underground moisture flowing through this
quote, unquote dry riverbed that the evaporation from
that moisture is cooling off the surface. So the point here is there’s
tremendous amount of information in these
infrared images. Another quick example. This is a handheld infrared
camera looking out at Tempe Town Lake, a small little
man made lake just real close to ASU. And I wanted to use this
to illustrate a– well, I guess I won’t. I’ll have to come back to it. So even simple– even if you don’t have the fancy aircraft or satellite infrared imagery– just handheld infrared cameras are now becoming inexpensive enough
that you can begin to put these in strategic places and
monitor things like water temperature, and surface
temperature, and road temperatures, et cetera. One of the points that I was
going to make is oftentimes– for those of you who are trying
to do any type of scene classification, or automated
scene identification– one of the things that we have
real trouble with is telling water from asphalt. They’re both really,
really dark. And in fact, there are cases of
automated rovers that drive out onto a lake because it’s
smooth, it’s dark, it looks like a perfectly nice
surface to drive on. The one place where water and asphalt really differ is in their temperature. So if you had temperature
information as well, then suddenly telling the two dark things apart is trivially easy.
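A hedged sketch of that idea; the thresholds are made-up illustrative numbers that would have to be tuned per scene and time of day.

```python
import numpy as np

# Minimal sketch: water and asphalt are both dark in the visible, but
# midday asphalt runs far hotter than water, so a temperature band
# separates them.
def water_mask(brightness, temperature_c):
    dark = brightness < 0.15         # both water and asphalt are dark
    cool = temperature_c < 30.0      # water stays cool; sunlit asphalt does not
    return dark & cool
```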
So one of the things that we’re trying to do is identify water based on temperature. You can also do things like
monitor volcanic eruptions. This is a satellite image
in the visible, with a superimposed infrared image. It shows an active eruption taking place in Kamchatka, seen from space. And so, for volcanic hazards
and volcanic eruptions, obviously, the infrared’s
a powerful tool. Weather gets a lot of attention,
as well it should, but there are other things that
go on in the atmosphere besides just rainfall
and cold fronts. This is an example,
and this is the Western coast of Africa. That’s Gibraltar and that’s
Spain, for scale. So this is a huge piece of
Africa and part of Europe. And this is a big dust plume
that’s blowing off of the Sahara Desert. These dust plumes are not
uncommon, and oftentimes, they reach all the way across
to North America. You can easily detect Saharan
dust in the air in New York City, and from space with
remote sensing, you can certainly track these things. You can see them coming. On the other side the
world, you see the same sort of effect. This is, again, dust being
blown off of the Chinese continent and headed east. So
there’s a tremendous amount of things that go on in
the atmosphere besides just weather. So again, just to plant seeds
of things: pollution, aerosols, volcanic ash, volcanic
dust. These are additional things that you
could monitor from space. Just a couple more applications,
some things that we’re doing that I wanted
to mention. As part of a NASA activity a few years back, I started up a thing called the 100 Cities Project, whose goal was to try
to monitor 100 cities around the world. And in doing that, we were
trying to collect all kinds of information, not just satellite
data, not just remote sensing data, but
socio-economic data, geographic data, land use
data, zoning data. And what we’re finding is when
you start combining those other levels of information with
the imaging data, then suddenly economists, and
sociologists, and all kinds of folks get really interested
in these data sets. And so for things like Google
Earth that have this beautiful imagery, what we’re trying to do
is look at other pieces of information that you can add to
that, whether it’s things like the depth to the water
table, well logs, traffic accidents, air pollution,
air quality, thunderstorm tracks, whatever. There’s a tremendous amount of
information, much of which you can get remotely, that you can
add to the image data. And this is just an example
where we’ve built up this system that’s trying to use some
of the satellite data, in our case, temperature data as
well, to add to the existing visible imagery. One other thing that I just
wanted to mention. This is something that Michael,
actually, was working on back when he was at ASU. This is an example of six or
seven global data sets that we have for Mars. We have a total of 57 of these,
and everything from mineral type, to rock type,
to where dust storms have occurred, to changes in surface
albedo, to nighttime temperature, to elevation
data. What you find is when you put these 56 data sets together, then suddenly people are doing
very sophisticated research on these global data sets. And I don’t think any of the
ones that are in our set of 56 are classic visible
remote sensing. They’re almost all
thermal, radar, topography, laser, et cetera. So I think this is an example
where on Mars, we’re probably doing more sophisticated
combinations of remote sensing data than we’re actually able
to do here on the earth. And I think you could, easily,
develop these very similar types of data sets here
for the earth. I wanted to show one
other example. This is an example of an infrared spectrometer. So it’s that full spectral
resolution that I showed for the Martian atmosphere. You can actually develop these,
now, where they’re imaging systems, where each one
of those pixels is a full 2,000 point spectrum. So for each time step, for each
pixel, I can then take that full spectrum and identify,
in this particular case, gases that are
being released. So for pollution monitoring, for
looking at what’s coming out of smokestacks, for what’s
coming out of cities, this next generation of remarkably
sophisticated imaging, hyper-spectral instruments, it’s
a tool and a technique which I think is going to have
tremendous potential and get a lot of use. Only within the last few years
have we been able to build instruments that are capable of
doing this, and computer
processing these data. But I think it’s an extremely
exciting way of looking at the world in much more complex
terms than just with RGB. I think I’m going to skip this
one and just close with a couple of other thoughts. I just came back from a– I spent last week at a NASA
conference with folks who are looking at what astronauts
are going to do when they go to the moon. And without getting into the
detailed politics of that, in NASA’s mind we’re going to send
astronauts to the moon, and the scientific community was
then tasked with, what are they going to do when
they get there? You could easily argue that
you have that problem backwards, but. So I was tasked with, what could
you do observing the earth from the moon? And my first thought was,
that’s really stupid. We have 25 earth-orbiting
satellites, and we have geostationary weather
satellites, and we’re looking at the earth just fine
from the earth. But it turns out that there are
actually a few interesting things that you can do. For example, this is the earth
from geostationary orbit. This is the classic weather
satellite view. It turns out, those things
are pretty far away. They’re about 36,000 kilometers up,
but they’re still close enough that you don’t come close to
seeing the entire earth. So for example, you can’t see
the polar regions at all. Up at the top, you can see
Alaska and the Aleutian Islands are just about
disappearing over the limb of the planet. Viewed from the moon, that’s
what the earth would look like at that same time. And so there’s two particular
things that are of real interest. And you may know this,
but the polar routes from North America to Asia
always go over the Aleutian Islands, and oftentimes they
go over in the polar night. And the Aleutian Islands are the
single most volcanically active place on the earth. And at any given time, there
are a dozen or so volcanoes that are erupting. Twice, on two separate
occasions, 747s have flown into ash clouds unknowingly and
had all four engines stop. In both cases, those engines
were started before that airplane crashed. But this band of islands in
the Aleutians, it’s a dangerous place to go. So one of the things that people
talked about is putting sensors up that are constantly
staring at that chain of islands, monitoring volcanic
eruptions as a heads-up so jets don’t fly through them. The other thing you can do–
clearly, the poles are extremely interesting from a
climate change point of view, and we don’t have good,
continuous coverage of the poles. So there are actually
things one could do from the moon looking back
at the earth. And it turns out, with a modest
sized telescope, you can actually see– you can get 500 meter per pixel
imagery of the earth from the moon. These are some examples. I won’t go into them, but these
are some of the examples of the infrared mapping that
we’re doing on Mars. And again, I think it’s a sad
statement to say that we probably have better data on
the composition and the physical nature of the Martian
surface than we do of the earth, simply because we’ve flown
some more sophisticated instruments to Mars than we’ve
flown to the earth. Yeah, right. AUDIENCE: There are also less
nasty plants on Mars [INAUDIBLE] PHILLIP CHRISTENSEN: Yeah,
there’s definitely fewer nasty plants on Mars than there
are here on the earth. And as a geologist, I do think
that’s a good thing, but not everybody does. OK, and so I think, that
just a summary then. Remote sensing across the
electromagnetic spectrum and through time, I think, has a lot
of potential for providing a lot of quantitative
information about the world that people are going to want
more and more access to, and are going to, I think, be able
to come up with more and more applications for how they
actually might use that data. And I’ll certainly stop there,
and I apologize for droning on like a lecturer. I will certainly
take questions. [APPLAUSE] Yeah. AUDIENCE: When you were
discussing emissivity, you described a process that
I think corresponds to incandescence. Is there anything useful
in fluorescence? PHILLIP CHRISTENSEN: The comment
was, in describing emissivity, I was describing
something that was similar to fluorescence, and that certainly
is the case. You get these waves, and they
excite things, and you can be excited and then cascade
back down, or you can be excited and– and fluorescence is one of those
processes where waves are absorbed, electrons are
excited, and then as that electron comes back down to its
original ground state, it can give off photons. If it comes down in a series
of steps, then it actually
wavelength is different than what it absorbed, hence,
fluorescence. And yes, there’s a lot of
information in that. Typically, you illuminate in the
ultraviolet and materials then emit or fluoresce at
visible wavelengths. But there’s a lot of information
in that. I find scorpions in our
backyard because they fluoresce at night, and so you can find them more easily. But that’s a good example
of another way of looking at the spectrum. Yeah. AUDIENCE: What fraction of the
earth’s land surface can you map, get geological maps
from, because there are not too many plants? PHILLIP CHRISTENSEN: The comment
was, what fraction of the earth’s surface can you map
geologically, without the interference of plants? Well, Southern California,
I can do fine. Arizona, I can do fine. Rainforest, no; Northeast forest, no. It turns out, even if you have
like 20% of the ground covered with plants, or 30%, 40%, you
can still get a really nice signature through it. One of the interesting facts
is that huge parts– from a human perspective– large parts of the population
of the earth live in relatively arid places. So monitoring those, mapping
those is really useful. The other thing, from a climate
point of view, the arid places on the planet are
the ones that are undergoing the most rapid change. We can detect changes
in vegetation. So you can detect
desertification, if you will, in arid regions. The polar regions are extremely
interesting, looking at alpine glaciers and seeing
how they’re retreating. So there’s a lot of really
interesting processes that you can map that are going on in
places that don’t have a lot of vegetation. –on Mars, you could
do it on the earth. AUDIENCE: Which implies that there is no material that is identical to another material in the infrared spectrum, even if the difference in the ultraviolet is invisible? PHILLIP CHRISTENSEN: Yes. I guess the point I was trying
to make, and I didn’t make it very well, was that every
material, by definition, has a unique crystal structure. It’s made up of a unique set
of atoms that are bound together in a unique way. AUDIENCE: So this
is not valid for characterizing amorphous material? PHILLIP CHRISTENSEN: You can
identify that it’s an amorphous material, but things
like glasses, for example, don’t have nearly as complex an
infrared spectrum as, say, a crystalline material. So any crystal, mineral, plant,
man made, whatever– if it’s got a crystal structure, it will have an absolutely unique infrared spectrum. And if you have enough– and enough depends on what
you’re looking for– but for geologic materials,
for biologic materials, several hundred spectral bands
is plenty to be able to identify those uniquely. All right. Thanks very much.
