
Intro to Remote Sensing

Fall 2017
Canadian Centre for Remote Sensing:
Fundamentals of Remote Sensing

Chapter 1: Introduction
1. Why do clouds appear white?

As light enters a cloud or a pocket of dust, the red, green, and blue wavelengths are all scattered about equally in every direction. This is called nonselective scattering, and because the eye receives an even mix of all the visible bands, the cloud appears white.

2. Landsat, MODIS, and ASTER are all examples of passive or active sensors? Explain.

Landsat, MODIS, and ASTER are all passive sensors. A passive sensor relies on an external source to supply the energy it records: the sun's light bounces off an object, and the sensor captures the reflected wavelengths to create an image. The same is true at night for emitted infrared radiation. Active sensing is when the sensor supplies the energy needed to create an image. An example of this is when boats use sonar to map the ocean floor: the system sends out a pulse that bounces off the floor and any objects on it, and a sensor picks up the returning echoes and is able to create an image from the data.

3. How will you explain a band (or channel)?

A band is a selected range of wavelengths from the electromagnetic spectrum. The visible portion that humans can see includes red, green, and blue. When a sensor combines these three bands, it can form an image that is true to what we see.

Chapter 2: Sensors
1. What is nadir?

Nadir is the point directly below the sensor collecting the data. At nadir the data shows the direct top of an object; the farther an object is from the nadir point, the more of its sides appear, and it will seem to lean away from the nadir point.

2. Explain Spatial, Spectral, and Temporal resolution.
a. Spatial resolution:
Spatial resolution describes the smallest object a sensor can pick up. Some sensors are good at spotting large changes, such as shifts in lake size, while a finer sensor could track much smaller features, such as the fish in that lake.

b. Spectral resolution:

Spectral resolution describes how finely a sensor can distinguish wavelengths within the light spectrum. A lower resolution sees a wide slice of the spectrum, such as all of RGB at once, while a higher resolution can split RGB into three separate bands and even sense the strength of each band.

c. Temporal resolution:

Temporal resolution is the time between a sensor's visits to the same point for the next set of images. If a sensor takes a picture of a tree and, several days later when it comes back for the next photo, the tree has fallen over, the two images side by side will show the same tree first standing and then fallen.

3. Does the edge or the center of an image have greater distortion? Explain.

The edges of an image will have more distortion, due to the increased viewing angle and distance from the camera taking the image. The center of the image is most in focus and holds the most detail. The farther from the focal point you get, the more light scatter and interfering light is collected in the image.

Chapter 4: Image Analysis


1. Name and describe 3 of 7 visual elements used for interpretation.

Shape- Shape is the physical boundary of an object and helps us identify what we are looking at. Shapes that are straight or have sharp edges are typically man-made or have had human influence.

Pattern- Patterns are repeating sequences that are most common with man-made objects. We find these in street layouts, orchards, and other urban areas.

Shadow- Shadows help us identify the profile of an object, since the image is often taken directly above it. Shadows can also tell us the height of an object if we know the angle the sun was at when the image was taken.

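The height-from-shadow idea above can be sketched in a few lines. This is a simplified illustration (the function name and sample numbers are hypothetical, and it assumes flat terrain and a known sun elevation angle from the image metadata):

```python
import math

def height_from_shadow(shadow_length_m, sun_elevation_deg):
    """Estimate object height from shadow length and the sun's
    elevation angle at acquisition time: h = L * tan(elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 20 m shadow cast with the sun 45 degrees above the horizon
# implies an object roughly 20 m tall (tan 45 degrees = 1).
print(height_from_shadow(20.0, 45.0))
```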
2. What does stretching an image mean?

In this case, stretching an image means taking the narrow range of brightness values that make up an image and remapping them onto the full range the display can show. Expanding this limited range increases contrast, which brings out the details you want to work with.

3. Name and explain a technique to convert an image to information.

Digital image classification assigns each pixel to a class according to its spectral band values. You can then use spectral pattern recognition to spot patterns in this information and identify larger features such as water, forests, crops, or urban areas.
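One simple classification rule is to assign each pixel to the class whose mean spectral signature it is closest to. The sketch below illustrates that idea; the class names and band values are made-up examples, not real sensor data:

```python
def classify(pixel, class_means):
    """Assign a pixel to the land-cover class whose mean spectral
    signature is closest (minimum Euclidean distance in band space)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(class_means, key=lambda name: dist(pixel, class_means[name]))

# Hypothetical mean signatures in three bands (e.g. green, red, near-IR).
means = {
    "water":  (30, 20, 10),
    "forest": (40, 35, 120),
    "urban":  (90, 95, 100),
}
print(classify((35, 30, 115), means))  # → forest
```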

Chapter 5: Applications
1. List 3 applications and explain why they would be of interest to the
students in your geographic area.

If it were possible to get satellite views of the West valley parking lot on different days of the week, we could gather information on parking lot usage throughout the week. This could reveal the busiest days and where the “hotspots” are in the lot, so parking near them would be faster. For example, if you don't mind parking a few stalls away and care more about time, you could head straight to the empty spots instead of searching through full aisles.

Studies have shown that parks in dense urban areas improve people’s mental and physical health. With satellite imagery we would be able to find a good mixture of parks in cities and identify areas that are lacking in recreational space. With an ideal spacing of parks, people would be able to access them to exercise or escape the hustle of everyday life.

With the current fires burning the forests, we will face the effects of the lost trees this winter and next rainy season. Trees help soak up water and act as anchor systems for the mountainside, while grass acts as a net that keeps the dirt from becoming mud. With the plant life gone, we could face large landslides and flooding. I propose that we use imagery to identify these problem areas before disaster happens and estimate the damage it would cause. This information could be used to warn the communities living in the area of this danger so they can prepare or evacuate as soon as possible.
