Nest User Manual 5.1
General
● Overview
● Definitions
● Supported Products
● Abstracted Metadata
DAT
● Display and Analysis Tool
Tool Views
❍ Tool Windows
❍ Project View
❍ Product View
❍ Pixel Info View
❍ Image View
❍ Navigation Window
❍ Colour Manipulation
❍ Layer Manager
❍ World Map
❍ WorldWind View
❍ Product Library
❍ Preferences Dialog
❍ Settings Dialog
❍ Wave Mode Polar View
Product Readers
❍ Open SENTINEL-1 Product
❍ Open ENVISAT Product
❍ Open ERS (.E1, .E2) Product
❍ Open ERS1/2 CEOS Product
❍ Open JERS CEOS Product
❍ Open Radarsat-1 CEOS Product
❍ Open Radarsat-2 Product
❍ Open TerraSarX Product
❍ Open Cosmo-Skymed Product
❍ Open ENVI Product
❍ Open PolsarPro Product
❍ Open ALOS PALSAR CEOS Product
❍ Open ALOS AVNIR2 CEOS Product
❍ Open ALOS PRISM CEOS Product
❍ Open GeoTIFF Product
❍ Open ImageIO Product
❍ Open BEAM-DIMAP Product
❍ Open LANDSAT 5 TM Product
❍ Open NetCDF Product
❍ Open HDF Product
❍ Open Generic Binary
❍ Open Complex Generic Binary
❍ Open GETASSE30 Tile
❍ Open GTOPO30 Tile
❍ Open ACE DEM Tile
❍ Open SRTM DEM Tile
❍ Import Geometry/Shape
Product Writers
❍ Export BEAM-DIMAP Product
❍ Export GeoTIFF Product
❍ Export HDF-5 Product
❍ Export NetCDF Product
❍ Export ENVI GCP File
❍ Export Displayed Image
❍ Export Google Earth KMZ
❍ Export Roi Pixels
❍ Export Transect Pixels
❍ Export Color Palette
Imaging Tools
❍ RGB-Image Profile
❍ No Data Overlay
❍ Bitmask Overlay
❍ Bitmask Editor
Analysis Tools
❍ Geometry Management
❍ Mask and ROI Management
❍ Product GCP Manager
❍ Product Pin Manager
❍ Product/Band Information
❍ Product/Band Geo-Coding Information
❍ Band Statistics
❍ Band Histogram
❍ Band Scatter Plot
❍ Band Transect Profile Plot
❍ Band Transect Co-ordinate List
❍ Compute ROI-Mask Area
Graph Processing
● Introduction
Graph Tools
❍ Graph Processing Tool
❍ Command Line Reference
❍ Graph Builder Tool
❍ Batch Processing
Analysis Operators
❍ Principal Component Analysis
❍ EM Cluster Analysis
❍ K-Means Cluster Analysis
❍ Data Analysis
Utility Operators
❍ Create Stack
❍ Create Subset
❍ Band Arithmetic
❍ Convert Datatype
❍ Over Sample
❍ Under Sample
❍ Fill DEM Hole
❍ Image Filtering
SAR Operators
❍ Apply Orbit Correction
❍ Calibration
❍ Remove Antenna Pattern
❍ GCP Selection
❍ Multilook
❍ Speckle Filter
❍ Multi-Temporal Speckle Filter
❍ Warp
❍ WSS Deburst
❍ WSS Mosaic
❍ S1 TOPSAR Deburst and Merge
Geometry Operators
❍ Create Elevation
❍ Range Doppler Terrain Correction
❍ SAR Simulation Terrain Correction
❍ SAR Simulation
❍ Geolocation Grid Ellipsoid Correction
❍ Average Height RD Ellipsoid Correction
❍ Map Projection
❍ Slant Range to Ground Range
❍ Mosaic
InSAR
❍ Interferometric functionality
❍ Computation and subtraction of 'flat earth' phase
❍ Estimation and subtraction of topographic phase
❍ Coherence estimation
❍ Azimuth filtering
❍ Range filtering
❍ Phase filtering
❍ Slant to height
❍ Differential InSAR
❍ Unwrapping
❍ Snaphu Data Export
❍ Snaphu Data Import
❍ InSAR Stack Overview
Ocean Tools
❍ Object Detection
❍ Oil Spill Detection
❍ Create Land Mask
❍ Wind Field Estimation
Development
● General Design
● Detailed Design
● Open Source Development
● Building the Source
● Developing a Reader
● Developing a Writer
● Developing an Operator
Tutorials
● Remote Sensing
● Quick Start With The DAT
● Coregistration
● Orthorectification
● Command Line Processing
F.A.Q.
● Installation
● General
NEST Overview
Overview
The Next ESA SAR Toolbox (NEST) is an open source (GNU GPL) toolbox for reading, post-
processing, analysing and visualising the large archive of data (from Level 1) from ESA SAR
missions including ERS-1 & 2, ENVISAT and in the future Sentinel-1. In addition, NEST
supports handling of products from third party missions including JERS-1, ALOS PALSAR,
TerraSAR-X, RADARSAT-1&2 and Cosmo-Skymed. NEST has been built using the BEAM
Earth Observation Toolbox and Development Platform.
Architecture Highlights
Main Features
NEST is being developed by Array Systems Computing Inc. of Toronto Canada under ESA
Contract number 20698/07/I-LG. InSAR functionalities are being developed by PPO.labs
and Delft University of Technology.
❍ DIMAP
❍ GeoTIFF
❍ HDF 4 & HDF 5
❍ NetCDF
❍ ENVI
❍ PolsarPro
❍ Generic Binary
❍ SRTM
❍ ASTER Global DEM
❍ ACE
❍ GETASSE30
❍ GTOPO30 tiles
For further details on which products are supported, please see the Supported_Mission-Product_vs_Operators_table.xls.
Source Code
The complete NEST software has been developed under the GNU General Public License and comes
with full source code in Java™.
An Application Programming Interface (API) is provided with NEST to allow easy extension
by users to add new data readers/writers of other formats and to support data formats of
future missions. Plug-in modules can be developed separately and shared by the user
community. Processors can be easily extended without needing to know about the
complexities of the whole software.
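As a sketch of how such a plug-in mechanism works in general (the interface and registry below are illustrative assumptions, not the actual NEST API), a new format reader might be registered like this:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a reader plug-in registry; the names here are
// hypothetical and do not reflect the actual NEST API.
public class ReaderPluginDemo {

    // A minimal reader plug-in contract: report the format it handles
    // and decide whether it can decode a given input file name.
    interface ProductReaderPlugin {
        String formatName();
        boolean canDecode(String fileName);
    }

    // Registry mapping format names to plug-ins, so new formats can be
    // added without touching the core ingestion code.
    static final Map<String, ProductReaderPlugin> REGISTRY = new HashMap<>();

    static void register(ProductReaderPlugin plugin) {
        REGISTRY.put(plugin.formatName(), plugin);
    }

    // Find the first registered plug-in that claims the input.
    static String detectFormat(String fileName) {
        for (ProductReaderPlugin p : REGISTRY.values()) {
            if (p.canDecode(fileName)) {
                return p.formatName();
            }
        }
        return "UNKNOWN";
    }

    public static void main(String[] args) {
        // A hypothetical GeoTIFF plug-in keyed on the file extension.
        register(new ProductReaderPlugin() {
            public String formatName() { return "GeoTIFF"; }
            public boolean canDecode(String f) { return f.endsWith(".tif"); }
        });
        System.out.println(detectFormat("scene.tif"));
    }
}
```

The point of the pattern is that a processor only asks the registry for a reader; it never needs to know the details of any particular file format.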
Supported Platforms
NEST is programmed in Java™ for maximum portability. The NEST software has
been successfully tested under MS Windows™ XP®, Vista and 7 as well as under Linux,
Solaris® and Mac OS X operating systems.
Definitions, Acronyms, Abbreviations
Definition of Terms
Product An object that contains remote sensing data for a scene on the earth. A
product can contain meta-data, geo-coding information, tie-point grids and
bands. All band raster datasets within a product have the same pixel
resolution and share the same geo-coding.
Band A raster dataset of a product. The band's sample values are usually the
measurements of a sensor.
Tie-point grid An auxiliary (geophysical) raster dataset of a product. Tie-point grids
usually provide fewer sample values than a band. Missing values, with
respect to the full pixel resolution of a product, are obtained by a linear
interpolation. The tie-point grid data normally does not originate from the
sensor which provided the product's measurement data.
Geo-Coding Provides the geodetic co-ordinates for a given pixel of a product. An image
is geo-coded if it is somehow possible to find the geographical latitude and
longitude values for any pixel. ENVISAT products store their geo-coding
information in the tie-point grids named latitude and longitude. If a
product has been map transformed, it is known to be geo-referenced and
as such, tie-point information is no longer required.
Geo-Reference An image is geo-referenced if any point in the image can be found in a
corresponding reference map by a linear transformation. Every pixel in the
image has the same size if expressed in map units. Geo-referencing a geo-
coded image includes image warping, applying a well known map projection
and pixel re-sampling.
Map Graphic representation of the physical features (natural, artificial, or both)
of a part or the whole of the Earth's surface, by means of signs and
symbols or photographic imagery, at an established scale, on a specified
projection, and with the means of orientation indicated.
Map Projection Orderly system of lines on a plane representing a corresponding system of
imaginary lines on an adopted terrestrial or celestial datum surface. Also,
the mathematical concept for such a system. For maps of the Earth, a
projection consists of a graticule of lines representing parallels of latitude
and meridians of longitude or a grid.
Graticule Network of parallels and meridians on a map or chart. A geographic
graticule is a system of coordinates of latitude and longitude used to define
the position of a point on the surface of the Earth with respect to the
reference ellipsoid.
Grid In connection with maps: a network of uniformly spaced parallel lines
intersecting at right angles. When superimposed on a map, it usually carries
the name of the projection used for the map – that is, Lambert grid,
transverse Mercator grid, universal transverse Mercator grid.
Pixel coordinates Pixel values always refer to the upper left corner of the pixel. Pixel co-
ordinates are always zero based, the pixel at X=0,Y=0 refers to the upper
left pixel of an image and the upper left corner of that pixel.
Pixel value In general a composite of red, green and blue sample values resulting in a
colour as part of an image.
Geodetic co-ordinates Geodetic co-ordinates are given as latitude and longitude values and always
refer - if not otherwise stated - to the WGS-84 ellipsoid. The geodetic co-
ordinates of a pixel, always refer to the upper left corner of the pixel.
Acronyms, Abbreviations
AATSR Advanced Along Track Scanning Radiometer
ADS Annotation Data Set in an ENVISAT data product
ASAR Advanced Synthetic Aperture Radar
BEAM Acronym for Basic ENVISAT Toolbox for (A)ATSR and MERIS
COTS Commercial Off-The-Shelf Software
ECSS European Co-operation for Space Standardisation (documents available at ESTEC at the
Requirements and Standards Division)
EnviView Software developed at ESTEC to visualise and analyse the Envisat data. See User
services section at http://envisat.esa.int/
EO Earth Observation
ESA European Space Agency (see http://www.esa.it/export/esaCP/index.html)
ESRIN European Space Research Institute (see http://www.esa.it/export/esaCP/index.html)
ESTEC European Space Research and Technology Centre (see http://www.esa.it/export/esaCP/
index.html)
ENVISAT ESA satellite (see http://envisat.esa.int/)
GADS Global Annotation Data Set in an ENVISAT data product
HDF Hierarchical Data Format (see http://www.hdfinfo.com/)
HDF-EOS Extended HDF format (see http://ecsinfo.gsfc.nasa.gov/iteams/HDF-EOS/HDF-EOS.html)
MDS Measurement Data Set in an ENVISAT data product
MERIS Medium Resolution Imaging Spectrometer Instrument (see http://envisat.esa.int/)
MODIS Moderate Resolution Imaging Spectroradiometer (see http://modis.gsfc.nasa.gov/
MODIS/)
MPH Main Product Header in an ENVISAT data product
NEST Next ESA SAR Toolbox
OSSD Open Source Software Development
SAR Synthetic Aperture Radar (http://earth.esa.int)
SPH Specific Product Header in an ENVISAT data product
SW Software
NEST Supported Products
Product Readers
NEST is able to ingest a variety of data products into a common internal representation.
An API is provided with NEST to allow users to add data ingestion of other formats and to
support data formats of future missions.
Product Writers
NEST supports various data conversion options and output to common georeferenced data
formats for import into 3rd party software.
● Beam DIMAP
● GeoTiff
● HDF 5
● NetCDF
● ENVI
● JPG
● BMP
● PNG
● Google Earth KMZ
New writer plug-ins can also be developed using the easy-to-use API.
Abstracted Metadata
Abstracted Metadata
A variety of data products can be ingested into a common internal representation. For metadata, this is done
using the Abstracted Metadata.
The Abstracted Metadata is an extract of information and parameters from the actual metadata of the product.
The idea is firstly to gather the parameters needed to run tools and algorithms, and secondly to update
them in line with the processing applied to the product. In fact, the parameters read from the abstracted metadata
can be changed as the result of any processing. In this sense, the abstracted metadata can be considered
a dynamic header.
Each Product Reader knows how to read a particular file format and map the metadata to the Abstracted
Metadata. For any fields that do not exist in a product, a default dummy value of 99999 is used.
The abstracted metadata can be edited and changed after the product has been saved in the internal BEAM-
DIMAP format. Therefore, if external information is available, the user can modify and update the
dummy values in the abstracted metadata.
You may search through the metadata by clicking on the Search Metadata menu item in the SAR Tools Menu. Enter
a partial string of a metadata field name and all entries will be shown in a metadata table.
Importing Metadata
If you are importing a file with no metadata, for example a bitmap or JPEG with the ImageIO reader or
extra metadata for an ENVI product, you can create an XML file that will be read in as the Abstracted Metadata
and be used within the processing.
The metadata file should be named either metadata.xml or filename.xml when importing
filename.hdr or filename.jpg.
The format of the file should be as follows:
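A minimal sketch of such a file is shown below. This is an illustration of the general shape only; the element and attribute names in it are assumptions, not the exact schema.

```xml
<!-- Illustrative sketch only: element and attribute names are hypothetical. -->
<metadata>
    <abstracted-metadata>
        <!-- Only name and value are required per attribute. -->
        <attribute name="MISSION" value="ENVISAT" />
        <attribute name="PASS" value="DESCENDING" />
    </abstracted-metadata>
    <tie-point-grids>
        <!-- latitude and longitude grids are used to create the geocoding. -->
        <tie-point-grid name="latitude" value="45.0 45.1 45.2" />
        <tie-point-grid name="longitude" value="7.0 7.1 7.2" />
    </tie-point-grids>
</metadata>
```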
Only name and value are required for each attribute; the others are optional. Not all attributes are
required; defaults will be used for any that are missing.
Any elements placed in tie-point-grids will be imported as a tie-point grid band and be interpolated to the
image raster dimensions at run time.
The latitude and longitude tie points will be used to create the product geocoding.
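A minimal stand-alone sketch of this kind of run-time interpolation (illustrative only, not the toolbox's actual implementation) might linearly resample a coarse tie-point row to the full raster width:

```java
public class TiePointInterp {

    // Linearly interpolate a coarse row of tie-point samples to a finer
    // raster width. Tie points are assumed to sit at the first and last
    // pixel and to be evenly spaced in between.
    static double[] interpolateRow(double[] tiePoints, int rasterWidth) {
        double[] out = new double[rasterWidth];
        double step = (double) (tiePoints.length - 1) / (rasterWidth - 1);
        for (int x = 0; x < rasterWidth; x++) {
            double pos = x * step;            // position in tie-point units
            int i = (int) Math.floor(pos);
            if (i >= tiePoints.length - 1) {  // clamp at the last tie point
                out[x] = tiePoints[tiePoints.length - 1];
            } else {
                double f = pos - i;           // fractional offset
                out[x] = (1 - f) * tiePoints[i] + f * tiePoints[i + 1];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Three tie points expanded to a five-pixel raster row.
        double[] row = interpolateRow(new double[]{0.0, 10.0, 20.0}, 5);
        System.out.println(java.util.Arrays.toString(row));
    }
}
```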
You can export the metadata from a currently opened product by clicking on Export Metadata in the Product
Writers menu.
DAT Application
A new image view is simply created by double-clicking on a tie-point grid or spectral/geophysical band. You can
open as many images as your computer's RAM allows. After you have opened an image view you can inspect
the images with the Navigation Window.
Tool Windows
Tool windows are used in DAT to display information and properties of the currently
selected data product or a component of the currently selected data product. They are also
used to manage and edit the properties of the current view, such as pins, ROIs or the
colours of the current image view. In contrast to image or metadata views, the DAT has
only a single instance of each particular tool window. It can be shown, hidden, docked or
floating. Tool windows can also be grouped together. You can also save and reload the
current layout of tool windows and tool bars.
You will find all available tool windows in the View/Tool Windows menu. Many tool
windows also have corresponding tool bar icons.
Grouping
To group multiple tool windows into one window, hold down the CTRL key while dragging one window and
drop it on the title bar of another tool window. You will get one window with one tab for each grouped tool
window.
To remove a tool window from a group you have to drag the tab out of the grouped window
stack.
Layout Management
The view menu contains the entry 'Manage Layout'. This menu consists of items to manage
the current layout.
Context Menu
The context menu is invoked by clicking with the right mouse button on the title bar of a
tool window.
Project View
The Project View is a convenient tool for managing your data products. A Project will help
organize your data by storing all related work in one folder. To create a project, select New
Project from the File menu or Toolbar. A dialog will prompt you for a project folder location
and project file name. By default a Project will be created with folders for ProductSets,
Graphs, External Product Links, Imported Products and Processed Products.
Whenever you open a new product, a link to that product will show up under External Product
Links. When processing data, the output folder will default to your project's Processed
Products folder.
Within ProductSets, Graphs, Imported Products, Processed Products and any other user
created folder, the project folders mirror the file structure of the physical hard disk. Therefore
any change you make to the physical project folders on disk will be reflected in your project.
Right click on a folder to create a new sub-folder, rename or remove a folder. Right click on a
file to open or remove the file.
Importing Products
Products can be imported from the External Product Links or by using the Product Library tool.
Imported products are converted to DIMAP format and saved within your Project folder.
ProductSets
A Product Set is a list of products you would like to group to apply the same processing to
them in a graph.
To create a product set, right click on the ProductSet folder and create a new
ProductSet. Drag and drop products from a project onto the table in the ProductSet
dialog. From the GraphBuilder use the ProductSet by adding a ProductSetReader and
dragging the ProductSet into its dialog. The dialog table should be populated with the list of
products in the ProductSet.
You may drag and drop a ProductSet into the Batch Processing tool to apply a graph to each
of the products in the ProductSet.
DAT's Products View
All products when opened are added to the Products View's open product list. The product
list is a tree view with up to five root nodes for each open product:
You can quickly open an image view for a band or tie-point grid by double-clicking on an
item of the expanded product root nodes. A Metadata View is opened if you click on a
metadata node.
The information concerning each pixel can be analysed interactively in the pixel information
view.
You can un-dock each section within the Pixel View using the floating button, and dock it back using the
docking button in the header bar.
The information displayed belongs to the current image pixel beneath the mouse pointer:
● Geo-location:
Displays the image position, the geographic-location and also the map co-ordinates if
the current product is map-projected.
● Tie Point Grids:
Shows the values of the tie-point grids.
● Time Info:
The time information associated with the current line.
● Bands:
The value of the pixel beneath the mouse pointer.
● Flags:
Displays the state of the flags at the current pixel.
If a pin is selected in the current image view, you can select Snap to selected pin in order
to "freeze" the pixel information to the position of the currently selected pin.
Note: Flag values are only displayed if a corresponding flag dataset has been loaded. Use
the right mouse button over a flag dataset in the product view in order to load a flag
dataset's sample values.
Note: In the preferences dialog you can deselect the option that only pixel values of
displayed or loaded bands are shown.
Image View
The image view displays the sample values of raster datasets such as bands and tie-point grids as an image.
Slider Bar
By default the horizontal and vertical slider bars are disabled for the image view. You can activate the sliders again
in the preferences dialog in the section Layer Properties.
If the slider bars are visible, you also have a small button in the lower right corner of the image view which zooms
to the full image bounds when you click on it.
Navigation Control
The navigation control, located in the upper left corner of the image view, can be used to pan, zoom
and rotate the image. The control is dimmed while the mouse pointer is away from it and becomes visible
when the mouse gets near it.
If you rotate the image, you can hold the CTRL key to change the stepping of the rotation angle from
continuous values to discrete quarter-turn (90°) steps.
The left image shows the dimmed version of the control; the right one shows the active control with a rotation.
You can deactivate the navigation control in the preferences dialog in the section Layer Properties
Entries:
● Copy Pixel Info to Clipboard - copies all sample values at the current pixel position, their names and physical
units (which you also can see in the pixel view) to the clipboard
● Show ROI Overlay - toggles the visibility state of the ROI overlay
● Show Graticule Overlay - toggles the visibility state of the graticule overlay
● Show Pin Overlay - toggles the visibility state of the pin overlay
● Create Subset from View - opens a new product subset dialog, with predefined spatial subset scene from
the current image view
Note: The Copy Pixel Info to Clipboard command copies information as tabulator-separated text into the
clipboard and may therefore be pasted directly into a spreadsheet application (e.g. MS Excel).
The Navigation Window is used to move the viewport of an image view, to zoom in and out of it and to rotate
the image in steps of 5 degrees using the spinner control below the image preview. The current viewport is
depicted by a semi-transparent rectangle which can be dragged in order to move the viewport to another location.
It also provides a slider used to zoom in and out of the view:
The text box at the left side of the slider can be used to adjust the zoom factor manually. You can enter a decimal
value to set the zoom factor of the view, or enter the zoom factor in the same format as it is displayed.
The Navigation window additionally provides the following features via its tool buttons:
Zoom In
A click on the Zoom-In-Tool will increase the magnification of the image in discrete steps, centered on the
image view. The result will be displayed instantly and the magnification value will be refreshed.
Zoom Out
A click on the Zoom-Out-Tool will decrease the magnification of the image in discrete steps, centered on the
image view. The result will be displayed instantly and the magnification value will be refreshed.
Zoom All
A click on the Zoom-All-Tool will adjust the magnification so that the whole image fits into the image view. The
result will be displayed instantly and the magnification value in the editor will be refreshed. The same effect can
be achieved by clicking on the icon in the lower right corner of the image view.
Synchronise
The following image shows DAT with six image views that have been arranged with the "Tile Evenly" command in
the Window Menu. When the Synchronise-Button is pressed, all available tools of the Navigation Window operate
on all open image views.
As a result,
● All open image views show the same section of the image,
● Dragging the highlighted area around results in simultaneous scrolling of all open image views,
● Moving the slider or applying any of the zooming tools -including the value field- will be reflected instantly in
all image view windows.
The Colour Manipulation Window
Overview
When you open an image view of a data product's band or tie-point grid, DAT either loads image settings from
the product itself (BEAM-DIMAP format only) or uses default colour settings. The colour manipulation window is
used to modify the colours used for the image. Depending on the type of the source data used for the images, the
colour manipulation window offers four different editors:
A.1: Editor for images of a single spectral/geophysical band in Sliders mode
A.2: Editor for images of a single spectral/geophysical band in Table mode
B: Editor for images of a single, index-coded band
C: Editor for images using separate R,G,B channels
To open the colour manipulation window, use the corresponding icon in the main toolbar or select View/
Tool Windows/Colour Manipulation from the main menu.
Changes in the colour manipulation window will become effective only if the Apply button is pressed.
Table Mode
By changing the Editor option from Sliders to Table, the sample value to colour assignment can be done in a
table:
Here you can enter the colour and sample values directly by clicking into a table cell.
Labels and colours are simply changed by clicking into the corresponding table cell. In the More Options panels
you can adjust the No-Data Colour.
Common Functions
No-Data Colour
In the More Options panel of all editors you can adjust the No-Data Colour, which will be used for
no-data pixels in the source band(s). If you select None, no-data samples will be transparent in the image.
Histogram Matching
It is sometimes desirable to transform an image so that its histogram matches a specified functional form.
Applying an equalized or normalized histogram matching to an image can often improve image
quality.
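As an illustration of the equalized option (a simplified stand-alone sketch, not DAT's actual implementation), classic histogram equalization remaps each grey level according to the cumulative distribution of the image:

```java
public class HistEqualize {

    // Classic histogram equalization for 8-bit grey levels: each level
    // is remapped according to the cumulative distribution function,
    // spreading frequently used levels over the available range.
    static int[] equalize(int[] pixels) {
        int[] hist = new int[256];
        for (int p : pixels) hist[p]++;

        // Cumulative distribution function over the 256 grey levels.
        int[] cdf = new int[256];
        int running = 0;
        for (int v = 0; v < 256; v++) {
            running += hist[v];
            cdf[v] = running;
        }

        // Map each level so the output distribution is roughly flat.
        int total = pixels.length;
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = (int) Math.round(255.0 * cdf[pixels[i]] / total);
        }
        return out;
    }

    public static void main(String[] args) {
        // A dark, low-contrast image is stretched over the full range.
        int[] stretched = equalize(new int[]{10, 10, 20, 20, 30, 30, 40, 40});
        System.out.println(java.util.Arrays.toString(stretched));
    }
}
```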
A click on the icon opens a dialog where you can select the bands to which you can assign the current colour
palette. If the destination band has a similar pixel value range, the slider positions are exactly preserved;
otherwise, they are proportionally distributed over the valid range of the destination band.
Click the icon to import colour palette definition files and the icon to export the current colour
manipulation settings.
The colour palette information used for the current image can also be exported into an image file. Click the
context menu item Export Color Legend over an open image view in order to export the colour legend.
The colour palette can also be exported as a Color Palette Table. Choose Color Palette from the File menu to
export the table as a *.csv or *.txt file.
Slider Auto-Adjustment
A click on the icon adjusts the sliders to cover 95% of all pixels in the band.
A click on the icon adjusts the sliders to cover 100% (the full range) of all pixels in the band.
A click on the icon distributes the inner sliders evenly between the first and the last slider.
Click on the icon to zoom in vertically or on the icon to zoom into the histogram horizontally.
Click on the icon to zoom out vertically or on the icon to zoom out of the histogram horizontally.
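The 95% adjustment is essentially a percentile clip. A minimal sketch of that idea (illustrative only, not DAT's code) computes the bounds that exclude an equal fraction of samples from each tail:

```java
public class PercentileClip {

    // Return {low, high} bounds that cover the requested fraction of the
    // samples, clipping an equal tail on each side (e.g. 0.95 drops the
    // lowest and highest 2.5% of values).
    static double[] coverage(double[] samples, double fraction) {
        double[] sorted = samples.clone();
        java.util.Arrays.sort(sorted);
        int n = sorted.length;
        int tail = (int) Math.floor(n * (1.0 - fraction) / 2.0);
        return new double[]{sorted[tail], sorted[n - 1 - tail]};
    }

    public static void main(String[] args) {
        double[] band = new double[100];
        for (int i = 0; i < 100; i++) band[i] = i;   // values 0..99
        double[] bounds = coverage(band, 0.95);
        // Drops two samples from each tail of the 100 values.
        System.out.println(bounds[0] + " .. " + bounds[1]);
    }
}
```

The sliders would then be placed at the returned low and high bounds instead of the raw minimum and maximum.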
Reset
The reset icon is used to revert the window to use default values.
Context help
The help icon opens the Help for the current context.
The Layer Manager Window
Overview
The layer manager is used to control what content is shown in the current image view and how it is
displayed.
You can open the layer manager tool window from the View / Tool Windows / Layer Manager
menu or by clicking the icon in the tool bar. A layer can be shown or hidden by using the check
box left to the layer name.
Any number of layers may be open at one time. Each layer has a rendering order and, much like
a painter's algorithm, layers are drawn on top of each other. The user has control over the layer
ordering and the translucency of the images. Finer visibility control can be achieved by selecting a
layer and using the Transparency slider. Visibility changes take effect directly in the image view.
Functions
Add layer: Opens the Add Layer assistant window which lets you add a new layer to the current
view.
Remove layer: Removes the selected layer. Note that not all layers can be removed.
Edit layer: Opens the Layer Editor tool window which lets you alter the display properties of the
selected layer.
Move up: Moves the selected layer up so that it is displayed on top of all layers following it in the
list.
Move down: Moves the selected layer down so that it is displayed underneath of all overlying
layers in the list.
Move left: Moves the layer into the parent group (if any).
Move right: Moves the layer into a child group (if any).
Layer Editors
Layer editors let you alter layer properties in order to control the display of layer data. Different layer
types have different layer editors. The following screenshot shows the layer editor of the pin layer:
Changes to the properties in this window are directly propagated to the layer selected in the layer
manager and display updates will occur immediately in the current image view.
Adding Layers
The Add Layer assistant window shows a number of layer sources. It depends on the type of the
current image view which layer sources are present in this list.
In addition to the raster data products, the DAT will be able to load and display vector data as layers.
This will prove helpful for overlaying shoreline, political boundaries, navigational charts, etc., and using
this data to mask land or water areas.
The following standard layers are always usable:
● Image from Web Map Service (WMS) - Downloads a thematic image map layer from a
dedicated internet WMS. Applicable to map-projected (geo-referenced) views only. A good list of
public services is provided at http://www.skylab-mobilesystems.com/en/wms_serverlist.html.
● Wind Speed from MERIS ECMWF annotation data - Displays wind speed vectors. Applicable to
MERIS L1b/L2 band views only.
Select a layer source and press Next. Depending on the type of layer selected, the assistant window
will guide you through one or more option pages. Once the Finish button is enabled, you can add the
new layer to the current image view.
Layer manager tool view after adding an ESRI shapefile layer
The World Map
When working with satellite data, it is often not obvious at first sight which region of the
world is covered by the data product. To facilitate finding the location of the product on the
globe, DAT has a built-in world map that shows the projection of the product boundaries on
a virtual globe.
To invoke the World Map simply click on the globe icon in the main toolbar or select World
Map from the 'View' menu. This will open a window similar to the one below.
To navigate around the World Map, left-click on the map and drag the mouse to pan. Use the
mouse wheel to zoom in and out.
Place names, NASA's Blue Marble and aerial optical data will automatically be downloaded as
long as your computer is connected to the internet.
The World Map is intended to show you a quick reference of where your product is on the
globe. To overlay the actual data and view it in 3D, use the 3D WorldWind View.
The WorldWind View
The WorldWind View allows you to view the world in 3D, automatically download and view imagery and elevation
data via Web Map Services (WMS) and overlay your SAR data.
The WorldWind View has been created using NASA's WorldWind Java SDK.
A 3D video card with updated drivers is necessary. World Wind has been tested on Nvidia, ATI/AMD, and
Intel platforms using Windows and Ubuntu Linux.
Note: Update your video card drivers (ATI/AMD, Nvidia or Intel).
Navigate using the left mouse button to pan by clicking and dragging and the mouse wheel to zoom in and out.
Click and drag the right mouse button to tilt and rotate the camera angle.
In the layer menu, you can select the layers to be shown including
● Place Names,
● Opened Products
● Open Street Map
● MS Virtual Earth
● Landsat 7
● BlueMarble
Product Library
The Product Library tool maintains a database of locally stored products for fast retrieval of
their metadata. Search results are displayed in a table listing product name, path,
mission, product type, acquisition date, pass, pixel spacing, etc., without actually opening the original products.
The footprint of each image is outlined on the world map on top of Blue Marble imagery with a place-names
vector layer. Multilooked quicklooks of the images are also generated and stored in the database for quick previewing.
The Product Library can optionally automatically add new metadata to the database whenever a product is
manually opened. The user may also define a list of repository folders that will be scanned recursively for new
or modified products.
The product readers of the Toolbox are able to abstract the metadata from each product into the Generic
Product Model of the Toolbox. Thereby, the Product Library and all processing tools of the Toolbox are able to
work with the metadata in this common form without requiring the user to manually input any metadata.
Products may be searched in terms of mission, product type, beam mode, ground location, date and time
of acquisition, processing history, previously defined AOIs, and suggested image pairing. Products may also
be searched by graphically drawing an area of interest on the world map and querying the database for products
that cover the AOI.
The user may then select from the resulting table of products which products to open, add to a project, or
batch process directly from the Product Library.
Importing into a Project
After having selected the products you wish to import, press the Import to Project button to convert the products
into DIMAP format and add them to the currently opened project. If a project is not currently opened you will
be prompted to create a new project.
If you would like to simply open the products without converting them into DIMAP format then press the
Open Selected button to add them to the DAT Product View list.
Batch Processing
To Batch Process a list of selected products press the Batch Process button. This will open the Batch Processing
dialog and add your products to the input list. If you right click on the Batch Processing button, a popup menu
will appear with all the graphs from the User Graphs menu. You may select one and then the Batch Processing
dialog will default to the graph selected.
The Preferences Dialog
On the left side of the Preferences dialog window you can see a thematic tree where you can select the context of
the settings you want to change. The following example shows a screenshot where the settings for the user
interface behaviour can be edited.
UI Behavior
This preferences page contains general user interface behavior and memory management settings.
UI Appearance
● UI Font:
Sets DAT user interface font name and size.
● UI Look and Feel:
Lets you select the appearance (Look and Feel) of the user interface of DAT.
Product Settings
Geo-location Display
DAT uses an image coordinate system whose origin (x=0, y=0) is the upper left corner of the upper left pixel. Image
X-coordinates increase to the right, Y-values increase downwards. The center of the pixel in the origin is then located
at (x=0.5, y=0.5).
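This convention can be expressed in one line of code. The sketch below is illustrative only (it is not part of the Toolbox) and returns the image coordinates of a pixel's centre given its integer column/row index:

```python
def pixel_center(i, j):
    """Centre of pixel (i, j) under DAT's convention: the origin
    (x=0, y=0) is the upper-left corner of the upper-left pixel,
    so pixel centres sit at half-integer coordinates."""
    return (i + 0.5, j + 0.5)
```

For example, the pixel in the origin, (0, 0), has its centre at (0.5, 0.5), as stated above.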
Data Input/Output
Image Display
● Interpolation method
Here you can choose how neighbouring pixel colours are interpolated. Choose
❍ Nearest Neighbour to prevent colours from being mixed,
❍ Bi-linear or Bi-Cubic to smooth the image.
Note: choosing a method other than Nearest Neighbour may slow down image handling. On Mac OS X, however, Bi-linear is the
system default and therefore the fastest.
● Background colour
Choose a background colour for the image displayed in the image view.
● Show image border
Here you can specify if an image border should be visible in the image view. If yes, you can also set the border
size and colour.
● Show pixel border in magnified views
Define whether a border should be drawn around a pixel under magnification when the mouse cursor points at it.
No-data overlay
This preferences page provides options to customize DAT's no-data overlay.
● Color:
Sets the colour of the no-data overlay.
● Transparency:
Sets the transparency of the no-data overlay.
Graticule Overlay
This preferences page provides options to customize DAT's graticule overlay.
● Grid behaviour:
❍ Compute latitude and longitude steps
The step size of the grid lines will be computed automatically.
❍ Average grid size in pixels
Defines the size in pixels of the grid cells.
❍ Latitude step (dec. degree):
Sets the grid latitude step in decimal degrees.
❍ Longitude step (dec. degree):
Sets the grid longitude step in decimal degrees.
● Line appearance:
❍ Line colour:
Sets the colour of the grid lines.
❍ Line width:
Sets the width of the grid lines.
❍ Line transparency:
Sets the transparency of the grid lines.
● Text appearance:
❍ Show text labels:
Sets the visibility of the text labels.
❍ Text foreground colour:
Sets the colour of the graticule text (latitude and longitude values).
❍ Text background colour:
Sets the background colour of the graticule text (latitude and longitude values).
❍ Text background transparency:
Sets the transparency of the background colour of the graticule text.
Pin Overlay
This preferences page provides options to customize DAT's pin overlay.
GCP Overlay
This preferences page provides options to customize DAT's GCP overlay.
● Outline appearance:
❍ Outline shape
If selected, the outline of the shape is also drawn.
❍ Shape outline colour:
Sets the colour of a shape's outline.
❍ Shape outline transparency:
Sets the transparency of a shape's outline. Low values produce high coverage.
❍ Shape outline width:
Sets the width of a shape's outline.
● Fill appearance:
❍ Fill shape
If selected, the shape area is also drawn.
❍ Shape fill colour:
Sets the colour of a shape's area.
❍ Shape fill transparency:
Sets the transparency of a shape's area. Low values produce high coverage.
ROI Overlay
This preferences page provides options to customize DAT's ROI overlay.
● Color:
Sets the fill colour of a ROI.
● Transparency:
Sets the transparency of a filled ROI.
RGB Profiles
This preference page is used to edit the RGB profiles used for RGB image creation from various product types. An RGB-
Profile defines the arithmetic band expressions to be used for the red, green and blue components of an RGB
image. For detailed information about RGB-Profiles please refer to the chapter RGB-Image Profile located at DAT/
Tools/Imaging Tools.
Profile
Lets you select one of the stored RGB-Profiles to use for the creation of the new image view.
RGB Channels
Use the edit button to edit the expression for a specific channel in the expression editor.
Note: The arithmetic expressions are not validated by DAT; take care to use the correct syntax.
Please refer to the Arithmetic Expression Editor documentation for the syntax and capabilities of expressions.
Logging
This preferences page provides options to customize DAT's logging behavior.
● Enable logging
If this option is selected, DAT writes a log file which can be used to reconstruct user interactions and to trace
system failures.
● Log filename prefix:
Here you can enter a prefix for the log file name. The disk file name is assembled from this prefix plus a log
file version number and an identification number. Log files are always written to the folder log located in the
NEST user directory. Under Windows, this folder would be C:\Documents and Settings\user\.nest\. Under Linux,
this folder would be /home/user/.nest/.
● Echo log output (effective only with console)
If DAT is started with a text console window using the %NEST_HOME%/DAT.bat (Windows) or
$NEST_HOME/DAT.sh (Linux) scripts, the log file entries are also printed out to the console window.
● Log extra debugging information
Sets DAT into the debugging mode which can be helpful to find software bugs.
The Settings Dialog
Settings
The Settings Dialog allows you to customize the default data path directories. From this window it is possible
to choose the root directory in which the Toolbox looks for the default digital elevation models and the supported
orbit files.
String values specified here can use variables pointing to other entries or to environment variables. For example,
the root demPath can hold the path to all your DEM files, and aceDEMDataPath could be ${demPath}\ACE
to specify the location of the ACE DEM files.
Changes made via the Settings dialog will be saved in a settings.xml file.
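As an illustration of the variable substitution described above, a settings file using such references might look like the fragment below. The element names are invented for this sketch; the actual settings.xml schema may differ.

```xml
<!-- Illustrative fragment only: the real settings.xml schema of the
     Toolbox may differ. It shows one entry referencing another via
     ${...} substitution, as described above. -->
<settings>
  <demPath>/data/dem</demPath>
  <!-- resolves to /data/dem/ACE -->
  <aceDEMDataPath>${demPath}/ACE</aceDEMDataPath>
</settings>
```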
Wave Mode Polar View
ENVISAT ASAR has two level 1 products and one level 2 product:
ASA_WVI_1P: The product can contain up to 400 SLC imagettes, each 10 km x 10 km in size, which are
acquired every 100 km. The imagettes are 1 look in azimuth and 1 look in range. The WVI product also contains
the Cross Spectra of the imagettes.
Each cross spectrum is a polar grid of complex data with 24 bins in wavelength and 36 bins in direction (each
10 degrees wide).
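The 36 direction bins of 10 degrees each cover the full circle. A quick sketch of the bin centres, assuming the first bin starts at 0 degrees (that starting offset is an assumption, not stated in the text):

```python
# The cross-spectrum polar grid has 36 direction bins, each 10 degrees
# wide, so 36 x 10 degrees covers the full circle. Assuming the first
# bin starts at 0 degrees, the bin centres in degrees are:
BIN_WIDTH_DEG = 10
direction_bin_centers = [k * BIN_WIDTH_DEG + BIN_WIDTH_DEG / 2.0
                         for k in range(36)]
```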
Polar View
The Polar View is used for displaying Cross Spectra and Wave Spectra from ENVISAT ASAR Wave mode products.
The Polar View shows one record at a time. You may use the buttons and slider at the bottom of the view to
change the currently viewed record. The Animate button allows the view to cycle through records automatically.
Readouts for Peak Direction and Wavelength, Min/Max Spectrum and Wind Speed and Direction can be viewed on
the right hand side of the polar plot.
Cursor readouts for Wavelength, Direction and Value can be viewed on the left hand side of the polar plot as
you move your mouse over each bin.
● ASAR products
● MERIS L1b/L2 RR/FR/FRG/FSG products
● AATSR TOA L1b and NR L2
MER_FR__1P MERIS Full Resolution Geolocated and Calibrated TOA Radiance Product
MER_FR__1P MERIS Full Resolution Geolocated and Calibrated TOA Radiance Product
Valid until 2002-12
MER_LRC_2P MERIS Extracted Cloud Thickness and Water Vapour Product for Meteo Users
MER_RRC_2P MERIS Extracted Cloud Thickness and Water Vapour Product
MER_RRV_2P MERIS Extracted Vegetation Indices Product
ERS products in ENVISAT format have extension .E1 for ERS 1 and .E2 for ERS 2
Import ERS1/2 SAR in CEOS Format
● SLC,
● SGF,
● SGX,
● SSG,
● SCW,
● SCN
Import RADARSAT-2 SAR
Replacing Metadata
You may also want to replace the metadata in the newly imported PolsarPro output with the
metadata from the original product. To do so, both products must be of the same
dimensions and both must be currently opened in the DAT. Select the PolsarPro product
and from the Utilities -> Metadata menu select Replace Metadata. Then in the dialog that
pops up select the original product where the metadata will come from. This will replace the
PolsarPro's empty metadata with that of the original product.
This can also be done from a graph using the ReplaceMetadata operator. The operator
takes two products as input. The first product connected should be the original product
where you want the metadata to come from.
Import ALOS PALSAR
Import AVNIR-2
The AVNIR-2 reader enables DAT to import CEOS formatted AVNIR-2 Level-1 data products.
Note: The AVNIR-2 reader is implemented in line with the "ALOS AVNIR-2 Level 1 Product
Format Descriptions Rev.G" (http://www.eorc.jaxa.jp/ALOS/doc/format.htm) and is
capable of reading data products of Level 1A, 1B1 and 1B2.
A brief description of the sensor characteristics can be found at http://www.eorc.jaxa.jp/ALOS/about/avnir2.htm
Import PRISM
Import PRISM
The PRISM reader enables DAT to import CEOS formatted PRISM Level-1 data products.
Note: The PRISM reader is implemented in line with the "ALOS PRISM Level 1 Product
Format Descriptions Rev.G" (http://www.eorc.jaxa.jp/ALOS/doc/format.htm) and is
capable of reading data products of Level 1A, 1B1 and 1B2.
A brief description of the sensor characteristics can be found at http://www.eorc.jaxa.jp/ALOS/about/prism.htm
Import GeoTIFF
Import GeoTIFF
This command allows you to import the data of a GeoTIFF file.
GeoTIFF is an extension of the TIFF 6.0 specification. These image files contain additional
information about their georeferencing and their spatial resolution. A wide range of projected
coordinate systems is supported, including UTM, US State Plane, and national grids. For
further information, have a look at the following documents:
● GeoTIFF Homepage
● TIFF Specification
For the features, limitations and constraints of the GeoTIFF import, see the following list:
● BMP
● PNG
● GIF
● JPEG
● BPM
● PPM
● PGM
● RAW
Abstracted Metadata can be imported by creating a metadata.xml file in the same folder as
the image.
The BEAM-DIMAP Data Format
Introduction
The DIMAP format was developed by SPOT-Image, France. The software uses a
special DIMAP profile called BEAM-DIMAP.
BEAM-DIMAP is the standard I/O product format of the software.
Overview
A data product stored in this format is composed of
● a single product header file with the suffix .dim in XML format containing the product
meta-data and
● an additional directory with the same name plus the suffix .data containing ENVI® -
compatible images for each band.
ENVI
samples = 1100
lines = 561
bands = 1
header offset = 0
file type = ENVI Standard
data type = 4
interleave = bsq
byte order = 1
An ENVI header file starts with the text string ENVI, which allows ENVI to recognize it as a native
file header. Keywords within the file indicate critical file information. The
following keywords are used by the BEAM-DIMAP format:
description: A character string describing the image or the processing performed.
samples: Number of samples (pixels) per image line for each band.
lines: Number of lines per image for each band.
bands: Number of bands per image file. For BEAM-DIMAP the value is always 1 (one).
header offset: The number of bytes of embedded header information present in the file. These bytes are skipped when the ENVI file is read. For BEAM-DIMAP the value is always 0.
file type: Refers to specific ENVI-defined file types such as certain data formats and processing results. For BEAM-DIMAP the value is always the string "ENVI Standard".
data type: Parameter identifying the type of data representation, where 1 = 8-bit byte; 2 = 16-bit signed integer; 3 = 32-bit signed long integer; 4 = 32-bit floating point; 5 = 64-bit double-precision floating point; 6 = 2x32-bit complex, real-imaginary pair of single precision; 9 = 2x64-bit complex, real-imaginary pair of double precision; 12 = 16-bit unsigned integer; 13 = 32-bit unsigned long integer; 14 = 64-bit signed integer; and 15 = 64-bit unsigned integer.
interleave: Refers to whether the data are band sequential (BSQ), band interleaved by pixel (BIP), or band interleaved by line (BIL). For BEAM-DIMAP the value is always "bsq".
byte order: Describes the order of the bytes in integer, long integer, 64-bit integer, unsigned 64-bit integer, floating point, double precision, and complex data types. Byte order = 0 is Least Significant Byte First (LSF) data (DEC and MS-DOS systems); byte order = 1 is Most Significant Byte First (MSF) data (all others: SUN, SGI, IBM, HP, DG). For BEAM-DIMAP the value is always 1 (Most Significant Byte First = big-endian order).
x-start and y-start: Parameters defining the image coordinates of the upper-left-hand pixel in the image. The values in the header file are specified in "file coordinates", a zero-based numbering.
map info: Lists geographic coordinate information in the order projection name (UTM), reference pixel x location in file coordinates, pixel y, pixel easting, pixel northing, x pixel size, y pixel size, projection zone, and "North" or "South" (UTM only).
projection info: Parameters describing user-defined projection information. This keyword is added to the ENVI header file if a user-defined projection is used instead of a standard projection.
band names: Allows entry of specific names for each band of an image.
wavelength: Lists the center wavelength values of each band in an image.
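The fixed values above (bands = 1, header offset = 0, interleave = bsq, byte order = 1) make a BEAM-DIMAP band image straightforward to decode. The sketch below, which uses only the standard library and is not part of the Toolbox, parses such a header and unpacks a 32-bit floating point band (data type 4, as in the sample header above) into rows of floats. It handles only this one case, not the full ENVI header specification:

```python
import struct

def parse_envi_header(text):
    """Parse 'keyword = value' lines of an ENVI header into a dict."""
    fields = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip()
    return fields

def read_float_band(header_text, raw_bytes):
    """Decode one big-endian float32 BSQ band as a list of rows.

    Handles only the fixed BEAM-DIMAP case described above:
    data type = 4, interleave = bsq, byte order = 1, bands = 1.
    """
    h = parse_envi_header(header_text)
    assert h["data type"] == "4" and h["interleave"] == "bsq"
    assert h["byte order"] == "1" and h["bands"] == "1"
    samples, lines = int(h["samples"]), int(h["lines"])
    offset = int(h.get("header offset", "0"))
    count = samples * lines
    # '>' selects big-endian (byte order = 1), 'f' a 32-bit float.
    values = struct.unpack(">%df" % count,
                           raw_bytes[offset:offset + 4 * count])
    return [list(values[r * samples:(r + 1) * samples])
            for r in range(lines)]
```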
Import Landsat TM
Import Landsat TM
The Landsat TM reader enables DAT to import Fast formatted Landsat TM data products.
More about Landsat TM characteristics can be found at http://landsat.usgs.gov/resources/project_documentation.php.
Import NetCDF
Import NetCDF
The NetCDF reader enables DAT to import NetCDF data products.
The NetCDF reader supports any image-like NetCDF file structure in NetCDF version 3 or 4.
Note: You can find additional information about NetCDF at http://www.unidata.ucar.edu/software/netcdf/.
Import HDF
Note: The Generic Binary reader currently only supports BSQ formatted data.
Import Complex Generic Binary
Note: The Complex Generic Binary reader currently only supports BSQ (Band-Sequential)
formatted data.
GETASSE30 Elevation Model
I. Mean sea surface height over sea and height over land, both referenced to the WGS84 ellipsoid.
Resolution: 30 arc second latitude and longitude
Unit: meter
File name example: 45S045W.GETASSE30, where the first number is the latitude and the second number the
longitude of the most south-westerly pixel
Data format: binary, 1800*1800 signed 16-bit integer values, big endian order
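The tile naming scheme can be reproduced programmatically. The sketch below builds a tile file name from the latitude and longitude of the tile's south-west corner; note that the zero-padding widths (two digits for latitude, three for longitude) are inferred from the single example above, and are therefore an assumption:

```python
def getasse30_tile_name(lat_sw, lon_sw):
    """Build a GETASSE30 tile file name such as '45S045W.GETASSE30'.

    lat_sw/lon_sw are the latitude and longitude of the tile's
    south-west corner in whole degrees. The padding widths follow
    the example file name above (an assumption, not a specification).
    """
    ns = "S" if lat_sw < 0 else "N"
    ew = "W" if lon_sw < 0 else "E"
    return "%02d%s%03d%s.GETASSE30" % (abs(lat_sw), ns, abs(lon_sw), ew)
```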
The GETASSE30 elevation data
1. Over land, between 60N and 60S, where SRTM30 DEM data are available, the output value is the sum of the
SRTM30 elevation and the EGM96 geoid height.
2. Over land, between 60N and 60S, where SRTM30 DEM data are not available, the output value is the sum of the
ACE elevation and the EGM96 geoid height.
3. Over land, above 60N or below 60S, where ACE data are available, the output value is the sum of the ACE elevation
and the EGM96 geoid height.
4. Over sea, where neither ACE DEM nor SRTM30 data are available, if MSS data are available, the
output value is the MSS.
5. Over sea, where neither ACE DEM nor SRTM30 data are available, if MSS data are not available, the
output value is the EGM96 value.
● Flag
Import Geometry
You can either import transect data or an ESRI Shapefile.
If the current product is geo-coded, which is always true for ENVISAT products or map-projected
products, and geodetic co-ordinates are given, the pixel co-ordinates are
rejected. In this case, DAT computes the actual pixel co-ordinates for the current
product.
For example (note, 5th column is ignored):
Either pixel or geodetic co-ordinates must be given. Again, if geodetic co-ordinates are
present, they override the point's pixel co-ordinates, since these are recomputed
by DAT. The columns can appear in any order.
For example (note, columns named "Index" and "radiance_11" are ignored):
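The example table itself is not reproduced in this text. A file of the kind described could look like the fragment below; the column names and values are invented for illustration. The columns named "Index" and "radiance_11" would be ignored, and the geodetic co-ordinates would take precedence over the pixel co-ordinates:

```
Index   Pixel-X Pixel-Y Lat     Lon     radiance_11
0       100     200     53.10   10.00   65.27
1       101     200     53.10   10.01   64.98
```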
● GeoTIFF Homepage
● TIFF Specification
Features and limitations of the GeoTIFF export, and constraints on interaction with other
GIS tools, are given in the following list:
● All bands of a GeoTiff file must have the same data type. Therefore a common data
type is identified which is suitable for all bands. Double precision data type is not
supported.
● The data of a band is always written raw and unscaled.
● Virtual and filtered bands can only be reimported by the Toolbox.
● All projections available in Reprojection are also supported by the GeoTIFF writer. As a
fallback, if a map projection or other geocoding is not directly supported, a tie-point
grid is written to the GeoTiff file.
The exported GeoTiff files were tested for compatibility with ENVI and ArcGIS. While the
files are fully compatible with ENVI, ArcGIS has problems understanding
stereographic projections.
● An index coded band can only be correctly decoded by other GIS tools, if it is written as
an image with one band.
Export HDF5
The ENVI Ground Control Point (GCP) files are ASCII files that contain the pixel coordinates
of tie points selected from a base and warp image using ENVI’s registration utilities. They
are assigned the file extension .pts by default and begin with the keywords "ENVI
Registration GCP File." For image-to-image registration, pixel coordinates are listed in the
order Base image X, Y, Warp Image X, Y. An example of a typical image-to-image .pts file
is shown here:
For image-to-map registration, coordinates are listed as map X, map Y, image X, image Y.
An example of a typical image-to-map .pts file is shown here:
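The example files themselves are not reproduced in this text. An illustrative image-to-image .pts file of the kind described might look like the fragment below; the comment lines and all coordinate values are invented for this sketch, so consult files produced by ENVI for the exact layout:

```
; ENVI Registration GCP File
; base image X, base image Y, warp image X, warp image Y
 120.50  340.25  118.75  338.00
 410.00  512.50  408.25  510.75
```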
● raster data
● colour legend
● pins
The exported image is saved in KMZ format, based on KML, the Keyhole Markup Language
(http://earth.google.com/kml/). To view this file, a version of Google Earth
(http://earth.google.com/) is necessary.
Export ROI Pixels
The command is similar to Export Transect Pixels, and the export format is identical.
The "Copy to Clipboard" option will copy the ROI pixel values to the clipboard. This may
take a few moments, depending on the number of pixels.
The "Write to File" option brings up a file selection dialog to select the destination directory
and to assign a name to the text file. The written file may easily be imported with any text
editor or spreadsheet software.
Note: in low-memory situations, it may be better to export to a file instead of copying to
the clipboard.
Export Transect Pixels
File Format
The exported file is separated into two sections. The Header Section contains general
information about the exported colour palette. The Data Section contains the Color Palette
Table. When exporting the colour palette, the variables in curly braces are replaced by their
current value.
Header Section
# Band: {BAND_NAME}
# Sample unit: {SAMPLE_UNIT}
# Minimum sample value: {MIN_VALUE}
# Maximum sample value: {MAX_VALUE}
# Number of colors: {COLOR_COUNT}
Data Section
ID;Sample;RGB
{COLOR_INDEX};{SAMPLE_VALUE};{RED}, {GREEN}, {BLUE}
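With the placeholders substituted, an exported palette file could look like the example below; the band name, unit and colour values are invented for illustration:

```
# Band: radiance_5
# Sample unit: mW/(m^2*sr*nm)
# Minimum sample value: 0.0
# Maximum sample value: 120.0
# Number of colors: 3
ID;Sample;RGB
0;0.0;0, 0, 255
1;60.0;0, 255, 0
2;120.0;255, 0, 0
```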
RGB-Image Profile
RGB-Image Profile
In this window you define the RGB channels for a new RGB image view. You
can load existing RGB-Profiles or create and store new ones.
Profile
Selects one of the stored RGB-Profiles to use for the creation of the new image view.
RGB Channels
Red - Defines the arithmetical expression for the red channel.
Green - Defines the arithmetical expression for the green channel.
Blue - Defines the arithmetical expression for the blue channel.
Use the edit button to edit the expression for a specific channel in the expression editor.
RGB-Profile File
RGB-Profile files must have the extension ".rgb". Several default profiles are provided
in $NEST_HOME$/auxdata/rgb_profiles.
An RGB-Profile file contains several entries. The syntax of an entry is 'EntryName =
EntryValue'. Normally one entry is written on one line, but you can use the '\' character to
indicate that the next line also belongs to the value. Empty lines and lines beginning with the
'#' character are ignored.
The possible entries for an RGB-Profile are listed in the following table:
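A file following these rules could look like the fragment below. The entry names shown here are invented for illustration; consult the default profiles under $NEST_HOME$/auxdata/rgb_profiles for the exact keys:

```
# Illustrative RGB-Profile file; the entry names may differ from the
# Toolbox's actual keys -- check the default profiles shipped in
# $NEST_HOME$/auxdata/rgb_profiles.
name  = my_profile
red   = radiance_8
green = radiance_5
blue  = radiance_2 + \
        0.1 * radiance_1
```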
No-Data Overlay
Toggles the overlay of a no-data mask within a band's image view. The overlay properties
can be modified using the no-data overlay page in the preferences dialog.
The no-data mask is similar to a bitmask overlay and masks out all the no-data pixels in
the selected image view.
If the property no-data value of a band or the valid pixel expression of a band is set, a no-
data overlay can be displayed within the current image view.
Bitmask Overlay Window
In this window you may overlay flags, and combinations of flags, on a loaded band
image view. Simply activate the checkbox in the corresponding column to toggle the overlay
of a specific flag.
Use the toolbar buttons to create new bitmask expressions or to copy existing ones; both
actions open the Bitmask Expression Editor.
Use the edit button to change the name, colour, transparency etc. of a selected bitmask. Double-
clicking on a row has the same effect.
Further buttons let you import bitmask expressions from files and save them, import and
export multiple bitmask expressions at the same time, and reorder the overlay sequence
of the bitmasks.
Bitmask Editor
If you choose to edit a bitmask, the Bitmask Editor appears as in the following screenshot:
In the upper area of the window, the properties name, description, colour and transparency can be changed. In
the lower part of the window, the expression can be defined with a few mouse clicks or typed into the text field
on the right.
Geometry ROI
The associated mask will always have the same name as the geometry container which created it and can
serve as a possible ROI for the selected band or tie-point grid without any additional user interaction. Once
the geometry is created (e.g. simply by drawing it, see below), its associated geometry mask can be used
as ROI in the various analysis tools, such as the Statistics, Histogram, and Scatter Plot tool windows.
Multiple geometry ROIs can be defined by creating new geometry containers as described below.
Polyline: Single-click (press and release) left mouse button for the start point, move line segment and click
to add a vertex point, move to end point and double-click to finalize the polyline.
Rectangle: Press left mouse button for the start point, drag line to end point and release left mouse button.
Ellipse: Similar to rectangle; Press left mouse button for the start point, drag line to end point and release
left mouse button.
Polygon: Similar to polyline; Single-click (press and release) left mouse button for the start point, move line
segment and click to add a vertex point, move to end point and double-click to close the polygon.
Editing geometries
Geometries may be edited in a number of ways once they have been selected. Note that editing or
deleting a geometry will automatically affect the mask associated with the geometry's container. Use the
Select tool to select geometries which shall be edited:
Select a single geometry by clicking it. Select one or more geometries by dragging a selection rectangle
around them. Hold down the control key while selecting in order to add or remove geometries from the
current selection set.
Clicking selected geometries multiple times steps through a number of selection modes, allowing
for the different editing modes described below.
Move: Selected shapes can be moved to another location simply by dragging them with the mouse.
Move vertex: If a single selected geometry is clicked once again, the selection mode changes
depending on the geometry type. The first mode lets you move the vertices of line and polygon
geometries by dragging the vertex handles that appear.
Scale: The next selection mode (click again) lets you scale a geometry by dragging the
size handles that appear.
Cut, Copy, Paste: Use these commands from the Edit menu, or the key combinations Ctrl+X, Ctrl+C
and Ctrl+V, to cut or copy geometries to the operating system's clipboard and to paste them into the
same or another view.
Delete: Use the command from the Edit menu or use the Delete key.
In data-modelling terms, a mask is a product node similar to a band or tie-point grid. It has a unique name
and comprises an image (raster data) whose sample data type is Boolean. Each data product may comprise
virtually any number of masks.
Not only the mask definitions but also their use in conjunction with a raster data set such as a band or tie-point
grid are part of the data model stored within the product. A product "remembers", for a certain band or tie-point grid,
which masks are visible and which are in use as a ROI.
A number of product formats define a default mask set. For example, the Envisat MERIS L1 and L2 product types define
a mask for each of their quality flags.
The manager allows creating new masks, editing mask properties and deleting existing masks. It also allows
creating new masks from logical combinations of existing masks. Furthermore, masks may be imported
and exported. If an image view is selected, the manager tool window can also be used to control a mask's visibility and
its role as a possible ROI for the currently displayed band. When the mask's ROI role is selected, it becomes
available in the various raster data analysis tools, such as the Statistics, Histogram, and Scatter Plot tool windows.
Difference: Creates the logical difference of two or more selected masks (in top-down order).
Inv. Difference: Creates the logical difference of two or more selected masks (in bottom-up order).
Edit: Edits the definition of the selected mask. Double-clicking a mask entry in the table has the same effect.
In contrast to a pin, a GCP is fixed to a geographical position. GCPs can be used
to create a GCP geo-coding for a product or to improve an existing geo-coding.
GCPs are displayed as symbols at their geographical positions in image views associated with the
current product. GCPs are stored in the current product and available again if the product is re-
opened.
New GCPs can be created with the GCP tool. It is also possible to create and remove GCPs by using
the GCP manager.
Exports all values of the displayed table to a flat text file. The exported text is tabulator-separated and
may therefore be imported directly into a spreadsheet application (e.g. MS Excel).
Centers the image view on the selected GCP.
Pins are displayed as symbols at their geographical positions in image views associated
with the current product. Pins are stored in the current product and available again if the
product is re-opened.
In DAT, pins can be used to "freeze" the pixel info view to the selected pin in order to
display the values of the pixel associated with the selected pin.
New pins can be created with the pin tool and removed using the delete pin command in
the Edit Menu. It is also possible to create and remove pins by using the pin manager.
In the following, the tool buttons of the pin manager are explained.
Creates a new pin and adds it to the product.
Exports the selected part of the displayed table to a flat text file. The exported text is tabulator-
separated and may therefore be imported directly into a spreadsheet application (e.g. MS
Excel).
Centers the image view on the selected pin.
Product/Band Information
This dialog shows general properties of a loaded product, band or tie-point grid and their parent product.
Note: A mouse right-click within the properties data area brings up a context menu with the item Copy data
to clipboard. This will copy the diagram data as tabulated text to the system clipboard. The copied text can
then be pasted directly into a spreadsheet application (e.g. Microsoft® Excel).
Geo-Coding Information
Geo-Coding Information
This dialog shows the geo-coding information for the selected data product. Geo-coding enables DAT to
transform pixel co-ordinates to geographical co-ordinates (WGS-84 datum) and vice versa. Geo-coding can be
either based on a map projection (product is geo-referenced) or based on tie point grids (product is geo-
coded). If a product is not geo-referenced, DAT uses the tie point grids "latitude" and "longitude" for geo-
coding.
For tie point grid based geo-coding, the transformation of a geographical co-ordinate into a pixel position is
more complicated than the other way round. DAT uses either an iterative algorithm or a polynomial
approximation, depending on the root mean square error (RMSE) of the approximation. If the RMSE is
below half a pixel, the approximation is used instead of the iteration, because the iteration can sometimes
have no clear attraction point and would loop indefinitely.
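The iterative approach can be illustrated with a generic Newton-style inversion: given a forward pixel-to-geo function, estimate its Jacobian by finite differences and update the pixel position until the geographic residual vanishes. This is a toy sketch of the idea only, not DAT's actual implementation, and it assumes the forward mapping is smooth with a non-singular Jacobian near the solution:

```python
def invert_geocoding(forward, lat, lon, start=(0.0, 0.0),
                     eps=1e-9, max_iter=50):
    """Invert a pixel->geo mapping by Newton iteration (toy sketch).

    'forward(x, y)' must return (lat, lon). The Jacobian is estimated
    by finite differences; it must be non-singular near the solution.
    """
    x, y = start
    h = 1e-6
    for _ in range(max_iter):
        f_lat, f_lon = forward(x, y)
        r1, r2 = f_lat - lat, f_lon - lon
        if abs(r1) < eps and abs(r2) < eps:
            break
        # Finite-difference Jacobian of (lat, lon) w.r.t. (x, y).
        a = (forward(x + h, y)[0] - f_lat) / h  # d lat / d x
        b = (forward(x, y + h)[0] - f_lat) / h  # d lat / d y
        c = (forward(x + h, y)[1] - f_lon) / h  # d lon / d x
        d = (forward(x, y + h)[1] - f_lon) / h  # d lon / d y
        det = a * d - b * c
        # Newton step: subtract J^-1 * residual from (x, y).
        x -= (d * r1 - b * r2) / det
        y -= (a * r2 - c * r1) / det
    return x, y
```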
Note: A mouse right-click within the geo-coding information area brings up a context menu with the item
Copy data to clipboard. This will copy the diagram data as tabulated text to the system clipboard. The copied
text can then be pasted directly into a spreadsheet application (e.g. Microsoft® Excel).
Statistics
Statistics Display
The Statistics Display shows statistical information about a band or an active ROI.
For the whole band, the following statistical information is given (see Figure 1):
For a user-selected homogeneous ROI, the following statistics are computed (see Figure 2):
Figure 2. Statistical information for a user-selected ROI
Note: A mouse right-click within the statistics data area brings up a context menu with the item
Copy data to clipboard. This will copy the diagram data as tabulated text to the system clipboard.
The copied text can then be pasted directly into a spreadsheet application (e.g., Microsoft® Excel).
Histogram
Histogram Display
This dialog displays a histogram for the selected band. If a ROI is defined, you may restrict the
computation to the pixels within that ROI. You can also set the number of bins used for the
histogram creation, and set the range manually or let it compute automatically.
Context Menu
A click with the right mouse button on the diagram brings up a context menu which consists of
the following menu items:
● Properties...
Edit several properties (colors, axes, etc.) of the diagram.
The dialog is divided into four tabs, each providing specific subset options.
Spatial Subset
If you are not interested in the whole image, you may specify an area of the product to be
loaded. You can select the area either by dragging the blue surrounding rectangle in the
preview (see figure above) or by editing the appropriate fields. If you drag the rectangle,
the field values change simultaneously.
You can also specify a sub-sampling, by setting the values of Scene step X and Scene
step Y.
Band Subset
This tab is used to select the bands you want to have in your product subset.
Metadata Subset
This tab lets you select or deselect meta-data records and tables. By default, all meta-data
tables are deselected, because they can be very large for ENVISAT products and may
reduce DAT's performance.
Band Arithmetic
Band Arithmetic
The band arithmetic tool is used to create new image sample values derived from existing
bands, tie-point grids and flags. The source data can originate from all currently open and
spatially compatible input products. The source data is combined by an arithmetic
expression to generate the target data. By default, a new image view is automatically
opened for the new sample values. You can disable this behaviour in the preferences dialog.
Please refer to the expression editor documentation for the syntax and capabilities of
expressions.
After the new band has been created (or an existing one has been overwritten), you can
switch to DAT's Product View and open an image view to inspect the resulting samples.
Parameters
Target Product:
Select the target product where the new band will be added.
Name:
Specifies the name of the new band. The name must not be empty and the target
product must not contain a band with the same name.
Description:
An optional description can be entered here.
Unit:
An optional unit can be entered here.
Virtual (save expression only, don't write data)
If this option is checked, a virtual band is created: only the expression is stored, and the
data is re-computed whenever it is needed. If this option is unchecked, the data is
computed once and stored in the product.
Replace NaN and infinity results by
An expression can sometimes result in a NaN (Not a Number) or infinity value; in these
cases the value will be replaced by the value specified here. This value is also used as
the no-data value of the band.
Expression:
This field takes the arithmetic expression which is used to create new data samples.
Please refer to the expression editor documentation for the syntax and capabilities of
expressions.
Edit Expression... button
Opens the expression editor, which provides a convenient way to create valid
arithmetic expressions.
Product:
Selects the current input product providing source bands, tie-point grids and flags.
Data Sources:
The list of available data sources provided by the selected input product. Click on a data source to move
it into the expression text field.
Show Bands checkbox
Determines whether the bands of a product are shown in the list of available data sources.
Show Tie Point Grids checkbox
Determines whether the tie-point grids of a product are shown in the list of available data sources.
Show single Flags checkbox
Determines whether the flags of a product are shown in the list of available data sources.
Expression:
The expression text field. You can also directly edit the expression here.
Select All Button
Selects the entire text in the expression text field.
Clear Button
Clears the entire text in the expression text field.
Undo Button
Undoes the last edits in the expression text field.
OK Button
Accepts the expression.
Expression Syntax
The syntax for valid expressions used in DAT is almost the same as that of the C, C++ and Java
programming languages. However, type conversions, type casts and object access
operations are currently not supported.
Relational Operators
X == Y Equal to
X != Y Not equal to
X < Y Less than
X <= Y Less than or equal to
X > Y Greater than
X >= Y Greater than or equal to
Arithmetic Operators
X + Y Plus
X - Y Minus
X * Y Multiplication
X / Y Division
X % Y Modulo (remainder)
Unary Operators
+ X Arithmetic positive sign, no actual operation, equivalent to 1 * X
- X Arithmetic negation, equivalent to -1 * X
! X Logical NOT of Boolean argument X
not X
~ X Bitwise NOT of integer argument X
Mathematical Constants
PI PI = 3.14159265358979323846. The double value that is closer than any other to PI
E E = 2.7182818284590452354. The double value that is closer than any other to E, the base of the
natural logarithms
NaN NaN = 0.0 / 0.0. A constant holding a Not-a-Number (NaN) value
X The X-position of the current pixel.
Y The Y-position of the current pixel.
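The semantics of these operators follow their C/Java counterparts, with one value computed per pixel. The Python sketch below (band names and sample values are invented for illustration, and Python's `and` stands in for the logical operators above) mimics how such an expression is evaluated once per pixel; it is not DAT code:

```python
# Illustrative sketch of per-pixel expression evaluation; not DAT code.
# Band names and sample values are hypothetical.

def evaluate(expression, bands):
    """Evaluate 'expression' once per pixel, binding each band name
    to that band's sample value at the current pixel."""
    width = len(next(iter(bands.values())))
    result = []
    for x in range(width):
        env = {name: samples[x] for name, samples in bands.items()}
        result.append(eval(expression, {"__builtins__": {}}, env))
    return result

bands = {
    "band_a": [10.0, 20.0, 30.0],
    "band_b": [30.0, 40.0, 50.0],
}
print(evaluate("(band_a + band_b) / 2", bands))        # [20.0, 30.0, 40.0]
print(evaluate("band_a > 15 and band_b < 45", bands))  # [False, True, False]
```

In DAT itself, the expression is applied to every pixel of the spatially compatible source products to produce the target band.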
Mathematical Functions
Reprojection Dialog
Use Reprojection to create a new product with a projected Coordinate Reference System (CRS).
I/O Parameters
Source Product
Name: Here the user specifies the name of the source product. The combo box presents a list of all opened products. The user may select one of these or, by clicking on the button next to the combo box, choose a product from the file system.
Target Product
Name: Used to specify the name of the target product.
Save as: Used to specify whether the target product should be saved to the file system. The combo box presents a list of file formats, currently BEAM-DIMAP, GeoTIFF and HDF5. The text field allows you to specify a target directory.
Open in DAT: Used to specify whether the target product should be opened in DAT. When the target product is not saved, it is opened automatically.
Projection Parameters
Coordinate Reference System (CRS)
Custom CRS: The transformation used by the projection can be selected. The geodetic datum and the transformation parameters can also be set, where possible for the selected transformation.
Predefined CRS: Clicking the Select... button opens a new dialog in which a predefined CRS can be selected.
Use CRS of: A product can be selected in order to use its projected Coordinate Reference System. The source product will then cover the same geographic region on the same CRS, which means that both products are collocated.
Output Settings
Preserve resolution: If unchecked, the Output Parameters... button is enabled and the upcoming dialog lets you edit the output parameters, such as the easting and northing of the reference pixel, the pixel size, and the scene height and width.
Reproject tie-point grids: Specifies whether or not the tie-point grids shall be included. If they are reprojected, they appear as bands in the target product and no longer as tie-point grids.
No-data value: The default no-data value is used for output pixels in the projected band which either have no corresponding pixel in the source product or whose source pixel is invalid.
Resampling Method: You can select one resampling method for the projection. For a brief description, have a look at Resampling Methods.
Output Information
Displays some information about the output, such as the scene width and height, the geographic coordinate of the scene centre, and a short description of the selected CRS. Clicking the Show WKT... button shows the Well-Known Text of the currently defined CRS.
Resampling Methods
When a product is projected, the pixel centres of the target product generally do not correspond to
the pixel centres of the input product. Resampling is the process of determining and interpolating
pixels in the source product in order to compute the pixel values in the target product. The effects of resampling
are especially visible when the pixels in the target product are larger than the source pixels.
Three different resampling methods are available for this computation.
Nearest Neighbour
Every pixel value in the output product is set to the nearest input pixel value.
Pros:
● Very simple and fast
● No new values are calculated by interpolation
● Fast, compared to Cubic Convolution resampling
Cons:
● Some pixels get lost and others are duplicated
● Loss of sharpness
The following figure demonstrates the calculation of a new resampled pixel value.
Bi-linear Interpolation
Calculation of the new pixel value is performed by the weight of the four surrounding pixels.
Pros:
● Extremes are balanced
Cons:
● Less contrast compared to Nearest Neighbour
● The image loses sharpness compared to Nearest Neighbour
● New values are calculated which are not present in the input product
The following figure demonstrates the calculation of the new pixel value.
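The bilinear weighting can be sketched as follows; the four pixel values and the fractional offsets are invented for illustration:

```python
def bilinear(p00, p10, p01, p11, fx, fy):
    """Weight the four surrounding pixel values by the fractional
    distance (fx, fy) of the target pixel centre from p00."""
    top = p00 * (1 - fx) + p10 * fx        # interpolate along the upper row
    bottom = p01 * (1 - fx) + p11 * fx     # interpolate along the lower row
    return top * (1 - fy) + bottom * fy    # interpolate between the rows

# Halfway between four pixels, the result is their plain average:
print(bilinear(10.0, 20.0, 30.0, 40.0, 0.5, 0.5))  # 25.0
# At fx = fy = 0 the nearest pixel value is returned unchanged:
print(bilinear(10.0, 20.0, 30.0, 40.0, 0.0, 0.0))  # 10.0
```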
Cubic Convolution
Calculation of the new pixel value is performed by weighting the 16 surrounding pixels.
Pros:
● Extremes are balanced
● The image is sharper compared to Bi-linear Interpolation
Cons:
● Less contrast compared to Nearest Neighbour
● New values are calculated which are not present in the input product
● Slow, compared to Nearest Neighbour resampling
In the first step, the average value for each line is calculated; afterwards, the new pixel value is calculated
from the four new average values P'(1) to P'(4), similarly to the preceding calculation.
Input Product:
You must specify an input product here. Note that this command operates on entire
products, so you might want to create a product subset first.
Output Product
Name: You can specify the output product's name here. The name must be unique
within DAT's open product list.
Description: You can enter a short description text for the new product here.
Flip Data
horizontally radiobutton
Mirrors the tie-point grids and bands of the product along the central vertical axis.
vertically radiobutton
Mirrors the tie-point grids and bands of the product along the central horizontal axis.
horizontally & vertically radiobutton
Mirrors the tie-point grids and bands of the product along both the central vertical and
horizontal axes.
OK Button
Applies the flip transposition to the input product, creates the specified output product and
closes the dialog.
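The three flip modes can be sketched on a small invented 2x2 raster; flipping both axes is the same as applying the horizontal and vertical flips in sequence:

```python
def flip_horizontally(raster):
    # Mirror along the central vertical axis: reverse each row.
    return [list(reversed(row)) for row in raster]

def flip_vertically(raster):
    # Mirror along the central horizontal axis: reverse the row order.
    return list(reversed(raster))

raster = [[1, 2],
          [3, 4]]
print(flip_horizontally(raster))                    # [[2, 1], [4, 3]]
print(flip_vertically(raster))                      # [[3, 4], [1, 2]]
print(flip_vertically(flip_horizontally(raster)))   # [[4, 3], [2, 1]]
```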
Pixel Geo-Coding
Pixel geo-coding can be used when the product has two bands containing accurate latitude
and longitude values for each pixel. For example, FSG and FRG products have corrected
latitude ("corr_lat") and longitude ("corr_lon") bands. These bands can be used to replace
the geo-coding currently associated with the product.
● Longitude Band
The band which keeps the longitude values for each pixel.
● Latitude Band
The band which keeps the latitude values for each pixel.
● Valid Mask
This mask is used by the search algorithm when trying to find the pixel position for a
given lat/lon. Only pixels which meet the expression are included in the search.
● Search Radius
This value is used by the search algorithm when trying to find the pixel position for a
given lat/lon. A higher value enlarges the area searched for the proper pixel position
and reduces the risk of not finding the searched value, but the greater the value, the
longer the search takes.
● Elevation Model
Lets you select an elevation model.
● Band name
The name of the newly created elevation band.
Image Filtering
The operator creates filtered image bands by applying convolution or non-linear filters to the selected band.
Filters are used to perform common image-processing operations, e.g. sharpening, blurring or edge enhancement.
Note: When a product containing a filtered band is stored, the data of the band is not stored with the product;
only the information on how to compute the data is stored. This behaviour is similar to that of virtual bands.
● Detect Lines
Horizontal Edges, Vertical Edges, Left Diagonal Edges, Right Diagonal Edges, Compass Edge Detector,
Roberts Cross North-West, Roberts Cross North-East
● Detect Gradients (Emboss)
Sobel North, Sobel South, Sobel West, Sobel East, Sobel North East
● Smooth and Blur
Arithmetic 3x3 Mean, Arithmetic 4x4 Mean, Arithmetic 5x5 Mean, Low-Pass 3x3, Low-Pass 5x5
● Sharpen
High-Pass 3x3 #1, High-Pass 3x3 #2, High-Pass 5x5
● Enhance Discontinuities
Laplace 3x3, Laplace 5x5
● Non-Linear Filters
Minimum 3x3, Minimum 5x5, Maximum 3x3, Maximum 5x5, Mean 3x3, Mean 5x5,
Median 3x3, Median 5x5, Standard Deviation 3x3, Standard Deviation 5x5,
Root-Mean-Square 3x3, Root-Mean-Square 5x5
● The operator also supports user-defined convolution filter kernels. A user-defined kernel file can be browsed
and selected via User Defined Kernel File in the UI.
● The user-defined kernel must be saved in an ASCII file in matrix format. The first line of the file contains two
integers indicating the dimensions (rows and columns) of the matrix. For example, a user-defined 3x3 low-
pass filter can be saved in the file lop_3_3.txt in the following format:
3 3
1 1 1
1 1 1
1 1 1
Example images: Compass Edge Detector filter; Low-Pass 5x5 filter.
1. Source Bands: All bands (real or virtual) of the source product. The user can select one or more bands for
producing filtered images. If no bands are selected, all bands are selected by default.
2. Filters: Pre-defined filters.
3. User Defined Kernel File: User defined filter kernel file.
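As a sketch (not the Toolbox implementation), the kernel file format described above can be parsed and applied like this; the image values are invented for illustration:

```python
def read_kernel(text):
    """Parse a kernel in the ASCII format described above: the first
    line holds the row and column counts, the matrix rows follow."""
    lines = text.strip().splitlines()
    rows, cols = (int(v) for v in lines[0].split())
    kernel = [[float(v) for v in line.split()] for line in lines[1:1 + rows]]
    assert all(len(row) == cols for row in kernel)
    return kernel

def convolve_at(image, x, y, kernel):
    """Apply the kernel centred on pixel (x, y), normalising by the
    kernel sum so that a low-pass kernel yields a local mean."""
    n = len(kernel) // 2
    total = sum(kernel[j + n][i + n] * image[y + j][x + i]
                for j in range(-n, n + 1) for i in range(-n, n + 1))
    s = sum(sum(row) for row in kernel)
    return total / s if s else total

# A 3x3 low-pass (mean) kernel in the described file format:
kernel = read_kernel("3 3\n1 1 1\n1 1 1\n1 1 1")

image = [[1.0, 2.0, 3.0],
         [4.0, 5.0, 6.0],
         [7.0, 8.0, 9.0]]
print(convolve_at(image, 1, 1, kernel))  # 5.0, the mean of the 3x3 neighbourhood
```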
Convert SLC to Detected GR
Graphs
Graph Processing is used to apply operations to the data to allow you to create your own
processing chains.
A graph is a set of nodes connected by edges. In this case, the nodes will be the
processing steps. The edges will show the direction in which the data is being passed
between nodes; therefore it will be a directed graph. The graph will have no loops or
cycles, so it will be a Directed Acyclic Graph (DAG).
The sources of the graph will be the data product readers, and the sinks can be either a
product writer or an image displayed on the DAT.
The GPF uses a Pull Model, wherein a request is made from the sink backwards to the
source to process the graph. This request could be to create a new product file or to update
a displayed image. Once the request reaches a source, the image is pulled through the
nodes to the sink. Each time an image passes through an operator, the operator
transforms the image, and it is passed down to the next node until it reaches the sink.
The graph processor will not introduce any intermediate files unless a writer is optionally
added anywhere in the sequence.
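Concretely, such a graph is written as XML. The sketch below is illustrative only: node identifiers and file names are invented, and the refid-based source references follow the usual NEST graph convention; the authoritative XML for each operator is given in the "Graph XML Format" listings later in this manual. It chains a Read source through the Speckle-Filter operator to a Write sink:

```xml
<graph id="exampleGraph">
  <version>1.0</version>
  <node id="read">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>input_product.dim</file>
    </parameters>
  </node>
  <node id="filter">
    <operator>Speckle-Filter</operator>
    <sources>
      <source refid="read"/>
    </sources>
    <parameters>
      <filter>Mean</filter>
    </parameters>
  </node>
  <node id="write">
    <operator>Write</operator>
    <sources>
      <source refid="filter"/>
    </sources>
    <parameters>
      <file>filtered_product.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>
```

The Read node is the source, the Write node is the sink, and the edges (the refid references) give the graph its direction.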
Tiling
The memory management allows very large data products, which cannot be stored entirely in
available memory, to be handled by the processing tools and the visualization. To do so, a
tiled approach is used.
The dataset is divided into workable areas called tiles consisting of a subset of the data
read from disk in one piece. Only the data for tiles being visualized is read in, and in some
cases the data could be down-sampled to view the desired area at the expense of
resolution.
Depending on the tool, data is ingested for a tile or a set of tiles, and processing is applied
only to the current set of tiles. The data is then written to a file and released from
memory. The process is then repeated on a new set of tiled data from the large data
product.
From the DAT, in order to allow zooming out and viewing of the entire image, a pyramid of
tiled images at different resolutions is used. Tiling is generally transparent to the user.
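The tiling idea can be sketched as a simple iteration over tile windows; the scene and tile sizes are invented for illustration:

```python
def tile_windows(width, height, tile_w, tile_h):
    """Yield (x, y, w, h) windows covering a width x height raster,
    clipping the last column/row of tiles at the scene edge."""
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            yield (x, y, min(tile_w, width - x), min(tile_h, height - y))

# A 1000 x 600 scene split into 512 x 512 tiles gives 4 windows:
print(list(tile_windows(1000, 600, 512, 512)))
```

A processing tool would read, process, write and release each window in turn, so that only the current set of tiles occupies memory.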
Operators
In order to provide the greatest flexibility to the end user, processing algorithms such as
orthorectification and co-registration are broken down into unit processing steps called
Operators. These operators may be reused to create new processing graphs for other
purposes.
The Toolbox includes various processing modules that can be run either from the DAT
GUI or from the command line, such as:
● Data Conversion
● Band Arithmetic
● Image Filtering
● Statistics & Data Analysis
● Ellipsoid Correction
● Terrain Correction
● Co-Registration
● Reprojection
● Subset
● Calibration
● Multilooking
● Apply Orbit Correction
● Create Stack
● Create Elevation
● Resampling
● Interferometry
● Plus many more
Graph Processing Tool
GPT
The Graph Processing Tool (GPT) is the command line interface for executing graphs created
using the Graph Builder. Data sources and parameters found in the graph file can be replaced
at the command line using arguments passed to the GPT.
The GPT can also be used to read a product, execute a single operator and produce output in
the specified format without the use of a graph.
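For example, a single operator can be invoked directly on a product; the file names below are placeholders:

```
gpt Speckle-Filter -Pfilter="Lee" -t speckle_filtered.dim -f BEAM-DIMAP input_product.dim
```

Here -P sets an operator parameter, -t names the target file, and -f selects the output format; these options are described in the Command Line Reference below.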
Examples:
● asar-calibrate.bat - a Windows batch file showing how to calibrate all ASAR products
in a given folder
● ers-calibrate.bat - a Windows batch file showing how to calibrate all ERS products in
a given folder
● coregister.bat - a Windows batch file showing how to run multiple graphs together,
in this case GCPSelectionGraph and WarpGraph
● MapProjGraph.xml - a graph for applying a map projection to a product
● TC_Graph.xml - a graph for applying Terrain Correction to a product
● write.xml - a graph for writing products in another format, in this case GeoTIFF
Command Line Reference
Usage:
gpt <op>|<graph-file> [options] [<source-file-1> <source-file-2> ...]
Description:
This tool is used to execute raster data operators in batch-mode.
The operators can be used stand-alone or combined as a directed acyclic
graph (DAG). Processing graphs are represented using XML. More info
about processing graphs, the operator API, and the graph XML format can
be found in the documentation.
Arguments:
<op> Name of an operator. See below for the list of <op>s.
<graph-file> Operator graph file (XML format).
<source-file-i> The <i>th source product file. The actual number of source
file arguments is specified by <op>. May be optional for
operators which use the -S option.
Options:
-h Displays command usage. If <op> is given, the specific
operator usage is displayed.
-e Displays more detailed error messages. Displays a stack
trace, if an exception occurs.
-t <file> The target file. Default value is './target.dim'.
-f <format> Output file format, e.g. 'GeoTIFF', 'HDF5',
'BEAM-DIMAP'. If not specified, format will be derived
from the target filename extension, if any, otherwise the
default format is 'BEAM-DIMAP'. Only used if the graph
in <graph-file> does not specify its own 'Write' operator.
-p <file> A (Java Properties) file containing processing
parameters in the form <name>=<value>. Entries in this
file are overwritten by the -P<name>=<value> command-line
option (see below).
-c <cache-size> Sets the tile cache size in bytes. Value can be suffixed
with 'K', 'M' and 'G'. Must be less than maximum
available heap space. If equal to or less than zero, tile
caching will be completely disabled. The default tile
cache size is '512M'.
-q <parallelism> Sets the maximum parallelism used for the computation, i.e.
the maximum number of parallel (native) threads.
The default parallelism is '2'.
-x Clears the internal tile cache after writing a complete
row of tiles to the target product file. This option may
be useful if you run into memory problems.
-T<target>=<file> Defines a target product. Valid for graphs only. <target>
must be the identifier of a node in the graph. The node's
output will be written to <file>.
-S<source>=<file> Defines a source product. <source> is specified by the
operator or the graph. In an XML graph, all occurrences of
${<source>} will be replaced with references to a source
product located at <file>.
-P<name>=<value> Defines a processing parameter, <name> is specific for the
used operator or graph. In an XML graph, all occurrences of
${<name>} will be replaced with <value>. Overwrites
parameter values specified by the '-p' option.
-inFolder For graphs with ProductSetReaders such as coregistration,
all products found in the specified folder and subfolders
will be used as input to the ProductSetReader
-printHelp Prints the usage help for all operators
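For a graph that uses the ${source} and ${name} placeholders, the substitution options above can be combined on one command line; the graph file, parameter name and file names here are placeholders:

```
gpt MyGraph.xml -Ssource=input_product.dim -Pthreshold=0.5 -t result.dim
```

Every occurrence of ${source} in MyGraph.xml is replaced by a reference to input_product.dim, and every occurrence of ${threshold} by 0.5.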
Operators:
AdaptiveThresholding Detect ships using Constant False Alarm Rate detector.
Apply-Orbit-File Apply orbit file
BandMaths Create a product with one or more bands using
mathematical expressions.
Calibration Calibration of products
Interferogram Compute interferograms from a stack of coregistered
images: JBLAS implementation
Convert-Datatype Convert product data type
Coherence Estimate coherence from stack of coregistered images
Create-LandMask Creates a bitmask defining land vs ocean.
CreateElevation Creates a DEM band
CreateStack Collocates two or more products based on their geo-codings.
Data-Analysis Computes statistics
DeburstWSS Debursts an ASAR WSS product
EMClusterAnalysis Performs an expectation-maximization (EM) cluster analysis.
Ellipsoid-Correction-GG GG method for orthorectification
Ellipsoid-Correction-RD Ellipsoid correction with RD method and average scene height
Fill-Hole Fill holes in given product
GCP-Selection2 Automatic Selection of Ground Control Points
Image-Filter Common Image Processing Filters
KMeansClusterAnalysis Performs a K-Means cluster analysis.
LinearTodB Converts bands to dB
Mosaic Mosaics two or more products based on their geo-codings.
Multi-Temporal-Speckle-Filter Speckle Reduction using Multitemporal Filtering
Multilook Averages the power across a number of lines in both the
azimuth and range directions
Object-Discrimination Remove false alarms from the detected objects.
Oil-Spill-Clustering Remove small clusters from detected area.
Oil-Spill-Detection Detect oil spill.
Oversample Oversample the dataset
ProductSet-Reader Adds a list of sources
Read Reads a product from disk.
RemoveAntennaPattern Remove Antenna Pattern
ReplaceMetadata Replace the metadata of the first product with that of the second
Reprojection Applies a map projection
SAR-Simulation Rigorous SAR Simulation
SARSim-Terrain-Correction Orthorectification with SAR simulation
SRGR Converts Slant Range to Ground Range
Speckle-Filter Speckle Reduction
SubsetOp Create a spatial subset of the source product.
Terrain-Correction RD method for orthorectification
Undersample Undersample the dataset
Warp2 Create Warp Function And Get Co-registered Images
Wind-Field-Estimation Estimate wind speed and direction
Write Writes a data product to a file.
WriteRGB Creates an RGB image from three source bands.
------------------------
Terrain-Correction
------------------------
Usage:
gpt Terrain-Correction [options]
Description:
RD method for orthorectification
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PapplyRadiometricNormalization=<boolean> Sets parameter 'applyRadiometricNormalization'
to <boolean>.
Default value is 'false'.
-PauxFile=<string> The auxiliary file
Value must be one of 'Latest Auxiliary
File', 'Product Auxiliary File', 'External Auxiliary File'.
Default value is 'Latest Auxiliary File'.
-PdemName=<string> The digital elevation model.
Value must be one of 'ACE', 'GETASSE30',
'SRTM 3Sec GeoTiff'.
Default value is 'SRTM 3Sec GeoTiff'.
-PdemResamplingMethod=<string> Sets parameter 'demResamplingMethod' to <string>.
Value must be one of
'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'.
Default value is 'BILINEAR_INTERPOLATION'.
-PexternalAuxFile=<file> The antenna elevation pattern gain auxiliary
data file.
-PexternalDEMFile=<file> Sets parameter 'externalDEMFile' to <file>.
-PexternalDEMNoDataValue=<double> Sets parameter 'externalDEMNoDataValue'
to <double>.
Default value is '0'.
-PimgResamplingMethod=<string> Sets parameter 'imgResamplingMethod' to <string>.
Value must be one of
'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'.
Default value is 'BILINEAR_INTERPOLATION'.
-PincidenceAngleForGamma0=<string> Sets parameter 'incidenceAngleForGamma0'
to <string>.
Value must be one of 'Use incidence angle
from Ellipsoid', 'Use projected local incidence angle from DEM'.
Default value is 'Use projected local
incidence angle from DEM'.
-PincidenceAngleForSigma0=<string> Sets parameter 'incidenceAngleForSigma0'
to <string>.
Value must be one of 'Use incidence angle
from Ellipsoid', 'Use projected local incidence angle from DEM'.
Default value is 'Use projected local
incidence angle from DEM'.
-PpixelSpacingInDegree=<double> The pixel spacing in degrees
Default value is '0'.
-PpixelSpacingInMeter=<double> The pixel spacing in meters
Default value is '0'.
-PprojectionName=<string> The projection name
Default value is 'Geographic Lat/Lon'.
-PsaveBetaNought=<boolean> Sets parameter 'saveBetaNought' to <boolean>.
Default value is 'false'.
-PsaveDEM=<boolean> Sets parameter 'saveDEM' to <boolean>.
Default value is 'false'.
-PsaveGammaNought=<boolean> Sets parameter 'saveGammaNought' to <boolean>.
Default value is 'false'.
-PsaveLocalIncidenceAngle=<boolean> Sets parameter 'saveLocalIncidenceAngle'
to <boolean>.
Default value is 'false'.
-PsaveProjectedLocalIncidenceAngle=<boolean> Sets parameter
'saveProjectedLocalIncidenceAngle' to <boolean>.
Default value is 'false'.
-PsaveSelectedSourceBand=<boolean> Sets parameter 'saveSelectedSourceBand'
to <boolean>.
Default value is 'true'.
-PsaveSigmaNought=<boolean> Sets parameter 'saveSigmaNought' to <boolean>.
Default value is 'false'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Terrain-Correction</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<demName>string</demName>
<externalDEMFile>file</externalDEMFile>
<externalDEMNoDataValue>double</externalDEMNoDataValue>
<demResamplingMethod>string</demResamplingMethod>
<imgResamplingMethod>string</imgResamplingMethod>
<pixelSpacingInMeter>double</pixelSpacingInMeter>
<pixelSpacingInDegree>double</pixelSpacingInDegree>
<projectionName>string</projectionName>
<saveDEM>boolean</saveDEM>
<saveLocalIncidenceAngle>boolean</saveLocalIncidenceAngle>
<saveProjectedLocalIncidenceAngle>boolean</saveProjectedLocalIncidenceAngle>
<saveSelectedSourceBand>boolean</saveSelectedSourceBand>
<applyRadiometricNormalization>boolean</applyRadiometricNormalization>
<saveSigmaNought>boolean</saveSigmaNought>
<saveGammaNought>boolean</saveGammaNought>
<saveBetaNought>boolean</saveBetaNought>
<incidenceAngleForSigma0>string</incidenceAngleForSigma0>
<incidenceAngleForGamma0>string</incidenceAngleForGamma0>
<auxFile>string</auxFile>
<externalAuxFile>file</externalAuxFile>
</parameters>
</node>
</graph>
-----------------------------------
Multi-Temporal-Speckle-Filter
-----------------------------------
Usage:
gpt Multi-Temporal-Speckle-Filter [options]
Description:
Speckle Reduction using Multitemporal Filtering
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PsourceBands=<string,string,string,...> The list of source bands.
-PwindowSize=<string> Sets parameter 'windowSize' to <string>.
Value must be one of '3x3', '5x5', '7x7',
'9x9', '11x11'.
Default value is '3x3'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Multi-Temporal-Speckle-Filter</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<windowSize>string</windowSize>
</parameters>
</node>
</graph>
-----------------------------
Ellipsoid-Correction-RD
-----------------------------
Usage:
gpt Ellipsoid-Correction-RD [options]
Description:
Ellipsoid correction with RD method and average scene height
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PapplyRadiometricNormalization=<boolean> Sets parameter 'applyRadiometricNormalization'
to <boolean>.
Default value is 'false'.
-PauxFile=<string> The auxiliary file
Value must be one of 'Latest Auxiliary
File', 'Product Auxiliary File', 'External Auxiliary File'.
Default value is 'Latest Auxiliary File'.
-PdemName=<string> The digital elevation model.
Value must be one of 'ACE', 'GETASSE30',
'SRTM 3Sec GeoTiff'.
Default value is 'SRTM 3Sec GeoTiff'.
-PdemResamplingMethod=<string> Sets parameter 'demResamplingMethod' to <string>.
Value must be one of
'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'.
Default value is 'BILINEAR_INTERPOLATION'.
-PexternalAuxFile=<file> The antenna elevation pattern gain auxiliary
data file.
-PexternalDEMFile=<file> Sets parameter 'externalDEMFile' to <file>.
-PexternalDEMNoDataValue=<double> Sets parameter 'externalDEMNoDataValue'
to <double>.
Default value is '0'.
-PimgResamplingMethod=<string> Sets parameter 'imgResamplingMethod' to <string>.
Value must be one of
'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'.
Default value is 'BILINEAR_INTERPOLATION'.
-PincidenceAngleForGamma0=<string> Sets parameter 'incidenceAngleForGamma0'
to <string>.
Value must be one of 'Use incidence angle
from Ellipsoid', 'Use projected local incidence angle from DEM'.
Default value is 'Use projected local
incidence angle from DEM'.
-PincidenceAngleForSigma0=<string> Sets parameter 'incidenceAngleForSigma0'
to <string>.
Value must be one of 'Use incidence angle
from Ellipsoid', 'Use projected local incidence angle from DEM'.
Default value is 'Use projected local
incidence angle from DEM'.
-PpixelSpacingInDegree=<double> The pixel spacing in degrees
Default value is '0'.
-PpixelSpacingInMeter=<double> The pixel spacing in meters
Default value is '0'.
-PprojectionName=<string> The projection name
Default value is 'Geographic Lat/Lon'.
-PsaveBetaNought=<boolean> Sets parameter 'saveBetaNought' to <boolean>.
Default value is 'false'.
-PsaveDEM=<boolean> Sets parameter 'saveDEM' to <boolean>.
Default value is 'false'.
-PsaveGammaNought=<boolean> Sets parameter 'saveGammaNought' to <boolean>.
Default value is 'false'.
-PsaveLocalIncidenceAngle=<boolean> Sets parameter 'saveLocalIncidenceAngle'
to <boolean>.
Default value is 'false'.
-PsaveProjectedLocalIncidenceAngle=<boolean> Sets parameter
'saveProjectedLocalIncidenceAngle' to <boolean>.
Default value is 'false'.
-PsaveSelectedSourceBand=<boolean> Sets parameter 'saveSelectedSourceBand'
to <boolean>.
Default value is 'true'.
-PsaveSigmaNought=<boolean> Sets parameter 'saveSigmaNought' to <boolean>.
Default value is 'false'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Ellipsoid-Correction-RD</operator>
<sources>
<source>${source}</source>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<demName>string</demName>
<externalDEMFile>file</externalDEMFile>
<externalDEMNoDataValue>double</externalDEMNoDataValue>
<demResamplingMethod>string</demResamplingMethod>
<imgResamplingMethod>string</imgResamplingMethod>
<pixelSpacingInMeter>double</pixelSpacingInMeter>
<pixelSpacingInDegree>double</pixelSpacingInDegree>
<projectionName>string</projectionName>
<saveDEM>boolean</saveDEM>
<saveLocalIncidenceAngle>boolean</saveLocalIncidenceAngle>
<saveProjectedLocalIncidenceAngle>boolean</saveProjectedLocalIncidenceAngle>
<saveSelectedSourceBand>boolean</saveSelectedSourceBand>
<applyRadiometricNormalization>boolean</applyRadiometricNormalization>
<saveSigmaNought>boolean</saveSigmaNought>
<saveGammaNought>boolean</saveGammaNought>
<saveBetaNought>boolean</saveBetaNought>
<incidenceAngleForSigma0>string</incidenceAngleForSigma0>
<incidenceAngleForGamma0>string</incidenceAngleForGamma0>
<auxFile>string</auxFile>
<externalAuxFile>file</externalAuxFile>
</parameters>
</node>
</graph>
---------------
PCA-Image
---------------
Usage:
gpt PCA-Image [options]
Description:
Computes PCA Images
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
-----------
Merge
-----------
Usage:
gpt Merge [options]
Description:
Merges an arbitrary number of source bands into the target product.
Parameter Options:
-PbaseGeoInfo=<string> The ID of the source product providing the geo-coding.
-PproductName=<string> The name of the target product.
Default value is 'mergedProduct'.
-PproductType=<string> The type of the target product.
Default value is 'UNKNOWN'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Merge</operator>
<sources/>
<parameters>
<productName>string</productName>
<productType>string</productType>
<baseGeoInfo>string</baseGeoInfo>
<band>
<product>string</product>
<name>string</name>
<newName>string</newName>
<namePattern>string</namePattern>
</band>
<.../>
</parameters>
</node>
</graph>
--------------------
Speckle-Filter
--------------------
Usage:
gpt Speckle-Filter [options]
Description:
Speckle Reduction
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PdampingFactor=<int> The damping factor (Frost filter only)
Valid interval is (0, 100].
Default value is '2'.
-PedgeThreshold=<double> The edge threshold (Refined Lee filter only)
Valid interval is (0, *).
Default value is '5000'.
-Penl=<double> The number of looks
Valid interval is (0, *).
Default value is '1.0'.
-PestimateENL=<boolean> Sets parameter 'estimateENL' to <boolean>.
Default value is 'false'.
-Pfilter=<string> Sets parameter 'filter' to <string>.
Value must be one of 'Mean', 'Median',
'Frost', 'Gamma Map', 'Lee', 'Refined Lee'.
Default value is 'Mean'.
-PfilterSizeX=<int> The kernel x dimension
Valid interval is (1, 100].
Default value is '3'.
-PfilterSizeY=<int> The kernel y dimension
Valid interval is (1, 100].
Default value is '3'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Speckle-Filter</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<filter>string</filter>
<filterSizeX>int</filterSizeX>
<filterSizeY>int</filterSizeY>
<dampingFactor>int</dampingFactor>
<edgeThreshold>double</edgeThreshold>
<estimateENL>boolean</estimateENL>
<enl>double</enl>
</parameters>
</node>
</graph>
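For instance, a complete graph that applies a 5x5 Lee filter could be written as follows (the graph and node ids are arbitrary, and the filter settings are illustrative):

```xml
<graph id="speckleGraph">
  <version>1.0</version>
  <node id="speckleNode">
    <operator>Speckle-Filter</operator>
    <sources>
      <source>${source}</source>
    </sources>
    <parameters>
      <filter>Lee</filter>
      <filterSizeX>5</filterSizeX>
      <filterSizeY>5</filterSizeY>
    </parameters>
  </node>
</graph>
```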
-----------------
Calibration
-----------------
Usage:
gpt Calibration [options]
Description:
Calibration of products
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PauxFile=<string> The auxiliary file
Value must be one of 'Latest Auxiliary
File', 'Product Auxiliary File', 'External Auxiliary File'.
Default value is 'Latest Auxiliary File'.
-PcreateBetaBand=<boolean> Create beta0 virtual band
Default value is 'false'.
-PcreateGammaBand=<boolean> Create gamma0 virtual band
Default value is 'false'.
-PexternalAuxFile=<file> The antenna elevation pattern gain auxiliary
data file.
-PoutputImageScaleInDb=<boolean> Output image scale
Default value is 'false'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Calibration</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<auxFile>string</auxFile>
<externalAuxFile>file</externalAuxFile>
<outputImageScaleInDb>boolean</outputImageScaleInDb>
<createGammaBand>boolean</createGammaBand>
<createBetaBand>boolean</createBetaBand>
</parameters>
</node>
</graph>
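As an illustration, the following graph calibrates all source bands using the latest auxiliary file and outputs the image in dB (the ids are arbitrary; omitting sourceBands processes all bands):

```xml
<graph id="calGraph">
  <version>1.0</version>
  <node id="calNode">
    <operator>Calibration</operator>
    <sources>
      <source>${source}</source>
    </sources>
    <parameters>
      <auxFile>Latest Auxiliary File</auxFile>
      <outputImageScaleInDb>true</outputImageScaleInDb>
      <createGammaBand>false</createGammaBand>
      <createBetaBand>false</createBetaBand>
    </parameters>
  </node>
</graph>
```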
-----------------------------
Ellipsoid-Correction-GG
-----------------------------
Usage:
gpt Ellipsoid-Correction-GG [options]
Description:
GG method for orthorectification
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PimgResamplingMethod=<string> Sets parameter 'imgResamplingMethod' to <string>.
Value must be one of
'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'.
Default value is 'BILINEAR_INTERPOLATION'.
-PprojectionName=<string> The projection name
Default value is 'Geographic Lat/Lon'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Ellipsoid-Correction-GG</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<imgResamplingMethod>string</imgResamplingMethod>
<projectionName>string</projectionName>
</parameters>
</node>
</graph>
---------------------
CreateElevation
---------------------
Usage:
gpt CreateElevation [options]
Description:
Creates a DEM band
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PdemName=<string> The digital elevation model.
Value must be one of 'ACE', 'GETASSE30', 'SRTM 3Sec GeoTiff'.
Default value is 'SRTM 3Sec GeoTiff'.
-PelevationBandName=<string> The elevation band name.
Default value is 'elevation'.
-PexternalDEM=<string> The external DEM file.
Default value is ' '.
-PresamplingMethod=<string> Sets parameter 'resamplingMethod' to <string>.
Value must be one of 'Nearest Neighbour',
'Bilinear Interpolation', 'Cubic Convolution'.
Default value is 'Bilinear Interpolation'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>CreateElevation</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<demName>string</demName>
<elevationBandName>string</elevationBandName>
<externalDEM>string</externalDEM>
<resamplingMethod>string</resamplingMethod>
</parameters>
</node>
</graph>
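For example, a graph that adds an SRTM elevation band with bilinear resampling could look like this (ids are arbitrary, values illustrative):

```xml
<graph id="demGraph">
  <version>1.0</version>
  <node id="demNode">
    <operator>CreateElevation</operator>
    <sources>
      <source>${source}</source>
    </sources>
    <parameters>
      <demName>SRTM 3Sec GeoTiff</demName>
      <elevationBandName>elevation</elevationBandName>
      <resamplingMethod>Bilinear Interpolation</resamplingMethod>
    </parameters>
  </node>
</graph>
```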
---------------
Reproject
---------------
Usage:
gpt Reproject [options]
Description:
Reprojection of a source product to a target Coordinate Reference System.
Source Options:
-ScollocateWith=<file> The source product will be collocated with this product.
This is an optional source.
-Ssource=<file> The product which will be reprojected.
This is a mandatory source.
Parameter Options:
-Pcrs=<string> A text specifying the target Coordinate Reference
System, either in WKT or as an authority code. For appropriate EPSG authority codes see
(www.epsg-registry.org). AUTO authority can be used with code 42001 (UTM), and 42002
(Transverse Mercator) where the scene center is used as reference. Examples: EPSG:4326, AUTO:42001
-Peasting=<double> The easting of the reference pixel.
-PelevationModelName=<string> The name of the elevation model for the
orthorectification. If not given tie-point data is used.
-Pheight=<integer> The height of the target product.
-PincludeTiePointGrids=<boolean> Whether tie-point grids should be included in the
output product.
Default value is 'true'.
-PnoDataValue=<double> The value used to indicate no-data.
-Pnorthing=<double> The northing of the reference pixel.
-Porientation=<double> The orientation of the output product (in degree).
Valid interval is [-360,360].
Default value is '0'.
-Porthorectify=<boolean> Whether the source product should be orthorectified.
(Not applicable to all products)
Default value is 'false'.
-PpixelSizeX=<double> The pixel size in X direction given in CRS units.
-PpixelSizeY=<double> The pixel size in Y direction given in CRS units.
-PreferencePixelX=<double> The X-position of the reference pixel.
-PreferencePixelY=<double> The Y-position of the reference pixel.
-Presampling=<string> The method used for resampling of floating-point raster data.
Value must be one of 'Nearest', 'Bilinear', 'Bicubic'.
Default value is 'Nearest'.
-Pwidth=<integer> The width of the target product.
-PwktFile=<file> A file which contains the target Coordinate Reference
System in WKT format.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Reproject</operator>
<sources>
<source>${source}</source>
<collocateWith>${collocateWith}</collocateWith>
</sources>
<parameters>
<wktFile>file</wktFile>
<crs>string</crs>
<resampling>string</resampling>
<includeTiePointGrids>boolean</includeTiePointGrids>
<referencePixelX>double</referencePixelX>
<referencePixelY>double</referencePixelY>
<easting>double</easting>
<northing>double</northing>
<orientation>double</orientation>
<pixelSizeX>double</pixelSizeX>
<pixelSizeY>double</pixelSizeY>
<width>integer</width>
<height>integer</height>
<orthorectify>boolean</orthorectify>
<elevationModelName>string</elevationModelName>
<noDataValue>double</noDataValue>
</parameters>
</node>
</graph>
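As a minimal example, the following graph reprojects a product to geographic lat/lon (EPSG:4326) using bilinear resampling, letting the operator derive the remaining grid parameters (ids are arbitrary):

```xml
<graph id="reprojectGraph">
  <version>1.0</version>
  <node id="reprojectNode">
    <operator>Reproject</operator>
    <sources>
      <source>${source}</source>
    </sources>
    <parameters>
      <crs>EPSG:4326</crs>
      <resampling>Bilinear</resampling>
      <includeTiePointGrids>true</includeTiePointGrids>
    </parameters>
  </node>
</graph>
```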
---------------
Fill-Hole
---------------
Usage:
gpt Fill-Hole [options]
Description:
Fill holes in a given product
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PNoDataValue=<double> Sets parameter 'NoDataValue' to <double>.
Default value is '0.0'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Fill-Hole</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<NoDataValue>double</NoDataValue>
</parameters>
</node>
</graph>
-------------------
Data-Analysis
-------------------
Usage:
gpt Data-Analysis [options]
Description:
Computes statistics
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
--------------------
GCP-Selection2
--------------------
Usage:
gpt GCP-Selection2 [options]
Description:
Automatic Selection of Ground Control Points
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
Parameter Options:
-PapplyFineRegistration=<boolean> Sets parameter 'applyFineRegistration' to <boolean>.
Default value is 'true'.
-PcoarseRegistrationWindowHeight=<string> Sets parameter 'coarseRegistrationWindowHeight'
to <string>.
Value must be one of '32', '64', '128',
'256', '512', '1024'.
Default value is '128'.
-PcoarseRegistrationWindowWidth=<string> Sets parameter 'coarseRegistrationWindowWidth'
to <string>.
Value must be one of '32', '64', '128',
'256', '512', '1024'.
Default value is '128'.
-PcoherenceThreshold=<double> The coherence threshold
Valid interval is (0, *).
Default value is '0.6'.
-PcoherenceWindowSize=<int> The coherence window size
Valid interval is (1, 10].
Default value is '3'.
-PcolumnInterpFactor=<string> Sets parameter 'columnInterpFactor' to <string>.
Value must be one of '2', '4', '8', '16'.
Default value is '2'.
-PfineRegistrationWindowHeight=<string> Sets parameter 'fineRegistrationWindowHeight'
to <string>.
Value must be one of '32', '64', '128',
'256', '512', '1024'.
Default value is '128'.
-PfineRegistrationWindowWidth=<string> Sets parameter 'fineRegistrationWindowWidth'
to <string>.
Value must be one of '32', '64', '128',
'256', '512', '1024'.
Default value is '128'.
-PgcpTolerance=<double> Tolerance in slave GCP validation check
Valid interval is (0, *).
Default value is '0.5'.
-PmaxIteration=<int> The maximum number of iterations
Valid interval is (1, 10].
Default value is '2'.
-PnumGCPtoGenerate=<int> The number of GCPs to use in a grid
Valid interval is (10, 10000].
Default value is '200'.
-ProwInterpFactor=<string> Sets parameter 'rowInterpFactor' to <string>.
Value must be one of '2', '4', '8', '16'.
Default value is '2'.
-PuseSlidingWindow=<boolean> Use sliding window for coherence calculation
Default value is 'false'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>GCP-Selection2</operator>
<sources>
<sourceProduct>${sourceProduct}</sourceProduct>
</sources>
<parameters>
<numGCPtoGenerate>int</numGCPtoGenerate>
<coarseRegistrationWindowWidth>string</coarseRegistrationWindowWidth>
<coarseRegistrationWindowHeight>string</coarseRegistrationWindowHeight>
<rowInterpFactor>string</rowInterpFactor>
<columnInterpFactor>string</columnInterpFactor>
<maxIteration>int</maxIteration>
<gcpTolerance>double</gcpTolerance>
<applyFineRegistration>boolean</applyFineRegistration>
<fineRegistrationWindowWidth>string</fineRegistrationWindowWidth>
<fineRegistrationWindowHeight>string</fineRegistrationWindowHeight>
<coherenceWindowSize>int</coherenceWindowSize>
<coherenceThreshold>double</coherenceThreshold>
<useSlidingWindow>boolean</useSlidingWindow>
</parameters>
</node>
</graph>
-------------
CplxIfg
-------------
Usage:
gpt CplxIfg [options]
Description:
Compute interferograms from a stack of coregistered images
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
-----------------
PassThrough
-----------------
Usage:
gpt PassThrough [options]
Description:
Sets target product to source product.
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
-------------
Coherence
-------------
Usage:
gpt Coherence [options]
Description:
Estimate coherence from a stack of coregistered images
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
Parameter Options:
-PcoherenceWindowSizeAzimuth=<int> Size of coherence estimation window in Azimuth direction
Valid interval is (1, 20].
Default value is '10'.
-PcoherenceWindowSizeRange=<int> Size of coherence estimation window in Range direction
Valid interval is (1, 20].
Default value is '2'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Coherence</operator>
<sources>
<sourceProduct>${sourceProduct}</sourceProduct>
</sources>
<parameters>
<coherenceWindowSizeAzimuth>int</coherenceWindowSizeAzimuth>
<coherenceWindowSizeRange>int</coherenceWindowSizeRange>
</parameters>
</node>
</graph>
----------------
Interferogram
----------------
Usage:
gpt Interferogram [options]
Description:
Compute interferograms from a stack of coregistered images: JBLAS implementation
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
Parameter Options:
-PorbitPolynomialDegree=<int> Degree of orbit (polynomial) interpolator
Value must be one of '1', '2', '3', '4', '5'.
Default value is '3'.
-PsrpNumberPoints=<int> Number of points for the 'flat earth phase' polynomial estimation
Value must be one of '301', '401', '501', '601', '701',
'801', '901', '1001'.
Default value is '501'.
-PsrpPolynomialDegree=<int> Order of 'Flat earth phase' polynomial
Value must be one of '1', '2', '3', '4', '5', '6', '7', '8'.
Default value is '5'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Interferogram</operator>
<sources>
<sourceProduct>${sourceProduct}</sourceProduct>
</sources>
<parameters>
<srpPolynomialDegree>int</srpPolynomialDegree>
<srpNumberPoints>int</srpNumberPoints>
<orbitPolynomialDegree>int</orbitPolynomialDegree>
</parameters>
</node>
</graph>
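For instance, an Interferogram node with the default polynomial settings written out explicitly could look like this (ids are arbitrary):

```xml
<graph id="ifgGraph">
  <version>1.0</version>
  <node id="ifgNode">
    <operator>Interferogram</operator>
    <sources>
      <sourceProduct>${sourceProduct}</sourceProduct>
    </sources>
    <parameters>
      <srpPolynomialDegree>5</srpPolynomialDegree>
      <srpNumberPoints>501</srpNumberPoints>
      <orbitPolynomialDegree>3</orbitPolynomialDegree>
    </parameters>
  </node>
</graph>
```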
---------------------
Create-LandMask
---------------------
Usage:
gpt Create-LandMask [options]
Description:
Creates a bitmask defining land vs ocean.
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PbyPass=<boolean> Sets parameter 'byPass' to <boolean>.
Default value is 'false'.
-Pgeometry=<string> Sets parameter 'geometry' to <string>.
-PinvertGeometry=<boolean> Sets parameter 'invertGeometry' to <boolean>.
Default value is 'false'.
-PlandMask=<boolean> Sets parameter 'landMask' to <boolean>.
Default value is 'true'.
-PsourceBands=<string,string,string,...> The list of source bands.
-PuseSRTM=<boolean> Sets parameter 'useSRTM' to <boolean>.
Default value is 'true'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Create-LandMask</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<landMask>boolean</landMask>
<useSRTM>boolean</useSRTM>
<geometry>string</geometry>
<invertGeometry>boolean</invertGeometry>
<byPass>boolean</byPass>
</parameters>
</node>
</graph>
----------------------
Apply-Orbit-File
----------------------
Usage:
gpt Apply-Orbit-File [options]
Description:
Apply orbit file
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PorbitType=<string> Sets parameter 'orbitType' to <string>.
Value must be one of 'DORIS Precise (ENVISAT)', 'DORIS
Verified (ENVISAT)', 'DELFT Precise (ENVISAT, ERS1&2)', 'PRARE Precise (ERS1&2)'.
Default value is 'DORIS Verified (ENVISAT)'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Apply-Orbit-File</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<orbitType>string</orbitType>
</parameters>
</node>
</graph>
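For example, a graph that applies DORIS precise orbits to an ENVISAT product could be written as follows (ids are arbitrary; the orbitType must be one of the listed values):

```xml
<graph id="orbitGraph">
  <version>1.0</version>
  <node id="orbitNode">
    <operator>Apply-Orbit-File</operator>
    <sources>
      <source>${source}</source>
    </sources>
    <parameters>
      <orbitType>DORIS Precise (ENVISAT)</orbitType>
    </parameters>
  </node>
</graph>
```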
----------------
Oversample
----------------
Usage:
gpt Oversample [options]
Description:
Oversample the dataset
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PazimuthSpacing=<float> The azimuth pixel spacing
Default value is '12.5'.
-PheightRatio=<float> The height ratio of the output/input images
Default value is '2.0'.
-PoutputImageBy=<string> Sets parameter 'outputImageBy' to <string>.
Value must be one of 'Image Size', 'Ratio',
'Pixel Spacing'.
Default value is 'Image Size'.
-PrangeSpacing=<float> The range pixel spacing
Default value is '12.5'.
-PsourceBands=<string,string,string,...> The list of source bands.
-PtargetImageHeight=<int> The row dimension of the output image
Default value is '1000'.
-PtargetImageWidth=<int> The col dimension of the output image
Default value is '1000'.
-PwidthRatio=<float> The width ratio of the output/input images
Default value is '2.0'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Oversample</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<outputImageBy>string</outputImageBy>
<targetImageHeight>int</targetImageHeight>
<targetImageWidth>int</targetImageWidth>
<widthRatio>float</widthRatio>
<heightRatio>float</heightRatio>
<rangeSpacing>float</rangeSpacing>
<azimuthSpacing>float</azimuthSpacing>
</parameters>
</node>
</graph>
-----------------
CreateStack
-----------------
Usage:
gpt CreateStack [options]
Description:
Collocates two or more products based on their geo-codings.
Parameter Options:
-Pextent=<string> The output image extents.
Value must be one of 'Master', 'Minimum', 'Maximum'.
Default value is 'Master'.
-PmasterBands=<string,string,string,...> The list of source bands.
-PresamplingType=<string> The method to be used when resampling the slave
grid onto the master grid.
Value must be one of 'NONE',
'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'.
Default value is 'NONE'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>CreateStack</operator>
<sources>
<sourceProducts>${sourceProducts}</sourceProducts>
</sources>
<parameters>
<masterBands>
<band>string</band>
<.../>
</masterBands>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<resamplingType>string</resamplingType>
<extent>string</extent>
</parameters>
</node>
</graph>
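As an illustration, a CreateStack node that stacks the source products on the master extent without resampling (e.g. for already coregistered products) could look like this (ids are arbitrary):

```xml
<graph id="stackGraph">
  <version>1.0</version>
  <node id="stackNode">
    <operator>CreateStack</operator>
    <sources>
      <sourceProducts>${sourceProducts}</sourceProducts>
    </sources>
    <parameters>
      <resamplingType>NONE</resamplingType>
      <extent>Master</extent>
    </parameters>
  </node>
</graph>
```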
--------------
WriteRGB
--------------
Usage:
gpt WriteRGB [options]
Description:
Creates an RGB image from three source bands.
Source Options:
-Sinput=<file> Sets source 'input' to <filepath>.
This is a mandatory source.
Parameter Options:
-Pblue=<int> The zero-based index of the blue band.
-Pfile=<file> The file to which the image is written.
-PformatName=<string> Sets parameter 'formatName' to <string>.
Default value is 'png'.
-Pgreen=<int> The zero-based index of the green band.
-Pred=<int> The zero-based index of the red band.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>WriteRGB</operator>
<sources>
<input>${input}</input>
</sources>
<parameters>
<red>int</red>
<green>int</green>
<blue>int</blue>
<formatName>string</formatName>
<file>file</file>
</parameters>
</node>
</graph>
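For instance, the following graph writes a PNG composed from the first three bands (the ids and the output file name are hypothetical):

```xml
<graph id="rgbGraph">
  <version>1.0</version>
  <node id="rgbNode">
    <operator>WriteRGB</operator>
    <sources>
      <input>${input}</input>
    </sources>
    <parameters>
      <red>0</red>
      <green>1</green>
      <blue>2</blue>
      <formatName>png</formatName>
      <file>target.png</file>
    </parameters>
  </node>
</graph>
```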
--------------------------
RemoveAntennaPattern
--------------------------
Usage:
gpt RemoveAntennaPattern [options]
Description:
Remove Antenna Pattern
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>RemoveAntennaPattern</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
</parameters>
</node>
</graph>
------------------
Image-Filter
------------------
Usage:
gpt Image-Filter [options]
Description:
Common Image Processing Filters
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
Parameter Options:
-PselectedFilterName=<string> Sets parameter 'selectedFilterName' to <string>.
-PsourceBands=<string,string,string,...> The list of source bands.
-PuserDefinedKernelFile=<file> The kernel file
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Image-Filter</operator>
<sources>
<sourceProduct>${sourceProduct}</sourceProduct>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<selectedFilterName>string</selectedFilterName>
<userDefinedKernelFile>file</userDefinedKernelFile>
</parameters>
</node>
</graph>
------------
Mosaic
------------
Usage:
gpt Mosaic [options]
Description:
Mosaics two or more products based on their geo-codings.
Parameter Options:
-Paverage=<boolean> Average the overlapping areas
Default value is 'false'.
-PnormalizeByMean=<boolean> Normalize by Mean
Default value is 'false'.
-PpixelSize=<double> Pixel Size (m)
Default value is '0'.
-PresamplingMethod=<string> The method to be used when resampling the slave
grid onto the master grid.
Value must be one of
'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'.
Default value is 'NEAREST_NEIGHBOUR'.
-PsceneHeight=<int> Target height
Default value is '0'.
-PsceneWidth=<int> Target width
Default value is '0'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Mosaic</operator>
<sources>
<sourceProducts>${sourceProducts}</sourceProducts>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<resamplingMethod>string</resamplingMethod>
<average>boolean</average>
<normalizeByMean>boolean</normalizeByMean>
<pixelSize>double</pixelSize>
<sceneWidth>int</sceneWidth>
<sceneHeight>int</sceneHeight>
</parameters>
</node>
</graph>
----------------------------
Create-Coherence-Image
----------------------------
Usage:
gpt Create-Coherence-Image [options]
Description:
Create Coherence Image
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PcoherenceWindowSize=<int> The coherence window size
Valid interval is (1, 10].
Default value is '5'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Create-Coherence-Image</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<coherenceWindowSize>int</coherenceWindowSize>
</parameters>
</node>
</graph>
--------------------------
AdaptiveThresholding
--------------------------
Usage:
gpt AdaptiveThresholding [options]
Description:
Detect ships using a Constant False Alarm Rate detector.
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PbackgroundWindowSizeInMeter=<double> Background window size
Default value is '1000.0'.
-PguardWindowSizeInMeter=<double> Guard window size
Default value is '400.0'.
-Ppfa=<double> Probability of false alarm
Default value is '6.5'.
-PtargetWindowSizeInMeter=<int> Target window size
Default value is '75'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>AdaptiveThresholding</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<targetWindowSizeInMeter>int</targetWindowSizeInMeter>
<guardWindowSizeInMeter>double</guardWindowSizeInMeter>
<backgroundWindowSizeInMeter>double</backgroundWindowSizeInMeter>
<pfa>double</pfa>
</parameters>
</node>
</graph>
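For example, an AdaptiveThresholding node with the default window sizes and false-alarm probability spelled out explicitly could look like this (ids are arbitrary):

```xml
<graph id="cfarGraph">
  <version>1.0</version>
  <node id="cfarNode">
    <operator>AdaptiveThresholding</operator>
    <sources>
      <source>${source}</source>
    </sources>
    <parameters>
      <targetWindowSizeInMeter>75</targetWindowSizeInMeter>
      <guardWindowSizeInMeter>400.0</guardWindowSizeInMeter>
      <backgroundWindowSizeInMeter>1000.0</backgroundWindowSizeInMeter>
      <pfa>6.5</pfa>
    </parameters>
  </node>
</graph>
```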
---------------------
ReplaceMetadata
---------------------
Usage:
gpt ReplaceMetadata [options]
Description:
Replace the metadata of the first product with that of the second
Parameter Options:
-Pnote=<string> Sets parameter 'note' to <string>.
Default value is 'Replace the metadata of the first product with that of
the second'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>ReplaceMetadata</operator>
<sources>
<sourceProducts>${sourceProducts}</sourceProducts>
</sources>
<parameters>
<note>string</note>
</parameters>
</node>
</graph>
---------------
BandMaths
---------------
Usage:
gpt BandMaths [options]
Description:
Create a product with one or more bands using mathematical expressions.
Parameter Options:
-PbandExpression=<string> Sets parameter 'bandExpression' to <string>.
-PbandName=<string> Sets parameter 'bandName' to <string>.
-PbandNodataValue=<string> Sets parameter 'bandNodataValue' to <string>.
-PbandUnit=<string> Sets parameter 'bandUnit' to <string>.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>BandMaths</operator>
<sources>
<sourceProducts>${sourceProducts}</sourceProducts>
</sources>
<parameters>
<targetBands>
<targetBand>
<name>string</name>
<expression>string</expression>
<description>string</description>
<type>string</type>
<validExpression>string</validExpression>
<noDataValue>string</noDataValue>
<unit>string</unit>
<spectralBandIndex>integer</spectralBandIndex>
<spectralWavelength>float</spectralWavelength>
<spectralBandwidth>float</spectralBandwidth>
</targetBand>
<.../>
</targetBands>
<variables>
<variable>
<name>string</name>
<type>string</type>
<value>string</value>
</variable>
<.../>
</variables>
<bandName>string</bandName>
<bandUnit>string</bandUnit>
<bandNodataValue>string</bandNodataValue>
<bandExpression>string</bandExpression>
</parameters>
</node>
</graph>
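As an illustration, the following graph derives a dB band from a band named Intensity (the ids and the band name Intensity are hypothetical; substitute a band that exists in your source product):

```xml
<graph id="mathsGraph">
  <version>1.0</version>
  <node id="mathsNode">
    <operator>BandMaths</operator>
    <sources>
      <sourceProducts>${sourceProducts}</sourceProducts>
    </sources>
    <parameters>
      <targetBands>
        <targetBand>
          <name>Intensity_dB</name>
          <expression>10 * log10(Intensity)</expression>
          <type>float32</type>
          <noDataValue>0.0</noDataValue>
          <unit>dB</unit>
        </targetBand>
      </targetBands>
    </parameters>
  </node>
</graph>
```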
-------------------------
Oil-Spill-Detection
-------------------------
Usage:
gpt Oil-Spill-Detection [options]
Description:
Detect oil spill.
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PbackgroundWindowSize=<int> Background window size
Default value is '13'.
-Pk=<double> Threshold shift from background mean
Default value is '2.0'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Oil-Spill-Detection</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<backgroundWindowSize>int</backgroundWindowSize>
<k>double</k>
</parameters>
</node>
</graph>
----------------
LinearTodB
----------------
Usage:
gpt LinearTodB [options]
Description:
Converts bands to dB
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>LinearTodB</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
</parameters>
</node>
</graph>
---------------------------
KMeansClusterAnalysis
---------------------------
Usage:
gpt KMeansClusterAnalysis [options]
Description:
Performs a K-Means cluster analysis.
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PclusterCount=<int> Sets parameter 'clusterCount' to <int>.
Valid interval is (0,100].
Default value is '14'.
-PiterationCount=<int> Sets parameter 'iterationCount' to <int>.
Valid interval is (0,10000].
Default value is '30'.
-PrandomSeed=<int> Seed for the random generator, used
for initialising the algorithm.
Default value is '31415'.
-ProiMaskName=<string> The name of the ROI-Mask that should be used.
-PsourceBandNames=<string,string,string,...> The names of the bands being used for the
cluster analysis.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>KMeansClusterAnalysis</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<clusterCount>int</clusterCount>
<iterationCount>int</iterationCount>
<randomSeed>int</randomSeed>
<sourceBandNames>string,string,string,...</sourceBandNames>
<roiMaskName>string</roiMaskName>
</parameters>
</node>
</graph>
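For example, a graph that clusters two bands into five classes could be written as follows (ids are arbitrary; the band names Sigma0_VV and Sigma0_VH are hypothetical):

```xml
<graph id="kmeansGraph">
  <version>1.0</version>
  <node id="kmeansNode">
    <operator>KMeansClusterAnalysis</operator>
    <sources>
      <source>${source}</source>
    </sources>
    <parameters>
      <clusterCount>5</clusterCount>
      <iterationCount>30</iterationCount>
      <randomSeed>31415</randomSeed>
      <sourceBandNames>Sigma0_VV,Sigma0_VH</sourceBandNames>
    </parameters>
  </node>
</graph>
```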
-----------
Warp2
-----------
Usage:
gpt Warp2 [options]
Description:
Create Warp Function And Get Co-registered Images
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
Parameter Options:
-PinterpolationMethod=<string> Sets parameter 'interpolationMethod' to <string>.
Value must be one of 'Nearest-neighbor
interpolation', 'Bilinear interpolation', 'Linear interpolation', 'Cubic convolution (4
points)', 'Cubic convolution (6 points)', 'Truncated sinc (6 points)', 'Truncated sinc (8
points)', 'Truncated sinc (16 points)'.
Default value is 'Bilinear interpolation'.
-PopenResidualsFile=<boolean> Show the Residuals file in a text viewer
Default value is 'false'.
-PrmsThreshold=<float> The RMS threshold for eliminating invalid GCPs
Valid interval is (0, *).
Default value is '1.0'.
-PwarpPolynomialOrder=<int> The order of WARP polynomial function
Value must be one of '1', '2', '3'.
Default value is '2'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Warp2</operator>
<sources>
<sourceProduct>${sourceProduct}</sourceProduct>
</sources>
<parameters>
<rmsThreshold>float</rmsThreshold>
<warpPolynomialOrder>int</warpPolynomialOrder>
<interpolationMethod>string</interpolationMethod>
<openResidualsFile>boolean</openResidualsFile>
</parameters>
</node>
</graph>
---------------------------
Object-Discrimination
---------------------------
Usage:
gpt Object-Discrimination [options]
Description:
Remove false alarms from the detected objects.
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PmaxTargetSizeInMeter=<double> Maximum target size
Default value is '400.0'.
-PminTargetSizeInMeter=<double> Minimum target size
Default value is '80.0'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Object-Discrimination</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<minTargetSizeInMeter>double</minTargetSizeInMeter>
<maxTargetSizeInMeter>double</maxTargetSizeInMeter>
</parameters>
</node>
</graph>
-------------
PCA-Min
-------------
Usage:
gpt PCA-Min [options]
Description:
Computes minimum for PCA
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
------------------
Reprojection
------------------
Usage:
gpt Reprojection [options]
Description:
Applies a map projection
Source Options:
-ScollocateWith=<file> The source product will be collocated with this product.
This is an optional source.
-Ssource=<file> The product which will be reprojected.
This is a mandatory source.
Parameter Options:
-Pcrs=<string> A text specifying the target Coordinate
Reference System, either in WKT or as an authority code. For appropriate EPSG authority codes
see (www.epsg-registry.org). AUTO authority can be used with code 42001 (UTM), and
42002 (Transverse Mercator) where the scene center is used as reference. Examples:
EPSG:4326, AUTO:42001
-Peasting=<double> The easting of the reference pixel.
-PelevationModelName=<string> The name of the elevation model for
the orthorectification. If not given tie-point data is used.
-Pheight=<integer> The height of the target product.
-PincludeTiePointGrids=<boolean> Whether tie-point grids should be included in
the output product.
Default value is 'true'.
-PnoDataValue=<double> The value used to indicate no-data.
-Pnorthing=<double> The northing of the reference pixel.
-Porientation=<double> The orientation of the output product (in degree).
Valid interval is [-360,360].
Default value is '0'.
-Porthorectify=<boolean> Whether the source product should be
orthorectified. (Not applicable to all products)
Default value is 'false'.
-PpixelSizeX=<double> The pixel size in X direction given in CRS units.
-PpixelSizeY=<double> The pixel size in Y direction given in CRS units.
-PpreserveResolution=<boolean> Whether to keep original or use custom resolution.
Default value is 'true'.
-PreferencePixelX=<double> The X-position of the reference pixel.
-PreferencePixelY=<double> The Y-position of the reference pixel.
-Presampling=<string> The method used for resampling of floating-
point raster data.
Value must be one of 'Nearest', 'Bilinear', 'Bicubic'.
Default value is 'Nearest'.
-PsourceBands=<string,string,string,...> The list of source bands.
-Pwidth=<integer> The width of the target product.
-PwktFile=<file> A file which contains the target Coordinate
Reference System in WKT format.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Reprojection</operator>
<sources>
<source>${source}</source>
<collocateWith>${collocateWith}</collocateWith>
</sources>
<parameters>
<wktFile>file</wktFile>
<crs>string</crs>
<resampling>string</resampling>
<includeTiePointGrids>boolean</includeTiePointGrids>
<referencePixelX>double</referencePixelX>
<referencePixelY>double</referencePixelY>
<easting>double</easting>
<northing>double</northing>
<orientation>double</orientation>
<pixelSizeX>double</pixelSizeX>
<pixelSizeY>double</pixelSizeY>
<width>integer</width>
<height>integer</height>
<orthorectify>boolean</orthorectify>
<elevationModelName>string</elevationModelName>
<noDataValue>double</noDataValue>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<preserveResolution>boolean</preserveResolution>
</parameters>
</node>
</graph>
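As a concrete illustration, a Reprojection node that maps a product to geographic WGS 84 at roughly 10 m pixels might be parameterised as below. This is a sketch following the parameter descriptions above; the node id and pixel sizes (given in CRS units, here degrees, where 8.983e-5 deg is about 10 m at the equator) are illustrative, and preserveResolution is set to false so that the custom pixel sizes apply.

```xml
<node id="reprojectNode">
  <operator>Reprojection</operator>
  <sources>
    <source>${source}</source>
  </sources>
  <parameters>
    <crs>EPSG:4326</crs>
    <resampling>Bilinear</resampling>
    <preserveResolution>false</preserveResolution>
    <pixelSizeX>8.983e-5</pixelSizeX>
    <pixelSizeY>8.983e-5</pixelSizeY>
    <noDataValue>0.0</noDataValue>
  </parameters>
</node>
```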
-----------------
Undersample
-----------------
Usage:
gpt Undersample [options]
Description:
Undersample the dataset
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PazimuthSpacing=<float> The azimuth pixel spacing
Default value is '12.5'.
-PfilterSize=<string> Sets parameter 'filterSize' to <string>.
Value must be one of '3x3', '5x5', '7x7'.
Default value is '3x3'.
-PheightRatio=<float> The height ratio of the output/input images
Default value is '0.5'.
-Pmethod=<string> Sets parameter 'method' to <string>.
Value must be one of 'Sub-Sampling',
'LowPass Filtering'.
Default value is 'LowPass Filtering'.
-PoutputImageBy=<string> Sets parameter 'outputImageBy' to <string>.
Value must be one of 'Image Size', 'Ratio',
'Pixel Spacing'.
Default value is 'Image Size'.
-PrangeSpacing=<float> The range pixel spacing
Default value is '12.5'.
-PsourceBands=<string,string,string,...> The list of source bands.
-PsubSamplingX=<int> Sets parameter 'subSamplingX' to <int>.
Default value is '2'.
-PsubSamplingY=<int> Sets parameter 'subSamplingY' to <int>.
Default value is '2'.
-PtargetImageHeight=<int> The row dimension of the output image
Default value is '1000'.
-PtargetImageWidth=<int> The col dimension of the output image
Default value is '1000'.
-PwidthRatio=<float> The width ratio of the output/input images
Default value is '0.5'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Undersample</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<method>string</method>
<filterSize>string</filterSize>
<subSamplingX>int</subSamplingX>
<subSamplingY>int</subSamplingY>
<outputImageBy>string</outputImageBy>
<targetImageHeight>int</targetImageHeight>
<targetImageWidth>int</targetImageWidth>
<widthRatio>float</widthRatio>
<heightRatio>float</heightRatio>
<rangeSpacing>float</rangeSpacing>
<azimuthSpacing>float</azimuthSpacing>
</parameters>
</node>
</graph>
---------------
Multilook
---------------
Usage:
gpt Multilook [options]
Description:
Averages the power across a number of lines in both the azimuth and range directions
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PnAzLooks=<int> The user defined number of azimuth looks
Valid interval is [1, *).
Default value is '1'.
-PnRgLooks=<int> The user defined number of range looks
Valid interval is [1, *).
Default value is '1'.
-Pnote=<string> Sets parameter 'note' to <string>.
Default value is 'Currently, detection for
complex data is performed without any resampling'.
-PoutputIntensity=<boolean> For complex product output intensity or i and q
Default value is 'true'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Multilook</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<nRgLooks>int</nRgLooks>
<nAzLooks>int</nAzLooks>
<outputIntensity>boolean</outputIntensity>
<note>string</note>
</parameters>
</node>
</graph>
----------
Read
----------
Usage:
gpt Read [options]
Description:
Reads a product from disk.
Parameter Options:
-Pfile=<file> The file from which the data product is read.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Read</operator>
<sources/>
<parameters>
<file>file</file>
</parameters>
</node>
</graph>
-------------------
PCA-Statistic
-------------------
Usage:
gpt PCA-Statistic [options]
Description:
Computes statistics for PCA
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PeigenvalueThreshold=<double> The threshold for selecting eigenvalues
Valid interval is (0, 100].
Default value is '100'.
-PnumPCA=<int> The number of PCA images output
Valid interval is (0, 100].
Default value is '1'.
-PselectEigenvaluesBy=<string> Sets parameter 'selectEigenvaluesBy' to <string>.
Value must be one of 'Eigenvalue Threshold',
'Number of Eigenvalues'.
Default value is 'Eigenvalue Threshold'.
-PshowEigenvalues=<boolean> Show the eigenvalues
Default value is '1'.
-PsourceBands=<string,string,string,...> The list of source bands.
-PsubtractMeanImage=<boolean> Subtract mean image
Default value is '1'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>PCA-Statistic</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<selectEigenvaluesBy>string</selectEigenvaluesBy>
<eigenvalueThreshold>double</eigenvalueThreshold>
<numPCA>int</numPCA>
<showEigenvalues>boolean</showEigenvalues>
<subtractMeanImage>boolean</subtractMeanImage>
</parameters>
</node>
</graph>
----------------------
Convert-Datatype
----------------------
Usage:
gpt Convert-Datatype [options]
Description:
Convert product data type
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PsourceBands=<string,string,string,...> The list of source bands.
-PtargetDataType=<string> Sets parameter 'targetDataType' to <string>.
Value must be one of 'int8', 'int16',
'int32', 'uint8', 'uint16', 'uint32', 'float32', 'float64'.
Default value is 'float32'.
-PtargetScalingStr=<string> Sets parameter 'targetScalingStr' to <string>.
Value must be one of 'Truncate', 'Linear (slope
and intercept)', 'Linear (between 95% clipped Histogram)', 'Logarithmic'.
Default value is 'Linear (slope and intercept)'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Convert-Datatype</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<targetDataType>string</targetDataType>
<targetScalingStr>string</targetScalingStr>
</parameters>
</node>
</graph>
-----------------
TileStackOp
-----------------
Usage:
gpt TileStackOp [options]
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>TileStackOp</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
</parameters>
</node>
</graph>
-----------------------
EMClusterAnalysis
-----------------------
Usage:
gpt EMClusterAnalysis [options]
Description:
Performs an expectation-maximization (EM) cluster analysis.
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PclusterCount=<int> Sets parameter 'clusterCount' to <int>.
Valid interval is (0,100].
Default value is '14'.
-PincludeProbabilityBands=<boolean> Determines whether the posterior
probabilities are included as band data.
Default value is 'false'.
-PiterationCount=<int> Sets parameter 'iterationCount' to <int>.
Valid interval is (0,10000].
Default value is '30'.
-PrandomSeed=<int> Seed for the random generator, used
for initialising the algorithm.
Default value is '31415'.
-ProiMaskName=<string> The name of the ROI-Mask that should be used.
-PsourceBandNames=<string,string,string,...> The names of the bands being used for the
cluster analysis.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>EMClusterAnalysis</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<clusterCount>int</clusterCount>
<iterationCount>int</iterationCount>
<randomSeed>int</randomSeed>
<sourceBandNames>string,string,string,...</sourceBandNames>
<roiMaskName>string</roiMaskName>
<includeProbabilityBands>boolean</includeProbabilityBands>
</parameters>
</node>
</graph>
-----------
Write
-----------
Usage:
gpt Write [options]
Description:
Writes a data product to a file.
Source Options:
-Ssource=<file> The source product to be written.
This is a mandatory source.
Parameter Options:
-PclearCacheAfterRowWrite=<boolean> If true, the internal tile cache is cleared after a
tile row has been written. Ignored if writeEntireTileRows=false.
Default value is 'false'.
-PdeleteOutputOnFailure=<boolean> If true, all output files are deleted after a failed
write operation.
Default value is 'true'.
-Pfile=<file> The output file to which the data product is written.
-PformatName=<string> The name of the output file format.
Default value is 'BEAM-DIMAP'.
-PwriteEntireTileRows=<boolean> If true, the write operation waits until an entire
tile row is computed.
Default value is 'true'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Write</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<file>file</file>
<formatName>string</formatName>
<deleteOutputOnFailure>boolean</deleteOutputOnFailure>
<writeEntireTileRows>boolean</writeEntireTileRows>
<clearCacheAfterRowWrite>boolean</clearCacheAfterRowWrite>
</parameters>
</node>
</graph>
------------
Subset
------------
Usage:
gpt Subset [options]
Description:
Create a spatial and/or spectral subset of a data product.
Source Options:
-Ssource=<file> The source product to create the subset from.
This is a mandatory source.
Parameter Options:
-PbandNames=<string,string,string,...> Sets parameter 'bandNames' to <string,
string,string,...>.
-PcopyMetadata=<boolean> Sets parameter 'copyMetadata' to <boolean>.
Default value is 'false'.
-PfullSwath=<boolean> Forces the operator to extend the subset
region to the full swath.
Default value is 'false'.
-PgeoRegion=<geometry> The region in geographical coordinates
using WKT-format,
e.g. POLYGON((<lon1> <lat1>,
<lon2> <lat2>, ..., <lon1> <lat1>))
(make sure to quote the option due to spaces
in <geometry>)
-Pheight=<int> Sets parameter 'height' to <int>.
Default value is '1000'.
-PregionX=<int> Sets parameter 'regionX' to <int>.
Default value is '0'.
-PregionY=<int> Sets parameter 'regionY' to <int>.
Default value is '0'.
-PsubSamplingX=<int> Sets parameter 'subSamplingX' to <int>.
Default value is '1'.
-PsubSamplingY=<int> Sets parameter 'subSamplingY' to <int>.
Default value is '1'.
-PtiePointGridNames=<string,string,string,...> Sets parameter 'tiePointGridNames' to
<string,string,string,...>.
-Pwidth=<int> Sets parameter 'width' to <int>.
Default value is '1000'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Subset</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<regionX>int</regionX>
<regionY>int</regionY>
<width>int</width>
<height>int</height>
<subSamplingX>int</subSamplingX>
<fullSwath>boolean</fullSwath>
<geoRegion>geometry</geoRegion>
<subSamplingY>int</subSamplingY>
<tiePointGridNames>string,string,string,...</tiePointGridNames>
<bandNames>string,string,string,...</bandNames>
<copyMetadata>boolean</copyMetadata>
</parameters>
</node>
</graph>
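Because the geoRegion polygon contains spaces, the whole -PgeoRegion option must be quoted on the command line, as the note above says. A hypothetical invocation (file names and coordinates are illustrative; -t names the target file):

```shell
gpt Subset -Ssource=input.dim -t subset.dim \
    -PcopyMetadata=true \
    "-PgeoRegion=POLYGON((10.2 54.1, 10.8 54.1, 10.8 54.5, 10.2 54.5, 10.2 54.1))"
```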
--------------------------
Oil-Spill-Clustering
--------------------------
Usage:
gpt Oil-Spill-Clustering [options]
Description:
Remove small clusters from detected area.
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PminClusterSizeInKm2=<double> Minimum cluster size
Default value is '0.1'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Oil-Spill-Clustering</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<minClusterSizeInKm2>double</minClusterSizeInKm2>
</parameters>
</node>
</graph>
--------------
SubsetOp
--------------
Usage:
gpt SubsetOp [options]
Description:
Create a spatial subset of the source product.
Source Options:
-SsourceProduct=<file> Sets source 'sourceProduct' to <filepath>.
This is a mandatory source.
Parameter Options:
-PgeoRegion=<geometry> WKT-format, e.g. POLYGON((<lon1> <lat1>,
<lon2> <lat2>, ..., <lon1> <lat1>))
(make sure to quote the option due to spaces
in <geometry>)
-Pheight=<int> Sets parameter 'height' to <int>.
Default value is '1000'.
-PregionX=<int> Sets parameter 'regionX' to <int>.
Default value is '0'.
-PregionY=<int> Sets parameter 'regionY' to <int>.
Default value is '0'.
-PsourceBands=<string,string,string,...> The list of source bands.
-PsubSamplingX=<int> Sets parameter 'subSamplingX' to <int>.
Default value is '1'.
-PsubSamplingY=<int> Sets parameter 'subSamplingY' to <int>.
Default value is '1'.
-Pwidth=<int> Sets parameter 'width' to <int>.
Default value is '1000'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>SubsetOp</operator>
<sources>
<sourceProduct>${sourceProduct}</sourceProduct>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<regionX>int</regionX>
<regionY>int</regionY>
<width>int</width>
<height>int</height>
<subSamplingX>int</subSamplingX>
<subSamplingY>int</subSamplingY>
<geoRegion>geometry</geoRegion>
</parameters>
</node>
</graph>
----------
SRGR
----------
Usage:
gpt SRGR [options]
Description:
Converts Slant Range to Ground Range
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PinterpolationMethod=<string> Sets parameter 'interpolationMethod' to <string>.
Value must be one of 'Nearest-
neighbor interpolation', 'Linear interpolation', 'Cubic interpolation', 'Cubic2
interpolation', 'Sinc interpolation'.
Default value is 'Linear interpolation'.
-PsourceBands=<string,string,string,...> The list of source bands.
-PwarpPolynomialOrder=<int> The order of WARP polynomial function
Valid interval is [1, *).
Default value is '4'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>SRGR</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<warpPolynomialOrder>int</warpPolynomialOrder>
<interpolationMethod>string</interpolationMethod>
</parameters>
</node>
</graph>
--------------------
SAR-Simulation
--------------------
Usage:
gpt SAR-Simulation [options]
Description:
Rigorous SAR Simulation
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PdemName=<string> The digital elevation model.
Value must be one of 'ACE', 'GETASSE30', 'SRTM
3Sec GeoTiff'.
Default value is 'SRTM 3Sec GeoTiff'.
-PdemResamplingMethod=<string> Sets parameter 'demResamplingMethod' to <string>.
Value must be one of
'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'.
Default value is 'BILINEAR_INTERPOLATION'.
-PexternalDEMFile=<file> Sets parameter 'externalDEMFile' to <file>.
-PexternalDEMNoDataValue=<double> Sets parameter 'externalDEMNoDataValue' to <double>.
Default value is '0'.
-PsaveLayoverShadowMask=<boolean> Sets parameter 'saveLayoverShadowMask' to <boolean>.
Default value is 'false'.
-PsourceBands=<string,string,string,...> The list of source bands.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>SAR-Simulation</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<demName>string</demName>
<demResamplingMethod>string</demResamplingMethod>
<externalDEMFile>file</externalDEMFile>
<externalDEMNoDataValue>double</externalDEMNoDataValue>
<saveLayoverShadowMask>boolean</saveLayoverShadowMask>
</parameters>
</node>
</graph>
----------------
DeburstWSS
----------------
Usage:
gpt DeburstWSS [options]
Description:
Debursts an ASAR WSS product
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-Paverage=<boolean> Sets parameter 'average' to <boolean>.
Default value is 'false'.
-PproduceIntensitiesOnly=<boolean> Sets parameter 'produceIntensitiesOnly' to <boolean>.
Default value is 'false'.
-PsubSwath=<string> Sets parameter 'subSwath' to <string>.
Value must be one of 'SS1', 'SS2', 'SS3', 'SS4', 'SS5'.
Default value is 'SS1'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>DeburstWSS</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<subSwath>string</subSwath>
<produceIntensitiesOnly>boolean</produceIntensitiesOnly>
<average>boolean</average>
</parameters>
</node>
</graph>
-------------------------------
SARSim-Terrain-Correction
-------------------------------
Usage:
gpt SARSim-Terrain-Correction [options]
Description:
Orthorectification with SAR simulation
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PapplyRadiometricNormalization=<boolean> Sets parameter 'applyRadiometricNormalization'
to <boolean>.
Default value is 'false'.
-PauxFile=<string> The auxiliary file
Value must be one of 'Latest Auxiliary
File', 'Product Auxiliary File', 'External Auxiliary File'.
Default value is 'Latest Auxiliary File'.
-PexternalAuxFile=<file> The antenna elevation pattern gain auxiliary
data file.
-PimgResamplingMethod=<string> Sets parameter 'imgResamplingMethod' to <string>.
Value must be one of
'NEAREST_NEIGHBOUR', 'BILINEAR_INTERPOLATION', 'CUBIC_CONVOLUTION'.
Default value is 'BILINEAR_INTERPOLATION'.
-PincidenceAngleForGamma0=<string> Sets parameter 'incidenceAngleForGamma0'
to <string>.
Value must be one of 'Use incidence angle
from Ellipsoid', 'Use projected local incidence angle from DEM'.
Default value is 'Use projected local
incidence angle from DEM'.
-PincidenceAngleForSigma0=<string> Sets parameter 'incidenceAngleForSigma0'
to <string>.
Value must be one of 'Use incidence angle
from Ellipsoid', 'Use projected local incidence angle from DEM'.
Default value is 'Use projected local
incidence angle from DEM'.
-PopenShiftsFile=<boolean> Show range and azimuth shifts file in a
text viewer
Default value is 'false'.
-PpixelSpacingInDegree=<double> The pixel spacing in degrees
Default value is '0'.
-PpixelSpacingInMeter=<double> The pixel spacing in meters
Default value is '0'.
-PprojectionName=<string> The projection name
Default value is 'Geographic Lat/Lon'.
-PrmsThreshold=<float> The RMS threshold for eliminating invalid GCPs
Valid interval is (0, *).
Default value is '1.0'.
-PsaveBetaNought=<boolean> Sets parameter 'saveBetaNought' to <boolean>.
Default value is 'false'.
-PsaveDEM=<boolean> Sets parameter 'saveDEM' to <boolean>.
Default value is 'false'.
-PsaveGammaNought=<boolean> Sets parameter 'saveGammaNought' to <boolean>.
Default value is 'false'.
-PsaveLocalIncidenceAngle=<boolean> Sets parameter 'saveLocalIncidenceAngle'
to <boolean>.
Default value is 'false'.
-PsaveProjectedLocalIncidenceAngle=<boolean> Sets parameter
'saveProjectedLocalIncidenceAngle' to <boolean>.
Default value is 'false'.
-PsaveSelectedSourceBand=<boolean> Sets parameter 'saveSelectedSourceBand'
to <boolean>.
Default value is 'true'.
-PsaveSigmaNought=<boolean> Sets parameter 'saveSigmaNought' to <boolean>.
Default value is 'false'.
-PwarpPolynomialOrder=<int> The order of WARP polynomial function
Value must be one of '1', '2', '3'.
Default value is '1'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>SARSim-Terrain-Correction</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<rmsThreshold>float</rmsThreshold>
<warpPolynomialOrder>int</warpPolynomialOrder>
<imgResamplingMethod>string</imgResamplingMethod>
<pixelSpacingInMeter>double</pixelSpacingInMeter>
<pixelSpacingInDegree>double</pixelSpacingInDegree>
<projectionName>string</projectionName>
<saveDEM>boolean</saveDEM>
<saveLocalIncidenceAngle>boolean</saveLocalIncidenceAngle>
<saveProjectedLocalIncidenceAngle>boolean</saveProjectedLocalIncidenceAngle>
<saveSelectedSourceBand>boolean</saveSelectedSourceBand>
<applyRadiometricNormalization>boolean</applyRadiometricNormalization>
<saveSigmaNought>boolean</saveSigmaNought>
<saveGammaNought>boolean</saveGammaNought>
<saveBetaNought>boolean</saveBetaNought>
<incidenceAngleForSigma0>string</incidenceAngleForSigma0>
<incidenceAngleForGamma0>string</incidenceAngleForGamma0>
<auxFile>string</auxFile>
<externalAuxFile>file</externalAuxFile>
<openShiftsFile>boolean</openShiftsFile>
</parameters>
</node>
</graph>
---------------------------
Wind-Field-Estimation
---------------------------
Usage:
gpt Wind-Field-Estimation [options]
Description:
Estimate wind speed and direction
Source Options:
-Ssource=<file> Sets source 'source' to <filepath>.
This is a mandatory source.
Parameter Options:
-PsourceBands=<string,string,string,...> The list of source bands.
-PwindowSizeInKm=<double> Window size
Default value is '20.0'.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>Wind-Field-Estimation</operator>
<sources>
<source>${source}</source>
</sources>
<parameters>
<sourceBands>
<band>string</band>
<.../>
</sourceBands>
<windowSizeInKm>double</windowSizeInKm>
</parameters>
</node>
</graph>
-----------------------
ProductSet-Reader
-----------------------
Usage:
gpt ProductSet-Reader [options]
Description:
Adds a list of sources
Parameter Options:
-PfileList=<string,string,string,...> Sets parameter 'fileList' to <string,string,string,...>.
Graph XML Format:
<graph id="someGraphId">
<version>1.0</version>
<node id="someNodeId">
<operator>ProductSet-Reader</operator>
<sources/>
<parameters>
<fileList>string,string,string,...</fileList>
</parameters>
</node>
</graph>
Graph Builder
Graphs can be created visually with the Graph Builder, processed directly from the DAT and
then saved as XML files. These saved graphs can then also be used as the input for the
command line Graph Processing Tool (GPT) with a different set of input data products.
The Graph Builder allows the user to assemble graphs from a list of available operators and
to connect operator nodes to their sources.
Right click on the top panel to add an Operator. As Operators are added, their
corresponding OperatorUIs are created and added as tabs to a property sheet. The
OperatorUIs accept user input for the Operator’s parameters.
Connect Operator graph nodes by moving the mouse over the left edge of an Operator
node until a circle appears. Then drag the mouse over the Operator node you wish to
designate as the source. Every node except for a Reader node will require a source.
Before saving or processing a graph, the Graph Builder calls each operator to validate its
parameters. Validation also occurs at the graph level to ensure there are no cycles and
that each operator has the appropriate connections.
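Put together, a saved graph is simply a chain of the operator nodes documented in the command line reference above, where a node's <source> entry names the id of its source node. A minimal Read, Multilook, Write example is sketched below; the file names, node ids and parameter values are illustrative, not a prescribed layout.

```xml
<graph id="MultilookGraph">
  <version>1.0</version>
  <node id="readNode">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>input.dim</file>
    </parameters>
  </node>
  <node id="multilookNode">
    <operator>Multilook</operator>
    <sources>
      <source>readNode</source>
    </sources>
    <parameters>
      <nRgLooks>4</nRgLooks>
      <nAzLooks>4</nAzLooks>
      <outputIntensity>true</outputIntensity>
    </parameters>
  </node>
  <node id="writeNode">
    <operator>Write</operator>
    <sources>
      <source>multilookNode</source>
    </sources>
    <parameters>
      <file>output.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>
```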
Batch Processing
The Batch Processing tool available via the DAT allows you to execute a single reader/writer
graph for a set of products. Select the Batch Processing tool from the Graphs menu and then
press the "Load" button to browse for a previously saved graph. Next, add products in the
IO tab by pressing the "Add" button or dragging and dropping a ProductSet or Products from
the Project or Products views. Set the target folder where the output will be written and
then press "Run".
Batch Processing can also be called from within the Product Library. In the Product Library,
select the products you would like to import and then press the Batch Processing button. In
this way you may pre-process a list of products before working with them in the DAT.
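The same kind of batch run can also be scripted around the command-line gpt. The sketch below only assembles one gpt command per input product; the graph file name, the product names, and the assumption that the graph reads its input from ${source} are all illustrative and should be adapted to your setup.

```python
import subprocess
from pathlib import Path

def batch_commands(graph_xml, products, target_folder, format_name="BEAM-DIMAP"):
    """Build one 'gpt <graph> -Ssource=<product> -t <target>' command per product."""
    target = Path(target_folder)
    commands = []
    for product in map(Path, products):
        out = target / (product.stem + ".dim")   # one output file per input product
        commands.append([
            "gpt", str(graph_xml),
            f"-Ssource={product}",   # assumes the graph reads from ${source}
            "-t", str(out),          # target file
            "-f", format_name,       # output format
        ])
    return commands

# To actually process (requires gpt on the PATH):
# for cmd in batch_commands("mygraph.xml", ["orbit1.N1", "orbit2.N1"], "out"):
#     subprocess.run(cmd, check=True)
```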
Principal Component Analysis
1. Average the pixels across the input images to compute a mean image. Optionally,
subtract this mean image from each input image.
2. Subtract the mean value of each input image (or of each image from step 1) from
that image to produce zero-mean images.
3. Compute the covariance matrix from the zero-mean images produced in step 2.
4. Perform eigenvalue decomposition of the covariance matrix.
5. Compute the PCA images by multiplying the eigenvector matrix by the zero-mean
images from step 2. Here the user can select a subset of the eigenvectors instead of
using all of them. The selection is made with a user-supplied threshold, expressed as
a percentage, on the eigenvalues. For example, for three input images with
eigenvalues a1, a2 and a3 (where a1 ≥ a2 ≥ a3), if the threshold is 80% and (a1+a2)
accounts for at least 80% of (a1+a2+a3), then a3 will not be used in computing the
PCA images and only two PCA images will be produced.
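The steps above can be sketched with NumPy. This is an illustrative re-implementation, not the operator's own code: the function name and the data layout (one row per input image, flattened to pixels) are assumptions, and the threshold is expressed as a fraction rather than a percentage.

```python
import numpy as np

def pca_images(images, threshold=0.8, subtract_mean_image=False):
    """PCA over a stack of images: rows are input images, columns are pixels."""
    X = np.asarray(images, dtype=float)
    if subtract_mean_image:
        X = X - X.mean(axis=0)                 # step 1: optional mean-image subtraction
    X = X - X.mean(axis=1, keepdims=True)      # step 2: zero-mean each image
    cov = X @ X.T / X.shape[1]                 # step 3: covariance matrix of the images
    vals, vecs = np.linalg.eigh(cov)           # step 4: eigenvalue decomposition
    order = np.argsort(vals)[::-1]             # largest eigenvalue first
    vals, vecs = vals[order], vecs[:, order]
    # step 5: keep the smallest leading set of eigenvectors whose eigenvalues
    # account for at least `threshold` of the eigenvalue total
    frac = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(frac, threshold) + 1)
    return vecs[:, :k].T @ X                   # k PCA images
```

With three highly correlated inputs and an 80% threshold, only one PCA image survives, matching the eigenvalue-selection example above.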
Parameters Used
The following parameters are used by the operator:
1. Source Bands: All bands (real or virtual) of the source product. You may select one
or more bands for performing PCA. If no bands are selected, all bands are used by
default.
2. Eigenvalue Threshold: The threshold used in the eigenvalue selection for producing
the final PCA images.
3. Show eigenvalues: Checkbox indicating whether the eigenvalues are displayed
automatically.
4. Subtract Mean Image: Checkbox indicating whether the mean image of the selected
input images is subtracted from each input image before Principal Component
Analysis is applied.
Expectation Maximization (EM) Cluster Analysis
Introduction
Cluster analysis (or clustering) is the classification of objects into different groups, or more precisely, the
partitioning of a data set into subsets (clusters or classes), so that the data in each subset (ideally) share
some common trait - often proximity according to some defined distance measure. Data clustering is a
common technique for statistical data analysis, which is used in many fields, including machine learning,
data mining, pattern recognition, image analysis and bioinformatics. The computational task of classifying
the data set into k clusters is often referred to as k-clustering.
Algorithm
The EM algorithm can be regarded as a generalization of the k-means algorithm. The main differences are:
1. Pixels are not assigned to clusters. The membership of each pixel to a cluster is defined by a (posterior)
probability. For each pixel, there are as many (posterior) probability values as there are clusters and for
each pixel the sum of (posterior) probability values is equal to unity.
2. Clusters are defined by a prior probability, a cluster center, and a cluster covariance matrix. Cluster
centers and covariance matrices determine a Mahalanobis distance between a cluster center and a pixel.
3. For each cluster a pixel likelihood function is defined as a normalized Gaussian function of the
Mahalanobis distance between cluster center and pixels.
4. Posterior cluster probabilities as well as cluster centers and covariance matrices are recalculated
iteratively. In the E-step, for each cluster, the cluster prior and posterior probabilities are recalculated.
In the M-step, all cluster centers and covariance matrices are recalculated from the updated posteriors,
so that the resulting data likelihood function is maximized.
5. When the iteration is completed, each pixel is assigned to the cluster where the posterior probability is
maximal.
The algorithm is described in detail on the Wikipedia entry on Expectation maximization. Use this algorithm
when you want to perform a cluster analysis of a small scene or region-of-interest and are not satisfied with
the results obtained from the k-means algorithm.
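The E- and M-steps can be sketched in NumPy for the one-dimensional case. This is an illustrative sketch, not the operator's implementation: it assumes a single band, and for stability it initialises the cluster centres from quantiles rather than from the random seed the operator uses.

```python
import numpy as np

def em_gmm(x, k, iterations=30):
    """1-D EM for a Gaussian mixture: returns posteriors and hard assignments."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # initial cluster centres
    var = np.full(k, x.var())                        # initial variances
    prior = np.full(k, 1.0 / k)                      # cluster prior probabilities
    for _ in range(iterations):
        # E-step: posterior probability of each cluster for each pixel, from the
        # normalized Gaussian likelihood of the (1-D) distance to each centre
        lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = lik * prior
        post /= post.sum(axis=1, keepdims=True)      # per pixel, posteriors sum to one
        # M-step: re-estimate priors, centres and variances from the posteriors
        nj = post.sum(axis=0)
        prior = nj / len(x)
        mu = (post * x[:, None]).sum(axis=0) / nj
        var = np.maximum((post * (x[:, None] - mu) ** 2).sum(axis=0) / nj, 1e-12)
    return post, post.argmax(axis=1)                 # final hard assignment (step 5)
```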
The result of the cluster analysis is written to a band named class_indices. The values in this band indicate
the class indices, where a value '0' refers to the first cluster, a value of '1' refers to the second cluster, etc.
The class indices are sorted according to the prior probability associated with each cluster, i.e. a class index
of '0' refers to the cluster with the highest prior probability.
Note that an index coding is attached to the class_indices band, which can be edited in the Color
Manipulation Window. It is possible to change the label and the color associated with a class index. The last
columns of the Color Manipulation window list the locations of the cluster centers. Further information on the
clusters is listed in the Cluster-Analysis group of the product metadata.
User Interface
The EM cluster analysis tool can be invoked by selecting the EM Cluster Analysis command in the Image
Analysis submenu. In the command line it is available by means of the Graph Processing Tool gpt. Please
type gpt EMClusterAnalysis -h for further information.
Selecting the EM Cluster Analysis command from the Image Analysis menu pops up the following dialog:
Button Group
Run Creates the target product. The cluster analysis is actually deferred until its band data are accessed,
either by writing the product to a file or by viewing its band data. When the Save as option is checked,
the cluster analysis is triggered automatically.
Close Closes the dialog.
Help Displays this page in the Help.
Further information
A good starting point for obtaining further information on cluster analysis terms and algorithms is the
Wikipedia entry on data clustering.
K-Means Cluster Analysis
Introduction
Cluster analysis (or clustering) is the classification of objects into different groups, or more precisely, the
partitioning of a data set into subsets (clusters or classes), so that the data in each subset (ideally) share
some common trait - often proximity according to some defined distance measure. Data clustering is a
common technique for statistical data analysis, which is used in many fields, including machine learning,
data mining, pattern recognition, image analysis and bioinformatics. The computational task of classifying
the data set into k clusters is often referred to as k-clustering.
Algorithm
The k-means clustering tool is capable of working with arbitrarily large scenes. Given the number of clusters
k, the basic algorithm implemented is:
1. Randomly choose k pixels whose samples define the initial cluster centers.
2. Assign each pixel to the nearest cluster center as defined by the Euclidean distance.
3. Recalculate the cluster centers as the arithmetic means of all samples from all pixels in a cluster.
4. Repeat steps 2 and 3 until the convergence criterion is met.
The convergence criterion is met when the maximum number of iterations specified by the user is exceeded
or when the cluster centers do not change between two iterations. This algorithm should be your primary
choice for performing a cluster analysis; it is strongly recommended for the analysis of large scenes.
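The steps above can be sketched in a few lines of Python (illustrative only; the operator works tile-wise on arbitrarily large scenes, and all names here are hypothetical). For determinism this sketch takes the first k samples as initial centers, whereas the operator chooses k random pixels:

```python
def k_means(samples, k, max_iter=100):
    """Basic k-means sketch on n-dimensional sample tuples."""
    # The operator picks k random pixels; we take the first k for determinism.
    centers = [tuple(map(float, s)) for s in samples[:k]]
    for _ in range(max_iter):
        # Assign each sample to the nearest center (Euclidean distance).
        clusters = [[] for _ in range(k)]
        for s in samples:
            d = [sum((a - b) ** 2 for a, b in zip(s, c)) for c in centers]
            clusters[d.index(min(d))].append(s)
        # Recompute each center as the arithmetic mean of its members.
        new_centers = [
            tuple(sum(v) / len(c) for v in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
        if new_centers == centers:  # centers unchanged -> converged
            break
        centers = new_centers
    # Sort clusters by size, largest first, as for the class_indices band.
    order = sorted(range(k), key=lambda i: -len(clusters[i]))
    return [centers[i] for i in order], [clusters[i] for i in order]
```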
The result of the cluster analysis is written to a band named class_indices. The values in this band indicate
the class indices, where a value '0' refers to the first cluster, a value of '1' refers to the second cluster, etc.
The class indices are sorted according to the number of members in the corresponding cluster, i.e. a class
index of '0' refers to the cluster with the most members.
Note that an index coding is attached to the class_indices band, which can be edited in the Color
Manipulation Window. It is possible to change the label and the color associated with a class index. The last
columns of the Color Manipulation window list the locations of the cluster centers. The cluster centers are
also listed in the Cluster-Analysis group of the product metadata.
User Interface
The k-means cluster analysis tool can be invoked by selecting the K-Means Cluster Analysis command in
the Image Analysis submenu. In the command line it is available by means of the Graph Processing Tool
gpt. Please type gpt KMeansClusterAnalysis -h for further information.
Selecting the K-Means Cluster Analysis command from the Image Analysis menu pops up the following
dialog:
Button Group
Run Creates the target product. The cluster analysis is actually deferred until its band data are accessed,
either by writing the product to a file or by viewing its band data. When the Save as option is checked,
the cluster analysis is triggered automatically.
Close Closes the dialog.
Help Displays this page in the Help.
Further information
A good starting point for obtaining further information on cluster analysis terms and algorithms is the
Wikipedia entry on data clustering.
Data Analysis
1. Mean
2. Standard Deviation
3. Coefficient of Variation
4. Equivalent Number of Looks
Create Stack
When two products are collocated, the band data of the slave product is resampled into the
geographical raster of the master product. In order to establish a mapping between the samples
in the master and the slave rasters, the geographical position of a master sample is used to find
the corresponding sample in the slave raster. If there is no sample for a requested geographical
position, the master sample is set to the no-data value which was defined for the slave band. The
collocation algorithm requires accurate geopositioning information for both master and slave
products. When necessary, accurate geopositioning information may be provided by ground
control points. The metadata for the stack product is copied from the metadata of the master
product.
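The per-sample lookup described above can be sketched as follows. This is a hypothetical sketch: slave_lookup stands in for the geocoding-based mapping from a geographic position to a slave raster index, and the no-data value is an assumed placeholder:

```python
NO_DATA = -9999.0  # assumed no-data value defined for the slave band

def collocate(master_geo, slave_lookup, slave_band):
    """Resample slave samples onto the master raster: for each master
    sample's geographic position, look up the corresponding slave raster
    index; where none exists, write the slave band's no-data value."""
    out = []
    for pos in master_geo:       # geographic position of each master sample
        idx = slave_lookup(pos)  # position -> slave raster index, or None
        out.append(slave_band[idx] if idx is not None else NO_DATA)
    return out
```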
Output Extents
User can select one of the following three extents for the collocated images:
● Master: the master image extents (Master extents are always used with "None" resampling)
● Maximum: both the common coverage and the non-overlapping areas
● Minimum: only the common coverage
Parameters Used
The following parameters are used by this operator:
1. Master Band: All bands (real or virtual) of the selected product. User can select one band (for a
real image) or two bands (i and q bands for a complex image) as the master band for co-registration.
2. Slave Band: All bands (real or virtual) of the selected product. User can select one band (for a
real image) or two bands (i and q bands for a complex image) as the slave band for co-registration.
3. Resampling Type: It specifies the resampling method.
4. Output Extents: The output image extent.
Product Subset
Subset Operator
If you are not interested in the whole image of a product, you may specify an area of the
product to be loaded. You can select the area by entering the top left corner and the width and
height. You can also specify a sub-sampling in the X or Y directions.
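The subset definition above maps directly onto Python slicing. A minimal sketch for a list-of-lists raster (the operator also subsets tie-point grids and metadata, which this ignores):

```python
def subset(image, x, y, width, height, step_x=1, step_y=1):
    """Extract the subset whose top-left corner is (x, y) with the given
    width and height, optionally sub-sampled in X and/or Y."""
    return [row[x:x + width:step_x] for row in image[y:y + height:step_y]]
```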
Band Arithmetic
The Convert Datatype operator converts the data type of a product's bands, rescaling the sample
values to fit the dynamic range of the target type. It can convert between the following data types:
● 8-bit integer
● 16-bit integer
● 32-bit integer
● 32-bit float
● complex integer (16 bits + 16 bits)
● complex float (32 bits + 32 bits)
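The manual does not spell out the scaling formula here; as an illustration, a linear min-max rescale of floating-point samples into an 8-bit integer range could look like this (a hypothetical sketch; the operator may instead truncate or apply a user-selected scaling):

```python
def convert_datatype(samples, out_min=0, out_max=255):
    """Linearly rescale samples into an integer output range, e.g. 8-bit."""
    lo, hi = min(samples), max(samples)
    if hi == lo:
        # Constant input: map everything to the bottom of the output range.
        return [out_min] * len(samples)
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (s - lo) * scale) for s in samples]
```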
Oversample
Oversample Operator
This operator upsamples a real or complex image through frequency-domain zero-padding.
The algorithm takes the value of the Doppler centroid frequency into account when
padding the azimuth spectrum. For a real input image the upsampled image is also real,
and for a complex input image the upsampled image is complex.
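The frequency-domain zero-padding can be illustrated in one dimension with NumPy. This is a sketch only: it ignores the Doppler-centroid-aware placement of the azimuth spectrum described above and does not split the Nyquist bin:

```python
import numpy as np

def upsample_fft(signal, factor):
    """Upsample a 1-D signal by zero-padding its spectrum."""
    n = len(signal)
    spec = np.fft.fft(signal)
    padded = np.zeros(n * factor, dtype=complex)
    half = n // 2
    padded[:half] = spec[:half]          # positive frequencies
    padded[-(n - half):] = spec[half:]   # negative frequencies (and Nyquist)
    # Scale so sample amplitudes are preserved after the inverse FFT.
    return np.fft.ifft(padded).real * factor
```

Every original sample reappears at every `factor`-th position of the output, with interpolated samples in between.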
Parameters Used
If the upsampled image is output by image size, then the following parameters are used by
the operator:
1. Source Bands: All bands (real or virtual) of the source product. User can select one or
more bands for producing upsampled images. If no bands are selected, then by default
all bands are selected.
2. Output Image By: The method for determining upsampled image dimension.
3. Output Image Rows: The row size of the upsampled image.
4. Output Image Columns: The column size of the upsampled image.
5. Use PRF Tile Size: Checkbox indicating that the processing tile size is PRF by image width.
If not checked, a system-computed tile size is used. For large images, the system-computed
tile size should be used to avoid memory problems.
If the upsampled image is output by image dimension ratio, then the following parameters
are used by the operator:
1. Source Bands: All bands (real or virtual) of the source product. User can select one or
more bands for producing upsampled images. If no bands are selected, then by default
all bands are selected.
2. Output Image By: The method for determining upsampled image dimension.
3. Width Ratio: The ratio of the upsampled image width and the source image width.
4. Height Ratio: The ratio of the upsampled image height and the source image height.
5. Use PRF Tile Size: Checkbox indicating that the processing tile size is PRF by image width.
If not checked, a system-computed tile size is used. For large images, the system-computed
tile size should be used to avoid memory problems.
If the upsampled image is output by pixel spacing, then the following parameters are used
by the operator:
1. Source Bands: All bands (real or virtual) of the source product. User can select one or
more bands for producing upsampled images. If no bands are selected, then by default
all bands are selected.
2. Output Image By: The method for determining upsampled image dimension.
3. Range Spacing: The range pixel spacing of the upsampled image.
4. Azimuth Spacing: The azimuth pixel spacing of the upsampled image.
5. Use PRF Tile Size: Checkbox indicating that the processing tile size is PRF by image width.
If not checked, a system-computed tile size is used. For large images, the system-computed
tile size should be used to avoid memory problems.
Undersample
Undersample Operator
This operator downsamples a real or complex image using either a sub-sampling method or a
low-pass filtering method.
Undersampling Method
Low-Pass Kernel
● The pre-defined low-pass kernel is available in three sizes: 3x3, 5x5 and 7x7.
● The elements of the low-pass kernel are all 1's.
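The two methods can be sketched as follows: plain sub-sampling keeps every n-th pixel, while the low-pass method first averages each pixel over an all-ones kernel and then sub-samples. Border windows are clipped to the image here, which may differ from the operator's border handling:

```python
def undersample(image, step):
    """Sub-sampling method: keep every `step`-th pixel in each direction."""
    return [row[::step] for row in image[::step]]

def lowpass_then_sample(image, k, step):
    """Low-pass filtering method: average over a kxk all-ones kernel,
    then sub-sample the smoothed image."""
    h, w = len(image), len(image[0])
    r = k // 2
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            win = [image[y][x]
                   for y in range(max(0, i - r), min(h, i + r + 1))
                   for x in range(max(0, j - r), min(w, j + r + 1))]
            row.append(sum(win) / len(win))
        out.append(row)
    return undersample(out, step)
```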
Parameters Used
If the sub-sampling method is selected for the downsampling, then the following
parameters are used by the operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for producing downsampled images. If no bands are selected, then by
default all bands are selected.
2. Under-Sampling Method: Sub-Sampling method.
3. Sub-Sampling in X: User provided sub-sampling rate in range.
4. Sub-Sampling in Y: User provided sub-sampling rate in azimuth.
If the Lowpass Filtering method is selected for the downsampling, and the downsampled
image is output by image size, then the following parameters are used by the operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for producing downsampled images. If no bands are selected, then by
default all bands are selected.
2. Under-Sampling Method: Kernel Filtering method.
3. Filter Size: The lowpass filter size.
4. Output Image By: The method for determining output image dimension.
5. Output Image Rows: The row size of the downsampled image.
6. Output Image Columns: The column size of the downsampled image.
If the Lowpass Filtering method is selected for the downsampling, and the downsampled
image is output by image dimension ratio, then the following parameters are used by the
operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for producing downsampled images. If no bands are selected, then by
default all bands are selected.
2. Under-Sampling Method: Kernel Filtering method.
3. Filter Size: The lowpass filter size.
4. Output Image By: The method for determining output image dimension.
5. Width Ratio: The ratio of the downsampled image width and the source image width.
6. Height Ratio: The ratio of the downsampled image height and the source image height.
If the Lowpass Filtering method is selected for the downsampling, and the downsampled
image is output by pixel spacing, then the following parameters are used by the operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for producing downsampled images. If no bands are selected, then by
default all bands are selected.
2. Under-Sampling Method: Lowpass Filtering method.
3. Filter Size: The lowpass filter size.
4. Output Image By: The method for determining output image dimension.
5. Range Spacing: The range pixel spacing of the downsampled image in meters.
6. Azimuth Spacing: The azimuth pixel spacing of the downsampled image in meters.
Apply Orbit File
● For ASAR products, DORIS precise orbit files generated by the Centre de Traitement
Doris Poseidon (CTDP) and Delft University can be applied. They provide the satellite
positions and velocities in ECEF coordinates every 60 seconds.
● For ERS products, DELFT precise orbit files generated by the Delft Institute for Earth-Oriented
Space Research (DEOS) can be applied. They provide the satellite ephemeris
(latitude, longitude, height) every 60 seconds. The operator first converts the satellite
positions from (latitude, longitude, height) to ECEF coordinates, then computes the
velocities numerically.
● Also for ERS products, PRARE precise orbit files generated by Delft University can be
applied. They provide the same information every 30 seconds.
● DELFT orbit files can be downloaded automatically from the DELFT FTP server.
● DORIS and PRARE orbit files must be manually downloaded and placed in the folders
specified in the settings dialog.
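The (latitude, longitude, height) to ECEF conversion mentioned for the DELFT orbit files can be sketched as follows, assuming the WGS84 ellipsoid (the operator's exact datum is not stated in this section):

```python
import math

# WGS84 ellipsoid constants (assumed for this sketch).
A = 6378137.0            # semi-major axis [m]
F = 1.0 / 298.257223563  # flattening
E2 = F * (2 - F)         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic (latitude, longitude, height) to ECEF (x, y, z) in meters."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude.
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z
```

Velocities could then be estimated numerically by differencing successive 60-second positions, as the operator does.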
Parameters Used
The following parameters are used by the operator:
1. Orbit Type: User can select the type of orbit file for the application. Currently the
following orbit file types are supported:
❍ DORIS_VOR
❍ DORIS_POR
❍ DELFT_PRECISE_ENVISAT
❍ DELFT_PRECISE_ERS_1
❍ DELFT_PRECISE_ERS_2
❍ PRARE_PRECISE_ERS_1
❍ PRARE_PRECISE_ERS_2
Calibration
Calibration Operator
The objective of SAR calibration is to provide imagery in which the pixel values can be directly related to the radar
backscatter of the scene. Though uncalibrated SAR imagery is sufficient for qualitative use, calibrated SAR images
are essential to quantitative use of SAR data.
Typical SAR data processing, which produces level 1 images, does not include radiometric corrections and
significant radiometric bias remains. Therefore, it is necessary to apply the radiometric correction to SAR images
so that the pixel values of the SAR images truly represent the radar backscatter of the reflecting surface. The
radiometric correction is also necessary for the comparison of SAR images acquired with different sensors, or
acquired from the same sensor but at different times, in different modes, or processed by different processors.
This Operator performs different calibrations for ASAR, ERS, ALOS and Radarsat-2 products deriving the sigma
nought images. Optionally gamma nought and beta nought images can also be created.
Product Supported
● ASAR (IMS, IMP, IMM, APP, APS, APM, WSM) and ERS products (SLC, IMP) are fully supported
● Third-party SAR missions: not fully supported. Please refer to the Supported_Products_4A.xls file available from
http://liferay.array.ca:8080/web/nest/documentation
ASAR Calibration
For ground range detected products, the following corrections are applied:
● incidence angle
● absolute calibration constant
In the event that the antenna pattern used to process an ASAR product is superseded, the operator removes the
original antenna pattern and applies a new, updated one. The old antenna pattern gain data are obtained from the
external XCA file specified in the metadata of the source product. The new antenna pattern gain data and
the calibration constant are obtained from the user-specified XCA file. For XCA file selection, user has the following
options:
● latest auxiliary file (the most recent XCA file available in the local repository)
● product auxiliary file (the XCA file specified in the product metadata)
● external auxiliary file (user provided XCA file)
If "product auxiliary file" is selected, then no retro-calibration is performed, i.e. no antenna pattern gain is
removed or applied. By default the latest XCA file available for the product is used.
For slant range complex products, the following corrections are applied:
● incidence angle
● absolute calibration constant
● range spreading loss
● antenna pattern gain
The antenna pattern gain data and the calibration constant are obtained from the user specified XCA file. For XCA
file selection, user has the following options:
● latest auxiliary file (the most recent XCA file available in the local repository)
● external auxiliary file (user provided XCA file)
By default, the latest auxiliary file available for the product will be used for the calibration.
The default output of calibration is the sigma0 image. User can also select gamma0 and beta0 images to be output
as virtual bands in the target product.
In the following, the calibration process is described for the different types of ASAR products.
IMS Products
The sigma nought image can be derived from ESA's ASAR level 1 IMS products as follows (adapted from [1]).
APS Products
The methodology used to derive the sigma nought is the same as for the IMS data, but an additional factor
(R / Rref) must be taken into account.
ERS Calibration
The operator is able to calibrate ERS VMP, ERS PGS CEOS and ERS PGS ENVISAT ESA standard products
generated by the different ESA Processing and Archiving Facilities, such as the German PAF (D-PAF), the Italian PAF
(I-PAF) and the United Kingdom PAF (UK-PAF), and at acquisition stations such as PDHS-K (Kiruna) and
PDHS-E (Esrin).
For ERS-1 ground range product, the following corrections are applied:
● incidence angle
● calibration constant
● replica pulse power variations
● analogue to digital converter non-linearity
For ERS-1 slant range product, the following corrections are applied:
● incidence angle
● calibration constant
● analogue to digital converter non-linearity
● antenna elevation pattern
● range spreading loss
For ERS-2 ground range product, the following corrections are applied:
● incidence angle
● calibration constant
● analogue to digital converter non-linearity
For ERS-2 slant range product, the following corrections are applied:
● incidence angle
● calibration constant
● analogue to digital converter non-linearity
● antenna elevation pattern
● range spreading loss
ALOS PALSAR Calibration
For ALOS PALSAR products, the operator applies only the absolute calibration constant correction.
For the detailed ALOS PALSAR product calibration algorithm, the reader is referred to [3].
Radarsat-2 Calibration
The operator performs absolute radiometric calibration for Radarsat 2 products by applying the sigma0, beta0 and
gamma0 look up tables provided in the product.
For detailed Radarsat-2 product calibration algorithm, reader is referred to [4].
TerraSAR-X
The operator performs absolute radiometric calibration for TerraSAR-X products by applying the simplified
approach where Noise Equivalent Beta Naught is neglected. Only Calibration constant correction and Incidence
angle correction are applied.
For detailed TerraSAR-X product calibration algorithm, reader is referred to [5].
Cosmo-SkyMed
The operator performs absolute radiometric calibration for Cosmo-SkyMed products by applying a few product-factor
corrections.
For detailed Cosmo-SkyMed product calibration algorithm, reader is referred to [6].
Parameters Used
The parameters used by the operator are as follows:
1. Source Band: All bands (real or virtual) of the source product. User can select one or more bands for
calibration. If no bands are selected, then by default, all bands are used for calibration. The operator is able to
detect the right input band.
2. Auxiliary File: User selected XCA file for antenna pattern correction. The following options are available: Latest
Auxiliary File, Product Auxiliary File (for detected product only) and External Auxiliary File. By default, the
Latest Auxiliary File is used.
3. Scale in dB: Checkbox indicating that the calibrated product is saved in dB scale. If not checkmarked, then
the product is saved in linear scale.
4. Create gamma0 virtual band: Checkbox indicating that gamma0 image is created as a virtual band. If not
checkmarked, no gamma0 image is created.
5. Create beta0 virtual band: Checkbox indicating that beta0 image is created as a virtual band. If not
checkmarked, no beta0 image is created.
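For ground range detected products, the corrections listed above reduce to an incidence-angle term and the absolute calibration constant. A minimal sketch, assuming the common DN²/K·sin(α) form (the operator's exact equations, including range spreading loss and antenna pattern terms for slant range products, are given in [1]), together with the "Scale in dB" conversion:

```python
import math

def sigma0_linear(dn, k, incidence_deg):
    """Simplified sigma nought from a detected pixel value: DN^2 / K * sin(incidence).
    K is the absolute calibration constant (assumed form, see [1])."""
    return (dn ** 2 / k) * math.sin(math.radians(incidence_deg))

def to_db(sigma0):
    """Convert a linear sigma0 value to decibels (the 'Scale in dB' option)."""
    return 10.0 * math.log10(sigma0)
```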
Reference:
[1] Rosich B., Meadows P., Absolute calibration of ASAR Level 1 products, ESA/ESRIN, ENVI-CLVL-EOPG-TN-03-
0010, Issue 1, Revision 5, October 2004
[2] Laur H., Bally P., Meadows P., Sánchez J., Schättler B., Lopinto E. & Esteban D., ERS SAR Calibration:
Derivation of σ0 in ESA ERS SAR PRI Products, ESA/ESRIN, ES-TN-RS-PM-HL09, Issue 2, Rev. 5f, November 2004
[3] Lavalle M., Absolute Radiometric and Polarimetric Calibration of ALOS PALSAR Products, Issue 1, Revision 2,
01/04/2008
[4] RADARSAT Data Product Specification, RSI-GS-026, Revision 3, May 8, 2000
[5] Radiometric Calibration of TerraSAR-X data - TSXX-ITD-TN-0049-radiometric_calculations_I1.00.doc, 2008
[6] For further details about Cosmo-SkyMed calibration please contact Cosmo-SkyMed Help Desk at info.cosmo@e-
geos.it
Remove Antenna Pattern
This operator removes the antenna pattern and range spreading loss corrections applied to the
original ASAR and ERS products. For ERS products, it also removes the replica pulse power
correction and applies the analogue-to-digital converter (ADC) power loss correction. This
operator cannot be applied to multilooked products. Details of the functions of the operator
are given below.
ASAR Products
For ground range detected products, the following corrections are removed:
● antenna pattern gain
● range spreading loss
For slant range complex products, such as ASAR IMS and APS products, no antenna pattern
or range spreading loss correction has been applied, therefore the operator is not applicable to
these products.
ERS Products
For ground range products, the following operations are performed:
● remove the antenna pattern gain correction
● remove the range spreading loss correction
● remove the replica pulse power correction
● apply the analogue-to-digital converter (ADC) power loss correction
Other Products
For other products, such as ALOS PALSAR and RadarSAT-2 products, the operator is not
applicable.
The parameter used by the operator is as follows:
1. Source Band: All bands (amplitude or intensity) of the source product. User can select
one or more bands. If no bands are selected, then by default, all bands are used for
the operation. The operator is able to detect the right input band.
Reference:
[1] Rosich B., Meadows P., Absolute calibration of ASAR Level 1 products, ESA/ESRIN,
ENVI-CLVL-EOPG-TN-03-0010, Issue 1, Rev. 5, October 2004
[2] Laur H., Bally P., Meadows P., Sánchez J., Schättler B., Lopinto E. & Esteban D., ERS
SAR Calibration: Derivation of σ0 in ESA ERS SAR PRI Products, ESA/ESRIN, ES-TN-RS-PM-
HL09, Issue 2, Rev. 5f, November 2004
GCP Selection
The co-registration is accomplished through two major processing steps: GCP selection and WARP. In GCP
selection, a set of uniformly spaced Ground Control Points (GCPs) in the master image are generated first,
then their corresponding GCPs in the slave image are computed. In WARP processing step, these GCP pairs
are used to construct a WARP distortion function, which establishes a map between pixels in the master
and slave images. With the WARP function computed, the co-registered image is generated by mapping
the slave image pixels onto the master image.
This operator computes slave GCPs by coarse registration, or by coarse and fine registration, depending on
whether the input images are real or complex. For real input images, only coarse registration is performed,
while for complex images both coarse and fine registration are performed. The fine registration uses an image
coherence technique to further increase the precision of the GCPs.
Coarse Registration
The coarse registration is achieved using a cross-correlation operation between the images on a series of
imagettes defined across the images. The major processing steps are listed as follows:
1. For a given master GCP, find initial slave GCP using geographical position information of GCP.
2. Determine the imagettes surrounding the master and slave GCPs using user selected coarse
registration window size.
3. Compute new slave GCP position by performing cross-correlation of the master and slave imagettes.
4. If the row or column shift of the new slave GCP from the previous position is not less than the user-selected
GCP tolerance and the maximum number of iterations has not been reached, then move the slave
imagette to the new GCP position and go back to step 3. Otherwise, save the new slave GCP and
stop.
GCPs for which the maximum number of iterations has been reached, or whose final shift is still
greater than the tolerance, are eliminated as invalid GCPs.
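The cross-correlation in step 3 can be illustrated with a brute-force search over integer shifts. This is a sketch only: the operator correlates upsampled imagettes to reach sub-pixel accuracy (see the interpolation-factor parameters), and all names here are hypothetical:

```python
def best_shift(master, slave, max_shift):
    """Find the integer (row, col) shift of `slave` that best matches
    `master` by maximizing the correlation over a small search window."""
    h, w = len(master), len(master[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for i in range(h):
                for j in range(w):
                    y, x = i + dy, j + dx
                    # Only correlate samples that fall inside the slave imagette.
                    if 0 <= y < h and 0 <= x < w:
                        score += master[i][j] * slave[y][x]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```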
Fine Registration
The additional fine registration for complex images is achieved by maximizing the complex coherence
between the images at a series of imagettes defined across the images. It is assumed that the coarse
registration has been performed before this operation. The major processing steps are given below:
1. For each given master-slave GCP pair, get complex imagettes surrounding the master and slave
GCPs using user selected coarse registration window size.
2. Compute initial coherence of the two imagettes.
3. Starting from the initial slave GCP position, the best sub-pixel shift of the slave GCP is computed such
that the slave imagette at the new GCP position gives the maximum coherence with the master
imagette. Powell's method is used in the optimization [1].
This processing step is optional for complex image co-registration and user can skip it by
uncheckmarking the "Apply fine Registration" box in the dialog box.
Coherence Computation
Given master imagette I1 and slave imagette I2, there are two ways to compute the coherence of the
two complex imagettes.
● Method 1: Let I1 and I2 be RxC imagettes and denote by I2* the complex conjugate of I2. Then the
coherence is computed by
coherence = |sum(I1 · I2*)| / sqrt(sum(|I1|²) · sum(|I2|²))
where each sum is taken over all RxC pixels of the imagettes.
● Method 2: The coherence is computed with a 3x3 (user can change the size) sliding window in two
steps:
1. First for each pixel in the imagette, a 3x3 window centered at the pixel is determined for both
master and slave imagettes, and coherence is computed for the two windows using equation
above.
2. Average coherences computed for all pixels in the imagette to get the final coherence for the
imagette.
User can select the method to use by selecting radio button "Compute Coherence with Sliding Window".
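Method 1 can be sketched directly on flattened lists of complex imagette pixels, following the standard coherence definition |Σ I1·I2*| / sqrt(Σ|I1|² · Σ|I2|²):

```python
def coherence(i1, i2):
    """Complex coherence of two equal-size imagettes given as flat
    lists of complex pixel values (Method 1)."""
    num = sum(a * b.conjugate() for a, b in zip(i1, i2))
    den = (sum(abs(a) ** 2 for a in i1) * sum(abs(b) ** 2 for b in i2)) ** 0.5
    return abs(num) / den
```

Identical imagettes (even under a constant phase offset) give coherence 1; uncorrelated imagettes give values near 0.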
Parameters Used
The parameters used by the Operator are as follows:
1. Number of GCPs: The total number of GCPs used for the co-registration.
2. Coarse Registration Window Width: The window width for cross-correlation in coarse GCP selection.
It must be a power of 2.
3. Coarse Registration Window Height: The window height for cross-correlation in coarse GCP
selection. It must be a power of 2.
4. Row Interpolation Factor: The row upsampling factor used in the cross-correlation operation. It must
be a power of 2.
5. Column Interpolation Factor: The column upsampling factor used in the cross-correlation operation.
It must be a power of 2.
6. Max Iterations: The maximum number of iterations for computing coarse slave GCP position.
7. GCP Tolerance: The stopping criterion for slave GCP selection.
8. Apply fine Registration: Checkbox indicating applying fine registration for complex image co-
registration.
9. Coherence Window Size: The dimension of the sliding window used in coherence computation.
10. Coherence Threshold: Only GCPs with coherence above this threshold will be used in co-registration.
11. Fine Registration Window Width: The window width for coherence calculation in fine GCP selection.
It must be a power of 2.
12. Fine Registration Window Height: The window height for coherence calculation in fine GCP selection.
It must be a power of 2.
13. Compute Coherence with Sliding Window: If selected, a sliding window with the dimension given in 9
will be used in the coherence computation. Otherwise, the coherence will be computed directly from all
pixels in the Fine Registration Window without using a sliding window.
Reference:
[1] William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling, Numerical Recipes in C:
The Art of Scientific Computing, second edition, 1992
Multilook
Multilook Operator
Generally, an original SAR image appears speckled due to inherent speckle noise. To reduce this
speckled appearance, several images are incoherently combined as if they
corresponded to different looks of the same scene. This processing is generally known as
multilook processing. The resulting multilooked image has improved interpretability.
Additionally, multilook processing can be used to produce an application product with a nominal
image pixel size.
Multilook Method
There are two ways to implement multilook processing: in the spatial domain or in the
frequency domain. This operator implements the spatial-domain multilook method by
averaging the single look image with a small sliding window.
● GR square pixel: the user specifies the number of range looks, while the number of
azimuth looks is computed from the ground range spacing and the azimuth
spacing. The window size is then determined by the number of range looks and the
number of azimuth looks. As a result, an image with approximately square pixel spacing
on the ground is produced.
● Independent looks: the number of looks in range and azimuth can be selected
independently. The window size is then determined by the number of range looks and
the number of azimuth looks.
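Spatial-domain multilooking averages non-overlapping windows of pixels, one window per output pixel. A minimal sketch for a detected (real-valued) image; the operator additionally handles complex data and pixel-spacing bookkeeping:

```python
def multilook(image, range_looks, azimuth_looks):
    """Average non-overlapping windows of `azimuth_looks` rows by
    `range_looks` columns; each window becomes one output pixel."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h - azimuth_looks + 1, azimuth_looks):
        row = []
        for j in range(0, w - range_looks + 1, range_looks):
            win = [image[y][x]
                   for y in range(i, i + azimuth_looks)
                   for x in range(j, j + range_looks)]
            row.append(sum(win) / len(win))
        out.append(row)
    return out
```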
Parameters Used
The following parameters are used by the operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for producing multilooked images. If no bands are selected, then by default
all bands are selected.
2. GR Square Pixel: If selected, the number of azimuth looks is computed based on the
user selected number of range looks, and range and azimuth spacings are
approximately the same in the multilooked image.
3. Independent Looks: If selected, the number of range looks and the number of azimuth
looks are selected independently by the user.
4. Number of Range Looks: The number of range looks.
5. Number of Azimuth Looks: The number of azimuth looks.
6. Mean GR Square Pixel: The average of the range and azimuth pixel spacings in the
multilooked image. It is computed based on the number of range looks, the number of
azimuth looks and the source image pixel spacings, and is available only when 'GR
Square Pixel' is selected.
7. Output Intensity: This checkbox is for complex product only. If not checked, any user
selected bands (I, Q, intensity or phase) are multilooked and output individually. If
checked, user can only select I/Q or intensity band and the output is multilooked
intensity band.
Reference: Small D., Schubert A., Guide to ASAR Geocoding, RSL-ASAR-GC-AD, Issue 1.0,
March 2008
Speckle Filter
Filters Supported
The operator supports the following speckle filters for handling speckle noise of
different distributions (Gaussian, multiplicative or Gamma):
● Mean
● Median
● Frost
● Lee
● Refined Lee
● Gamma-MAP
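As an illustration of the simplest of these, the Mean filter replaces each pixel by the average of the kernel window centered on it. A minimal sketch (border windows are clipped to the image here; the operator's exact border handling may differ):

```python
def mean_filter(image, size_x, size_y):
    """Mean speckle filter: each pixel becomes the average of the
    size_y x size_x window centered on it (clipped at image borders)."""
    h, w = len(image), len(image[0])
    rx, ry = size_x // 2, size_y // 2
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            win = [image[y][x]
                   for y in range(max(0, i - ry), min(h, i + ry + 1))
                   for x in range(max(0, j - rx), min(w, j + rx + 1))]
            row.append(sum(win) / len(win))
        out.append(row)
    return out
```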
Parameters Used
For most filters, the following parameters should be selected (see figure 1 for example):
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for producing filtered images. If no bands are selected, then by default all
bands will be selected. For complex product, only the intensity band can be selected.
2. Filter: The speckle filter.
3. Size X: The filtering kernel width.
4. Size Y: The filtering kernel height.
Figure 1. Dialog box for Mean filter.
For the Frost filter, one extra parameter should be selected (see figure 2):
1. Frost Damping Factor: The damping factor for the Frost filter.
For the Refined Lee filter, the following parameter should be selected (see Figure 3):
1. Edge Threshold: A threshold for detecting edges. An area of 7x7 pixels with local variance
lower than this threshold is considered flat, and the normal Local Statistics filter is used for
the filtering. If the local variance is greater than the threshold, then the area is
considered an edge area and the Refined Lee filter is used for the filtering.
Figure 3. Dialog box for Refined Lee filter.
Reference:
[1] J. S. Lee, E. Pottier, Polarimetric SAR Radar Imaging: From Basic to Applications, CRC
Press, Taylor & Francis Group, 2009.
[2] G. S. Robinson, “Edge Detection by Compass Gradient Masks”, Computer Graphics and
Image Processing, vol. 6, No. 5, Oct. 1977, pp 492-502.
[3] V. S. Frost, J. A. Stiles, K. S. Shanmugan, J. C. Holtzman, “A Model for Radar Images
and Its Application to Adaptive Digital Filtering of Multiplicative Noise”, IEEE Transactions
on Pattern Analysis and Machine Intelligence, Vol. PAMI-4, pp. 157-166, 1982
[4] Mansourpour M., Rajabi M.A., Blais J.A.R., “Effects and Performance of Speckle Noise
Reduction Filters on Active Radar and SAR Images”, http://people.ucalgary.ca/~blais/
Mansourpour2006.pdf
Multi-temporal Speckle Filter
The filtered images are computed as
J_k(x, y) = (E[I_k] / N) · sum over i = 1, ..., N of I_i(x, y) / E[I_i]
for k = 1, ..., N, where N is the number of images in the temporal sequence and E[I] is the
local mean value of pixels in a window centered at (x, y) in image I.
Pre-Processing Steps
The operator has the following two pre-processing steps:
1. The first step is calibration, in which σ0 is derived from the digital number at each pixel.
This ensures that values from different times and from different parts of the image are
comparable.
2. The second step is registration of the multitemporal images.
Here it is assumed that pre-processing has been performed before applying this operator.
The input to the operator is assumed to be a product with multiple calibrated and
co-registered bands.
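A sketch of the filter, assuming N calibrated, co-registered bands given as list-of-lists images (border windows are clipped here, which may differ from the operator's border handling; helper names are hypothetical):

```python
def local_mean(image, i, j, r):
    """Mean of the (2r+1)x(2r+1) window centered at (i, j), clipped at borders."""
    h, w = len(image), len(image[0])
    win = [image[y][x]
           for y in range(max(0, i - r), min(h, i + r + 1))
           for x in range(max(0, j - r), min(w, j + r + 1))]
    return sum(win) / len(win)

def multitemporal_filter(images, window=3):
    """Multi-temporal filter per Quegan et al.:
    J_k(x, y) = E[I_k]/N * sum_i I_i(x, y) / E[I_i]."""
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    r = window // 2
    # Local mean of every pixel in every image of the temporal sequence.
    means = [[[local_mean(img, i, j, r) for j in range(w)]
              for i in range(h)] for img in images]
    return [[[means[k][i][j] / n *
              sum(images[m][i][j] / means[m][i][j] for m in range(n))
              for j in range(w)] for i in range(h)] for k in range(n)]
```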
Parameters Used
The following parameters are used by the operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for producing the filtered image. If no bands are selected, then by default
all bands will be selected.
2. Window Size: Dimension of the sliding window that is used in computing spatial
average in each image of the temporal sequence. The supported window sizes are 3x3,
5x5, 7x7, 9x9 and 11x11.
Reference: S. Quegan, T. L. Toan, J. J. Yu, F. Ribbes and N. Floury, “Multitemporal ERS SAR
Analysis Applied to Forest Mapping”, IEEE Transactions on Geoscience and Remote Sensing,
vol. 38, no. 2, March 2000.
Warp
Warp Operator
The Warp operator is a component of co-registration. This operator computes a warp function from
the master-slave ground control point (GCP) pairs produced by the GCP Selection operator, and
generates the final co-registered image.
1. First a warp function is computed using the initial master-slave GCP pairs.
2. Then the master GCPs are mapped to the slave image with the warp function, and the
residuals between the mapped master GCPs and their corresponding slave GCPs are
computed. The root mean square (RMS) and the standard deviation for the residuals are also
computed.
3. Next, the master-slave GCP pairs are filtered with the mean RMS. GCP pairs with RMS greater
than the mean RMS are eliminated.
4. The same procedure (step 1 to 3) is repeated up to 2 times if needed and each time the
remaining master-slave GCP pairs from previous elimination are used.
5. Finally the master-slave GCP pairs are filtered with the user selected RMS threshold and the
final warp function is computed with the remaining master-slave GCP pairs.
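The iterative elimination in steps 1 to 5 can be sketched as follows (illustrative Python; `fit_warp` stands in for the polynomial fitting that NEST performs internally):

```python
import math

def filter_gcps(pairs, fit_warp, rms_threshold, max_iter=2):
    """Iteratively fit a warp and drop GCP pairs whose residual RMS exceeds
    the mean RMS, then apply the user-selected threshold.
    `pairs` is a list of ((mx, my), (sx, sy)) master/slave coordinates;
    `fit_warp(pairs)` returns a function mapping (mx, my) -> (x, y) in the slave."""
    for _ in range(max_iter):
        warp = fit_warp(pairs)
        rms = [math.hypot(warp(m)[0] - s[0], warp(m)[1] - s[1]) for m, s in pairs]
        mean_rms = sum(rms) / len(rms)
        kept = [p for p, r in zip(pairs, rms) if r <= mean_rms]
        if len(kept) == len(pairs):
            break  # no pair eliminated; stop early
        pairs = kept
    # final pass with the user-selected RMS threshold
    warp = fit_warp(pairs)
    rms = [math.hypot(warp(m)[0] - s[0], warp(m)[1] - s[1]) for m, s in pairs]
    pairs = [p for p, r in zip(pairs, rms) if r <= rms_threshold]
    return fit_warp(pairs), pairs
```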
Residual File
The residual file is a text file containing information about master and slave GCPs before and after
each elimination. The residual for a GCP pair is the error introduced by the warping function and
is a good indicator of the quality of the warp function. It is often very useful to check
the information contained in the residual file to see whether the co-registration process can be
considered successful. For example, the "RMS mean" value can be used as an
approximate figure of merit for the co-registration. The user can view the residual file by checking
the "Show Residuals" box in the dialog. The detailed information contained in the residual file is
listed below:
● Band name
● Warp coefficients
● Master GCP coordinates
● Slave GCP coordinates
● Row and column residuals
● Root mean square errors (RMS)
● Row residual mean
● Row residual standard deviation
● Column residual mean
● Column residual standard deviation
● RMS mean
● RMS standard deviation
Parameters Used
The following parameters are used by the operator:
1. RMS Threshold: The criterion for eliminating invalid GCPs. In general, the smaller the
threshold, the better the GCP quality, but the lower the number of GCPs.
2. Warp Polynomial Order: The degree of the warp polynomial.
3. Interpolation Method: The interpolation method used in computing the co-registered slave
image pixel values.
4. Show Residuals: Display GCP residual file if selected.
WSS Deburst
The first step in performing the azimuth debursting is to find all range lines belonging to the
same zero-Doppler time.
Produce intensities only: The complex data is converted to intensity and the peak of each
burst is used.
Average intensities: When producing intensities, the corresponding burst lines are
averaged.
If "Produce intensities only" is not selected, then complex data will be produced along
with virtual bands for intensity and phase.
WSS Mosaic
ASAR WSS products differ from conventional image products in that the data from the five
subswaths acquired by five antenna beams SS1 through SS5 are stored in separate image
records. The five WSS beams acquire data with a substantial overlap (typically several hundred
range samples, ~9 km). The incidence angle variation of 16 to 43 degrees across beams SS1
through SS5 creates large differences in the nominal near and far range backscatter intensities.
An ASAR WSS product is delivered as a single data file containing the subswath data records
arranged sequentially.
The user can use the Deburst, Calibrate, Detect and Mosaic graph for an ASAR WSS product. It will
deburst and split the WSS product into five subswath products, apply calibration and multilooking,
and then mosaic them back into one product.
SENTINEL-1 TOPSAR Deburst and Merge
For the TOPSAR IW and EW SLC products, each product consists of one image per swath
per polarization. IW products have 3 swaths and EW have 5 swaths. Each sub-swath image
consists of a series of bursts, where each burst was processed as a separate SLC image.
The individually focused complex burst images are included, in azimuth-time order, into a
single subswath image, with black-fill demarcation in between, similar to the ENVISAT
ASAR Wide ScanSAR SLC products.
For IW, a focused burst has a duration of 2.75 sec and a burst overlap of ~50-100
samples. For EW, a focused burst has a duration of 3.19 sec. Overlap increases in range
within a sub-swath.
Images for all bursts in all sub-swaths of an IW SLC product are re-sampled to a common
pixel spacing grid in range and azimuth. Burst synchronisation is ensured for both IW and
EW products.
Unlike ASAR WSS which contains large overlap between beams, for S-1 TOPSAR, the
imaged ground area of adjacent bursts will only marginally overlap in azimuth just enough
to provide contiguous coverage of the ground. This is due to the one natural azimuth look
inherent in the data.
For GRD products, the bursts are concatenated and sub-swaths are merged to form one
image. Bursts overlap minimally in azimuth and sub-swaths overlap minimally in range.
Bursts for all beams have been resampled to a common grid during azimuth post-
processing.
In the range direction, for each line in all sub-swaths with the same time tag, merge
adjacent sub-swaths. For the overlapping region in range, merge along the optimal sub-
swath cut. The optimal cut is defined from the Noise Equivalent Sigma Zero (NESZ) profiles
between two sub-swaths. The NESZ is provided in the product.
If the two NESZ profiles intersect inside the overlapping region, the position of the
intersection point is the optimal cut.
If the two profiles do not intersect, all the points in the overlapping region are taken from
the sub-swath that has the lowest NESZ over the overlap region.
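The optimal-cut rule can be sketched as follows (illustrative Python; `optimal_cut` is a hypothetical helper, with the two NESZ profiles assumed to be sampled on a common range grid across the overlap):

```python
def optimal_cut(nesz_a, nesz_b):
    """Pick the merge cut inside the overlap of two adjacent sub-swaths.
    Returns (cut_index, None) if the NESZ profiles intersect, or
    (None, take_b) if they do not, where take_b is True when the whole
    overlap should be taken from sub-swath B (the lower-NESZ one)."""
    diff = [a - b for a, b in zip(nesz_a, nesz_b)]
    for i in range(len(diff) - 1):
        # a zero or a sign change marks the intersection of the profiles
        if diff[i] == 0 or diff[i] * diff[i + 1] < 0:
            return i, None
    # no intersection: take everything from the sub-swath with lower NESZ
    take_b = sum(nesz_b) < sum(nesz_a)
    return None, take_b
```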
In the azimuth direction, bursts are merged according to their zero-Doppler time. Note that
the black-fill demarcation is not distinctly zero at the end or start of a burst; due to
resampling, the data fades in and out of zero. The merge time is determined by the average
of the zero-Doppler times of the last line of the first burst and the first line of the next
burst. For each range cell, the merge time is quantised to the nearest output azimuth cell
to eliminate any fading-to-zero data.
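The merge-time quantisation can be sketched as follows (illustrative Python; times are zero-Doppler seconds and `line_interval` is the output azimuth line spacing in seconds):

```python
def merge_line(t_last_prev, t_first_next, t0, line_interval):
    """Quantise the burst merge time (the average of the zero-Doppler times of
    the last line of one burst and the first line of the next) to the nearest
    output azimuth line index, counted from the product start time t0."""
    merge_time = 0.5 * (t_last_prev + t_first_next)
    return round((merge_time - t0) / line_interval)
```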
Create Elevation
Terrain Correction corrects the SAR image for topographic distortions.
Terrain Correction allows geometric overlays of data from different sensors and/or
geometries.
Orthorectification Algorithm
The Range Doppler Terrain Correction Operator implements the Range Doppler
orthorectification method [1] for geocoding SAR images from single 2D raster radar
geometry. It uses available orbit state vector information in the metadata or external
precise orbit (only for ERS and ASAR), the radar timing annotations, the slant to ground
range conversion parameters together with the reference DEM data to derive the precise
geolocation information.
Products Supported
● ASAR (IMS, IMP, IMM, APP, APM, WSM), ERS products (SLC, IMP), RADARSAT-2,
TerraSAR-X are fully supported.
● Some third party missions are not fully supported. Please refer to the
Supported_Mission-Product_vs_Operators_table.xls
DEM Supported
Currently, only DEMs with geographic coordinates (Plat, Plon, Ph) referred to the global
geodetic ellipsoid reference WGS84 (with height in meters) are properly supported.
Various types of Digital Elevation Models can be used (ACE, GETASSE30, ASTER,
SRTM 3Sec GeoTiff).
By default, the directory C:\AuxData with two sub-directories, DEMs and Orbits, is used to
store the DEMs. However, the AuxData directory and the DEMs sub-directory are customizable
from the Settings dialog (which can be found under the Edit tab in the main menu bar).
The location of the default DEMs must be specified in the dataPath field of the Settings
dialog in order to be properly used by the Terrain Correction operator.
The SRTM v.4 (3" tiles) from the Joint Research Centre FTP (xftp.jrc.it) will automatically
be downloaded in tiles for the area covered by the image to be orthorectified. The tiles will
be downloaded to the folder C:\AuxData\DEMs\SRTM_DEM\tiff.
The Test Connectivity functionality under the Help tab in the main menu bar allows the user
to verify whether the SRTM downloading is working properly.
Please note that for ACE and SRTM, the height information (referred to the EGM96 geoid)
is automatically corrected to obtain the height relative to the WGS84 ellipsoid. For the
ASTER DEM, this height correction is not yet applied.
Note also that the SRTM DEM covers the area between -60 and 60 degrees latitude. Therefore,
for orthorectification of products over high-latitude areas, a different DEM should be used.
The user can also use an external DEM file in GeoTIFF format which, as specified above, must
have geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid
reference WGS84 (with height in meters).
Pixel Spacing
Besides the default pixel spacing suggested from parameters in the metadata, the
user can specify the output pixel spacing for the orthorectified image.
The pixel spacing can be entered in either meters or degrees. If the pixel spacing in one
unit is entered, then the pixel spacing in the other unit is computed automatically.
The calculations of the pixel spacing in meters and in degrees are given by the following
equations:
pixelSpacingInDegree = pixelSpacingInMeter / EquatorialEarthRadius * 180 / PI;
pixelSpacingInMeter = pixelSpacingInDegree * PolarEarthRadius * PI / 180;
where EquatorialEarthRadius = 6378137.0 m and PolarEarthRadius = 6356752.314245 m
as given in WGS84.
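The two conversions can be written directly. Note that, exactly as in the equations above, the degree value is derived from the equatorial radius while the meter value uses the polar radius:

```python
import math

EQUATORIAL_EARTH_RADIUS = 6378137.0    # WGS84 semi-major axis, m
POLAR_EARTH_RADIUS = 6356752.314245    # WGS84 semi-minor axis, m

def spacing_in_degrees(spacing_m):
    """pixelSpacingInDegree = pixelSpacingInMeter / EquatorialEarthRadius * 180 / PI"""
    return spacing_m / EQUATORIAL_EARTH_RADIUS * 180.0 / math.pi

def spacing_in_meters(spacing_deg):
    """pixelSpacingInMeter = pixelSpacingInDegree * PolarEarthRadius * PI / 180"""
    return spacing_deg * POLAR_EARTH_RADIUS * math.pi / 180.0
```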
Projection Supported
Right now the following projections are supported:
● Geographic Lat/Lon
● Lambert Conformal Conic
● Stereographic
● Transverse Mercator
● UTM
● Universal Polar Stereographic North
● Universal Polar Stereographic South
Radiometric Normalization
This option implements a radiometric normalization based on the approach proposed by
Kellndorfer et al., TGRS, Sept. 1998, where

    σ0_norm = σ0 · sin(θDEM) / sin(θel)
In the current implementation, θDEM is the local incidence angle projected into the range
plane, defined as the angle between the incoming radiation vector and the surface normal
vector projected into the range plane [2]. The range plane is the plane formed by the
satellite position, the backscattering element position and the earth centre.
Note that among the σ0, γ0 and β0 bands output in the target product, only σ0 is a real band,
while γ0 and β0 are virtual bands expressed in terms of σ0 and the incidence angle. Therefore,
σ0 and the incidence angle are automatically saved and output if γ0 or β0 is selected.
For σ0 and γ0 calculation, by default the projected local incidence angle from DEM [2] (local
incidence angle projected into range plane) option is selected, but the option of incidence
angle from ellipsoid correction (incidence angle from tie points of the source product) is
also available.
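Since γ0 and β0 are virtual bands derived from σ0 and the incidence angle, they can be computed on the fly. A sketch using the standard relations β0 = σ0/sin θ and γ0 = σ0/cos θ (assumed here; the exact expressions NEST uses are stored in the virtual band definitions of the target product):

```python
import math

def beta0(sigma0, incidence_deg):
    """Beta-nought as a virtual band: sigma0 / sin(theta)."""
    return sigma0 / math.sin(math.radians(incidence_deg))

def gamma0(sigma0, incidence_deg):
    """Gamma-nought as a virtual band: sigma0 / cos(theta)."""
    return sigma0 / math.cos(math.radians(incidence_deg))
```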
ENVISAT ASAR
The correction factors [3] applied to the original image depend on whether the product is
complex or detected and on the selection of the auxiliary file (ASAR XCA file).
● Latest AUX File (& use projected local incidence angle computed from DEM):
The most recent ASAR XCA available from C:\Program Files\NEST4A\auxdata
\envisat compatible with product date is automatically selected. According to this
XCA file, calibration constant, range spreading loss and antenna pattern gain are
obtained.
❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on the XCA file
3. apply range spreading loss correction based on the XCA file and DEM
geometry
4. apply antenna pattern gain correction based on the XCA file and DEM
geometry
● External AUX File (& use projected local incidence angle computed from DEM):
User can select a specific ASAR XCA file available from the installation folder or from
another repository. According to this selected XCA file, calibration constant, range
spreading loss and antenna pattern gain are computed.
❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on the selected XCA file
3. apply range spreading loss correction based on the selected XCA file
and DEM geometry
4. apply antenna pattern gain correction based on the selected XCA file
and DEM geometry
● Latest AUX File (& use projected local incidence angle computed from DEM):
The most recent ASAR XCA available from the installation folder compatible with
product date is automatically selected. Basically with this option all the correction
factors applied to the original SAR image based on product XCA file used during the
focusing, such as antenna pattern gain and range spreading loss, are removed first.
Then new factors computed according to the new ASAR XCA file together with
calibration constant and local incidence angle correction factors are applied during
the radiometric normalisation process.
❍ Applied factors:
1. remove antenna pattern gain correction based on product XCA file
2. remove range spreading loss correction based on product XCA file
3. apply projected local incidence angle into the range plane correction
4. apply calibration constant correction based on new XCA file
5. apply range spreading loss correction based on new XCA file and DEM
geometry
6. apply new antenna pattern gain correction based on new XCA file and
DEM geometry
● Product AUX File (& use projected local incidence angle computed from DEM):
The product ASAR XCA file employed during the focusing is used. With this option
the antenna pattern gain and range spreading loss are kept from the original product
and only the calibration constant and local incidence angle correction factors are
applied during the radiometric normalisation process.
❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on product XCA file
● External AUX File (& use projected local incidence angle computed from DEM):
User can select a specific ASAR XCA file available from the installation folder or from
another repository. Basically with this option all the correction factors applied to the
original SAR image based on product XCA file used during the focusing, such as
antenna pattern gain and range spreading loss, are removed first. Then new factors
computed according to the new selected ASAR XCA file together with calibration
constant and local incidence angle correction factors are applied during the
radiometric normalisation process.
❍ Applied factors:
1. remove antenna pattern gain correction based on product XCA file
2. remove range spreading loss correction based on product XCA file
3. apply projected local incidence angle into the range plane correction
4. apply calibration constant correction based on new selected XCA file
5. apply range spreading loss correction based on new selected XCA file
and DEM geometry
6. apply new antenna pattern gain correction based on new selected XCA
file and DEM geometry
Please note that if the product has been previously multilooked then the radiometric
normalization does not correct the antenna pattern and range spreading loss and only
constant and incidence angle corrections are applied. This is because the original antenna
pattern and the range spreading loss correction cannot be properly removed due to the
pixel averaging by multilooking.
If the user needs to apply radiometric normalization, multilooking and terrain correction to
a product, then the user graph "RemoveAntPat_Multilook_Orthorectify" can be used.
ERS 1&2
For ERS 1&2 the radiometric normalization cannot be applied directly to the original ERS
product.
Because of the Analogue to Digital Converter (ADC) power loss correction, a preliminary step
is required to properly handle the data. It is necessary to employ the Remove Antenna
Pattern Operator, which performs the following operations:
For Single look complex (SLC, IMS) products
After having applied the Remove Antenna Pattern Operator to ERS data, the radiometric
normalisation can be performed during the Terrain Correction.
The applied factors in case of "USE projected angle from the DEM" selection are:
1. apply projected local incidence angle into the range plane correction
2. apply absolute calibration constant correction
3. apply range spreading loss correction based on product metadata and DEM geometry
4. apply new antenna pattern gain correction based on product metadata and DEM
geometry
To apply radiometric normalization and terrain correction for ERS, user can also use one of
the following user graphs:
● RemoveAntPat_Orthorectify
● RemoveAntPat_Multilook_Orthorectify
RADARSAT-2
● In case of "USE projected angle from the DEM" selection, the radiometric normalisation
is performed by applying the product LUTs and multiplying by (sinθDEM / sinθel), where
θDEM is the projected local incidence angle into the range plane and θel is the incidence
angle computed from the tie point grid with respect to the ellipsoid.
● In case of selection of "USE incidence angle from Ellipsoid", the radiometric
normalisation is performed by applying the product LUT.
These LUTs allow one to convert the digital numbers found in the output product to sigma-
nought, beta-nought, or gamma-nought values (depending on which LUT is used).
TerraSAR-X
● In case of "USE projected angle from the DEM" selection, the radiometric normalisation
is performed applying
1. projected local incidence angle into the range plane correction
2. absolute calibration constant correction
● In case of "USE incidence angle from Ellipsoid" selection, the radiometric
normalisation is performed applying
1. incidence angle from ellipsoid correction
2. absolute calibration constant correction
Please note that the simplified approach where Noise Equivalent Beta Naught is neglected
has been implemented.
Cosmo-SkyMed
● In case of "USE projected angle from the DEM" selection, the radiometric normalisation
is performed deriving σ0Ellipsoid [7] and then multiplying by (sinθDEM / sinθel), where
θDEM is the projected local incidence angle into the range plane and θel is the incidence
angle computed from the tie point grid respect to ellipsoid.
● In case of selection of "USE incidence angle from Ellipsoid", the radiometric
normalisation is performed deriving σ0Ellipsoid [7].
Definitions:
1. The local incidence angle is defined as the angle between the normal vector of the
backscattering element (i.e. vector perpendicular to the ground surface) and the
incoming radiation vector (i.e. vector formed by the satellite position and the
backscattering element position) [2].
2. The projected local incidence angle from DEM is defined as the angle between the
incoming radiation vector (as defined above) and the projected surface normal vector
into range plane. Here range plane is the plane formed by the satellite position,
backscattering element position and the earth centre [2].
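The two definitions can be made concrete with a small vector sketch (illustrative Python; positions are Cartesian earth-centred coordinates, so the earth centre is the origin):

```python
import math

def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _norm(a): return math.sqrt(_dot(a, a))
def _unit(a): n = _norm(a); return [x / n for x in a]
def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def incidence_angles(sat_pos, target_pos, surface_normal):
    """Local and projected local incidence angles in degrees, following the
    definitions [2]. The range plane contains the satellite position, the
    backscattering element position and the earth centre (the origin)."""
    incoming = _unit([t - s for s, t in zip(sat_pos, target_pos)])
    n = _unit(surface_normal)
    local = math.degrees(math.acos(_dot([-c for c in incoming], n)))
    # project the surface normal into the range plane
    plane_n = _unit(_cross(sat_pos, target_pos))
    n_proj = _unit([ni - _dot(n, plane_n) * pn for ni, pn in zip(n, plane_n)])
    projected = math.degrees(math.acos(_dot([-c for c in incoming], n_proj)))
    return local, projected
```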
Parameters Used
The following parameters are used by the operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for calibration. If no bands are selected, then by default all bands are
used.
2. Digital Elevation Model: DEM types. Please refer to DEM Supported section above.
3. External DEM: User specified external DEM file. Currently only DEM in Geotiff format
with geographic coordinates (Plat, Plon, Ph) referred to global geodetic ellipsoid
reference WGS84 (and height in meters) is accepted.
4. DEM Resampling Method: Interpolation method for obtaining elevation values from the
original DEM file. The following interpolation methods are available: nearest neighbour,
bi-linear, cubic convolution, bi-sinc and bi-cubic interpolations.
5. Image Resampling Method: Interpolation methods for obtaining pixel values from the
source image. The following interpolation methods are available: nearest neighbour, bi-
linear, cubic and bi-sinc interpolations.
6. Pixel Spacing (m): User can specify pixel spacing in meters for orthorectified image. If
no pixel spacing is specified, then default pixel spacing computed from the source SAR
image is used. For details, the reader is referred to Pixel Spacing section above.
7. Pixel Spacing (deg): User can also specify the pixel spacing in degrees. If the value of
any of the two pixel spacing is changed, the other one is updated automatically. For
details, the reader is referred to Pixel Spacing section above.
8. Map Projection: The map projection types. By default the output image will be
expressed in WGS84 lat/lon geographic coordinates.
9. Save DEM as a band: Checkbox indicating that DEM will be saved as a band in the
target product.
10. Save local incidence angle as a band: Checkbox indicating that local incidence angle
will be saved as a band in the target product.
11. Save projected (into the range plane) local incidence angle as a band: Checkbox
indicating that the projected local incidence angle will be saved as a band in the target
product.
12. Save selected source band: Checkbox indicating that orthorectified images of user
selected bands will be saved without applying radiometric normalization.
13. Apply radiometric normalization: Checkbox indicating that radiometric normalization
will be applied to the orthorectified image.
14. Save Sigma0 as a band: Checkbox indicating that sigma0 will be saved as a band in
the target product. The Sigma0 can be generated using projected local incidence angle,
local incidence angle or incidence angle from ellipsoid.
15. Save Gamma0 as a band: Checkbox indicating that Gamma0 will be saved as a band in
the target product. The Gamma0 can be generated using projected local incidence
angle, local incidence angle or incidence angle from ellipsoid.
16. Save Beta0 as a band: Checkbox indicating that Beta0 will be saved as a band in the
target product.
17. Auxiliary File: available only for ASAR. User selected ASAR XCA file for radiometric
normalization. The following options are available: Latest Auxiliary File, Product
Auxiliary File (for detected product only) and External Auxiliary File. By
default, the Latest Auxiliary File is used. Details about the corrections applied according
to the XCA selection are provided in Radiometric Normalisation – Envisat ASAR
section above.
Reference:
[1] Small D., Schubert A., Guide to ASAR Geocoding, RSL-ASAR-GC-AD, Issue 1.0, March
2008
[2] Schreier G., SAR Geocoding: Data and Systems, Wichmann 1993
[3] Rosich B., Meadows P., Absolute calibration of ASAR Level 1 products, ESA/ESRIN,
ENVI-CLVL-EOPG-TN-03-0010, Issue 1, Rev. 5, October 2004
[4] Laur H., Bally P., Meadows P., Sánchez J., Schättler B., Lopinto E. & Esteban D., ERS
SAR Calibration: Derivation of σ0 in ESA ERS SAR PRI Products, ESA/ESRIN, ES-TN-RS-PM-
HL09, Issue 2, Rev. 5f, November 2004
[5] RADARSAT-2 PRODUCT FORMAT DEFINITION - RN-RP-51-2713 Issue 1/7: March 14,
2008
[6] Radiometric Calibration of TerraSAR-X data - TSXX-ITD-TN-0049-
radiometric_calculations_I1.00.doc, 2008
[7] For further details about Cosmo-SkyMed calibration please contact Cosmo-SkyMed Help
Desk at info.cosmo@e-geos.it
Orthorectification
1. SAR simulation: Generate simulated SAR image using DEM, the geocoding and orbit state vectors
from the original SAR image, and mathematical modeling of SAR imaging geometry. The simulated
SAR image will have the same dimension and resolution as the original image. For detailed steps and
parameters used in SAR simulation, please refer to the SAR Simulation Operator.
2. Co-registration: The simulated SAR image (master) and the original SAR image (slave) are co-
registered and a WARP function is produced. The WARP function maps each pixel in the simulated SAR
image to its corresponding position in the original SAR image. For detailed steps and parameters used
in co-registration, please refer to the GCP Selection Operator.
3. Terrain correction: Traverse DEM grid that covers the imaging area. For each cell in the DEM grid,
compute its corresponding pixel position in the simulated SAR image using SAR model. Then its
corresponding pixel position in the original SAR image can be found with the help of the WARP
function. Finally the pixel value for the orthorectified image can be obtained from the original SAR
image using interpolation.
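The three steps can be summarised in a sketch of the terrain-correction traversal (illustrative Python; `sar_model`, `warp` and `interpolate` stand in for the SAR Simulation, GCP Selection/Warp and resampling stages described above):

```python
def terrain_correct(dem_grid, sar_model, warp, interpolate):
    """Step 3 sketch: for each DEM cell, locate its pixel in the simulated
    image via the SAR model, map it into the original image with the warp
    function, and interpolate the output pixel value.
    `sar_model(lat, lon, h)` -> (row, col) in the simulated image;
    `warp(row, col)` -> (row, col) in the original image;
    `interpolate(row, col)` -> pixel value from the original image."""
    out = {}
    for lat, lon, h in dem_grid:
        sim_rc = sar_model(lat, lon, h)
        orig_rc = warp(*sim_rc)
        out[(lat, lon)] = interpolate(*orig_rc)
    return out
```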
Products Supported
● ASAR (IMS, IMP, IMM, APP, APM, WSM), ERS products (SLC, IMP), RADARSAT-2, TerraSAR-X are fully
supported.
● Some third party missions are not fully supported. Please refer to the Supported_Mission-
Product_vs_Operators_table.xls
DEM Supported
Currently, only DEMs with geographic coordinates (Plat, Plon, Ph) referred to the global geodetic
ellipsoid reference WGS84 (with height in meters) are properly supported.
Various types of Digital Elevation Models can be used (ACE, GETASSE30, ASTER, SRTM
3Sec GeoTiff).
By default, the directory C:\AuxData with two sub-directories, DEMs and Orbits, is used to store the
DEMs. However, the AuxData directory and the DEMs sub-directory are customizable from the Settings
dialog (which can be found under the Edit tab in the main menu bar).
The location of the default DEMs must be specified in the dataPath field of the Settings dialog in order
to be properly used by the Terrain Correction operator.
The SRTM v.4 (3" tiles) from the Joint Research Centre FTP (xftp.jrc.it) will automatically be
downloaded in tiles for the area covered by the image to be orthorectified. The tiles will be
downloaded to the folder C:\AuxData\DEMs\SRTM_DEM\tiff.
The Test Connectivity functionality under the Help tab in the main menu bar allows the user to verify
whether the SRTM downloading is working properly.
Please note that for ACE and SRTM, the height information (referred to the EGM96 geoid) is
automatically corrected to obtain the height relative to the WGS84 ellipsoid. For the ASTER DEM,
this height correction is not yet applied.
Note also that the SRTM DEM covers the area between -60 and 60 degrees latitude. Therefore, for
orthorectification of products over high-latitude areas, a different DEM should be used.
The user can also use an external DEM file in GeoTIFF format which, as specified above, must have
geographic coordinates (Plat, Plon, Ph) referred to the global geodetic ellipsoid reference WGS84
(with height in meters).
Note that the same DEM is used by both SAR simulation and Terrain correction. The DEM is selected
through SAR Simulation UI.
Pixel Spacing
Besides the default pixel spacing suggested from parameters in the metadata, the user can specify the
output pixel spacing for the orthorectified image.
The pixel spacing can be entered in either meters or degrees. If the pixel spacing in one unit is entered,
then the pixel spacing in the other unit is computed automatically.
The calculations of the pixel spacing in meters and in degrees are given by the following equations:
pixelSpacingInDegree = pixelSpacingInMeter / EquatorialEarthRadius * 180 / PI;
pixelSpacingInMeter = pixelSpacingInDegree * PolarEarthRadius * PI / 180;
where EquatorialEarthRadius = 6378137.0 m and PolarEarthRadius = 6356752.314245 m as given in
WGS84.
Projection Supported
Right now the following projections are supported by NEST:
● Geographic Lat/Lon
● Lambert Conformal Conic
● Stereographic
● Transverse Mercator
● UTM
● Universal Polar Stereographic North
● Universal Polar Stereographic South
Radiometric Normalization
This option implements a radiometric normalization based on the approach proposed by Kellndorfer et al.,
TGRS, Sept. 1998, where

    σ0_norm = σ0 · sin(θDEM) / sin(θel)
In the current implementation, θDEM is the local incidence angle projected into the range plane, defined
as the angle between the incoming radiation vector and the surface normal vector projected into the range
plane [2]. The range plane is the plane formed by the satellite position, the backscattering element
position and the earth centre.
Note that among the σ0, γ0 and β0 bands output in the target product, only σ0 is a real band, while γ0 and
β0 are virtual bands expressed in terms of σ0 and the incidence angle. Therefore, σ0 and the incidence
angle are automatically saved and output if γ0 or β0 is selected.
For σ0 and γ0 calculation, by default the projected local incidence angle from DEM [2] (local incidence
angle projected into range plane) option is selected, but the option of incidence angle from ellipsoid
correction (incidence angle from tie points of the source product) is also available.
Products Supported
● ASAR (IMS, IMP, IMM, APP, APM, WSM), ERS products (SLC, IMP), RADARSAT-2, TerraSAR-X are fully
supported.
● Some third party missions are not fully supported. Please refer to the Supported_Mission-
Product_vs_Operators_table.xls
ENVISAT ASAR
The correction factors [3] applied to the original image depend on whether the product is complex or
detected and on the selection of the auxiliary file (ASAR XCA file).
● Latest AUX File (& use projected local incidence angle computed from DEM):
The most recent ASAR XCA available from C:\Program Files\NEST4A\auxdata\envisat compatible
with product date is automatically selected. According to this XCA file, calibration constant, range
spreading loss and antenna pattern gain are obtained.
❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on the XCA file
3. apply range spreading loss correction based on the XCA file and DEM geometry
4. apply antenna pattern gain correction based on the XCA file and DEM geometry
● External AUX File (& use projected local incidence angle computed from DEM):
User can select a specific ASAR XCA file available from C:\Program Files\NEST4A\auxdata\envisat
or from another repository. According to this selected XCA file, calibration constant, range
spreading loss and antenna pattern gain are computed.
❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on the selected XCA file
3. apply range spreading loss correction based on the selected XCA file and DEM
geometry
4. apply antenna pattern gain correction based on the selected XCA file and DEM
geometry
● Latest AUX File (& use projected local incidence angle computed from DEM):
The most recent ASAR XCA available from C:\Program Files\NEST4A\auxdata\envisat compatible
with product date is automatically selected. Basically with this option all the correction factors
applied to the original SAR image based on product XCA file used during the focusing, such as
antenna pattern gain and range spreading loss, are removed first. Then new factors computed
according to the new ASAR XCA file together with calibration constant and local incidence angle
correction factors are applied during the radiometric normalisation process.
❍ Applied factors:
1. remove antenna pattern gain correction based on product XCA file
2. remove range spreading loss correction based on product XCA file
3. apply projected local incidence angle into the range plane correction
4. apply calibration constant correction based on new XCA file
5. apply range spreading loss correction based on new XCA file and DEM geometry
6. apply new antenna pattern gain correction based on new XCA file and DEM geometry
● Product AUX File (& use projected local incidence angle computed from DEM):
The product ASAR XCA file employed during the focusing is used. With this option the antenna
pattern gain and range spreading loss are kept from the original product and only the calibration
constant and local incidence angle correction factors are applied during the radiometric
normalisation process.
❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on product XCA file
● External AUX File (& use projected local incidence angle computed from DEM):
User can select a specific ASAR XCA file available from the installation folder or from another
repository. Basically with this option all the correction factors applied to the original SAR image
based on product XCA file used during the focusing, such as antenna pattern gain and range
spreading loss, are removed first. Then new factors computed according to the new selected ASAR
XCA file together with calibration constant and local incidence angle correction factors are applied
during the radiometric normalisation process.
❍ Applied factors:
1. remove antenna pattern gain correction based on product XCA file
2. remove range spreading loss correction based on product XCA file
3. apply projected local incidence angle into the range plane correction
4. apply calibration constant correction based on new selected XCA file
5. apply range spreading loss correction based on new selected XCA file and DEM
geometry
6. apply new antenna pattern gain correction based on new selected XCA file and DEM
geometry
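The factor chain above can be sketched numerically. This manual documents a Java toolbox, so the following Python fragment is only an illustrative sketch: the factor values and the multiply/divide direction of each step are schematic placeholders, not the NEST implementation.

```python
import math

def renormalise(dn2, old_ant_gain, old_spread_loss,
                new_ant_gain, new_spread_loss, new_k, theta_dem_deg):
    """Schematic sketch of the External-XCA factor chain: undo the
    corrections applied with the product XCA file, then re-apply them
    with the newly selected XCA file and the DEM-based geometry."""
    # steps 1-2: remove antenna pattern gain and range spreading loss
    # corrections that were based on the product XCA file
    value = dn2 * old_ant_gain * old_spread_loss
    # step 3: apply the projected local incidence angle (range plane)
    value *= math.sin(math.radians(theta_dem_deg))
    # step 4: apply the calibration constant from the new XCA file
    value /= new_k
    # steps 5-6: apply range spreading loss and antenna pattern gain
    # computed from the new XCA file and the DEM geometry
    value /= (new_spread_loss * new_ant_gain)
    return value
```

With all gain factors set to 1 the chain reduces to the incidence-angle correction alone, which makes the direction of each step easy to verify.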
Please note that if the product has been previously multilooked, the radiometric normalization does
not correct the antenna pattern and range spreading loss; only the calibration constant and incidence
angle corrections are applied. This is because the original antenna pattern and range spreading loss
corrections cannot be properly removed after the pixel averaging performed by multilooking.
If the user needs to apply radiometric normalization, multilooking and terrain correction to a
product, the user graph “RemoveAntPat_Multilook_Orthorectify” can be used.
ERS 1&2
For ERS 1&2 the radiometric normalization cannot be applied directly to the original ERS product.
Because of the Analogue to Digital Converter (ADC) power loss correction, a preliminary step is
required to properly handle the data: the Remove Antenna Pattern Operator must be applied first,
which performs the following operations:
For Single look complex (SLC, IMS) products
After having applied the Remove Antenna Pattern Operator to ERS data, the radiometric normalisation can
be performed during the Terrain Correction.
The applied factors in case of "USE projected angle from the DEM" selection are:
1. apply projected local incidence angle into the range plane correction
2. apply absolute calibration constant correction
3. apply range spreading loss correction based on product metadata and DEM geometry
4. apply new antenna pattern gain correction based on product metadata and DEM geometry
To apply radiometric normalization and terrain correction for ERS, user can also use one of the following
user graphs:
● RemoveAntPat_Orthorectify
● RemoveAntPat_Multilook_Orthorectify
RADARSAT-2
● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed
applying the product LUTs and multiplying by (sin •DEM/sin •el), where •DEM is projected local
incidence angle into the range plane and •el is the incidence angle computed from the tie point grid
respect to ellipsoid.
● In case of selection of "USE incidence angle from Ellipsoid", the radiometric normalisation is
performed applying the product LUT.
These LUTs allow one to convert the digital numbers found in the output product to sigma-nought, beta-
nought, or gamma-nought values (depending on which LUT is used).
TerraSAR-X
● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed
applying
1. projected local incidence angle into the range plane correction
2. absolute calibration constant correction
● In case of " USE incidence angle from Ellipsoid " selection, the radiometric normalisation is performed
applying
1. projected local incidence angle into the range plane correction
2. absolute calibration constant correction
Please note that the simplified approach where Noise Equivalent Beta Naught is neglected has been
implemented.
Cosmo-SkyMed
● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed
deriving σ0Ellipsoid [7] and then multiplying by (sinθDEM / sinθel), where θDEM is the projected local
incidence angle into the range plane and θel is the incidence angle computed from the tie point grid
respect to ellipsoid.
● In case of selection of "USE incidence angle from Ellipsoid", the radiometric normalisation is
performed deriving σ0Ellipsoid [7]
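The sin-ratio re-normalisation used for RADARSAT-2 and Cosmo-SkyMed above is a one-line correction; a minimal Python sketch (not the NEST implementation):

```python
import math

def sigma0_projected(sigma0_ellipsoid, theta_dem_deg, theta_el_deg):
    """Re-normalise a sigma0 value derived with the ellipsoid incidence
    angle to the projected local incidence angle from the DEM, by
    multiplying with sin(theta_DEM) / sin(theta_el)."""
    return sigma0_ellipsoid * (math.sin(math.radians(theta_dem_deg)) /
                               math.sin(math.radians(theta_el_deg)))
```

When the two angles coincide the factor is 1 and the ellipsoid-based value is returned unchanged.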
Definitions:
1. The local incidence angle is defined as the angle between the normal vector of the backscattering
element (i.e. vector perpendicular to the ground surface) and the incoming radiation vector (i.e.
vector formed by the satellite position and the backscattering element position) [2].
2. The projected local incidence angle from DEM is defined as the angle between the incoming radiation
vector (as defined above) and the projected surface normal vector into range plane. Here range plane
is the plane formed by the satellite position, backscattering element position and the earth centre [2].
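Definition 1 can be illustrated with a small geometric sketch, assuming Cartesian positions in a common frame; the incoming vector is reversed so that a nadir-looking geometry over flat ground gives 0°:

```python
import math

def local_incidence_angle(surface_normal, sat_pos, target_pos):
    """Angle (degrees) between the surface normal of the backscattering
    element and the incoming radiation vector (satellite -> target),
    per the definition above.  All arguments are 3-tuples."""
    # incoming radiation vector from satellite towards the target
    incoming = tuple(t - s for s, t in zip(sat_pos, target_pos))
    # reverse it so it points from the target towards the satellite
    look_up = tuple(-c for c in incoming)
    dot = sum(a * b for a, b in zip(surface_normal, look_up))
    norm = math.sqrt(sum(a * a for a in surface_normal)) * \
           math.sqrt(sum(a * a for a in look_up))
    return math.degrees(math.acos(dot / norm))
```

A satellite directly overhead gives 0°; a 45° viewing geometry over a horizontal surface gives 45°.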
The user can output the layover-shadow mask by checking the "Save Layover-Shadow Mask as band"
box in the SAR-Simulation tab.
To visualize the layover-shadow mask, the user can bring up the orthorectified image first, then go to
the layer manager and add the layover-shadow mask band as a layer.
Parameters Used
The following parameters are used by the Terrain Correction step:
1. RMS Threshold: The criterion for eliminating invalid GCPs. (see Help for Warp Operator for detail)
2. WARP Polynomial Order: The degree of the WARP polynomial. The valid values are 1, 2 and 3. (see
Help for Warp Operator for detail)
3. DEM Resampling Method: Interpolation method for obtaining elevation values from the original DEM
file. The following interpolation methods are available: nearest neighbour, bi-linear, cubic convolution,
bi-sinc and bi-cubic interpolations.
4. Image Resampling Method: Interpolation method for obtaining pixel values from the source image.
The following interpolation methods are available: nearest neighbour, bi-linear, cubic and bi-sinc.
5. Pixel Spacing (m): User can specify pixel spacing in meters for orthorectified image. If no pixel
spacing is specified, then default pixel spacing computed from the source SAR image is used. For
details, the reader is referred to Pixel Spacing section above.
6. Pixel Spacing (deg): User can also specify the pixel spacing in degrees. If the value of any of the two
pixel spacing is changed, the other one is updated automatically. For details, the reader is referred to
Pixel Spacing section above.
7. Save DEM as band: Checkbox indicating that the DEM will be saved as a band in the target product.
8. Save local incidence angle as band: Checkbox indicating that the local incidence angle will be
saved as a band in the target product.
9. Save projected local incidence angle as band: Checkbox indicating that the projected local
incidence angle will be saved as a band in the target product.
10. Save selected source band: Checkbox indicating that orthorectified images of user selected bands will
be saved without applying radiometric normalization.
11. Apply radiometric normalization: Checkbox indicating that radiometric normalization will be applied to
the orthorectified image.
12. Save Sigma0 as a band: Checkbox indicating that sigma0 will be saved as a band in the target
product. The Sigma0 can be generated using projected local incidence angle, local incidence angle or
incidence angle from ellipsoid.
13. Save Gamma0 as a band: Checkbox indicating that Gamma0 will be saved as a band in the target
product. The Gamma0 can be generated using projected local incidence angle, local incidence angle or
incidence angle from ellipsoid.
14. Save Beta0 as a band: Checkbox indicating that Beta0 will be saved as a band in the target product.
15. Auxiliary File: available only for ASAR. User selected ASAR XCA file for radiometric normalization. The
following options are available: Latest Auxiliary File, Product Auxiliary File (for detected product
only) and External Auxiliary File. By default, the Latest Auxiliary File is used. Details about the
corrections applied according to the XCA selection are provided in Radiometric Normalisation –
Envisat ASAR section above.
16. Show Range and Azimuth Shifts: Checkbox indicating that range and azimuth shifts (in m) for all valid
GCPs will be displayed. The row and column shifts of each slave GCP away from its initial position are
output to a text file.
Figure 1. SAR Sim Terrain Correction dialog box
1. First a DEM image is created by the SAR Simulation operator using the geocoding of the original SAR
image. The DEM image has the same dimensions as the original SAR image, with each pixel value of
the DEM image being the elevation of the corresponding pixel in the original SAR image.
2. Then the 2-pass method (see section 7.4 in [2]) is applied to each range line in the DEM image to
generate the layover and shadow mask for the DEM image. The 2-pass method compares the slant
range of a DEM cell to the slant ranges of other cells in the same range line to determine whether
the DEM cell will be imaged in a layover or shadow area.
3. Next the layover-shadow mask for the DEM image is mapped to the simulated image to create the
mask for the simulated image. The mapping is done using SAR simulation.
4. The layover-shadow mask for the simulated SAR image is then mapped to the original SAR image
using the WARP function, which was created during co-registration of the simulated SAR image and
the original SAR image.
5. Finally the mask for the original SAR image is mapped to the orthorectified image domain to produce
the mask for the orthorectified image.
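The per-range-line test of step 2 can be illustrated with a toy version; this is a simplified sketch (monotonic slant-range test for layover, monotonic elevation-angle test for shadow), not the full 2-pass method of [2]:

```python
def layover_shadow_mask(slant_ranges, elevation_angles):
    """Simplified per-range-line layover/shadow test.  Cells are
    ordered by increasing ground range.  A cell whose slant range is
    smaller than the largest slant range seen so far is flagged as
    layover; a clear cell whose elevation angle towards the sensor is
    smaller than the largest seen so far is flagged as shadow.
    Returns 0 = clear, 1 = layover, 2 = shadow per cell."""
    mask = [0] * len(slant_ranges)
    max_r = float('-inf')
    for i, r in enumerate(slant_ranges):
        if r < max_r:
            mask[i] = 1          # imaged on top of a nearer cell: layover
        else:
            max_r = r
    max_e = float('-inf')
    for i, e in enumerate(elevation_angles):
        if e < max_e and mask[i] == 0:
            mask[i] = 2          # hidden behind a higher cell: shadow
        else:
            max_e = max(max_e, e)
    return mask
```

For example, a cell whose slant range drops back below that of a nearer neighbour is marked as layover, while cells "behind" a peak (lower elevation angle than the peak) are marked as shadow.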
1. First a DEM image is created from the original SAR image. The DEM image has the
same dimension as the original SAR image. The pixel value of the DEM image is the
elevation of the corresponding pixel in the original SAR image.
2. Then, for each cell in the DEM image, its pixel position (row/column indices) in the
simulated SAR image is computed based on the SAR model.
3. Finally, the backscattered power σ0 for the pixel is computed using backscattering
model.
DEM Supported
Right now only DEMs with geographic coordinates (Plat, Plon, Ph) referred to the global
geodetic ellipsoid reference WGS84, in meters, are properly supported.
By default the following DEMs are available:
● ACE
● GETASSE30
● SRTM 3Sec GeoTiff
● ASTER GDEM
Since the height information in ACE and SRTM is referenced to the EGM96 geoid, not the WGS84
ellipsoid, a correction has been applied to obtain heights relative to the WGS84 ellipsoid.
The user can also use an external DEM file which, as specified above, must be a WGS84
(Plat, Plon, Ph) DEM in meters.
Parameters Used
The following parameters are used by the operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for producing simulated image. If no bands are selected, then by default
all bands are selected. The selected band will be output as a band in the target product
together with the simulated image.
2. Digital Elevation Model: DEM types. Choose from the automatically tiled DEMs or
specify using a single external DEM file by selecting "External DEM".
3. DEM Resampling Method: Interpolation method for obtaining elevation values from the
original DEM file. The following interpolation methods are available: nearest neighbour,
bi-linear, cubic convolution, bi-sinc and bi-cubic.
4. External DEM: User specified external DEM file. Currently only WGS84-latlong DEM in
meters is accepted as geographic system.
5. Save Layover-Shadow Mask as band: Checkbox indicating that layover-shadow mask is
saved as a band in the target product.
Detailed Simulation Algorithm
The detailed procedure is as follows:
1. Get data for the following parameters from the metadata of the SAR image product:
❍ radar wavelength
❍ range spacing
❍ first_line_time
❍ line_time_interval
❍ slant range to 1st pixel
❍ orbit state vectors
❍ slant range to ground range conversion coefficients
2. Compute satellite position and velocity for each azimuth time by interpolating the orbit
state vectors;
3. Repeat the following steps for each cell in the DEM image:
1. Get latitude, longitude and elevation for the cell;
2. Convert (latitude, longitude, elevation) to Cartesian coordinate P(X, Y, Z);
3. Compute zero Doppler time t for point P(x, y, z) using Doppler frequency
function;
4. Compute SAR sensor position S(X, Y, Z) at time t;
5. Compute slant range r = |S - P|;
6. Compute bias-corrected zero Doppler time tc = t + r*2/c, where c is the light
speed;
7. Update satellite position S(tc) and slant range r(tc) = |S(tc) – P| for the bias-
corrected zero Doppler time tc;
8. Compute azimuth index Ia in the source image using zero Doppler time tc;
9. Compute range index Ir in the source image using slant range r(tc);
10. Compute local incidence angle;
11. Compute backscattered power and save it as value for pixel ((int)Ia, (int)Ir);
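Steps 6 through 9 of the loop above reduce to a few lines of arithmetic. A sketch assuming times in seconds and distances in meters (the parameter names mirror the metadata fields listed in step 1):

```python
C = 299792458.0  # speed of light, m/s

def image_indices(t_zero_doppler, slant_range,
                  first_line_time, line_time_interval,
                  slant_range_to_first_pixel, range_spacing):
    """Sketch of steps 6-9: correct the zero Doppler time for the
    two-way travel time, then map azimuth time and slant range to
    (fractional) image indices using the product annotations."""
    # step 6: bias-corrected zero Doppler time tc = t + r*2/c
    tc = t_zero_doppler + 2.0 * slant_range / C
    # step 8: azimuth index from the azimuth time annotations
    ia = (tc - first_line_time) / line_time_interval
    # step 9: range index from the slant range annotations
    ir = (slant_range - slant_range_to_first_pixel) / range_spacing
    return ia, ir
```

The returned indices are fractional; step 11 truncates them to integers when writing the backscattered power to a pixel.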
Reference:
[1] Liu H., Zhao Z., Lezek K. C., Correction of Positional Errors and Geometric Distortions in
Topographic Maps and DEMs Using a Rigorous SAR Simulation Technique, Photogrammetric
Engineering & Remote Sensing, Vol. 70, No. 9, Sep. 2004
[2] Gunter Schreier, SAR geocoding: data and systems, Wichmann-Verlag, Karlsruhe,
Germany, 1993
Ellipsoid Correction
1. Get the latitudes and longitudes for the four corners of the source image;
2. Determine target image boundaries based on the scene corner latitudes and longitudes;
3. Get range and azimuth pixel spacings from the metadata of the source image;
4. Compute target image traversal intervals based on the source image pixel spacing;
5. Compute target image dimension;
6. Get tie points (latitude, longitude and slant range time) from geolocation LADS of the
source image;
7. Repeat the following steps for each cell in the target image raster:
a. Get latitude and longitude for current cell;
b. Determine the corresponding position of current cell in the source image and the
4 pixels that are immediately adjacent to it;
c. Compute slant range R for the cell using slant range time and bi-quadratic
interpolation;
d. Compute zero Doppler time T for the cell;
e. Compute bias-corrected zero Doppler time Tc = T + R*2/C, where C is the light
speed;
f. Compute azimuth index Ia using zero Doppler time Tc;
g. Compute range image index Ir using slant range R;
h. Compute pixel value x(Ia,Ir) using bi-linear interpolation and set it for current
sample in target image.
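Step h, the bi-linear interpolation of the source pixel value at a fractional image position, can be sketched as follows (plain Python, with the image as a list of rows):

```python
def bilinear(image, row, col):
    """Bi-linear interpolation of a pixel value at fractional image
    coordinates (row, col), using the 4 immediately adjacent pixels."""
    r0, c0 = int(row), int(col)
    dr, dc = row - r0, col - c0
    v00 = image[r0][c0]          # upper-left neighbour
    v01 = image[r0][c0 + 1]      # upper-right neighbour
    v10 = image[r0 + 1][c0]      # lower-left neighbour
    v11 = image[r0 + 1][c0 + 1]  # lower-right neighbour
    return (v00 * (1 - dr) * (1 - dc) + v01 * (1 - dr) * dc +
            v10 * dr * (1 - dc) + v11 * dr * dc)
```

At the centre of a 2x2 neighbourhood the result is simply the mean of the four neighbours.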
● nearest_neighbour
● bilinear_interpolation
● cubic_convolution
Map Projection Supported
Right now the following projections are supported by NEST:
● Geographic Lat/Lon
● Lambert Conformal Conic
● Stereographic
● Transverse Mercator
● UTM
● Universal Polar Stereographic North
● Universal Polar Stereographic South
Parameters Used
The following parameters are used by the operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands. For complex product, i and q bands must be selected together. If no
bands are selected, then by default all bands are selected.
2. Image Resampling Method: Interpolation method for obtaining pixel values from the
source image. Three interpolation methods are available: nearest neighbour, bi-linear
and cubic.
3. Map Projection: The map projection types. The orthorectified image will be presented
with the user selected map projection.
[1] Small D., Schubert A., Guide to ASAR Geocoding, Issue 1.0, 19.03.2008
Map Reprojection
Reprojection Operator
The Map Projection Operator applies a selected map projection to the input product and
creates a new, reprojected output product.
Output Settings
Preserve resolution: If unchecked the Output Parameters... is enabled and the
upcoming dialog lets you edit the output parameters like easting and northing of the
reference pixel, the pixel size and the scene height and width.
Reproject tie-point grids: Specifies whether or not the tie-point grids shall be
included. If they are reprojected they will appear as bands in the target product and not
any more as tie-point grids.
No-data value: The default no-data value is used for output pixels in the projected
band which have either no corresponding pixel in the source product or the source pixel
is invalid.
Resampling Method: You can select one resampling method for the projection. For a
brief description have a look at Resampling Methods.
Output Information
Displays some information about the output, like scene width and height, the
geographic coordinate of the scene center and short description of the selected CRS.
When clicking the Show WKT... button the corresponding Well-Known Text of the
currently defined CRS is shown.
SRGR Operator
1. Create a warp polynomial of given order that maps ground range pixels to slant range
pixels.
2. For each ground range pixel, compute its corresponding pixel position in the slant
range image using warp polynomial.
3. Compute pixel value using user selected interpolation method.
● Nearest-Neighbour interpolation
● linear interpolation
● Cubic interpolation
● Cubic2 interpolation
● Sinc interpolation
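Step 2 of the SRGR conversion evaluates the warp polynomial at each ground range pixel. A sketch, with `coeffs` as hypothetical polynomial coefficients (lowest order first):

```python
def ground_to_slant(ground_range, coeffs):
    """Evaluate the ground-range -> slant-range warp polynomial:
    slant = sum_i coeffs[i] * ground_range**i."""
    return sum(c * ground_range ** i for i, c in enumerate(coeffs))
```

For example, with coefficients [1, 2, 3] and ground range 2, the slant-range position is 1 + 2·2 + 3·4 = 17; the pixel value is then obtained there with the selected interpolation method (step 3).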
Parameters Used
The following parameters are used by the operator:
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands for producing ground range images. If no bands are selected, then by
default all bands are selected.
2. Warp Polynomial Order: The degree of WARP polynomial. It should be a positive integer.
3. Interpolation Method: User can select interpolation method used in SRGR conversion.
Mosaic
Mosaic Operator
The Mosaic Operator combines overlapping products into a single composite product. The mosaicking is
based on the geocoding of the source products; therefore, the geocoding needs to be very accurate. It
is recommended that the source products be terrain corrected and radiometrically corrected first.
1. Source Bands: All bands (real or virtual) of the source product. You may select one or more bands. If no bands
are selected, then by default all bands will be processed.
2. Resampling Method: Choice of Nearest Neighbour, Bilinear, Cubic, Bi-Sinc or Bi-Cubic resampling.
3. Pixel Size (m): The output scene pixel spacing in meters.
4. Scene Width (pixels): The output scene width in pixels.
5. Scene Height (pixels): The output scene height in pixels.
6. Feather (pixels): The number of pixels skipped on the boundary of the source images.
7. Weight Average of Overlap: Averaging option to blend overlapping pixels. To achieve better mosaic result,
the Normalizer checkbox below should be selected as well.
8. Normalizer: Normalization option to remove mean and normalize standard deviation.
Below is an example of a mosaic of ASA_GM1 products that have been terrain corrected and
radiometrically corrected. Overlapping areas have been normalized and averaged.
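The "Weight Average of Overlap" option above amounts to a per-pixel weighted mean of the contributing products. A minimal sketch; the weights are hypothetical (in practice they could be derived from the feathering distance to the image border):

```python
def blend_overlap(values, weights):
    """Weighted average of the pixels that overlap at one output
    location.  Pixels with weight 0 (e.g. feathered-out border
    pixels) do not contribute."""
    total_w = sum(weights)
    if total_w == 0.0:
        return 0.0  # no valid contribution at this location: no-data
    return sum(v * w for v, w in zip(values, weights)) / total_w
```

With equal weights this reduces to a plain average of the overlapping pixels; zeroing a weight excludes that product's pixel entirely.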
Interferometry
Implementation Overview
Initially, the DORIS (Delft object-oriented radar interferometric software) package was used as an
algorithmic prototype for the InSAR functionality. In the course of development, however, many of
these algorithmic prototypes were extended and completely reimplemented, and most implementations
now deviate significantly from the original. Because of these developments, and in order to
streamline and simplify further development, a dedicated library and application programming
interface (API) for interferometric applications was designed and developed: jLinda (Java Library
for Interferometric Data Analysis).
Algorithmic Base
All implemented algorithms are fully documented, described in the literature, and follow generally
accepted best practices.
Operator parameters:
The following parameters are used by this operator:
1. Number of points for computing the reference phase for least squares estimation. The
default value is 501, which is sufficient for 100x100 km SAR scenes. For smaller or
larger scenes this number can be adapted.
2. The degree of the 2D flat-earth polynomial. The recommended degree, appropriate for
most use cases, is 5.
3. Orbit interpolation method. Defaults to a polynomial of degree (number of state
vectors) - 1, but smaller than degree 5. Optionally, the degree for the orbit
interpolation can be declared; the specified degree has to be smaller than or equal to
(number of state vectors) - 1. The positions of the state vectors (x, y, z) are
interpolated independently, and the velocities are estimated from the positions. The
annotated velocities, if available, are used in the interpolation rather than being
computed from the positions. NOTE: It is not recommended to use a degree smaller than
the maximum possible, except when the maximum degree would be so large as to cause
oscillations. However, depending on the temporal posting of the state vectors and their
accuracy, different interpolation strategies can give better results.
4. Flag for skipping estimation and subtraction of the reference-phase.
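The orbit interpolation of parameter 3 fits a polynomial through the annotated state vectors. A sketch of one coordinate using Lagrange interpolation; each of x, y, z would be interpolated independently, as described above:

```python
def interpolate_position(times, positions, t):
    """Lagrange interpolation of one orbit coordinate through the
    annotated state vectors.  With n state vectors this evaluates the
    unique polynomial of degree n-1 through all of them at time t."""
    n = len(times)
    result = 0.0
    for i in range(n):
        term = positions[i]
        for j in range(n):
            if j != i:
                # Lagrange basis factor for node i
                term *= (t - times[j]) / (times[i] - times[j])
        result += term
    return result
```

Because the interpolant reproduces any polynomial up to the fitted degree, state vectors sampled from x(t) = t² are recovered exactly at intermediate times.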
Source bands:
The source bands are the set of, usually coregistered, bands of the complex product.
Estimation and Subtraction of Topographic Phase (InSAR operator)
Implementation Details
Note that the interpolation of the DEM is conceptually different from that in other geometry NEST
operators (e.g. the Range Doppler Terrain Correction Operator).
The DEM reference phase is computed in two steps:
● In the first step, the DEM is radarcoded to the coordinate system of the master image. For each DEM
point, the master coordinate (real valued) and the computed reference phase are saved to a file.
● Then, the reference phase is interpolated to the integer grid of master coordinates. A linear
interpolation based on a Delaunay triangulation is used; the Delaunay triangulation library was
developed specifically for NEST and SAR applications.
Operator parameters:
Input/Output bands:
Source Bands are a stack of flat-earth-subtracted ("flattened") interferograms.
Output Bands are a stack of interferograms with the DEM reference phase subtracted, plus other
optional bands, for example the radarcoded elevation of the area or the topographic phase.
Known issues:
When using 'External DEM' as a reference surface, the user has to make sure that the input DEM
sufficiently covers, and extends beyond, the area of the input interferogram. In practice, this means
that the input DEM should be 10-15% bigger in extent (wider and higher) than the interferogram. This
issue will be addressed in the NEST 5A-FINAL release.
Coherence estimation (InSAR operator)
Coherence estimation
This operator computes/estimates the coherence image, with or without subtraction of the
reference phase. The reference phase is subtracted if a 2D polynomial, the result of the
"Compute Interferogram" operator, has been computed. It is not subtracted if this
information is not in the metadata, or if the number of polynomial coefficients in the
"Compute Interferogram" operator is set to 0.
Note that this is a general coherence estimation operator and is not exclusive to InSAR
applications. It can be used to estimate coherence from any stack of coregistered complex
images.
To reduce the noise, you can perform multilooking as a post-processing step (with the
Multilook Operator). For ESA's ERS and Envisat sensors, a factor of 5:1 (azimuth:range), or
a similar ratio between the factors, is chosen to obtain approximately square pixels
(20x20 m^2 for factors 5 and 1). Of course, the resolution decreases if multilooking is
applied.
Operator parameters:
The input parameters are the size of the shifting window for the coherence estimation. The
window size is defined in both the azimuth and range directions.
Source bands:
Source Bands are the set of, usually coregistered, bands of the complex product.
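The estimate computed per shifting window is the standard sample coherence magnitude. A sketch over one window of complex master/slave samples (plain Python, not the NEST implementation):

```python
def coherence(master, slave):
    """Sample coherence magnitude over one estimation window:
    |sum(m * conj(s))| / sqrt(sum |m|^2 * sum |s|^2).
    `master` and `slave` are equally sized lists of complex samples."""
    num = sum(m * s.conjugate() for m, s in zip(master, slave))
    den = (sum((m * m.conjugate()).real for m in master) *
           sum((s * s.conjugate()).real for s in slave)) ** 0.5
    return abs(num) / den if den else 0.0
```

Identical windows give coherence 1; windows whose cross products cancel give coherence 0.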
Azimuth Filtering (InSAR operator)
Azimuth Filtering
This operator filters the spectra of a stack of SLC images in the azimuth direction. This is
an optional step in the interferometric processing chain. The part of the master spectrum
that does not overlap with the spectrum of the slave is filtered out. This non-overlap is
due to the selection of the Doppler centroid frequency in the SAR processing, which is
normally not equal for the master and slave images.
This step is in general best performed after the coregistration. (The offset in the range
direction is used to evaluate the polynomial for the Doppler centroid frequency.)
The operator filters all images in the stack at the same time. However, if multiple slave
images are present in the stack, only the slave images are filtered and the master image
remains unfiltered (all slaves are coregistered to the same master image). This approach
has the advantage that a separate filtered master image does not have to be created for
each interferogram of the stack, effectively saving disk storage space and making the
processing more efficient. The disadvantage of not filtering the master is that a small
part of the master spectrum is not shared with the slave spectrum, yielding a minor loss
of coherence in the interferogram.
Operator parameters:
● FFT Window Length: Length of the FFT estimation window per tile in the azimuth direction.
In general, the larger the better. Note, however, that if the value for the FFT Window
Length is larger than the tile size, the length of the window is reduced to the maximum
possible length.
● Azimuth Filter Overlap: Half of the overlap between consecutive tiles in the azimuth
direction. Partially the same data is used to estimate the spectrum, to avoid border
effects. The exact influence of this parameter on the end results scales with the Doppler
centroid frequency variability between the master and slave images; the optimum ratio
between the overlap parameter and the Doppler centroid frequency difference has not been
studied yet. Setting this parameter to 0 gives the fastest results.
● Hamming Alpha: The weighting of the spectrum in the azimuth direction. The filtered
output spectrum is first de-weighted with the specified Hamming filter, then re-weighted
with a (newly centered) one. If this parameter is set to 1, no weighting is performed.
Output bands:
Output Bands are the set of bands with spectra filtered in the azimuth direction.
Range Filtering (InSAR operator)
Range Filtering
This operator filters the spectra of a stack of SLC images in the range direction. This is an
optional step in the interferometric processing chain. Filtering the master and slave images
in the range direction increases the signal-to-noise ratio (SNR) in the interferogram. This
noise reduction results from filtering out the non-overlapping parts of the spectrum. The
spectral non-overlap in range between master and slave is caused by the slightly different
viewing angles of the two sensors. The longer the perpendicular baseline, the smaller the
overlapping part; eventually a baseline of about 1100 m results in no overlap at all (which
is also the critical baseline for ERS). Assuming no local terrain slope, a reduction of
typically 10-20% in the number of residues can be achieved.
The range filtering should be performed after coregistration (after the slave images are
resampled/warped to the master grid), because the fringe frequency is estimated from a
temporarily computed interferogram. It is performed simultaneously for the master and slave
images, unless there are multiple slave images in the stack. If the latter is the case, as
with the Azimuth Filtering Operator, only the slave images are filtered while the master
image is left in its original state.
Implementation Details
Currently, only a so-called "adaptive" filtering is implemented; a method based on orbital
data and terrain slope will be implemented in coming releases.
The adaptive range filtering algorithm builds on the local fringe frequency estimated from a
locally computed interferogram. After the warping/resampling of the slave onto the master
grid, the local fringe frequency is estimated using peak analysis of the power spectrum of
the complex interferogram. The warping/resampling is required since the local fringe
frequency is estimated from the interferogram. The fringe frequency is directly related to
the spectral shift in the range direction. (Note that this shift is not an actual shift, but
an indication that the different frequencies are mapped to positions offset by this shift.)
Input parameters
● FFT Window Length: Length of the estimation window, a peak is estimated for parts
of this length.
● Hamming Alpha: Weight for hamming filter. (Note that, if alpha is set to 1, the
weighting window function will be of rectangular type).
● Walking Mean Window: Number of lines over which the (walking) mean will be
computed. This parameter reduces noise for the peak estimation. The parameter has to
be an odd number. Logically, the walking mean can be compared with the principles of
periodogram estimation.
● SNR Threshold: In the peak estimation, weights the values to bias higher frequencies. The
reasoning for this parameter is that the low frequencies are (for small oversampling
factors) aliased after interferogram generation. The de-weighting is done by dividing by
a triangle function (the convolution of two rect window functions, the shape of the range
spectrum). The effect of this parameter may be negligible for the overall results.
● Oversampling Factor: Oversample the master and slave(s) by this factor before computing
the complex interferogram for the peak estimation. This factor has to be a power of 2.
The default is 2; with this factor the filter is able to estimate the peak for frequency
shifts larger than half the bandwidth. A factor of 4, for example, might give a better
estimate, since the interval between the shifts that can be estimated is in that case
halved (for a fixed FFT Window Length).
Source bands:
Output bands:
Output Bands are the set of bands with spectra filtered in the range direction.
Phase Filtering of stacks of interferograms (InSAR operator)
Operator parameters:
The following input parameters are used by this operator:
1. Filtering Method: Select the filtering method. Choose between the Goldstein method ("goldstein") and spatial
convolution ("convolution"). Note that the different methods have different parameters and corresponding
levels of fine tuning.
2. Alpha: (input parameter for the "goldstein" method only). This parameter can be understood as a
"smoothness coefficient" of the filter, defining the effective level of filtering. The value for alpha must be in
the range [0, 1]: the value 0 means no filtering, while 1 results in the strongest filtering. The Alpha parameter
is connected to, and indirectly influenced by, the input parameters for the Filtering Kernel: stronger smoothing
gives a relative decrease of the peak, and thus of the effect of alpha.
3. Blocksize: (input parameter for the "goldstein" method only). It defines the size of the blocks that are
filtered. The parameter must be a power of 2. The value for the block size should be large enough that the
spectrum can be estimated, and small enough that it contains one peak frequency (one trend in phase). The
recommended value for the block size is 32 pixels.
4. Overlap: (input parameter for the "goldstein" method only). The overlap value defines half of the size of the
overlap between consecutive filtering blocks and tiles, so that partially the same data is used for filtering. The
total overlap should be smaller than the Blocksize value. If the parameter is set to Blocksize/2 - 1 (the
maximum value for this parameter), then each output pixel is filtered based on the spectrum centered around
it. Note that this is probably the optimal way of filtering, but it may well be the most time consuming one.
5. Filtering Kernel: (input parameter for the "goldstein" and "spatialconv" methods only). It defines the one-
dimensional kernel function used to perform the convolution. A number of pre-defined kernels are offered;
future releases will allow users to define their own 1D filtering kernels. For the "goldstein" method, the
default kernel is [1 2 3 2 1]; this kernel is used to smooth the amplitude of the spectrum of the complex
interferogram, and the spectrum is later scaled by the smoothed spectrum raised to the power alpha. For the
"spatialconv" method, the default is a 3-point moving average [1 1 1] convolution; the real and imaginary
parts are averaged separately in this way. For more information see the implementation section.
Source bands:
Output bands:
1. Spatial Convolution Method: The input complex interferogram is convolved with a 2D kernel via FFTs. The
2D kernel is computed from the 1D kernel defined as an input parameter of the operator. The block size for the
convolution is chosen as large as possible. In future releases, it will also be possible to load a 2D kernel from an
external file. Note that only odd-sized kernels can be used, so if you want to use an even-sized kernel, simply
add a zero to make the kernel size odd.
2. Goldstein Method:
The algorithm is implemented as:
❍ Read a data tile (T);
❍ Get a data block (B) from input tile;
❍ B = fft2d(B) (obtain complex spectrum);
❍ A = abs(B) (compute magnitude of spectrum);
❍ S = smooth(A) (perform convolution with kernel);
❍ S = S/max(S) (scale S between 0 and 1);
❍ B = B.S^alpha (weight complex spectrum);
❍ B = ifft2d(B) (result in space domain);
❍ If all blocks of tile done, write to disk.
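The weighting steps above, without the FFT/IFFT stages, can be sketched as follows. The block's complex spectrum is taken as given in re[][]/im[][]; the separable row/column application of the 1D smoothing kernel and the array handling are assumptions of this sketch, not the literal NEST implementation:

```java
// Sketch of the Goldstein per-block spectral weighting:
// A = abs(B); S = smooth(A); S = S/max(S); B = B * S^alpha.
public class GoldsteinWeight {

    // 1D convolution of every row with kernel k (zero padding at the borders).
    static double[][] convolveRows(double[][] a, double[] k) {
        int n = a.length, m = a[0].length, half = k.length / 2;
        double[][] out = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                double sum = 0;
                for (int q = 0; q < k.length; q++) {
                    int jj = j + q - half;
                    if (jj >= 0 && jj < m) sum += k[q] * a[i][jj];
                }
                out[i][j] = sum;
            }
        return out;
    }

    static double[][] transpose(double[][] a) {
        double[][] t = new double[a[0].length][a.length];
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[0].length; j++)
                t[j][i] = a[i][j];
        return t;
    }

    // S = smooth(A): separable row and column convolution with the 1D kernel.
    static double[][] smooth(double[][] a, double[] k) {
        return transpose(convolveRows(transpose(convolveRows(a, k)), k));
    }

    // Weights the complex spectrum in place.
    static void weightSpectrum(double[][] re, double[][] im, double[] k, double alpha) {
        int n = re.length, m = re[0].length;
        double[][] mag = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                mag[i][j] = Math.hypot(re[i][j], im[i][j]);   // A = abs(B)
        double[][] s = smooth(mag, k);                        // S = smooth(A)
        double max = 0;
        for (double[] row : s)
            for (double v : row) max = Math.max(max, v);
        if (max <= 0) return;                                 // all-zero block: nothing to weight
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                double w = Math.pow(s[i][j] / max, alpha);    // (S/max)^alpha
                re[i][j] *= w;
                im[i][j] *= w;
            }
    }
}
```

With alpha = 0 the weights are all 1 and the spectrum is unchanged, matching the "no filtering" behaviour of the Alpha parameter described above.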
Phase to Height conversion (InSAR operator)
Operator parameters:
1. Number of estimation points: The number of locations to compute the reference phase
at different altitudes.
2. Number of height samples: Number of height samples in range [0,5000) at which the
reference phase will be estimated.
3. Degree of 1D polynomial: Degree of the one-dimensional polynomial to "fit" the
reference phase through.
4. Degree of 2D polynomial: Degree of the two-dimensional polynomial to fit the
reference phase through.
5. Orbit interpolation degree: Defaults to a polynomial of degree (number of state
vectors)-1, but smaller than degree 5.
Source Products:
Output bands:
The height estimated from the unwrapped interferogram. The heights are stored in meters;
a height of 0.0 indicates a problem with the unwrapping.
Three-Pass DInSAR (InSAR operator)
Operator parameters:
This operator requires almost no parameters; all the necessary processing information is
constructed from the product metadata. The only input parameter is the control flag for the
degree of the orbit interpolator.
Source Products:
Output bands:
The output bands are a stack of differential interferograms. The amplitude is the same as that of
the original 'deformation' interferogram. A complex value of (0,0) indicates that unwrapping
was not performed correctly for that pixel.
Phase Unwrapping in NEST
Introduction
The principal observation in radar interferometry is the two-dimensional relative phase
signal, which is the 2pi-modulus of the (unknown) absolute phase signal. The forward
problem, the wrapping of the absolute phase to the [-pi,pi) interval, is straightforward and
trivial. The inverse problem, the so-called phase unwrapping, is, due to its inherent non-
uniqueness and non-linearity, one of the main difficulties and challenges in the
application of radar interferometry.
There are many proposed techniques to deal with the phase unwrapping problem. The
variable phase noise, as well as the geometric problems, i.e., foreshortening and layover,
are the main reasons why many of the proposed techniques do not perform as desired.
Furthermore, none of the given phase unwrapping techniques gives a unique solution,
and without additional a-priori information, or strong assumptions on the data behaviour, it
is impossible to assess the reliability of the solution.
Uwrap
First, an independent unwrapping of tiles is performed using the Uwrap operator;
importantly, the results need to be saved.
Stitch
As a second step, independently unwrapped tiles are integrated using the Stitch
operator. This operator stitches the unwrapped phases of all tiles to form a complete
smooth image of unwrapped phase.
SNAPHU export
This graph exports NEST InSAR data for processing with SNAPHU, builds the SNAPHU
configuration file, and creates a "phase" product. The phase product serves as a
container and interface with SNAPHU: the wrapped phase is saved in it, together with
the corresponding metadata.
SNAPHU import
This step ingests the unwrapped data into the previously created "phase unwrapping"
container product. On importing the unwrapped phase, either the existing wrapped phase
data in the phase product is replaced with the unwrapped phase, while preserving the
metadata, or the phase product is extended with an additional unwrapped phase band.
Further information
Integrated unwrapper
Implementation Reference:
[1] Costantini, M. (1998) A novel phase unwrapping method based on network
programming. IEEE Tran. on Geoscience and Remote Sensing, 36, 813-821.
SNAPHU
Phase unwrapping: For a general reference on phase unwrapping, see the book by Ghiglia and
Pritt, Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software.
Building and running SNAPHU: A good starting point for further information on the
SNAPHU software and algorithms is the project web page. SNAPHU is software developed
for the UNIX environment, and as such, building it on Linux and MacOS systems is
straightforward. On Microsoft Windows operating systems, SNAPHU can be built and
executed in any of the Unix-like environments and command-line interfaces for Windows
(Cygwin, MinGW, etc.)
Snaphu Data Export
Important note
Before executing the graph that exports NEST data for SNAPHU processing, the user is
strongly advised to become familiar with the general principles of phase unwrapping in
NEST. The export graph serves three purposes:
1. To export NEST data (bands) in the format compatible for SNAPHU processing,
2. To build a SNAPHU configuration file (snaphu.conf), the file where processing parameters for
SNAPHU are being stored,
3. To construct a container NEST product that will store metadata and bands to be used when
SNAPHU results are being ingested back into NEST.
1-Read:
Reads the interferometric product.
3-BandSelect:
Selects the bands that are to be stored in the "phase" product. It is recommended that only
the phase band is selected.
5-Write:
Writes "phase product" in the standard NEST/BEAM DIMAP format.
SNAPHU product part of the export graph
The branches grouped by the red box in the figure below perform the following:
1-Read:
Reads the interferometric product.
2-Read:
Reads the coherence product.
4-SnaphuExport:
Selects the bands that are to be stored in the SNAPHU product; the required bands are phase
and coherence. In this step the processing parameters for SNAPHU are also defined. For more
details about the SNAPHU processing parameters, please refer to the SNAPHU manual.
6-Write:
Writes the SNAPHU product, using the SNAPHU writer. Note that the output format is
predefined as "Snaphu".
External processing with SNAPHU
Given that the SNAPHU software is properly installed and configured, unwrapping of an
exported NEST product is quite straightforward. In the directory where the SNAPHU product
has been saved, execute the following command:
snaphu -f snaphu.conf YOUR_PHASE_BAND.img 99999
where YOUR_PHASE_BAND.img stands for the name of the phase band that is to be unwrapped, and
99999 represents the number of lines of YOUR_PHASE_BAND. Note that the exact command to be
called externally for the phase unwrapping is listed in the header of the snaphu.conf file
created by the SNAPHU writer.
Again, it is strongly recommended that the user becomes familiar with the software and its
process control flags before doing any processing with SNAPHU.
Snaphu Data Import
Important note
Before executing the graph that imports SNAPHU results back into NEST, the user is strongly
advised to become familiar with the general principles of phase unwrapping in NEST.
1-Read-Phase:
Reads the "phase-only" interferometric product constructed during Snaphu Data Export step.
2-Read-Unwrapper-Phase:
Reads the "unwrapped-phase-only" product ingested into NEST using the Generic Binary reader.
Note that, due to restrictions of the framework, it is currently not possible to chain the generic
binary reader into the graph, and hence it is not possible to ingest unwrapped data directly into
NEST. This has to be done outside the Snaphu Import graph.
3-SnaphuImport:
Arranges the metadata and merges the bands of the source products into an unwrapped phase
product. In this step, metadata and bands are arranged in a form compatible with further NEST
InSAR processing.
4-Write:
Writes "unwrapped phase product" in the standard NEST/BEAM DIMAP format.
InSAR Stack Overview
The stack coherence for a stack with master image m is defined as:
where B represents the perpendicular baseline between images m and k at the center of the
image, T the temporal baseline, and fDC the Doppler baseline (the mean Doppler centroid
frequency difference). The divisor c in the second equation can be regarded as a critical
baseline at which total de-correlation is expected for targets with a distributed scattering
mechanism.
The values given in the first equation are typical for ERS and Envisat.
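Since the coherence equations themselves are not reproduced in the text, the following is only an illustrative sketch of a modeled stack coherence. It assumes the commonly used multiplicative model of linear decorrelation terms (perpendicular baseline, temporal baseline, Doppler centroid difference), each dropping to zero at its critical value; the multiplicative form and the example critical values are assumptions, not the literal NEST equations:

```java
// Hypothetical modeled stack coherence: product of three linear
// decorrelation terms, each clamped to zero beyond its critical value.
public class StackCoherence {

    // Single decorrelation term: 1 - |value|/critical, clamped at 0.
    static double term(double value, double critical) {
        return Math.max(0.0, 1.0 - Math.abs(value) / critical);
    }

    static double modelledCoherence(double bPerp, double bPerpCritical,
                                    double tDays, double tCriticalDays,
                                    double fDc, double fDcCritical) {
        return term(bPerp, bPerpCritical)
             * term(tDays, tCriticalDays)
             * term(fDc, fDcCritical);
    }
}
```

Under this model, identical acquisition geometry and timing give coherence 1, and reaching any critical value gives total decorrelation.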
Known issues:
The model used in the computation of coherence will severely underestimate the coherence in
ERS-2 / Envisat Cross-Interferometry applications. For this, and similar applications, a more
robust model that integrates the principles of the wave-number shift should be used.
Object Detection
Object Detection
The operator detects objects, such as ships, on the sea surface from SAR imagery. The detection consists of four steps:
1. Pre-processing: Calibration is applied to source image to make further pre-screening easier and more accurate.
2. Land-sea masking: A land-sea mask is generated to ensure that detection is focused only on the area of interest.
3. Pre-screening: Objects are detected with a Constant False Alarm Rate (CFAR) detector.
4. Discrimination: False alarms are rejected based on object dimension.
For details of calibration, the reader is referred to the Calibration operator. Here it is assumed that the
calibration pre-processing step has been performed before applying object detection.
For details of land-sea mask generation, the reader is referred to the Create Land Mask operator.
Let f(x) be the ocean clutter probability density function, where x ranges through the possible pixel
values; the probability of false alarm (PFA) is then given by
If a Gaussian distribution is assumed for the ocean clutter, the above detection criterion can be further expressed as
where μb is the background mean, σb is the background standard deviation and t is a detector design
parameter which is computed from the PFA by the following equation
When the target window contains more than one pixel, this operator uses the following detection criterion
where μt is the mean value of the pixels in the target window. In this case, t should be replaced by t√n (where n is
the number of pixels in the target window) in the PFA calculation.
Adaptive Threshold Algorithm
The object detection is performed in an adaptive manner by the Adaptive Thresholding operator. For each pixel
under test, there are three windows, namely target window, guard window and background window, surrounding
it (see Figure 1).
Normally the target window size should be about the size of the smallest object to detect, the guard window
size should be about the size of the largest object, and the background window size should be large enough
to estimate accurately the local statistics.
The operator
● first computes the detector design parameter t from the user-selected PFA using the equation above,
● then computes the background mean μb and standard deviation σb using the pixels in the background ring,
● and finally, if μt > μb + σb*t, detects the center pixel as part of an object; otherwise it is not an object.
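The steps above can be sketched as follows, assuming Gaussian clutter so that PFA = Q(t), the standard normal tail probability. The erfc approximation (Abramowitz and Stegun 7.1.26) and the bisection inversion are implementation choices of this sketch, not necessarily what the operator uses internally:

```java
// Sketch of the CFAR decision: derive t from the PFA, then test
// mu_t > mu_b + sigma_b * t for the pixel under test.
public class CfarDetector {

    // Standard normal tail probability Q(t) = 0.5 * erfc(t / sqrt(2)),
    // using the Abramowitz & Stegun 7.1.26 erfc approximation.
    static double qFunction(double t) {
        double x = t / Math.sqrt(2.0);
        double z = Math.abs(x);
        double s = 1.0 / (1.0 + 0.3275911 * z);
        double erfc = s * (0.254829592 + s * (-0.284496736 + s * (1.421413741
                    + s * (-1.453152027 + s * 1.061405429)))) * Math.exp(-z * z);
        return x >= 0 ? 0.5 * erfc : 1.0 - 0.5 * erfc;
    }

    // Invert Q by bisection: find t such that Q(t) = pfa.
    static double tFromPfa(double pfa) {
        double lo = 0.0, hi = 10.0;
        for (int i = 0; i < 200; i++) {
            double mid = 0.5 * (lo + hi);
            if (qFunction(mid) > pfa) lo = mid; else hi = mid;
        }
        return 0.5 * (lo + hi);
    }

    // The per-pixel detection criterion from the text.
    static boolean isTarget(double muT, double muB, double sigmaB, double t) {
        return muT > muB + sigmaB * t;
    }
}
```

For example, a PFA of 10^(-6) corresponds to a design parameter t of roughly 4.75 under the Gaussian assumption.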
Discrimination
The discrimination operation is conducted by the Object Discrimination operator. During this
operation, false detections are eliminated based on simple target measurements.
1. The operator first clusters contiguous detected pixels into a single cluster.
2. Then the width and length information of the clusters are extracted.
3. Finally based on these measurements and user input discrimination criteria, clusters that are too big or too small
are eliminated.
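The clustering in step 1 can be sketched with a simple flood fill; for illustration, the size test of step 3 is applied to the cluster pixel count rather than the width and length measurements the operator actually uses:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Sketch of the discrimination step: cluster contiguous detected pixels,
// then keep only clusters within a size range (sizes here in pixels).
public class ObjectDiscrimination {

    // Returns the sizes of 4-connected clusters of 'true' pixels.
    static List<Integer> clusterSizes(boolean[][] detected) {
        int n = detected.length, m = detected[0].length;
        boolean[][] seen = new boolean[n][m];
        List<Integer> sizes = new ArrayList<>();
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                if (!detected[i][j] || seen[i][j]) continue;
                int size = 0;
                ArrayDeque<int[]> stack = new ArrayDeque<>();
                stack.push(new int[]{i, j});
                seen[i][j] = true;
                while (!stack.isEmpty()) {           // flood fill one cluster
                    int[] p = stack.pop();
                    size++;
                    int[][] nb = {{p[0]-1,p[1]},{p[0]+1,p[1]},{p[0],p[1]-1},{p[0],p[1]+1}};
                    for (int[] q : nb)
                        if (q[0] >= 0 && q[0] < n && q[1] >= 0 && q[1] < m
                                && detected[q[0]][q[1]] && !seen[q[0]][q[1]]) {
                            seen[q[0]][q[1]] = true;
                            stack.push(q);
                        }
                }
                sizes.add(size);
            }
        return sizes;
    }

    // Count clusters surviving the min/max discrimination criteria.
    static long countKept(boolean[][] detected, int minSize, int maxSize) {
        return clusterSizes(detected).stream()
                .filter(s -> s >= minSize && s <= maxSize).count();
    }
}
```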
Parameters Used
For Adaptive Thresholding operator, the following parameters are used (see Figure 2):
1. Target Window Size (m): The target window size in meters. It should be set to the size of the smallest target
to detect.
2. Guard Window Size (m): The guard window size in meters. It should be set to the size of the largest target to detect.
3. Background Window Size (m): The background window size in meters. It should be far larger than the guard
window size to ensure accurate calculation of the background statistics.
4. PFA (10^(-x)): Here the user enters a positive number for the parameter x, and the PFA value is computed
as 10^(-x). For example, if the user enters x = 6, then PFA = 10^(-6) = 0.000001.
Figure 2. Adaptive Thresholding Operator dialog box.
For Object Discrimination operator, the following parameters are used (see Figure 3):
1. Minimum Target Size (m): Targets with dimensions smaller than this threshold are eliminated.
2. Maximum Target Size (m): Targets with dimensions larger than this threshold are eliminated.
Figure 3. Object Discrimination Operator dialog box.
The detected object will be circled on top of the image view (see example in the figure below). An Object
Detection Report will also be produced in XML in the .nest/log folder.
Figure 4. Object Detection Results overlayed on the image.
Reference:
[1] D. J. Crisp, "The State-of-the-Art in Ship Detection in Synthetic Aperture Radar Imagery." DSTO–RR–0272,
2004-05.
Oil Spill Detection
1. Pre-processing: Calibration and speckle filtering are applied to source image in this step.
2. Land-sea masking: Land-sea mask is created in this step to ensure that detection is focused
only on area of interest.
3. Dark spot detection: Dark spots are detected in this step with an adaptive thresholding
method.
4. Clustering and discrimination: Pixels detected as part of the dark spot are clustered and
then eliminated based on the dimension of the cluster and user selected minimum cluster size.
For details of calibration and speckle filtering operations, the readers are referred to the Calibration
operator and the Speckle Filter operator. Here it is assumed that the calibration and speckle filtering
have been performed before applying the oil spill detection operator.
For details of land-sea mask generation, the readers are referred to the Create Land Mask
operator.
1. First the local mean backscatter level is estimated using the pixels in a large window.
2. Then the detection threshold is set k decibels below the estimated local mean backscatter level.
Pixels within the window with values lower than the threshold are flagged as part of a dark spot.
k is a user-selected parameter (see parameter Threshold Shift below).
3. The window is shifted to the next position and steps 1 and 2 are repeated.
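For a single window position, the dark spot test above can be sketched as follows. It assumes the input pixels are linear intensity values that are converted to decibels for thresholding, which is a simplification of this sketch:

```java
// Sketch of the dark spot test for one window: estimate the local mean
// backscatter in dB, set the threshold k dB below it, and flag pixels
// falling under the threshold.
public class DarkSpotDetection {

    static boolean[] detect(double[] windowIntensity, double kDb) {
        double mean = 0;
        for (double v : windowIntensity) mean += v;
        mean /= windowIntensity.length;
        double meanDb = 10.0 * Math.log10(mean);      // local mean backscatter (dB)
        double thresholdDb = meanDb - kDb;            // k dB below the local mean
        boolean[] dark = new boolean[windowIntensity.length];
        for (int i = 0; i < dark.length; i++)
            dark[i] = 10.0 * Math.log10(windowIntensity[i]) < thresholdDb;
        return dark;
    }
}
```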
Discrimination
1. First the contiguous detected pixels are clustered into a single cluster.
2. Then clusters with their sizes smaller than user selected Minimum Cluster Size are eliminated.
Parameters Used
For dark spot detection, the following parameters are used (see figure 1):
1. Source Bands: All bands (real or virtual) of the source product. The user can select one or more
bands for processing. If no bands are selected, then by default all bands
are selected.
2. Background Window Size: The window size in pixels for computing local mean backscatter
level.
3. Threshold Shift (dB): The detecting threshold is lower than the local mean backscatter level by
this amount.
1. Minimum Cluster Size: The minimum cluster size in square kilometers. Clusters with sizes smaller
than this threshold are eliminated.
Reference:
[1] A. S. Solberg, C. Brekke and R. Solberg, "Algorithms for oil spill detection in Radarsat and
ENVISAT SAR images", Geoscience and Remote Sensing Symposium, 2004. IGARSS '04.
Proceedings. 2004 IEEE International, 20-24 Sept. 2004, page 4909-4912, vol.7.
Create Land Mask
1. Source Band: All bands (real or virtual) of the source product. User can select one or
more bands.
2. Mask the Land: Checkbox indicating that land pixels will become nodata value.
3. Mask the Sea: Checkbox indicating that sea pixels will become nodata value.
4. Use Geometry as Mask: Select a geometry or ROI from the product to use as the mask.
Anything outside the area will be nodata value.
5. Invert Geometry: Anything inside the ROI or geometry will be nodata value.
6. Bypass: Skip any land masking.
Wind Field Estimation
1. First a land-sea mask is generated to ensure that the estimation is focused only on the sea surface
area.
2. Then the SAR image is divided into a grid using a user-specified window size.
3. For each grid cell, a wind direction (with a 180° ambiguity) is estimated from features in the SAR image
using a frequency domain method.
4. With the wind direction estimated for the grid cell, the wind speed is finally estimated from the
Normalized Radar Cross Section (NRCS) using the CMOD5 model.
For details of land-sea mask generation, the reader is referred to the Create Land Mask operator.
1. For each window within which a wind direction will be estimated, a local FFT size is determined. The
FFT size is 2/3 of the window size, so that four spectra can be computed in the window, with each
spectral region having a 50% overlap with the neighbouring spectrum.
2. Each window is flattened by applying a large average filter, then dividing by the filtered image.
3. The FFT’s are applied and the four resulting spectra are averaged.
4. An annulus is applied to the spectrum to zero out any energy outside of a wavenumber region. The
limits of the annulus are set to wave lengths of 3 km to 15 km.
5. A 3x3 median filter is then applied to the spectrum to remove noise.
6. A 2D polynomial is fit to the resulting spectral samples, and the direction through the origin which
has the largest quadratic term (i.e. the widest extent) is determined. The wind direction is then
assumed to be 90 degrees from this direction.
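The flattening in step 2 can be illustrated in one dimension: divide the window by a large moving average of itself, so that the broad backscatter trend is removed and only the finer wind-streak modulation remains. The 1D formulation and the edge handling are simplifications of this sketch:

```java
// Sketch of step 2 (window flattening): apply a large average filter,
// then divide the original data by the filtered data.
public class WindowFlatten {

    static double[] flatten(double[] row, int filterSize) {
        int half = filterSize / 2, n = row.length;
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0;
            int count = 0;
            for (int j = Math.max(0, i - half); j <= Math.min(n - 1, i + half); j++) {
                sum += row[j];
                count++;
            }
            double local = sum / count;                   // large moving average
            out[i] = local != 0 ? row[i] / local : 0.0;   // flattened value, ~1 on average
        }
        return out;
    }
}
```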
● The wind speed is estimated using the CMOD5 model for NRCS developed by Hersbach et al.
[1] for VV-polarized C-band scatterometry.
● For ENVISAT HH-polarized products, where the CMOD5 model is not directly applicable, the operator
first converts the NRCS at HH polarization into a corresponding NRCS for VV polarization with the
following equation, then applies the CMOD5 model to the converted NRCS:
where θ is the incidence angle and α is set to 1.
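The following sketch assumes the commonly used polarization-ratio form sigma0_VV = sigma0_HH * (1 + 2 tan²θ)² / (1 + α tan²θ)² with α = 1, consistent with the α mentioned in the text; the exact equation used by the operator should be verified against the implementation:

```java
// Hypothetical HH-to-VV NRCS conversion using an assumed polarization
// ratio of the form (1 + 2 tan^2 theta)^2 / (1 + alpha tan^2 theta)^2.
public class HhToVv {

    static double toVv(double sigma0Hh, double incidenceAngleDeg, double alpha) {
        double t2 = Math.pow(Math.tan(Math.toRadians(incidenceAngleDeg)), 2);
        double ratio = Math.pow(1.0 + 2.0 * t2, 2) / Math.pow(1.0 + alpha * t2, 2);
        return sigma0Hh * ratio;   // VV backscatter is >= HH over the ocean
    }
}
```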
For details of the CMOD5 model, the readers are referred to [1].
Products Supported
● The operator currently supports only ERS and ENVISAT (VV- and HH-polarized) products. The
source product is assumed to have been calibrated before the operator is applied.
Parameters Used
The following parameters are used by the operator:
1. Source Bands: All bands (real or virtual) of the source product. The user can select one or more bands
for wind field estimation. If no bands are selected, then by default all bands are selected.
2. Window Size: The dimension of a window for which wind direction and speed are estimated.
Figure 1. Wind Field Estimation dialog box
The wind directions will then be displayed as shown in the example below. Note that the wind direction is
indicated by double-headed arrows because a 180° ambiguity exists in the estimated wind direction. Also,
for those grid cells in which land pixels are found, the wind directions are not estimated and hence not
displayed.
Reference:
[1] H. Hersbach, "CMOD5: An Improved Geophysical Model Function for ERS C-Band Scatterometry",
Report of the European Centre for Medium-Range Weather Forecasts (ECMWF), 2003.
[2] C. C. Wackerman, W. G. Pichel, P. Clemente-Colon, “Automated Estimation of Wind Vectors from
SAR”, 12th Conference on Interactions of the Sea and Atmosphere, 2003.
General Design
NEST Architecture
NEST consists of a collection of processing modules and data product readers and writers.
All modules are centered on the Generic Product Model (GPM) of the BEAM-NEST Core. The GPM is a
common, unified data model designed so that all data readers convert data into this data model and all analysis
and processing tools use this data model exclusively.
Data product reader modules ingest all the data of a particular product, including metadata and transform it into
the GPM data structure. The GPM is abstract enough to handle all types of data products without losing
any information. The NEST tools have only one interface to the GPM in order to work with the data. Data
product writers are able to take the data from the GPM and produce an external file format. With a GPM,
file conversions from one file format to another are easily achieved with the appropriate reader and writer
modules. Furthermore, the DAT, tools and future plug-ins are not dependent on which data products are supported
or any specific complexities of the file formats.
NEST’s primary goal is to read ESA and third-party SAR data products, provide tools for
calibration, orthorectification, co-registration, interferometry, and data manipulation, and then convert the data
into common file formats used by other 3rd-party SAR software.
Built on Beam
The NEST architecture is built upon the proven and extensible architecture of BEAM. BEAM features an
Application Programming Interface (API) that was designed and developed from the beginning to be used by
3rd parties to extend BEAM or re-use it in their own applications.
BEAM is an application suite which facilitates the utilization, viewing and processing of MERIS, AATSR and ASAR
data products of the ESA ENVISAT environmental satellite. It provides a multitude of tools and functions to
visualize, analyze, manipulate and modify earth observation data.
Although BEAM has been intended specifically for optical data, the BEAM architecture has been designed
and developed to facilitate the extension and re-use of its code base such that it is feasible to use BEAM’s
core framework as the building blocks to develop a Synthetic Aperture Radar (SAR) toolbox.
NEST and BEAM share a common core that enables the exchange of modules between the two toolboxes.
This common core is maintained cooperatively by both Array Systems Computing Inc., the developers of NEST
and Brockmann Consult, the developers of BEAM.
The majority of the NEST functionality is encapsulated in BEAM plug-in modules. As such, some modules
are interchangeable between the two systems.
Detailed Design
BEAM-NEST Core
The common BEAM-NEST Core architecture consists of various plug-in modules which are independently
developed, modified and versioned. The core framework is made up of the beam-core, beam-gpf, beam-
processing, beam-ui and beam-visat modules.
The core modules make use of ceres-core, ceres-ui and ceres-binding. These modules package utility
classes for module registration, versioning, application building and swing user interface helper functions.
The beam-core contains most of the managers and data model including the data IO for the product
readers and format writers. The beam-gpf is the Graph Processing Framework (GPF), which implements
the new processing framework introduced in version 4 of the software. Processing tasks are
implemented as Operators. The GPF provides a way to execute a chain of sequential processing steps on
an image. The beam-processing is the old version 3 processing framework, which will not be used by
NEST directly but may still have dependencies for some BEAM operators. The beam-ui provides the user
interface framework for creating applications, windows and dialogs. Beam-visat is the primary
application and user interface to the tools in BEAM. VISAT supports extensions for views for displaying
data, actions to add menu items and toolbar buttons to trigger user initiated events.
For more information, please refer to the Detailed NEST API JavaDoc.
Open Source NEST
Open source makes software inherently independent of specific vendors, programmers and
suppliers. The software can be freely distributed and shared by large communities and
includes the source code and the right to modify it. This not only ensures that there isn’t a
single entity on which the future of the software depends, but also allows for unlimited
improvements and tuning of the quality and functionality of the software.
By making NEST open source, future evolution and growth of the toolbox will be possible by
the community of users and developers that contribute to the project.
Building NEST
1. Build IDEA project files for NEST: Type mvn compile idea:idea
2. In IDEA, go to the IDEA Main Menu/File/Open Project and open the created project
file $MY_PROJECTS/nest/nest.ipr
Eclipse users:
1. Build Eclipse project files for NEST: Type mvn compile eclipse:eclipse
2. From Eclipse, click on Main Menu/File/Import
3. Select General/Existing Project into Workspace
4. Select Root Directory $MY_PROJECTS/nest
5. Set the M2_REPO classpath variable:
Note: In Eclipse, some purposely malformed XML in a unit test may prevent you from
building the source.
Simply delete the files module-no-xml.xml and module-malformed-xml.xml in
$MY_PROJECTS/nest/beam/ceres-0.x/ceres-core/src/test/resources/com/bc/ceres/core/runtime/internal/xml
For both IDEA and Eclipse, use the following configuration to run DAT as an Application:
Readers
To create a reader plugin implement the ProductReaderPlugIn interface.
public interface ProductReaderPlugIn {

    /**
     * Gets the qualification of the product reader to decode a given input object.
     *
     * @param input the input object
     * @return the decode qualification
     */
    DecodeQualification getDecodeQualification(Object input);

    /**
     * Returns an array containing the classes that represent valid input types for this reader.
     * <p/>
     * <p> Instances of the classes returned in this array are valid objects for the
     * <code>setInput</code> method of the <code>ProductReader</code> interface
     * (the method will not throw an <code>InvalidArgumentException</code> in this case).
     *
     * @return an array containing valid input types, never <code>null</code>
     */
    Class[] getInputTypes();

    /**
     * Creates an instance of the actual product reader class. This method should never
     * return <code>null</code>.
     *
     * @return a new reader instance, never <code>null</code>
     */
    ProductReader createReaderInstance();
}
The reader plugin should create a new instance of your reader in createReaderInstance().
In readBandRasterDataImpl() fill the destination buffer with band data for the requested
rectangular area.
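The plugin pattern can be illustrated with a self-contained sketch that uses simplified stand-ins for the BEAM types (the real ProductReaderPlugIn lives in org.esa.beam.framework.dataio and declares further methods such as format names and file extensions); the ".r2" extension check here is purely hypothetical:

```java
// Self-contained sketch of the reader plugin pattern, with reduced
// stand-ins for the BEAM types.
public class ReaderPluginSketch {

    enum DecodeQualification { INTENDED, UNABLE }   // stand-in for BEAM's enum

    interface ProductReaderPlugIn {                 // reduced stand-in interface
        DecodeQualification getDecodeQualification(Object input);
        Class[] getInputTypes();
    }

    static class MyFormatReaderPlugIn implements ProductReaderPlugIn {

        // Decide from the input whether this reader can decode it
        // (here: a path ending in a hypothetical ".r2" extension).
        public DecodeQualification getDecodeQualification(Object input) {
            return input != null && input.toString().toLowerCase().endsWith(".r2")
                    ? DecodeQualification.INTENDED
                    : DecodeQualification.UNABLE;
        }

        // Valid input types for setInput() on the corresponding reader.
        public Class[] getInputTypes() {
            return new Class[]{String.class, java.io.File.class};
        }
    }
}
```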
Maven DataIO Archetype
The Maven 2 Archetype Plugin for NEST data I/O modules is used to create archetypes for
NEST data I/O modules.
A Maven Archetype is a template toolkit for generating a new module package. By using the
Maven Archetype you can create a module structure easily and get started adding your code
to the module.
A DataIO Archetype will generate a product reader and writer within the same package.
Before beginning, make sure that you have built the NEST source code and do a maven
install to ensure that all dependencies are in the repository.
From the command line, type the following from the NEST source code root folder:
mvn archetype:create
-DarchetypeGroupId=org.esa.nest.maven
-DarchetypeArtifactId=maven-nest-dataio-archetype
-DarchetypeVersion=1.0
-DgroupId=myGroupId
-DartifactId=myArtifactId
-Dversion=myVersion
-DpackageName=myPackageName
where
Example
Publishing a Reader
Reader implementations are published via the Java service provider interface (SPI). A JAR
publishes its readers in the resource file
META-INF/services/org.esa.beam.framework.dataio.ProductReaderPlugIn. In this file, add your
reader SPI, e.g.: org.esa.nest.dataio.radarsat2.Radarsat2ProductReaderPlugIn
<action>
<id>importRadarsat2Product</id>
<class>org.esa.beam.visat.actions.ProductImportAction</class>
<formatName>Radarsat 2</formatName>
<shortDescr>Import a Radarsat2 data product or product subset.</shortDescr>
<description>Import a Radarsat2 data product or product subset.</description>
<largeIcon>icons/Import24.gif</largeIcon>
<placeAfter>importRadarsatProduct</placeAfter>
<helpId>importRadarsat2Product</helpId>
</action>
Developing A Writer
Writers
To create a writer plugin implement the ProductWriterPlugIn interface.
public interface ProductWriterPlugIn {

    /**
     * Returns an array containing the classes that represent valid output types for this writer.
     * <p/>
     * <p> Instances of the classes returned in this array are valid objects for the
     * <code>setOutput</code> method of the <code>ProductWriter</code> interface
     * (the method will not throw an <code>InvalidArgumentException</code> in this case).
     *
     * @return an array containing valid output types, never <code>null</code>
     *
     * @see ProductWriter#writeProductNodes
     */
    Class[] getOutputTypes();

    /**
     * Creates an instance of the actual product writer class. This method should never
     * return <code>null</code>.
     *
     * @return a new writer instance, never <code>null</code>
     */
    ProductWriter createWriterInstance();
}
The writer plugin should create a new instance of your writer in createWriterInstance().
writeBandRasterData() writes raster data from the given in-memory source buffer into the
data sink specified by the given source band and region.
A Maven Archetype is a template toolkit for generating a new module package. By using the
Maven Archetype you can create a module structure easily and get started adding your code
to the module.
A DataIO Archetype will generate a product reader and writer within the same package.
Before beginning, make sure that you have built the NEST source code and do a maven
install to ensure that all dependencies are in the repository.
From the command line, type the following from the NEST source code root folder:
mvn archetype:create
-DarchetypeGroupId=org.esa.nest.maven
-DarchetypeArtifactId=maven-nest-dataio-archetype
-DarchetypeVersion=1.0
-DgroupId=myGroupId
-DartifactId=myArtifactId
-Dversion=myVersion
-DpackageName=myPackageName
where
Example
Publishing a Writer
Writer implementations are published via the Java service provider interface (SPI). A JAR
publishes its writers in the resource file
META-INF/services/org.esa.beam.framework.dataio.ProductWriterPlugIn. In this file, add your
writer SPI, e.g.: org.esa.beam.dataio.geotiff.GeoTiffProductWriterPlugIn
Adding Menu Item Actions
In the modules.xml file found in the resources folder of the package, add an Action to create a
menu item in the DAT. State the class of the Action to be called and the text to show in the
menu item.
<action>
<id>exportGeoTIFFProduct</id>
<class>org.esa.beam.dataio.geotiff.GeoTiffExportAction</class>
<formatName>GeoTIFF</formatName>
<useAllFileFilter>true</useAllFileFilter>
<mnemonic>O</mnemonic>
<text>Export GeoTIFF Product...</text>
<shortDescr>Export a GeoTIFF data product or subset.</shortDescr>
<description>Export a GeoTIFF data product or product subset.</description>
<helpId>exportGeoTIFFProduct</helpId>
</action>
Developing An Operator
Operators
The Operator interface is simple to extend. An Operator basically takes a source product as
input and creates a new target product within initialize(). The algorithm implementation for
what your operator does goes inside computeTile() or computeTileStack(). Operators work on
the data tile by tile. The size of a tile may depend on the requests of other
Operators in the graph.
public interface Operator {
OperatorSpi getSpi();
Product initialize(OperatorContext context);
void computeTile(Tile targetTile, ProgressMonitor pm);
void computeTileStack(Rectangle targetTileRectangle, ProgressMonitor pm);
void dispose();
}
The computeTile and computeTileStack methods express different application
requirements. Clients may implement computeTile, computeTileStack, or both. In
general, the algorithm dictates which of the methods will be implemented. Some algorithms
can compute their output bands independently (band arithmetic, radiance-to-reflectance
conversion); others cannot.
The GPF selects the method which best fits the application requirements:
● In order to display an image of a band, the GPF is asked to compute tiles of single
bands. The GPF therefore will prefer calling the computeTile method, if implemented.
Otherwise it has to call computeTileStack, which might not be the best choice in this
case.
● In order to process in batch-mode or to save a product to disk, the GPF is asked to
compute the tiles of all bands of a product. The GPF therefore will prefer calling the
computeTileStack method, if implemented. Otherwise it will consecutively call
computeTile for each output band.
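The tile-by-tile model above can be illustrated with a small, library-free Java sketch (stand-in types only; the real GPF API uses BEAM's Product and Tile classes, which this example deliberately avoids):

```java
/** Minimal stand-in for tile-by-tile processing; not the actual GPF API. */
public class TileSketch {

    /** Applies a per-pixel operation (here simply scaling by 2) to one tile. */
    static void computeTile(float[] raster, int width, int x0, int y0, int tw, int th) {
        for (int y = y0; y < y0 + th; y++) {
            for (int x = x0; x < x0 + tw; x++) {
                raster[y * width + x] *= 2.0f;
            }
        }
    }

    /** Walks the raster tile by tile, as the GPF does when calling computeTile repeatedly. */
    static float[] process(float[] source, int width, int height, int tileSize) {
        float[] target = source.clone();
        for (int y0 = 0; y0 < height; y0 += tileSize) {
            for (int x0 = 0; x0 < width; x0 += tileSize) {
                int th = Math.min(tileSize, height - y0);
                int tw = Math.min(tileSize, width - x0);
                computeTile(target, width, x0, y0, tw, th);
            }
        }
        return target;
    }
}
```

Because each tile is computed independently, the framework is free to request tiles in any order and with any size that suits the downstream Operators in the graph.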
A Maven Archetype is a template toolkit for generating a new module package. By using
the Maven Archetype you can create a module structure easily and get started adding your
code to the module.
A GPF Archetype will generate a single-tile and a multi-tile Operator within the same
package.
Before beginning, make sure that you have built the NEST source code and run a Maven
install (mvn install) to ensure that all dependencies are in your local repository.
From the command line type the following from the NEST source code root folder:
mvn archetype:create
-DarchetypeGroupId=org.esa.nest.maven
-DarchetypeArtifactId=maven-nest-gpf-archetype
-DarchetypeVersion=1.0
-DgroupId=myGroupId
-DartifactId=myArtifactId
-Dversion=myVersion
-DpackageName=myPackageName
where
● groupId identifies your organisation (e.g. org.esa.nest)
● artifactId is the name of the new module
● version is the version of the new module
● packageName is the Java package generated for your code
Example (with illustrative values):
mvn archetype:create
-DarchetypeGroupId=org.esa.nest.maven
-DarchetypeArtifactId=maven-nest-gpf-archetype
-DarchetypeVersion=1.0
-DgroupId=org.esa.nest
-DartifactId=nest-sar-op
-Dversion=1.0
-DpackageName=org.esa.nest.gpf
Publishing an Operator
Operator implementations are published via the Java service provider interface (SPI). A JAR
publishes its operators in the resource file META-INF/services/org.esa.beam.framework.gpf.OperatorSpi. In this file, add your operator SPI, e.g. org.esa.nest.gpf.MultilookOp$Spi.
In your Operator package, add a class extending OperatorSpi. This class may
also serve as a factory for new operator instances.
public static class Spi extends OperatorSpi {
    public Spi() {
        super(MultilookOp.class);
    }
}
As with readers and writers, add an Action in the modules.xml file to create a menu item in the DAT:
<action>
<id>SlantRangeGroundRangeOp</id>
<class>org.esa.nest.dat.SRGROpAction</class>
<text>Slant Range to Ground Range</text>
<shortDescr>Converts a product to/from slant range to/from ground range</shortDescr>
<parent>geometry</parent>
<helpId>SRGROp</helpId>
</action>
The Action class should extend AbstractVisatAction and override the actionPerformed
handler:
@Override
public void actionPerformed(CommandEvent event) {
    if (dialog == null) {
        dialog = new DefaultSingleTargetProductDialog("SRGR", getAppContext(),
                "Slant Range to Ground Range", getHelpId());
        dialog.setTargetProductNameSuffix("_GR");
    }
    dialog.show();
}
Remote Sensing Tutorials
Virtual Library on Remote Sensing from VTT Technical Research Centre of Finland
http://virtual.vtt.fi/virtual/space/rsvlib/
SAR Interferometry
http://epsilon.nought.de/tutorials/insar_tmr/index.php
2009 International GEO Workshop on SAR to Support Agricultural Monitoring
http://www.cgeo.gc.ca/events-evenements/sar-ros/training-formation-man-eng.pdf
Radar Remote Sensing Overview
ESA 2010 Warsaw SAR Course part I
ESA 2010 Warsaw SAR Course part II
Quick Start With The DAT
Opening Products
Image products can be opened using the Open Raster Dataset menu item in the File menu
● Identification: Basic information on the product (Mission, Product type, Acquisition time, Track and Orbit)
● Metadata: This includes all the original metadata within the product, the Processing_graph history recording
the processing that was done and the Abstracted Metadata which is the important metadata fields used by
the Operators in a common format.
● Tie Point Grids: Raster grids created by interpolating the tie-point information within the product. The interpolation
is done on the fly according to the product.
● Bands: The actual bands inside the product and virtual bands created from expressions. Different icons are used
to distinguish these bands.
The information in the metadata and the bands created can vary according to the product.
Below is an example of an ENVISAT ASAR IMP product opened in the Products View.
Double click on the Abstracted Metadata to view all critical fields.
Double click on a band to open it in an Image View.
After you have opened an Image View you can modify the colors of the image using the colour manipulation
window or overlay an opaque or semi-transparent bitmask with the bitmask overlay window. Both windows operate
in non-modal mode, which means they float over DAT's main frame and you can place them anywhere on
your desktop.
To see what region of the world is covered by the data product, select the World Map View from the View/
ToolViews menu.
Creating A Project
With Projects, you can organise and store complex processing over multiple datasets. The main advantage of
using Projects is that processed images are automatically organised into separate directories.
Start by creating a New Project from the File menu. From the New Project dialog, browse for a folder and
enter the project name. A new Project folder with the Project name given will be created along with a Project XML
file which will store information about your Project.
Next, use the Product Library to browse and select data products to import or double click on them to open
them directly.
From the Project View you can double click on a product to open it in the Product View.
From the Product View you can examine a product's metadata or double click on a band to open it in an Image View.
Coregistration Tutorial
Coregistering Products
Image co-registration is fundamental for Interferometric SAR (InSAR) imaging and its
applications, such as DEM map generation and analysis. To obtain a high quality InSAR
image, the individual complex images need to be co-registered to sub-pixel accuracy.
The toolbox will accurately co-register one or more slave images with respect to a master
image. The co-registration function is fully automatic, in the sense that it does not require
the user to manually select ground control points (GCPs) for the master and slave images.
Images may be fully or only partly overlapping and may be from acquisitions taken at
different times using multiple sensors or from multiple passages of the same satellite.
The achievable co-registration accuracy for images in the same acquisition geometry and
over flat areas will be better than 0.2 pixels for two real images and better than 0.05 pixels
for two complex images.
The image co-registration is accomplished in three major processing steps (see Figure 1)
with three operators: Create Stack operator, GCP Selection operator and Warp operator.
Figure 1. Image co-registration
Input Images
The input images for the co-registration function can be complex or real. But all images
must belong to the same type (i.e. they must all be complex or all real) and have the same
projection system (all slant range or all ground range projected or all geocoded). If the
images are not in the same projection, the slave image(s) should be reprojected into the
same projection system as that of the master image.
Create Stack
The Create Stack operator collocates the master and slave images. Basically the slave
image data is resampled into the geographical raster of the master image. By doing so the
master and slave images share the same geopositioning information and have the same
dimension. For details of the Create Stack operator, readers are referred to Create Stack
operator. For coregistration of detected products it is acceptable to use the resampling methods. For
coregistration of complex products for interferometry, the option not to do any resampling
in Create Stack should be used.
GCP Selection
The GCP Selection operator then creates an alignment between master and slave images
by matching the user selected master GCPs to their corresponding slave GCPs. There are
two stages in the operation: coarse registration and fine registration. For real image co-registration,
only the coarse registration is applied. The registration is achieved by maximizing
the cross-correlation between master and slave images on a series of imagettes defined
across the images. For complex image co-registration, the additional fine registration is
applied. The registration is achieved by maximizing the coherence between master and
slave images at a series of imagettes defined across the images. For details of the GCP
Selection operator, readers are referred to GCP Selection operator.
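As an illustration of the coarse-registration criterion (a toy 1-D sketch, not the operator's 2-D imagette implementation), the slave offset is chosen where the cross-correlation with the master signal is maximal:

```java
/** Toy 1-D illustration of offset estimation by maximizing cross-correlation. */
public class CrossCorrSketch {

    /** Returns the shift of 'slave' (0..maxShift) that maximizes correlation with 'master'. */
    static int bestShift(double[] master, double[] slave, int maxShift) {
        int best = 0;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (int s = 0; s <= maxShift; s++) {
            double score = 0.0;
            // correlate master[i] with slave shifted by s samples
            for (int i = 0; i + s < slave.length && i < master.length; i++) {
                score += master[i] * slave[i + s];
            }
            if (score > bestScore) {
                bestScore = score;
                best = s;
            }
        }
        return best;
    }
}
```

The real operator does the same maximization over 2-D windows (cross-correlation for coarse registration, coherence for fine registration of complex data) at each GCP location.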
Warp
With the master-slave GCP pairs selected, a warp function is created by the Warp operator,
which maps pixels in the slave image into pixels in the master image. For details of
the Warp operator, readers are referred to Warp operator.
Terrain Correction
The Terrain Correction Operator will produce an orthorectified product in the WGS 84
geographic coordinates. The Range Doppler orthorectification method [1] is implemented for geocoding
SAR images from a single 2D raster radar geometry. It uses available orbit state vector information in
the metadata or external precise orbit, the radar timing annotations, the slant to ground range
conversion parameters together with the reference DEM data to derive the precise
geolocation information. Optionally radiometric normalisation can be applied to the orthorectified image
to produce σ0, γ0 or β0 output.
The Ellipsoid Correction RD and Ellipsoid Correction GG Operators will produce ellipsoid corrected products in
the WGS 84 geographic coordinates. The Terrain Correction Operator should be used whenever a DEM is
available; the Ellipsoid Correction Operators (RD and GG) should be used only when a DEM is not available.
Orthorectification Algorithm
The Range Doppler Terrain Correction Operator implements the Range Doppler orthorectification method [1]
for geocoding SAR images from single 2D raster radar geometry. It uses available orbit state vector information
in the metadata or external precise orbit (only for ERS and ASAR), the radar timing annotations, the slant to
ground range conversion parameters together with the reference DEM data to derive the precise
geolocation information.
Products Supported
● ASAR (IMS, IMP, IMM, APP, APM, WSM) and ERS products (SLC, IMP) are fully supported.
● RADARSAT-2 (all products)
● TerraSAR-X (SSC only)
● Cosmo-Skymed
DEMs Supported
Currently only DEMs in geographic coordinates (Plat, Plon, Ph) referenced to the WGS84 global geodetic
ellipsoid (with height in meters) are properly supported.
SRTM v.4 (3” tiles) from the Joint Research Center FTP (xftp.jrc.it) are downloaded automatically for the
area covered by the image to be orthorectified. The tiles will be downloaded to the folder C:\AuxData
\DEMs\SRTM_DEM\tiff or the folder specified in the Settings.
The Test Connectivity functionality under the Help tab in the main menu bar allows the user to verify if the
SRTM downloading is working properly.
Please note that for ACE and SRTM, the height information (being referred to geoid EGM96) is automatically
corrected to obtain height relative to the WGS84 ellipsoid. For the ASTER DEM this correction is already applied.
Note also that the SRTM DEM covers area between -60 and 60 degrees latitude. Therefore, for orthorectification
of products over high-latitude areas, a different DEM should be used.
Users can also use an external DEM file in GeoTIFF format which, as specified above, must be in
geographic coordinates (Plat, Plon, Ph) referenced to the WGS84 global geodetic ellipsoid (with height in meters).
Pixel Spacing
Besides the default suggested pixel spacing computed from parameters in the metadata, the user can specify the
output pixel spacing for the orthorectified image.
The pixel spacing can be entered in both meters and degrees. If the pixel spacing in one unit is entered, then
the pixel spacing in the other unit is computed automatically.
The calculations of the pixel spacing in meters and in degrees are given by the following equations:
pixelSpacingInDegree = pixelSpacingInMeter / EquatorialEarthRadius * 180 / PI;
pixelSpacingInMeter = pixelSpacingInDegree * PolarEarthRadius * PI / 180;
where EquatorialEarthRadius = 6378137.0 m and PolarEarthRadius = 6356752.314245 m as given in WGS84.
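The two conversions can be written directly in Java using the constants given above:

```java
/** Pixel-spacing conversions as given by the equations above (WGS84 radii). */
public class PixelSpacing {
    static final double EQUATORIAL_EARTH_RADIUS = 6378137.0;      // metres (WGS84)
    static final double POLAR_EARTH_RADIUS = 6356752.314245;      // metres (WGS84)

    /** pixelSpacingInDegree = pixelSpacingInMeter / EquatorialEarthRadius * 180 / PI */
    static double metersToDegrees(double pixelSpacingInMeter) {
        return pixelSpacingInMeter / EQUATORIAL_EARTH_RADIUS * 180.0 / Math.PI;
    }

    /** pixelSpacingInMeter = pixelSpacingInDegree * PolarEarthRadius * PI / 180 */
    static double degreesToMeters(double pixelSpacingInDegree) {
        return pixelSpacingInDegree * POLAR_EARTH_RADIUS * Math.PI / 180.0;
    }
}
```

Note that the two directions use different Earth radii, so converting meters to degrees and back does not return exactly the starting value.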
Radiometric Normalization
This option implements a radiometric normalization based on the approach proposed by Kellndorfer et al.,
TGRS, Sept. 1998, in which, in essence, the σ0 computed with respect to the ellipsoid is multiplied by the factor sinθDEM / sinθel.
In the current implementation θDEM is the local incidence angle projected into the range plane, defined as the
angle between the incoming radiation vector and the surface normal vector projected into the range plane [2]. The
range plane is the plane formed by the satellite position, the backscattering element position and the earth centre.
Note that among the σ0, γ0 and β0 bands output in the target product, only σ0 is a real band, while γ0 and β0 are
virtual bands expressed in terms of σ0 and incidence angle. Therefore, σ0 and incidence angle are automatically
saved and output if γ0 or β0 is selected.
For σ0 and γ0 calculation, by default the projected local incidence angle from DEM [2] (local incidence angle
projected into range plane) option is selected, but the option of incidence angle from ellipsoid correction
(incidence angle from tie points of the source product) is also available.
ENVISAT ASAR
The correction factors [3] applied to the original image depend on the product being complex or detected
and the selection of Auxiliary file (ASAR XCA file).
● Latest AUX File (& use projected local incidence angle computed from DEM):
The most recent ASAR XCA available from the installation folder \auxdata\envisat compatible with product date is
automatically selected. According to this XCA file, calibration constant, range spreading loss and antenna pattern
gain are obtained.
❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on the XCA file
3. apply range spreading loss correction based on the XCA file and DEM geometry
4. apply antenna pattern gain correction based on the XCA file and DEM geometry
● External AUX File (& use projected local incidence angle computed from DEM):
User can select a specific ASAR XCA file available from the installation folder \auxdata\envisat or from another
repository. According to this selected XCA file, calibration constant, range spreading loss and antenna pattern gain
are computed.
❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on the selected XCA file
3. apply range spreading loss correction based on the selected XCA file and DEM geometry
4. apply antenna pattern gain correction based on the selected XCA file and DEM geometry
● Latest AUX File (& use projected local incidence angle computed from DEM):
The most recent ASAR XCA available compatible with product date is automatically selected. Basically with this
option all the correction factors applied to the original SAR image based on product XCA file used during the
focusing, such as antenna pattern gain and range spreading loss, are removed first. Then new factors computed
according to the new ASAR XCA file together with calibration constant and local incidence angle correction factors
are applied during the radiometric normalisation process.
❍ Applied factors:
1. remove antenna pattern gain correction based on product XCA file
2. remove range spreading loss correction based on product XCA file
3. apply projected local incidence angle into the range plane correction
4. apply calibration constant correction based on new XCA file
5. apply range spreading loss correction based on new XCA file and DEM geometry
6. apply new antenna pattern gain correction based on new XCA file and DEM geometry
● Product AUX File (& use projected local incidence angle computed from DEM):
The product ASAR XCA file employed during the focusing is used. With this option the antenna pattern gain and
range spreading loss are kept from the original product and only the calibration constant and local incidence angle
correction factors are applied during the radiometric normalisation process.
❍ Applied factors:
1. apply projected local incidence angle into the range plane correction
2. apply calibration constant correction based on product XCA file
● External AUX File (& use projected local incidence angle computed from DEM):
The User can select a specific ASAR XCA file available from the installation folder \auxdata\envisat or from another
repository. Basically with this option all the correction factors applied to the original SAR image based on product
XCA file used during the focusing, such as antenna pattern gain and range spreading loss, are removed first. Then
new factors computed according to the new selected ASAR XCA file together with calibration constant and local
incidence angle correction factors are applied during the radiometric normalisation process.
❍ Applied factors:
1. remove antenna pattern gain correction based on product XCA file
2. remove range spreading loss correction based on product XCA file
3. apply projected local incidence angle into the range plane correction
4. apply calibration constant correction based on new selected XCA file
5. apply range spreading loss correction based on new selected XCA file and DEM geometry
6. apply new antenna pattern gain correction based on new selected XCA file and DEM geometry
Please note that if the product has been previously multilooked then the radiometric normalization does not
correct the antenna pattern and range spreading loss and only constant and incidence angle corrections are
applied. This is because the original antenna pattern and the range spreading loss correction cannot be
properly removed due to the pixel averaging by multilooking.
If the user needs to apply radiometric normalization, multilooking and terrain correction to a product, the user
graph “RemoveAntPat_Multilook_Orthorectify” can be used.
ERS 1&2
For ERS 1&2 the radiometric normalization cannot be applied directly to the original ERS product.
Because of the Analogue to Digital Converter (ADC) power loss correction, a preliminary step is required to
properly handle the data. It is necessary to apply the Remove Antenna Pattern Operator, which performs
the following operations:
For Single look complex (SLC, IMS) products
After having applied the Remove Antenna Pattern Operator to ERS data, the radiometric normalisation can
be performed during the Terrain Correction.
The applied factors in case of "USE projected angle from the DEM" selection are:
1. apply projected local incidence angle into the range plane correction
2. apply absolute calibration constant correction
3. apply range spreading loss correction based on product metadata and DEM geometry
4. apply new antenna pattern gain correction based on product metadata and DEM geometry
To apply radiometric normalization and terrain correction for ERS, user can also use one of the following user graphs:
● RemoveAntPat_Orthorectify
● RemoveAntPat_Multilook_Orthorectify
RADARSAT-2
● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed applying the
product LUTs and multiplying by (sinθDEM / sinθel), where θDEM is the local incidence angle projected into the range
plane and θel is the incidence angle computed from the tie point grid with respect to the ellipsoid.
● In case of "USE incidence angle from Ellipsoid" selection, the radiometric normalisation is performed applying the
product LUT.
These LUTs allow one to convert the digital numbers found in the output product to sigma-nought, beta-nought,
or gamma-nought values (depending on which LUT is used).
TerraSAR-X
● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed applying
1. projected local incidence angle into the range plane correction
2. absolute calibration constant correction
● In case of "USE incidence angle from Ellipsoid" selection, the radiometric normalisation is performed applying
1. incidence angle from the ellipsoid (tie point grid) correction
2. absolute calibration constant correction
Please note that the simplified approach where Noise Equivalent Beta Naught is neglected has been implemented.
Cosmo-SkyMed
● In case of "USE projected angle from the DEM" selection, the radiometric normalisation is performed deriving
σ0Ellipsoid [7] and then multiplying by (sinθDEM / sinθel), where θDEM is the projected local incidence angle into the
range plane and θel is the incidence angle computed from the tie point grid with respect to the ellipsoid.
● In case of selection of "USE incidence angle from Ellipsoid", the radiometric normalisation is performed deriving
σ0Ellipsoid [7].
Definitions:
1. The local incidence angle is defined as the angle between the normal vector of the backscattering element (i.e. vector
perpendicular to the ground surface) and the incoming radiation vector (i.e. vector formed by the satellite
position and the backscattering element position) [2].
2. The projected local incidence angle from DEM is defined as the angle between the incoming radiation vector (as
defined above) and the projected surface normal vector into range plane. Here range plane is the plane formed
by the satellite position, backscattering element position and the earth centre [2].
Steps to Produce Orthorectified Image
The following steps should be followed to produce an orthorectified image:
1. From the Geometry menu select Terrain Correction. This will call up the dialog for the Terrain Correction Operator
(Figure 1).
2. Select your source bands.
3. Select the Digital Elevation Model (DEM) to use. You can select 30 second GETASSE30 or ACE DEMs if they are
installed on your computer. Preferably, select the SRTM 3 second DEM, which has much better resolution and can
be downloaded automatically as needed if you have an internet connection. Alternatively, you could also browse for an
External DEM tile. Currently only DEMs in GeoTIFF format with geographic coordinates (Plat, Plon, Ph) referenced to
the WGS84 global geodetic ellipsoid (with height in meters) are accepted.
4. Select the interpolation methods to use for the DEM resampling and the target image resampling.
5. Optionally select the Pixel Spacing in meters for the orthorectified image. By default the pixel spacing
computed from the original SAR image is used. For details, the reader is referred to Pixel Spacing section above.
6. Optionally select the Pixel Spacing in degrees for the orthorectified image. By default it is computed from the pixel
spacing in meters. If either pixel spacing is changed, the other is updated accordingly. For details,
the reader is referred to Pixel Spacing section above.
7. Optionally select Map Projection. The orthorectified image will be presented with the user selected map projection. By
default the output image will be expressed in WGS84 geographic (lat/long) coordinates.
8. Optionally select to save the DEM as a band and the local incidence angle.
9. Optionally select to apply Radiometric Normalization to output σ0, γ0 or β0 of the orthorectified image.
10. Press Run to process the data.
Figure 1. Terrain Correction operator dialog box.
Below are some sample images showing the Terrain Correction result of an ASAR IMS
product ASA_IMS_1PNUPA20081003_092731_000000162072_00351_34473_2366.N1, acquired on October
3, 2008, imaging the area around Rome in Central Italy.
The ASAR IMS image has been multi-looked with 2 range looks and 10 azimuth looks before being orthorectified.
The DEM employed is the SRTM 3 second Version 4 and since the SRTM height information is referred to
geoid EGM96, not the WGS84 ellipsoid, a correction has been applied to obtain heights relative to the WGS84 ellipsoid
(this is done automatically).
The orthorectified image and its radiometrically normalised σ0 image are shown in Figure 3 and Figure 4 respectively.
After Terrain Correction your SAR data will be closer to the real world geometry and you will be able to overlay
layers from other sources correctly.
Reference:
[1] Small D., Schubert A., Guide to ASAR Geocoding, RSL-ASAR-GC-AD, Issue 1.0, March 2008
[2] Schreier G., SAR Geocoding: Data and Systems, Wichmann 1993
[3] Rosich B., Meadows P., Absolute calibration of ASAR Level 1 products, ESA/ESRIN, ENVI-CLVL-EOPG-TN-03-
0010, Issue 1, Rev. 5, October 2004
[4] Laur H., Bally P., Meadows P., Sánchez J., Schättler B., Lopinto E. & Esteban D., ERS SAR Calibration:
Derivation of σ0 in ESA ERS SAR PRI Products, ESA/ESRIN, ES-TN-RS-PM-HL09, Issue 2, Rev. 5f, November 2004
[5] RADARSAT-2 PRODUCT FORMAT DEFINITION - RN-RP-51-2713 Issue 1/7: March 14, 2008
[6] Radiometric Calibration of TerraSAR-X data - TSXX-ITD-TN-0049-radiometric_calculations_I1.00.doc, 2008
[7] For further details about Cosmo-SkyMed calibration please contact the Cosmo-SkyMed Help Desk at info.cosmo@e-geos.it
Processing from the Command Line
Running a Graph
First, create a graph using the Graph Builder in the DAT. The graph does not need any
particular source products for the readers as you will be specifying the input products from
the command line. Save the graph to a file.
Next, from the command line (DOS prompt in Windows or terminal in Linux) type gpt.bat -h
or gpt.sh -h to display the help message.
This tutorial will use the command gpt but you may need to specify gpt.bat in Windows or
gpt.sh in Linux.
If the help does not appear, it may indicate that your environment variables are not
set up. Please refer to the installation FAQ.
In order to run a single reader graph type:
gpt graphFile.xml inputProduct
This will process the graph graphFile.xml and use inputProduct as input. The output will be
created in a file in the current directory called target.dim
To specify the output target product and location type:
gpt graphFile.xml inputProduct -t c:\output\targetProduct.dim
Terrain Correction
To terrain correct using the graph in the Graphs\Standard Graphs folder
gpt "graphs\Standard Graphs\Orthorectify.xml" "c:\data\input.N1"
Coregistration
To coregister you will need a graph with a ProductSet Reader that will help you specify a
list of input products.
Using the Standard Graphs\Coregister.xml graph and an input folder containing all products
we wish to coregister together, type:
gpt "graphs\Standard Graphs\Coregister.xml" -inFolder c:\data
Assuming all products cover roughly the same geographic area, this will coregister all products
in folder c:\data into a coregistered stack.
Frequently Asked Questions
Installation F.A.Q.
When I start the DAT it says it could not create the Java virtual
machine
In dat.bat or dat.sh try reducing the Java maximum heap size -Xmx1024M value to
something smaller like -Xmx800M.
Add a variable for NEST_HOME to point to the installation folder and also include
%NEST_HOME% in the PATH variable.
On Linux, set these environment variables in .profile or .bash_profile in your home directory
export NEST_HOME=installation folder
export PATH=$PATH:$NEST_HOME
Then either restart your terminal or source the file you edited, e.g.
source ~/.bash_profile
Also, make sure the paths in NEST4B/config/settings.xml are appropriate for your file
system.
The .nest folder is used to store temporary data such as preferences, logs, etc. To change its
location, set
nest.application_tmp_folder = your_new_path
General F.A.Q.
The method is based on the following reference, in which a much more complicated pyramid
approach was proposed. In NEST, however, single-level detection is implemented.
A. S. Solberg, C. Brekke and R. Solberg, "Algorithms for oil spill detection in Radarsat and
ENVISAT SAR images", Geoscience and Remote Sensing Symposium, 2004. IGARSS '04.
Proceedings. 2004 IEEE International, 20-24 Sept. 2004, page 4909-4912, vol.7.
If you select 'Show Graticule Overlay' from the View Menu, graticule lines will be drawn over the band image
view, similar to the following figure:
The steps for both latitude and longitude (in °) can be defined in the Preferences dialog.
Export Color Legend
Note that the transparency mode is only enabled for the image types TIFF and PNG. A
preview dialog for the legend image can be opened by clicking the Preview... button. Within
the dialog, it is also possible to copy the colour legend image to the system clipboard by
using the context menu over the image area:
Select CRS Dialog
Filter: In the text box you can type a word to filter the list of available Coordinate Reference
Systems; the list beneath will be updated immediately.
Well-Known Text (WKT): This text area displays the definition of the CRS as Well-Known Text.
No-Data Value
No-Data Value
Bands or tie-point grids of a data product determine by two means whether or not a pixel at
a certain position contains data:
● a no-data value is set, and/or
● a valid-pixel expression is set.
These properties can be adjusted using the property editor. If valid pixel detection is enabled,
invalid pixels are excluded from range, histogram and other statistical computations. In order
to visualise the invalid pixel positions, the no-data overlay is used.
The no-data property of a band is also set by the band arithmetic and map projection tools.
Reprojection Output Parameters
Output Parameters
This dialog lets you specify the position of the reference pixel and the easting and northing
at this pixel of the output product. Also you are able to set the orientation angle and the
pixel size. The orientation angle is the angle between geographic north and map grid north
(in degrees); in other words, the convergence angle of the projection's vertical axis from
true north. Easting and northing and also the pixel size are given in the units of the
underlying map (e.g. dec. degree for geographic and meter for the UTM projection).
In order to force a certain width and height in pixels for the output product, you must
deselect the fit product size option. Otherwise the size is automatically adjusted, so that
the entire source region is included in the new region.
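The automatic size adjustment can be sketched as follows (a simplified illustration, not the toolbox's exact code): the output dimension is the number of pixels needed to cover the source extent at the chosen pixel size.

```java
/** Simplified sketch: derive an output raster dimension so the whole source extent fits. */
public class FitSize {

    /** Number of pixels needed to cover 'extent' map units at 'pixelSize' units per pixel. */
    static int fitDimension(double extent, double pixelSize) {
        // round up so the entire source region is included
        return (int) Math.ceil(extent / pixelSize);
    }
}
```

Deselecting the fit product size option bypasses this calculation and uses the width and height entered by the user instead.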
Import ASTER DEM
Unwrapping Operator
This operator performs unwrapping based on the concept introduced by Costantini, [1]. The
unwrapping problem is formulated and solved as a minimum cost flow problem on a
network.
Processing Parameters
Reference:
[1] Costantini, M. (1998) A novel phase unwrapping method based on network
programming. IEEE Tran. on Geoscience and Remote Sensing, 36, 813-821.
Stitching for Unwrapping
Stitch Operator
When working with tiles, the Unwrapping operator stores the unwrapped phase for each tile
individually. Therefore the unwrapped phases of adjacent tiles can differ by a multiple of
2π. This operator stitches the unwrapped phases of all tiles to form a complete, smooth
image of unwrapped phase.
Processing Parameters
Usage Notes
Property Editor
The property editor can be used to edit the properties of a product, a tie-point grid or a
(virtual) band.
Depending on the kind of object for which the property editor is invoked, different kinds of
properties can be edited. The name and the description can be changed in all cases.
Additionally, the following properties can be edited:
The property editor can be invoked from the context menu, where it is the top item. The
context menu is activated with a click of the right mouse button over the object to be
edited.