MASTERING PIXINSIGHT
AND THE ART OF ASTROIMAGE PROCESSING

RBA's PixInsight
Processes Reference Guide

Rogelio Bernal Andreo

(Pre-Release April 2020)

PixInsight Processes Reference Guide
(Annex to “PixInsight and the art of Astroimage Processing”)
Author: Rogelio Bernal Andreo, aka RBA

Published by:
Rogelio Bernal Andreo
Sunnyvale, CA
94086, USA
© 2020 Rogelio Bernal Andreo, All Rights Reserved.
No part of this publication may be used or reproduced or transmitted in any form or by any
means, or stored in a database or retrieval system, without the prior written permission of the
publisher.

Adobe®, Adobe® Photoshop® and Adobe® Lightroom® are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States and/or other countries.

Microsoft®, Microsoft® Excel® and Microsoft® Windows are registered trademarks of Microsoft
Corporation in the United States and/or other countries.

All other trademarks are the property of their respective owners.


For more information about the book “Mastering PixInsight”:


http://www.deepskycolors.com/mastering-pixinsight.html
For deep-sky or nightscape workshops and astro-camps:
http://www.deepskycolors.com/workshops.html
For speaking arrangements and assignments:
Rogelio Bernal Andreo, rba@deepskycolors.com

For the latest news, images, updates:


http://www.facebook.com/DeepSkyColors/
@deepskycolors on Instagram

Table of Contents
About this Reference Guide
How is this Reference Guide structured

ACDNR
ATrousWaveletTransform
AdaptiveStretch
Annotation
ArcsinhStretch
AssignICCProfile
AssistedColorCalibration
AutoHistogram
AutomaticBackgroundExtractor
B3Estimator
BackgroundNeutralization
Binarize
Blink
ChannelCombination
ChannelExtraction
ChannelMatch
CloneStamp
ColorCalibration
ColorManagementSetup
ColorSaturation
CometAlignment
ConvertToGrayscale
ConvertToRGBColor
Convolution
CosmeticCorrection
CreateAlphaChannels
Crop
CurvesTransformation
Debayer
Deconvolution
DefectMap
DigitalDevelopment
Divide
DrizzleIntegration
DynamicAlignment
DynamicBackgroundExtraction
DynamicCrop
DynamicPSF
ExponentialTransformation
ExtractAlphaChannels
FITSHeader
FastRotation
FluxCalibration
FourierTransform
GradientHDRComposition
GradientHDRCompression
GradientMergeMosaic
GREYCstoration
HDRComposition
HDRMultiscaleTransform
HistogramTransformation
ICCProfileTransformation
ImageCalibration
ImageIdentifier
ImageIntegration
IntegerResample
InverseFourierTransform
Invert
LRGBCombination
LarsonSekanina
LinearFit
LocalHistogramEqualization
LocalNormalization
MaskedStretch
MergeCFA
MorphologicalTransformation
MultiscaleLinearTransform
MultiscaleMedianTransform
NewImage
NoiseGenerator
PhotometricColorCalibration
PixelMath
RGBWorkingSpace
RangeSelection
ReadoutOptions
Resample
Rescale
RestorationFilter
Rotation
SampleFormatConversion
ScreenTransferFunction
SCNR
SimplexNoise
SplitCFA
StarAlignment
StarGenerator
StarMask
Statistics
SubframeSelector
Superbias
TGVDenoise
UnsharpMask

RBA's PixInsight Processes Reference Guide
About this Reference Guide
This guide is aimed at being useful to novice, intermediate and advanced image processors. This
means the guide has to reach up to an advanced level, which may make it a bit hard to read for
those fairly new to astroimage processing. Novice users will benefit more from the main book,
where workflows, techniques and many practical examples illustrated with real data are explained
at a pace that's easy to follow. However, the guide can still be very useful to beginners when
they're using some particular processing tool and would like to learn more about it, even if just
the behavior of one particular parameter.

What this Reference Guide will not do:

1. It does not teach how to process your images.

2. It does not show practical examples or tutorials.

3. It does not explain how PixInsight's user interface works.

For all of the above and more, refer to the main book.

What the reference will (hopefully) do for you:

1. Provide a comprehensive description of each of the nearly 100 processes in PixInsight:
what they are, how they work, when to use them and what they can do for us.

2. Describe every parameter, some extensively, as well as the effects of adjusting many of
them, particularly those that offer tangible benefits for the process at hand.

3. Offer suggestions about the use of a particular tool within a processing workflow, when to
use it, etc.

How is this Reference Guide structured


The reference guide is structured in a very simple way: alphabetical order. If you know the name
of the process you're interested in, no need to guess under which category it was included.

Each process comes with an introduction that sometimes is just a few lines, while in other cases it
extends to more than one page. The introduction is followed by a “When to use...” section that
outlines when and why we should use each process, again, sometimes extensively. A list of all
parameters and their explanation follows.

Even though the guide describes how to use each process, recommended values, and so on, all
dialog window screenshots show the process with its default values. For illustrated practical
examples, please refer to the main book.

The digital version of the guide is a dynamic/interactive PDF. The first time a process is
mentioned within the documentation of another process, clicking on it will take you to its
documentation.

Most processes are described in a self-contained manner, but without going too far. That is, the
guide is designed so we can navigate to any process directly and have right there all the
information we need. When describing all processes in PixInsight this can lead to continuous
repetition, since many parameters are very similar or even identical across many processes. Not
only that, there are many concepts particular to PixInsight that need to be understood when
working with certain processes, yet an explanation in every process that relates to those concepts
would probably be overkill. As a result, I aimed at a practical balance between repetition and
referring to other parts of the book, while still keeping each process documentation as
self-contained as possible. All parameters are always described, even common or basic ones such
as “Add Images”, “Toggle” or “Upper/Lower Limits”, while concepts or topics that require
extended explanations may offer pointers about where to continue reading on the topic within the
book.

ACDNR

Process > NoiseReduction

ACDNR stands for Adaptive Contrast-Driven Noise Reduction. It is a very flexible
implementation of a noise reduction algorithm based on multiscale and mathematical morphology
techniques.

The idea behind ACDNR is, as with any good noise reduction tool, to perform efficient noise
reduction while preserving details. ACDNR includes two mechanisms that work cooperatively: a
special low-pass filter and an edge protection device. The low-pass filter smooths the image by
removing or attenuating small-scale structures, and the edge protection mechanism prevents
details and image structures from being damaged during low-pass filtering.

ACDNR offers two identical sets of parameters: one for the lightness and another for the
chrominance of color images. Chrominance parameters are applied to the CIE a* and b*
components in the CIE L*a*b* color space, while lightness parameters are applied to the L
component for color images, and to the nominal channel of grayscale images. In general,
chrominance parameters are much less critical, and we can define a stronger noise reduction for
the chrominance than for the lightness.

When to use ACDNR


ACDNR is a decent choice when we're trying to tone down noise in an image, or to soften an
image or a mask. Because we can apply noise reduction to the details (lightness) and the color
(chrominance) separately, it also comes in handy when we're trying to limit noise reduction to
just one of the two.

While ACDNR can be applied to linear and nonlinear images whenever noise reduction is desired,
it is not usually the best choice for applying noise reduction to linear images. That said, ACDNR
was one of the original processes that started PixInsight back around 2003, and today it is
competing against other tools in PixInsight that offer more advanced noise reduction methods,
such as TGVDenoise, MultiscaleLinearTransform or MultiscaleMedianTransform.

Parameters

ACDNR Filter
Apply: Since the ACDNR interface offers dual functionality by allowing us to define noise
reduction for lightness and chrominance separately, we enable (or disable) this check box to apply
(or not) the noise reduction to the lightness – should we have the Lightness tab active – or the
chrominance – if the active tab is Chrominance.

Lightness mask: Enable/disable using the lightness mask for the lightness or chrominance noise
reduction, depending on which tab we're on. Read below for more information about the lightness
mask.

Std.Dev.: Standard deviation of the low-pass filter (in pixels). The low-pass filter is a
mathematical function that is discretized on a small square matrix known as a kernel in image
processing jargon. This parameter controls the size in pixels of the kernel used. The kernel size
directly defines the sizes of the image structures that the low-pass filter will tend to remove. For
example, standard deviations between 1 and 1.5 pixels are appropriate to remove the high-
frequency noise that dominates most CCD images. Standard deviations between 2 and 3 pixels are
quite usual when dealing with film images. Larger deviations, up to 4 or 6 pixels, can be used to
smooth low-SNR regions of astronomical images (such as the sky background) with the help of
protection masks.
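
For illustration, here's a minimal sketch in Python (NumPy) of how a standard deviation value
translates into a discrete kernel, assuming a Gaussian profile for the low-pass filter (ACDNR's
actual kernel is internal to the tool and may differ):

    import numpy as np

    def gaussian_kernel(std_dev, size=None):
        # Odd-sized square kernel; its extent grows with the standard deviation.
        if size is None:
            size = 2 * int(np.ceil(3 * std_dev)) + 1
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        k = np.exp(-(xx**2 + yy**2) / (2.0 * std_dev**2))
        return k / k.sum()  # normalized so overall brightness is preserved

    print(gaussian_kernel(1.0).shape)  # (7, 7); larger std_dev -> larger kernel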

Amount: This value, in the range from 0.1 to 1, defines how the denoised and the original images
are combined. An amount of zero would leave the image unchanged, while an amount of one
replaces the image with its denoised version completely. This parameter is especially useful when
the ACDNR filter is used repeatedly (see the Iterations parameter below). At each iteration,
amount can be used to re-inject a small fraction of the image resulting from the preceding
iteration. This leads to a recursive procedure that can help in fine-tuning and stabilizing the
overall process.
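
A sketch of this recursive blend (illustrative Python; low_pass stands for the ACDNR filter and is
a hypothetical placeholder, not an actual API):

    def acdnr_iterate(image, amount, iterations, low_pass):
        result = image
        for _ in range(iterations):
            filtered = low_pass(result)
            # Keep a fraction (1 - amount) of the previous iteration's image.
            result = amount * filtered + (1 - amount) * result
        return result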

Iterations: This is the number of times that the low-pass filter is applied. The ACDNR filter is
much more efficient when applied iteratively. A relatively small filter (with a low standard
deviation) applied several times is in general preferable to a larger, more aggressive filter applied
once. When three or more iterations are used, ACDNR's edge protection is usually much more
efficient and yields more robust results. The Amount parameter (see above) can also be used along
with iterations to turn ACDNR filtering into a recursive procedure, mixing the original and
processed images.

Prefilter: If necessary, ACDNR can apply an initial filtering process to remove small-scale
structures from the image. This can help achieve a more robust edge protection for the reasons
explained above. Two prefiltering methods have been implemented: Multiscale and Multiscale
Recursive. Both methods employ special wavelet-based routines to remove all bright and dark
image structures smaller than two pixels. The recursive method is extremely efficient. This feature
should only be used in the presence of huge amounts of noise, when all significant image
structures have sizes well above the two-pixel limit.

Robustness: When ACDNR's edge protection has to operate in the presence of strong small-scale
noise, it may have a hard time defining accurate edges of significant structures. For example,
isolated noisy pixels can be very bright or dark, and their contributions to the definition of
protected edges can be relevant. Robustness refers here to the ability of ACDNR to become
immune to small-scale noise when discriminating significant image structures. Three robustness-
enforcing methods have been implemented: Weighted average, Unweighted average and
Morphological median. In all three methods, a neighborhood is defined for each pixel and a
reference value is calculated from the neighboring pixels, which is then used to drive the edge
protection mechanism. Each method has its strong points. The method based on the morphological
median is especially good at preserving sharp edges. On the other hand, the weighted average
method can yield more natural-looking images. We can try each of them and see which is best for
us, according to our preferences.

Structure size: Minimum structure size to be considered by the noise reduction algorithm.

Symmetry: When enabled, use the same threshold and overdrive parameters for both dark and
bright side edge protection.

Edge Protection
We define an edge as a brightness variation that the edge protection mechanism tries to preserve
(protect) from the actual noise reduction. If we consider an edge as the locus of a brightness
change, then for each edge there is a dark side and a bright side. ACDNR's edge protection gives
separate control over the dark and bright sides of edges. For each side, there are two identical
parameters, threshold and overdrive.

Threshold: This parameter defines the relative brightness difference that triggers the edge
protection mechanism. For example, a threshold value of 0.05 means that the edge protection
device will try to protect image structures defined by brightness changes equal to or greater than
5% with respect to their surrounding areas. Higher thresholds are less protective. Too high of a
threshold value can allow excessive low-pass filtering, and thus lead to destruction of significant
image features. Lower thresholds are more protective, but too low of a threshold can lead to poor
noise reduction results. In general, protection thresholds are critical and require some trial and
error work.

Overdrive: This parameter controls the strength of edge protection. When overdrive is zero (its
default value), edge protection just tries to preserve the existing pixel values of protected edges.
When overdrive is greater than zero, the edge protection mechanism tends to be more aggressive,
exaggerating the contrast of protected edges. This parameter can be useful because it may allow a
larger threshold value, which in turn gives better noise reduction, while still protecting significant
edges. However, overdrive is an advanced parameter that requires experience and must always be
used with care: incorrect overdrive dosage can easily generate undesirable artifacts.

Star protection: When enabled, a protection mechanism is activated, in combination with the
general edge protection mechanism, to prevent the low-pass filter (the noise reduction) from
damaging stars. The Star threshold parameter then becomes available.

Star threshold: As a complement to the edge protection for bright sides, Star threshold allows us
to define a threshold for star edge protection. As with the more general threshold parameter above,
higher thresholds are less protective and stars may be softer or even disappear.

Lightness Mask
To improve ACDNR's flexibility, we can use an inverse lightness mask that modulates the noise
reduction work. Where the mask is black, original (unprocessed) pixels are fully preserved; where
the mask is white, noise reduction acts completely. Intermediate (gray) mask levels define a
proportional mixture of unprocessed and processed pixel values. This mask can be useful to
protect high SNR (signal to noise ratio) regions, while applying a strong noise reduction to low
SNR regions. A typical example of this is smoothing the background of a deep-sky image while
leaving bright regions intact.
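
Conceptually, the mask acts as a per-pixel mixing weight; a one-line sketch (NumPy arrays with
values in [0,1] assumed; not ACDNR's actual code):

    # Black mask pixels (0) keep the original; white ones (1) take the denoised result.
    result = (1 - mask) * original + mask * denoised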

Removed wavelet layers: To create an effective mask, wavelets are used to soften the mask. Here
we define the number of wavelet layers (starting from one) to be removed in order to build the
mask.

Midtones/Shadows/Highlights: The ACDNR mask is generated and controlled with these three
parameters. They define a simple histogram transform that is applied to a copy of the lightness that
is used to mask the noise reduction process. Take into account that an inverse mask is always
generated, which means that we must reverse our logic when varying these histogram parameters.
Increasing the midtones tends to remove protection and lowering them would cause the opposite
effect, while increasing the shadows will remove protection very fast. Lowering the highlights will
add protection but very low values usually protect way too much.

Preview: To help achieve a correct mask with minimal effort, the ACDNR interface includes a
special mask preview mode. When this mode is enabled, the ACDNR process simply generates the
mask, copies it to the target image, and terminates execution. This mask preview mode is
particularly useful when used along with the Real-Time Preview window.

ATrousWaveletTransform

Process > Compatibility

ATrousWaveletTransform (often abbreviated as ATWT) is a rich and flexible processing tool that
can be used to perform a wide variety of noise reduction and detail enhancement tasks. The à
trous (with holes) algorithm is a powerful tool for multiscale image analysis.

With ATWT we can perform a hierarchical decomposition of an image into a series of scale layers,
also known as wavelet planes. Each layer contains only structures within a given range of
characteristic dimensional scales in the space of a scaling function. The decomposition is done
throughout a number of detail layers defined at growing characteristic scales, plus a final residual
layer, which contains the rest of unresolved structures. This concept is explained in more detail
when we discuss the MultiscaleLinearTransform process, so please review it if you're new to these
concepts.

This multiscale approach offers many advantages. By isolating significant image structures within
specific detail layers, detail enhancement can be carried out with high accuracy at any given
scale. Similarly, since noise usually occurs at specific dimensional scales in the image, by
isolating it into the appropriate detail layers we can reduce or remove the noise at the scales where
it lives, without affecting significant structures or details at other scales.
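
To make the decomposition concrete, here is a minimal sketch of the à trous algorithm in Python
(NumPy/SciPy) — a simplified illustration rather than PixInsight's implementation — using the
separable 5×5 B3 spline scaling function described below:

    import numpy as np
    from scipy.ndimage import convolve1d

    B3 = np.array([1, 4, 6, 4, 1]) / 16.0  # 1-D B3 spline scaling function

    def atrous_layers(image, n_layers):
        # Decompose an image into detail layers plus a residual (dyadic scales).
        layers, current = [], image.astype(float)
        for j in range(n_layers):
            # "With holes": insert 2**j - 1 zeros between filter taps at level j.
            kernel = np.zeros(4 * 2**j + 1)
            kernel[::2**j] = B3
            smooth = convolve1d(convolve1d(current, kernel, axis=0), kernel, axis=1)
            layers.append(current - smooth)  # structures at scales around 2**j pixels
            current = smooth
        layers.append(current)  # the residual layer R
        return layers

The sum of all detail layers plus the residual recovers the original image exactly, which is why
individual layers can be attenuated or enhanced and then recombined.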

When to use ATrousWaveletTransform

ATrousWaveletTransform was for many years the go-to tool in PixInsight for processing images
at different scales. When the MultiscaleLinearTransform tool was developed, ATWT became
obsolete, and it is now included in PixInsight only for compatibility with old scripts and the like.
MultiscaleLinearTransform took over because it not only offers an improved version of everything
ATWT does, but much more.

Therefore, while a description of ATWT and its uses is included here for reference, we should not
need to use ATWT; use MultiscaleLinearTransform (or MultiscaleMedianTransform) instead.

ATWT was traditionally used in many different situations, whether applied to an actual astroimage
or a mask. It was often used to separate small structures from the image (such as stars), or to
create smooth images that mostly emphasize the larger structures in the image.

Parameters
ATWT comprises two main sets of parameters, the first to define the layered decomposition
process and the second for the scaling function used for wavelet transforms.

Wavelet Layers
Dyadic: Detail layers are generated for a growing scaling sequence of powers of two. The layers
are generated for scales of 1, 2, 4, 8... pixels. For example, the fourth layer contains structures with
characteristic scales between 5 and 8 pixels. This sequencing style should be selected if noise
thresholding is being used.

Linear: When Linear is selected, the Scaling Sequence parameter is the constant difference in
pixels between characteristic scales of two successive detail layers. Linear sequencing can be
defined from one to sixteen pixels. For example, when Linear 1 is selected, detail layers are
generated for the scaling sequence 1, 2, 3, ... Similarly, Linear 5 would generate the sequence 1, 6,
11, ...

Layers: This is the total number of generated detail layers. This number does not include the final
residual layer (R), which is always generated. In PixInsight we can work with up to sixteen
wavelet layers, which allows us to handle structures at really huge dimensional scales. Modifying
large scale structures can be very useful when processing many deep-sky images.

Scaling Function: Selecting the most appropriate scaling function is important because, by
appropriately tuning the shape and levels of the scaling function, we gain full control over how
precisely the different dimensional scales are separated.

In general, a smooth, slowly varying scaling function works well to isolate large scales, but it may
not provide resolution enough as to decompose images at smaller characteristic scales. Oppositely,
a sharp, peak-wise scaling function may be very good isolating small scale image features such as
high-frequency noise, faint stars or tiny planetary and lunar details, but quite likely it will be
useless to work at larger scales, as the global shape of a galaxy or large Milky Way structures, for
example.

In PixInsight, à trous wavelet scaling functions are defined as odd-sized square kernels. Filter
elements are real numbers. Most usual scaling functions are defined as 3×3 or 5×5 kernels. A
kernel in this context is a square grid where discrete filter values are specified as single numeric
elements. Here's a more detailed description of the different scaling functions offered in ATWT:

• 3×3 Linear Interpolation: This linear function is a good compromise for isolation of both
relatively large and relatively small scales, and it is also the default scaling function on
start-up. It does a better job on the first 4 layers or so.

• 5×5 B3 Spline: This function works very well to isolate large-scale image structures. For
example, if we want to enhance structures like galaxy arms or large nebular features, we'd
use this function. However, if we want to work at smaller scales, e.g. for noise reduction
purposes, or for detail enhancement of planetary, lunar or stellar images, this function is a
bad choice.

• 3x3 Gaussian: This is a peaked function that works better at isolating small-scale
structures, so it can be used to control a smoothing effect, among other things.

• 5x5 Gaussian: Same as the 3x3 Gaussian but using a 5x5 kernel.

• 3×3 Small-Scale: A peak-wise, sharp function that works quite well for reduction of high-
frequency noise and enhancement of image structures at very small characteristic scales.
Good for lunar and planetary work, for strict noise reduction tasks, and to sharpen stellar
objects a bit. For deep-sky images, use this function with caution. The main difference
between the five 3x3 Small-Scale functions ATrousWaveletTransform provides is in the
value of the central element of the 3x3 kernel: 4, 8, 16, 32 or 48.

• 5x5 Peaked: The more pronounced the peak of a scale function is, the more surgical it will
be on small scale structures and the less suitable it'll be to isolate large scale structures. As
its name indicates, the 5x5 peaked function uses a rather pointy 5x5 kernel.

• 7x7 Peaked: The two 7x7 kernels (1 and 0.5) provide even more “pointiness” than the
previous kernels.

[Figure: 3D plot comparison between the 5x5 Gaussian, 5x5 Peaked and both 7x7 Peaked kernels,
as defined in PixInsight.]
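
For reference, the two most commonly used scaling functions are small separable kernels; these
are the standard values from the wavelet literature (PixInsight's exact tables may differ slightly):

    import numpy as np

    # 3x3 linear interpolation: outer product of [1, 2, 1] / 4
    linear_3x3 = np.outer([1, 2, 1], [1, 2, 1]) / 16.0

    # 5x5 B3 spline: outer product of [1, 4, 6, 4, 1] / 16
    b3_5x5 = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0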

The window below the pull-down option to define the scaling function will show the generated
layers. Individual layers can be enabled or disabled. To enable/disable a layer, double-click
anywhere on the layer's row. When a layer is enabled, this is indicated by a green check mark.
Disabled layers are denoted by red 'x' marks. The last layer, R, is the residual layer, that is, the
layer containing all structures of scales larger than the largest of the generated layers.

In addition to the layer and scale, an abbreviation of the parameters specific to each layer – if
defined – is also displayed.

Detail Layer
Bias: This is a real number ranging from –1 to +15. The bias parameter value defines a linear,
multiplicative factor for a specific layer. Negative biases decrease the relative weight of the layer
in the final processed image. Positive bias values give more relevance to the structures contained
in the layer. Very high values are not recommended for most purposes.
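
In terms of the decomposition sketched earlier, bias simply scales each detail layer before the
image is rebuilt; a plausible reading of the arithmetic (my assumption, for illustration only):

    # layers[:-1] are detail layers, layers[-1] is the residual;
    # bias[j] = 0 leaves layer j unchanged, -1 removes it, positive values boost it.
    result = layers[-1] + sum((1 + bias[j]) * layers[j] for j in range(len(bias)))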

Noise Reduction
For each detail layer, specific sets of noise reduction and detail enhancement parameters can be
defined and applied simultaneously.

Filter: Only available in ATrousWaveletTransformV1, here we define the type of noise reduction
filter. Out of the three options available, the default Recursive Multiscale Noise Reduction often
yields the best balance between noise reduction and detail preservation. A Morphological Median
Filter would often generate slightly less noisy images at the expense of details.

Threshold: The higher the threshold value, the more pixels will be treated as noise for that
particular scale.

Amount: When this parameter is nonzero and Noise Reduction has been enabled, a special
smoothing process is applied to the layer's contents after biasing. The Amount parameter controls
how much of this smoothing is used.

Iterations: This parameter governs how many smoothing iterations are applied. Extensive
trial-and-error work is always advisable, but recursive filtering with two, three or four iterations
and a relatively low amount value is generally preferable to trying to achieve the whole noise
reduction goal with a single, strong iteration.

Kernel size: Only available in ATrousWaveletTransformV1. When Directional Multiway Median
Filter or Morphological Median Filter is selected, here we define the kernel size in pixels. A value
of 4 would define a 4x4 kernel.

K-Sigma Noise Thresholding

When activated, K-Sigma Noise Thresholding is applied to the first four detail layers. This
technique will work just as intended if we select the dyadic layering sequence. The higher the
threshold value, the more pixels will be treated as noise at the characteristic scale of the smaller
wavelet layers.

Threshold: Defines the noise threshold. This is the “k” in the k-sigma method. Anything below
this value will have the noise reduction defined by the rest of the parameters applied to it.

Amount: Strength of the attenuation applied to the thresholded coefficients.

Soft thresholding: Not available in ATrousWaveletTransformV1. When enabled, it will apply a
soft thresholding of wavelet coefficients instead of the default, harder thresholding. That's the
recommended option for most cases.
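
The difference between the two, sketched with the standard textbook definitions (not necessarily
PixInsight's exact code), for a wavelet coefficient c and threshold t:

    import numpy as np

    def hard_threshold(c, t):
        # Coefficients below the threshold are zeroed; the rest pass unchanged.
        return np.where(np.abs(c) < t, 0.0, c)

    def soft_threshold(c, t):
        # Coefficients are shrunk toward zero, avoiding the abrupt cutoff.
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)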

Use multiresolution support: Not available in ATrousWaveletTransformV1. Enable this option to
use a multiresolution support when computing the noise standard deviation of the target image. If
disabled, ATWT will take less time to complete, at the expense of accuracy.

Deringing
When we use ATWT for detail enhancement, what we are applying is essentially a high-pass
filtering process. High-pass filters suffer from the Gibbs effect, which generates the unpopular
ringing artifacts. For more detailed information about ringing artifacts and deringing, please
review the documentation in MultiscaleLinearTransform about the topic.

ATrousWaveletTransform includes a procedure to fix the ringing problem. It can be used for
enhancement of any kind of images, including deep-sky and planetary.

Dark: Deringing regularization strength for dark ringing artifacts. Increase to apply a stronger
correction to dark ringing artifacts. The best strategy is to find the lowest value that effectively
corrects the ringing, without overdoing it.

Bright: Deringing regularization strength for bright ringing artifacts. It works exactly as Dark but
for bright ringing artifacts.

Since each image is different, the right amount varies from image to image. It is recommended to
start with a low value – such as 0.1 – and increase it as needed, stopping before over-correction
becomes obvious.

Output deringing maps: Generate an image window for each deringing map image. New image
windows will be created for the dark and bright deringing maps, if the corresponding amount
parameters are nonzero.

Large-Scale Transfer Function


ATWT lets us define a specific transfer function for the residual layer.

• Hyperbolic: A hyperbolic curve is similar to a multiplication by a positive factor slightly
less than one, which usually will improve color saturation by darkening the luminance. The
break point for the hyperbolic curve can be defined with the slider to the right.

• Natural logarithm: The natural logarithm function will generally produce a stronger
darkening of the luminance.

• Base-10 logarithm: The base-10 logarithm function will result in a much stronger
darkening of the luminance than the natural logarithm or hyperbolic functions.

Dynamic Range Extension

Several operations executed during a wavelet transformation – such as applying a bias – may
result in some areas reaching the upper or lower limits of the available dynamic range. The
dynamic range extension works by increasing the range of values that are kept and rescaled to the
[0,1] standard range in the processed result. We can control both the low and high range extension
values independently.

Low range: If we increase the low range extension parameter, the final image will be brighter, but
it will have fewer black-saturated pixels.

High range: If we increase the high range extension parameter, the final image will be globally
darker, but fewer white-saturated pixels will occur.

Any of these parameters can be set to zero (the default setting) to disable extension at the
corresponding end of the dynamic range.
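
A plausible reading of the rescaling arithmetic, assuming the extended range [-low, 1 + high] is
mapped linearly back to [0,1] (my assumption, not a documented formula):

    # x holds the processed pixel values; low and high are the extension amounts.
    rescaled = (x + low) / (1.0 + low + high)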

Target: Whether ATWT should be applied to only the lightness, the luminance, the chrominance,
or all RGB components.

AdaptiveStretch

Process > IntensityTransformations

AdaptiveStretch is a nonlinear contrast and brightness adjustment tool in PixInsight that mostly
depends on adjusting a single noise threshold parameter.

Despite this being a simple definition, AdaptiveStretch does offer some significant advantages
when compared to other brightness and contrast tools. For example, AdaptiveStretch not only
tries to maximize contrast, but it does so without clipping any data (no pixel values will become
either zero or one). Of course, pixels that were clipped before applying AdaptiveStretch will
continue being clipped after the process is done.

When to use AdaptiveStretch

AdaptiveStretch is a simple tool that can achieve decent results quickly; however, it is not as
versatile as many other image intensity processes in PixInsight. Therefore, the recommendation is
to use AdaptiveStretch when we're looking for quick and decent results without clipping data.

Parameters
Noise threshold: Brightness differences below the noise threshold are assumed to be caused by
noise, these being the areas that are attenuated by the process, while brightness differences above
the noise threshold will tend to be enhanced. The lower the value, the more aggressive the stretch
would be. Adjust this parameter along with Contrast Protection for best results.

Contrast protection: This parameter is used to constrain the stretch effect on areas that are either
very bright or very dark. The higher the value, the more protection. The checkbox to the right
allows us to completely disable this parameter, which can be useful to see the results with and
without contrast protection.

Maximum curve points: Here we indicate the maximum number of points in the transformation
curve. The computed values will depend on the bit depth of the image. For example, for 8-bit and
16-bit integer images, AdaptiveStretch would only process up to 256 and 65536 points
respectively. Normally we wouldn't need to modify this parameter, and only aim for more points
in very specific cases with images displaying a very large dynamic range.

Real-time curve graph: Enabling this option opens a window that, when the Real-Time Preview
mode is enabled, shows the curve being applied as we adjust the parameters. This window
includes two buttons: one depicting a photo camera, which generates an 8-bit image of the actual
graph, and another displaying a graph which, when clicked, opens the CurvesTransformation
process with the current transformation defined in it. This can be very useful not only to evaluate
and understand the transformation being applied, but also as a learning tool.

Region of interest
This common set of parameters allows us to restrict the analysis to a specific area in the image.
Oftentimes we would define a region of interest (ROI) instead of a preview, so that the entire
processing workflow can be recreated automatically or saved in a process icon.

Annotation

Process > Painting

Annotation is a tool to add simple text sentences (one line at a time) to an image. It also allows us
to include a leader line, that is, a line that goes from the text itself to a point that we define
dynamically. Annotation is a dynamic process, which, among other things, means that the text is
not actually rendered into image pixels until the process is executed.

First, we define the text and parameters; then we click anywhere in the image where we want to
add the annotation, after which we can reposition the text and the leader line using the mouse.

When to use Annotation


Annotation is used when we wish to label certain areas in an image, usually after the image is
considered final and has already been saved without annotations. Some people prefer using other
image editing applications that offer more versatility when it comes to adding text and labels,
although for basic annotation and labeling, Annotation should suffice. Being a dynamic process,
we can modify our annotation any way we like until we execute the process, at which point the
annotation will be rendered into image pixels permanently – unless we undo it later, of course.

If our annotations collide or overlap, be aware that there is a known problem at the time of this
writing (PixInsight 1.8.8-3) that may cause this behavior.

Parameters
Text: Enter here the string that will be displayed.

Show leader: Display a “leader” line. Move the mouse over the area the line is pointing at, and
drag it around to change its position.

Font Properties: Here, we can define the font properties: font, size, style, color and opacity.

ArcsinhStretch

Process > IntensityTransformations

ArcsinhStretch is a tool that stretches image intensity, without affecting color information. It does
so by means of applying an inverse hyperbolic sine function.
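
The core transform is simple; a minimal sketch in Python (NumPy), assuming a plain channel-mean
luminance and a uniform per-pixel color scaling (the actual tool exposes the parameters described
below and may differ in detail):

    import numpy as np

    def arcsinh_stretch(rgb, stretch, black_point):
        # rgb: float array of shape (height, width, 3), values in [0, 1]; stretch > 0.
        lum = rgb.mean(axis=2)  # simple luminance estimate
        lum_s = np.arcsinh(stretch * (lum - black_point)) / np.arcsinh(stretch)
        # Scaling R, G and B by the same factor preserves the color ratios.
        scale = np.divide(lum_s, lum, out=np.zeros_like(lum), where=lum > 0)
        return np.clip(rgb * scale[..., None], 0.0, 1.0)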

When to use ArcsinhStretch

ArcsinhStretch is best used when we're ready to bring an RGB image from linear to nonlinear,
meaning we have already performed, at the very least, color calibration and gradient correction
(background extraction). Although it can be used with monochrome images, it is often used on
color images when other stretch functions seem to destroy color information in stars.

Parameters
Stretch factor: Increase or decrease the
intensity of the stretch. Because the
stretch factor has a logarithmic response,
it works well across the entire range of
possible values. Best results are obtained
by combining this stretch with the Black
point value, described below.

Black point: Here, we set the black point. The higher the stretch factor, the more sensitive black
point adjustments become – this being the reason the parameter offers two different sliders: the
top slider for coarse adjustments, and the bottom slider for very fine adjustments.

Protect highlights: In cases when the stretch we're performing results in saturated pixels,
checking this parameter would rescale the entire image so that such pixels don't become saturated.

Use RGB working space: If disabled, ArcsinhStretch would assume all R, G, and B values have
an equal weight of 1/3 each when calculating the luminance. If enabled, the process will use the
weights defined in the current RGB Working Space (see the process RGBWorkingSpace).

Estimate Black Point: Clicking on this button will automatically set the black point to a value
that is often a good starting point. More specifically, it sets the black point at a level where 2% of
pixels would be clipped to zero. Further adjustments are often needed, especially if the Stretch
factor is readjusted. This option is only available when the Real-Time Preview is activated.

Highlight values clipped to zero: Display in white (in the Real-Time Preview) all pixels that are
clipped to a value of zero by the process.

AssignICCProfile

Process > ColorManagement

AssignICCProfile is used to assign an ICC profile to an image. An ICC profile is a set of data that
characterizes a color input or output device, or a color space, according to ICC (International
Color Consortium) standards.

When to use AssignICCProfile


Assigning an ICC profile to an image allows the image to be displayed correctly across different
devices, so it's always a good idea to assign the ICC profile to our images, particularly if we're
going to share the images with others (say, on the web) or create prints.

Because different ICC profiles would change the appearance of our image on our screen, it is
recommended to get this done as one of our first steps during processing. There are instances,
though, where we may want to make an ICC profile change later on.

Parameters
Current Profile: It shows the description of the ICC profile assigned to the image selected on the
view selection combo box, or the default RGB ICC profile (as selected in the
ColorManagementSetup dialog) if no ICC profile is currently assigned to the image. We cannot
change the profile description.

New Profile lets us assign an ICC profile to the target image.

Assign the default profile: It assigns the default ICC profile to the target image, as selected in the
Color Management Setup dialog.

Leave the image untagged: Don't add an ICC profile to the image. In this case, the default profile
will be used to manage its color.

Assign profile: Here is where we can assign a different ICC profile to the image. We use the pull-
down menu to select the profile, type it manually in the text box, or click on the looping green
arrows icon (bottom-right) to refresh the profile list.

AssistedColorCalibration

Process > ColorCalibration

AssistedColorCalibration allows us to find, manually, the proper color balance (or calibration) of
a linear image before applying background neutralization or gradient subtraction. Because these
other processes will alter the color information in the image, by finding the right white balance
before any of those processes are applied, we can better determine the actual RGB coefficients for
our image. The caveat is that we need to dial the Red, Green and Blue sliders manually, in a
trial-and-error fashion. It is not an automatically calculated color calibration tool.

The “assistance” comes in the form of being able to preview the results after applying a histogram
stretch and color saturation adjustments – but just for the previews – which can be more flexible
than simply applying an STF (ScreenTransferFunction).

To use AssistedColorCalibration we first need to
define two previews: one that will be good for a
background reference and another one that we'll
use to evaluate the results – before applying the
process to the actual image. The background
reference preview should contain mostly
background free of nebulosity or galaxies, while
the “results” preview should target an area rich
in color features so we can better evaluate the
results.

When to use AssistedColorCalibration

AssistedColorCalibration is just one of the many tools we can use to balance (calibrate) the color
in our images with PixInsight. While it is recommended to use other processes – ones that are
more dependent on the actual image rather than on us dialing the RGB sliders to our liking –
AssistedColorCalibration comes in handy when none of the more automated processes seem to
work, or when we simply would like to make the RGB adjustments manually while our image is
still linear.

AssistedColorCalibration may also be used to help us determine the color coefficients of our
camera, this also being a reason why the adjustments are recommended to be done prior to
background extraction or other processes that alter the original signal. However, a process that
depends on personal adjustments, such as AssistedColorCalibration, lacks the rigor that other
processes offer.

Parameters
White Balance: This is where we modify the actual weights of each channel, manually. Prior to
making adjustments, we need to fine-tune the remaining parameters. Do note that these are the
only parameters that will be applied to the image; the remaining parameters in this dialog box
(all explained below) only affect the previews – they're not applied to the final image.

Background Reference: In this parameter we select the preview that will be used as background
reference.

Histogram Transformation: Use the sliders at the bottom of the graph (or enter values manually
in the corresponding text boxes) to stretch the results in our sample preview – that is, the preview
we use to evaluate results, not the one used for background reference.

Saturation: Just like the Histogram Transformation parameter above, this parameter is used to
give a boost to our “results” preview, except in this case we're boosting color saturation, mostly to
help us preview the final colors in the image, so we can determine whether we're close to the
results we want, or we need to continue tweaking the RGB sliders.

AutoHistogram

Process > IntensityTransformations

This process applies an automatic histogram transform to each RGB channel to achieve prescribed
median values, along with a quick histogram clipping feature.

When to use AutoHistogram


AutoHistogram is one of the many tools available
in PixInsight for stretching our data, and using it
(versus using other processes) is often a matter of
choice. While the tool itself does some thinking
for us – unlike, say, HistogramTransformation or
CurvesTransformation – it's somewhat limited to
applying a specific median value to the entire
image, plus some clipping for enhanced contrast.
Its ease of use is often the reason it becomes
the tool of choice for bringing an image from
the linear to the nonlinear stage.

Parameters
Histogram Clipping: We enable this option if we want to do a histogram clipping. While better
results can be easily achieved with other tools, careful clipping adjustments can improve the
results obtained with AutoHistogram. We should adjust the clipping value(s) only after having
made the adjustments on the Target Median Values (defined below) and previewed the results
without clipping.

• Joint RGB/K channels: Select this option to apply the clipping equally in all
RGB/grayscale channels. If selected, only the R/K values under Shadow/Highlights
Clipping need to be entered.

• Individual RGB/K channels: Select this option to apply the clipping differently for each
RGB channel. If selected, the values for each channel can be modified individually. Since
AutoHistogram isn't much of a color balancing tool, it is usually better not to use this
option.

• Shadows Clipping: The “black point” clipping values. In most cases, this is the only
clipping value that may need adjustments.

• Highlights Clipping: The clipping values for the highlights. Rarely used in astronomical
images.

Target Median Values: Enable this option to perform a transform to each RGB channel to
achieve prescribed median values. Disable to skip it.

• Stretch Method: Here we select one of three typical stretch algorithms:
Gamma (a typical exponential transform), Logarithmic, and Rational Interpolation (MTF).
The last method, Rational Interpolation, is a midtones transfer function that usually gives
the most contrast, so it's often the preferred method. For a softer stretch, we would use
either of the other two options (see the sketch after this list).

• Joint/Individual RGB/K Channels: Selecting one option or the other depends on whether
we want to perform the gamma transform to all RGB/grayscale channels equally or
individually.

• Set As Active Image: When clicked, the parameters in the AutoHistogram window will be
populated with the corresponding data from the active image.

• Capture readouts: When enabled, image readouts will have AutoHistogram recalculate
the target median values. We do an image readout by clicking (and optionally dragging) on
the image.
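To make the stretch methods more concrete, here is a minimal sketch of how a gamma or MTF stretch can be solved to reach a prescribed median. This is a conceptual parallel written in Python/NumPy, not AutoHistogram's actual implementation, and the 0.25 target below is just an illustrative value:

    import numpy as np

    def mtf(x, m):
        # Midtones transfer function with midtones balance m (identity at m = 0.5).
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    def stretch_to_median(img, target, method="mtf"):
        # Stretch a [0, 1] image so that its median lands on `target`.
        c = np.median(img)
        if method == "gamma":
            # Solve c**g == target for the exponent g.
            g = np.log(target) / np.log(c)
            return img ** g
        # Solve mtf(c, m) == target for the midtones balance m.
        m = c * (target - 1.0) / (2.0 * target * c - target - c)
        return mtf(img, m)

    linear = np.random.rand(256, 256) ** 4        # dark, linear-looking test data
    stretched = stretch_to_median(linear, 0.25)   # illustrative target median

Solving the MTF for the midtones balance is what lets the Rational Interpolation method land exactly on the prescribed median in a single pass.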

AutomaticBackgroundExtractor

Process > BackgroundModelization

AutomaticBackgroundExtractor – also known as ABE in PixInsight jargon – is one of PixInsight's
background modelization tools of choice.

As its name says, ABE does its work in a completely automatic fashion: we provide a source
image and a number of parameters controlling ABE's behavior, and we get another image with the
generated background model. That model can also be applied to the original image by subtraction
or division, depending on the type of uneven illumination problem to correct for. Additive
phenomena like light pollution gradients should be corrected by subtraction. Multiplicative effects
like vignetting should be fixed by a division, though in that case proper flat-field calibration is the
correct procedure. Except for the Correction parameter – which needs to be specified if we want
ABE to correct our image – for general purposes, the default values are often a good starting point.

ABE works by sampling the background in typically small samples across the image at fixed sizes
and intervals. With this information, ABE can then create a background model with acceptable
accuracy.

When to use ABE
Whenever our (linear) image displays a noticeable gradient, vignetting or any other uneven
illumination adding unwanted signal to our images, ABE offers a quick way to obtain results
without having to deal with “sample generation” (a key element when using ABE's manual
counterpart, DynamicBackgroundExtractor or DBE). Applying ABE to our image may quickly
correct these defects. If ABE fails, we can then try DBE instead. It is not recommended to use ABE
in difficult situations or in cases where we want to perform a very careful background modeling.

Parameters

Sample Generation
Box Size: Length in pixels of all background sample boxes. Large sample boxes may fail to
capture local background variations, while small sample boxes may detect small-scale variations
that should be ignored, such as noise and stars.

Box separation: Distance in pixels between two adjacent background samples. A large distance
(fewer sample boxes) can help make a smoother background model. The more samples we use
(smaller box separation), the longer it will take to build the model.

Global Rejection
Deviation: Tolerance of global sample rejection, in sigma units. This value indicates how far
background samples can be from the median background of the target image and still be
considered when building the background model. We can decrease this value to exclude more
background samples that differ too much from the mean background. This can be useful to avoid
mistaking large-scale structures – such as large nebulae – for background in the generated model.

Unbalance: Shadows relaxation factor. Higher values will result in more dark pixels in the
generated background model, while lower values can be used to reject bright pixels.

Use Brightness Limits: Enable this option to set low and high limits that determine what counts
as a background pixel. When enabled, Shadows indicates the minimum value of background
pixels, while Highlights determines the maximum value allowed for background pixels.

Local Rejection
Tolerance: Tolerance of local sample rejection, in sigma units. We can decrease this value to
reject more outlier pixels with respect to the median of each background sample. This is useful to
protect background samples from noise and small-scale bright structures, such as small stars.

Minimum valid fraction: This parameter sets the minimum fraction of accepted pixels in a valid
sample. The smaller the value, the more restrictive ABE will be when accepting background samples.

Draw sample boxes: When selected, ABE will draw the background sample boxes on a new 16-
bit image. This can be very useful when adjusting ABE parameters.

In a sample boxes image, each sample box is drawn with a pixel value proportional to the inverse
of the corresponding sample background value.

Just try samples: When selected, ABE will stop after extracting the set of the background
samples, without generating the background model. Normally we would select this option along
with Draw sample boxes. That way, ABE will create a sample boxes image that we can use to
evaluate the suitability of the current ABE parameters.

Interpolation and Output


Function degree: Degree of the interpolation polynomials. ABE uses a linear least squares fitting
procedure to compute a set of polynomials that interpolate the background model. In general, the
default value (4th degree) is appropriate in most cases. For very complex cases, increasing this
value may be necessary to reproduce local background variations more accurately.

Downsampling factor: Downsampling ratio of the background model image. This is a value
between one (no downsampling) and eight. Background models are very smooth images, meaning
that they can usually be generated with downsampling ratios between 2 and 8 without problems,
depending on the variations of the sampled background.

Model sample format: This parameter defines the bit depth of the background model.

Evaluate background function: When enabled, ABE generates a 16-bit comparison image that we
can use to evaluate the suitability of the background model. The comparison image is a copy of the
target image from which the background model has been subtracted. The Comparison factor
parameter is a multiplying factor applied to emphasize possible inconsistencies in the comparison
image.

Target Image Correction


Correction: Here, we decide whether we would like to apply the background model to produce a
corrected version of the target image. Again, additive effects, such as gradients caused by light
pollution should be subtracted, while multiplicative effects, such as differential atmospheric
absorption or vignetting should be divided.

Normalize: When enabled, the median value of the target image will be applied after background
model correction. If the background model was subtracted, the median is added, and if it was
divided, the resulting image will be multiplied by the median. Enabling this option tends to
recover the original color balance of the background.

If this option is not selected (the default value), the above median normalization is not applied,
which results in the corrected image usually acquiring a more neutral background.

Discard background model: If enabled, ABE does not create an image with the background
model after correcting the target image. If disabled, the generated background model will be
provided as a new image with the suffix ABE_background.

Replace target image: We enable this parameter if we want the correction to be performed
directly on the target image. When disabled, a new corrected image is created, leaving the original
target image unmodified.

Identifier: If we wish to give the corrected image a unique identifier, we enter it here. Otherwise
ABE will create a new identifier, adding _ABE to the identifier of the target image.

Sample format: Define the format (bit depth) of the corrected image.
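As a back-of-the-envelope illustration of the Correction and Normalize parameters above, the following Python/NumPy sketch applies a background model by subtraction or division. The arrays, the epsilon guard and the normalization details are assumptions made for the example, not ABE's actual code:

    import numpy as np

    def correct_background(target, model, mode="subtract", normalize=False):
        # mode="subtract" for additive defects (light-pollution gradients);
        # mode="divide" for multiplicative ones (e.g. residual vignetting).
        if mode == "subtract":
            out = target - model
            if normalize:
                out += np.median(target)   # re-apply the original median level
        else:
            out = target / np.maximum(model, 1e-10)  # guard against division by zero
            if normalize:
                out *= np.median(target)
        return np.clip(out, 0.0, 1.0)

    sky = np.random.rand(512, 512) * 0.1 + 0.05                # fake linear frame
    gradient = np.tile(np.linspace(0.0, 0.04, 512), (512, 1))  # fake background model
    fixed = correct_background(sky, gradient, mode="subtract", normalize=True)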

B3Estimator

Process > Flux

B3Estimator is a process that creates a synthetic image as an estimate of flux at the specified
output wavelength or frequency, using two images as source.

Alternatively, it can be used to generate a thermal map as an estimate of temperature, using the
laws of black body radiation.

In other words, given two images at different wavelengths (spectral inputs), B3Estimator can
calculate the temperature of black bodies in these two images, and synthesize a third image at
another wavelength.

When to use B3Estimator
B3Estimator can be used for different
purposes, both scientific and aesthetic.
Two that seem to be fairly popular are
creating a synthetic channel (for a missing
filter, say, H-Alpha or Green) and
enhancing features in our image that may
not be too obvious at first, from non-black
body objects or black body emissions.
Remember, however, that B3Estimator
relies on specific wavelengths and works
better when used on black body targets. It
is not a cosmetic tool.

In order to produce accurate results, B3Estimator needs flux-calibrated images as the two source
images. This can be done by using the FluxCalibration tool. Alternatively, we can skip using
FluxCalibration as long as the source images are well equalized.

Also, since all filters have a bandwidth but B3Estimator needs a single wavelength, a common
practice is to divide the source images by the bandwidth of the filter used for each image, and then
multiply the resulting image by the bandwidth we're targeting in our synthetic image. For
example, say we used two source images, one captured with an R (red) filter with a middle-point
wavelength of 680 nm and a bandwidth of 70 nm, while the other one used a G (green) filter of
540 nm and a bandwidth of 60 nm, and we aimed at producing a synthetic B (blue) image. We'd
divide the R image by 70 and the G image by 60, set our target to a central wavelength of, say,
450 nm, and then multiply the resulting image by 75 if we assume our blue filter would have a
bandwidth of 75 nm.
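Written out as pixel arithmetic, the bandwidth normalization in the example above amounts to the following sketch (same illustrative filter values; the random arrays are mere stand-ins for real flux-calibrated views, since B3Estimator itself is driven from its dialog):

    import numpy as np

    r_img = np.random.rand(64, 64)   # stand-in for the flux-calibrated R view
    g_img = np.random.rand(64, 64)   # stand-in for the flux-calibrated G view

    r_per_nm = r_img / 70.0          # divide by the R filter bandwidth (70 nm)
    g_per_nm = g_img / 60.0          # divide by the G filter bandwidth (60 nm)

    # Feed r_per_nm and g_per_nm to B3Estimator with 680 nm and 540 nm as the
    # input wavelengths and 450 nm as the output wavelength; then rescale:
    synthetic_b_per_nm = np.random.rand(64, 64)  # placeholder for B3Estimator's output
    synthetic_b = synthetic_b_per_nm * 75.0      # assumed blue bandwidth of 75 nm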

Parameters
Input image 1 & 2: As indicated above, B3Estimator needs two (grayscale) source images –
views actually, that is, images already opened in PixInsight.

Input wavelength 1 & 2 (nm): Here, we enter the wavelength in nanometers corresponding to
each of the input images.

Output wavelength: Only needed when generating a synthetic image (see below), here we
indicate, also in nanometers, the desired wavelength of the synthetic image. This assumes that
every pixel in the image behaves as a black body.

Intensity units: Here, we determine what we want the pixels in the source images to represent.
The default Photons/Wavelength value covers the most common situation, where the pixels
represent the number of photons in wavelength units. We can also treat the pixels as a measure of
energy. Whether photons or energy, we can have these measured in wavelength or frequency units.

Output image/s: Indicate whether to generate a synthetic image, a thermal map, or both.

Background References (1 and 2)


Each input image can be associated with an image that acts as a background reference for
the algorithm. We can define such images here, or leave these sections blank, in which case the
input images will act as their own background references.

We can also limit each background reference image to a specific region of interest (ROI).

Reference image: Here, we select the image we will use as a background reference. The image
must be opened (a view) in PixInsight's workspace.

Lower limit: We can select a range of valid pixel values for the purpose of evaluating the
background. Pixels with a value lower than or equal to this will not be considered when
calculating the mean background.

Upper limit: This is the upper limit of the valid range of pixels. Pixel values equal to or above
this amount will not be considered when calculating the mean background.

BackgroundNeutralization

Process > ColorCalibration

The BackgroundNeutralization tool makes the global color adjustments required to neutralize the
background color of an image. This requires a good background reference.

When to use BackgroundNeutralization
BackgroundNeutralization is a very popular tool in PixInsight that works best on linear images as,
theoretically, any color balancing is better performed while our image is still linear. It is
recommended to use it after removing gradients, if any, from the image (using ABE or
DBE). For the most part, the default parameters work well, but depending on our image, a small
adjustment to the Upper limit parameter may be desirable.

Parameters
Reference image: BackgroundNeutralization
will use this image to calculate an initial mean
background level for each channel. If left blank,
the target image will be used as background
reference.

We should specify a view that represents the true background of the image as much as possible,
avoiding nebulosity, galaxies and other signal that might interfere with the readout from the pixels
within the specified limits. A typical example involves defining a small preview over an area of the
target image that is mostly sky, and selecting it here as the background reference image.

Lower limit: Pixels with values less than or equal to this value will be ignored when calculating
the mean background values. Since the minimum allowed value for this parameter is zero, black
pixels are never considered background data.

Upper limit: Pixels above this value will be ignored when calculating the mean background
levels.

Working mode: Use this option to select a background neutralization mode.

• Target background: BackgroundNeutralization will force the target image to have the
specified mean background value for all RGB channels. Any out-of-range values after
neutralization will be truncated, which can produce some minimal data clipping.

• Rescale: The target image is always rescaled after neutralization – rescaled here means
that all pixel values are recalculated so they all stay within the available dynamic range of
the image, which also means that no data clipping can occur. In this mode, besides avoiding
data clipping, the neutralized image maximizes usage of the dynamic range; however, we
can't control the resulting mean background values.

• Rescale as needed: This mode is similar to Rescale, except that the image is only rescaled
if there are out-of-range values after neutralization. This is the default value.

• Truncate: All out-of-range pixels after the neutralization process are truncated, usually
clipping a large number of pixels. This mode is useful to perform a background subtraction
on a working image used for an intermediate analysis or processing step, but it rarely
produces usable results in itself.

Target background: Only available when the working mode is Target background, this is where
we define the final mean background level that will be imposed on the target image.
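For intuition, here is a hedged Python/NumPy sketch of the Target background working mode: measure each channel's mean background between the lower and upper limits, then shift it onto the prescribed level. The limits, target value and additive shift are assumptions chosen for the example, not the tool's exact implementation:

    import numpy as np

    def neutralize_background(img, ref=None, lower=0.0, upper=0.1, target=0.02):
        # img / ref: float arrays of shape (H, W, 3) in [0, 1].
        # If no reference view is given, the target image is its own reference.
        if ref is None:
            ref = img
        out = img.copy()
        for c in range(3):
            chan = ref[..., c]
            bg = chan[(chan > lower) & (chan < upper)]   # pixels within the limits
            mean = bg.mean() if bg.size else chan.mean()
            # Shift the channel so its mean background lands on `target`.
            out[..., c] = out[..., c] - mean + target
        return np.clip(out, 0.0, 1.0)   # truncate any out-of-range values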

Binarize

Process > IntensityTransformations

The Binarize process transforms all pixels in the image to either pure black (zero) or pure white
(one). Binarize's threshold parameter also allows the precise isolation of stars in a mask, by
fine-tuning which structures are retained.

When to use Binarize


Binarize is mostly used to generate masks, whether star masks or more complex masks also based
on strong signal in our images. While there are other, more flexible tools for these tasks, like
RangeSelection, Binarize has the ability to define different thresholds for each RGB channel.

Parameters
Joint RGB/K channels: Use a single threshold to be applied to all RGB/grayscale channels.

Individual RGB/K channels: Use a different threshold for each RGB/grayscale channel.
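Since the underlying operation is a simple threshold test, a minimal Python/NumPy sketch covers both modes (the threshold values below are illustrative):

    import numpy as np

    def binarize(img, thresholds):
        # thresholds: a single value (joint RGB/K) or a triple (individual channels).
        t = np.asarray(thresholds, dtype=img.dtype)
        return (img >= t).astype(img.dtype)   # 1.0 where at/above threshold, else 0.0

    rgb = np.random.rand(128, 128, 3)
    joint_mask = binarize(rgb, 0.5)               # same threshold for all channels
    tuned_mask = binarize(rgb, (0.45, 0.5, 0.6))  # per-channel thresholds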

Blink

Process > ImageInspection

Blink can create an animation of several images, played sequentially, which we can also save as a
video file.
Once loaded, we can enable/disable each image in
the sequence by clicking on its checkbox. We define
the master blink image by double-clicking on it. The
animation is then played in a new window, named
BlinkScreen.

When to use Blink


Blink can be used in any situation where we have several images of the same pixel height and width
(also usually the same FOV, whether aligned or not), and we'd like to either spot differences
between the images, evaluate our subframes (SubframeSelector is a much more advanced tool for
that task), or create an animation/timelapse based on the input images.

The only parameters in Blink are the input files; all other actions are performed via icon buttons.

Calculate a per-channel histogram stretch for each of the input images. The stretch is not
applied to the images, only calculated and applied to the BlinkScreen image, which is,
therefore, nonlinear. Color images will appear more neutral.

Calculate an auto-STF (see process ScreenTransferFunction) based on the selected image,
and apply this auto-STF to all images as they're displayed in the BlinkScreen window.
Unlike with the previous option, what we see in the BlinkScreen is linear but “screen-
stretched”. Color images will preserve their original color casts.

Controls to play the animation or step forward/backward one frame at a time.

Click here to select the image files to be used.

Close selected images. An image is selected when it appears highlighted. The check mark
to the left lets us know which images are to be included in the animation sequence.

Close all images and start over.

Save all selected files (again, selected meaning highlighted) to a specific location.

Move all selected files to a specified location.

Crop all selected files – we can define the crop area by defining a preview first – and save
them to a specified location.

Display statistical data about the input images. A Statistics dialog box appears that allows
us to specify the image for which we'd like to see the stats and metadata. It also offers
controls to define the range (either the normalized [0,1] range or a classic 16-bit range
[0,65535]), the precision or number of decimals in the stats, whether we want the results
to be sorted by channel, whether we want to limit the stats to a particular area (crop), and
whether we want to write the results to a text file. If the Write text file option is left
disabled, the results are written to the Process Console.

Create a video file of the animation. Prior to using this function, we must have installed
ffmpeg on our computer, a command-line tool that can encode and convert audio and
video files. Due to the large number of parameters ffmpeg can use, we will not discuss it
here (refer to the extensive documentation at https://www.ffmpeg.org/).
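For orientation only, a typical standalone invocation of ffmpeg over a numbered frame sequence looks like the line below; the file names and frame rate are illustrative assumptions, and Blink assembles the actual command for us:

    ffmpeg -framerate 10 -i blink_%05d.png -c:v libx264 -pix_fmt yuv420p blink.mp4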

ChannelCombination

Process > ChannelManagement

ChannelCombination is used to combine single-channel images into a new image containing all
the channels. This is useful, for example, to combine previously calibrated and aligned RGB
channels of an image into one single color image. For combinations that include Lightness and
RGB channels, see the LRGBCombination tool.

Note that ChannelCombination can be applied to an existing image (New Instance icon) or in the
global context (Apply Global icon). When applied to an existing color image, the channels defined
in the dialog box will be applied to that image. When applied in the global context,
ChannelCombination will create a brand new color image.

When to use ChannelCombination
When exactly in the workflow we should combine three separate channels into a single color
image depends on a number of things. Regardless, it is best done when the images are still linear
and all three have been previously aligned, whether we're combining broadband or narrowband
data.

Sometimes during our processing we may want to split the image into different channels (and not
necessarily just using the RGB color space) to later recombine them. The recombination is done
with ChannelCombination, while the split is done with ChannelExtraction, explained next.

Parameters
Color Space: Select the color space to be used for the combination. Depending on the color space
used, the channel/source image descriptors will change accordingly.

Channels / Source Images: Once we have selected the color space, we enter in each of the boxes
the image corresponding to each of the channels we wish to combine. For example, if we selected
the RGB color space, we need to enter an image for the red channel, one for the green channel and
another one for the blue channel. We can enable or disable each channel as needed.

If a text box is left with the default <Auto>, PixInsight will try to use an image with the same
name as the target image plus the suffix _X where X corresponds to the abbreviation for that
particular channel (_R for red, etc).
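Conceptually, the combination is just a stack of three aligned single-channel arrays along a new color axis, as in this hedged Python/NumPy sketch (the random arrays stand in for real aligned, linear channel views):

    import numpy as np

    def combine_channels(r, g, b):
        # Stack three aligned (H, W) grayscale images into one (H, W, 3) RGB image.
        return np.stack([r, g, b], axis=-1)

    r = np.random.rand(64, 64)
    g = np.random.rand(64, 64)
    b = np.random.rand(64, 64)
    rgb = combine_channels(r, g, b)   # shape (64, 64, 3)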

ChannelExtraction

Process > ChannelManagement

The purpose of the ChannelExtraction tool is to create individual single channel images from a
source color image, where each of the newly created images contains information about only one
of the channels from the source image. For obvious reasons, ChannelExtraction cannot work on
grayscale images.

When to use ChannelExtraction
ChannelExtraction can be used anytime
we want to process image channels
separately, which can be desirable for a
number of reasons at any time during our
processing workflow. In addition to the
popular RGB channel split, CIE L*a*b* and CIE XYZ are often used in astroimage processing to
extract lightness or luminance separate from color data.

Parameters
Color Space: Select the color space that contains the channels we want to extract.

Channels / Target Images: Once we have selected the color space, we use the check-boxes here
to indicate which channels we want to extract. We can also enter the image identifiers if we wish,
or leave them as <Auto>, in which case PixInsight will assume the identifier is the same as the
source image's name plus the suffix _X, where X corresponds to the abbreviation for that particular
channel (_R for red, etc).

Sample Format: We can select Same as source to produce individual images with the same
format (bit depth) as the source image, or specify a different format.
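As a rough illustration, the sketch below splits an RGB array into channels and also derives an approximate luminance as a weighted sum. The Rec. 709 weights are a stand-in: PixInsight derives CIE components from the active RGB working space, so its exact coefficients differ:

    import numpy as np

    def extract_channels(rgb):
        # Split an (H, W, 3) image into three single-channel (H, W) images.
        return rgb[..., 0], rgb[..., 1], rgb[..., 2]

    def extract_luminance(rgb):
        # Approximate CIE Y (luminance) from linear RGB using Rec. 709 weights.
        weights = np.array([0.2126, 0.7152, 0.0722])
        return rgb @ weights   # (H, W, 3) @ (3,) -> (H, W)

    image = np.random.rand(64, 64, 3)
    r, g, b = extract_channels(image)
    lum = extract_luminance(image)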

ChannelMatch

Process > Geometry

ChannelMatch is intended to manually align individual RGB channels in a color image by means
of entering the offset values between channels.

When to use ChannelMatch


Because ChannelMatch is used to align the RGB channels of an image, very much like we align
different frames before stacking them, this tool is rarely used for deep-sky imaging, as the
alignment between frames is usually taken care of with much more advanced alignment tools like
StarAlignment. In some rare cases, manual alignment between channels may be needed, as in
severe cases of chromatic dispersion that generates misaligned star halos.

ChannelMatch is however useful in planetary imaging, where the alignment between RGB
channels cannot be solved via StarAlignment or DynamicAlignment yet.

Parameters
RGB: Select/deselect the channels to be aligned.

X-Offset / Y-Offset: Defines the x/y coordinate offsets for the given channel. Integer values
result in a pixel-by-pixel translation, while non-integer values make ChannelMatch perform
sub-pixel translations (interpolation). Whenever possible, interpolation should be avoided,
especially at initial processing stages.

Linear Correction Factors: Assign a linear (multiplicative) correction factor to each channel. A
value of one will not apply any correction.
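The translation itself can be pictured with SciPy's ndimage.shift, which interpolates for fractional offsets just as described above; this is a hedged sketch, not ChannelMatch's implementation, and the offsets and factor are illustrative:

    import numpy as np
    from scipy.ndimage import shift

    def match_channel(channel, dx, dy, factor=1.0):
        # Translate one (H, W) channel by (dx, dy) pixels; non-integer offsets
        # trigger spline interpolation, integer offsets move whole pixels.
        moved = shift(channel, (dy, dx), order=3, mode="nearest")
        return moved * factor   # optional linear (multiplicative) correction

    red = np.random.rand(128, 128)
    red_aligned = match_channel(red, dx=0.4, dy=-0.25)   # sub-pixel nudge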

CloneStamp

Process > Painting

CloneStamp is PixInsight's implementation of this well-known image editing tool. It is an
interactive dynamic tool, just like DBE or DynamicCrop, for example. We can create process
icons and scripts with CloneStamp instances, exactly as we can do for any other processes, and
apply them to any number of images without restrictions.

We open the CloneStamp interface and click on an image to start a new session. That image will
be the “clone stamp target,” to which all clone stamp actions will write pixels. Then we
Ctrl/Cmd+click on any point of an open image (including the target image of course) to define a
first “source point,” click on the target image to define a first “target point,” and start dragging
with the mouse to perform cloning actions. We can start a new action by clicking again and
dragging, and define a new source/target point pair with Ctrl/Cmd+click / click at any moment.

The Ctrl/Cmd+Z and Ctrl/Cmd+Y keys can be used while the target image is active to undo/redo
clone stamp actions. If we cancel the process (red cross icon on the interface's control bar), the
initial state of the image is restored. If we apply the process (green check mark icon), all clone
stamp actions (except those that have been undone) are applied, just as any other process.

When to use CloneStamp


The main use of CloneStamp is to remove small artifacts from our image that could not be
removed or corrected by other means. For that reason, it is normally used late in the processing
stage. However, in some situations it may be better to “clone out” a given artifact early in the
process, so that the artifact doesn't become even harder to remove after other processes have
made it more obvious. CloneStamp is also used sometimes to make “corrections” on a mask.

Parameters
Radius: Radius of the clone stamp brush.

Softness: Modify to define softer or coarser brush edges.

Opacity: Define the opacity (strength) of the cloning action.

Copy brush: Copy current action brush parameters.

Show bounds: Draws a box around the current cloned area.

Navigation controls: The CloneStamp interface includes a local history that can be used to
undo/redo/delete performed cloning actions. This allows us to revisit any cloning action done
during the cloning session, by clicking on the blue arrows (From left to right: First, previous, next
and last cloning actions) and also delete any given action by navigating to it and clicking on the X
to the left of the navigating arrows.

ColorCalibration

Process > ColorCalibration

The principle behind ColorCalibration is to calibrate the image by sampling a high number of
stars in the image and using those stars as a white reference. The tool can, however, be used in
many different ways.

ColorCalibration can also work with a view as its white reference image. This is particularly
useful to calibrate an image using a nearby galaxy, for example. The integrated light of a nearby
galaxy is a plausible white reference, since it contains large samples for all star populations and its
redshift is negligible.

We can also use a single star as our white reference – G2V stars are a favorite among many
astrophotographers – or even just a few stars if we wanted, depending on what we're after.

While much has been written about color balance criteria, the takeaway from tools like
ColorCalibration is that we are in control of the criteria that we want to use at any given time.

When to use ColorCalibration


A good color calibration is performed when the image has been accurately calibrated (no pun
intended), particularly flat-field corrected, and the image is still linear and has a uniform
illumination (no gradients). Preferably, the mean background value should be neutral, something
that can be done with BackgroundNeutralization.

Parameters

White Reference
Reference image: White reference image. ColorCalibration will use this image to calculate three
color calibration factors, one per channel. If unspecified, the target image will be used as the white
reference image.

Lower limit: Lower bound of the set of white reference pixels. Pixels with values equal to or
smaller than this value will be ignored. Since the minimum allowed value is zero, black pixels are
always rejected.

Upper limit: Upper bound of the set of white reference pixels. Pixels with values greater than or
equal to this value will be ignored. When set to one, only pure white pixels are rejected, but by
lowering it a bit, we can also reject pixels with very high values that are not yet saturated.

Region of Interest: Define the rectangular area in the image to be sampled by ColorCalibration.
Although defining previews to be used as white references is quicker, these parameters come in
handy when we want to reuse the process – say, when creating an instance of it.

Structure detection: Detects significant structures at small dimensional scales prior to the
evaluation of color calibration factors.

When this option is selected, ColorCalibration uses a multiscale detection routine to isolate bright
structures within a specified range of dimensional scales (see the next two parameters). We can
use this feature to perform a color calibration based on the star(s) that appear in the white
reference image.

Structure layers: Number of small-scale wavelet layers used for star detection. The higher the
number of layers, the larger the stars considered for color calibration will be.

Noise layers: Number of wavelet layers used for noise reduction. Use this parameter to prevent
detection of bright noise structures, hot pixels and/or cosmic rays. Additionally, we can also use
this parameter to have ColorCalibration ignore the smallest detected stars. A value of one would
remove the smallest of scales (usually where most noise resides). The higher the value, the more
stars would be ignored.

Manual white balance: Perform a white balance by manually specifying the three color
correction factors. If we select this option, no automatic color calibration will be applied.

Output white reference mask: When selected, ColorCalibration will create a new image with a
white reference mask, where white represents pixels that were used to calculate the color
correction factors.
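To see what the three color calibration factors amount to, here is a hedged Python/NumPy sketch that equalizes the channel means of a white reference while ignoring pixels outside the limits. Normalizing to the green channel is an assumption made for the example, not necessarily what ColorCalibration does internally:

    import numpy as np

    def color_calibration_factors(white_ref, lower=0.0, upper=0.9):
        # Per-channel factors that would render the white reference neutral.
        means = []
        for c in range(3):
            chan = white_ref[..., c]
            good = chan[(chan > lower) & (chan < upper)]   # respect the limits
            means.append(good.mean() if good.size else chan.mean())
        means = np.array(means)
        return means[1] / means   # scale R and B so they match G

    ref = np.random.rand(64, 64, 3)            # stand-in for a white reference view
    factors = color_calibration_factors(ref)
    calibrated = np.clip(ref * factors, 0.0, 1.0)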