
 

TECHNICAL COMMUNICATIONS 

DEEP LEARNING IN ASTRONOMY 


BY: ANIKET SUJAY, ENR NO. 17122006
 

Introduction 

The explosion in the volume and complexity of digital data has driven a change in how large flows of 
information are handled. This change has opened new opportunities in domains like 
medicine, security, transport, and fundamental research. As the amount of data has grown, 
so has the need to analyze it efficiently. Computers provide an excellent 
answer: they supply the computing power to process and analyze these huge 
amounts of data. Astronomy today has firmly entered the big data paradigm. The 
continuing development of ground- and space-based facilities, including large sky surveys, 
has brought astronomy into the big data league. One of the major fields to come into the 
picture in recent years is deep learning research, with extremely sophisticated algorithms 
being developed that analyze large amounts of data very efficiently. Moreover, 
these models can infer patterns that would be impossible for humans to recognize using 
classical methods.

Deep Learning 
First, we will look at how deep learning helps mitigate the problem of analyzing big data 
in astronomy.

Deep learning was inspired by how the human brain responds to stimuli. The architecture 
of the models involved in deep learning was devised by analogy with how information flows 
through our nervous system.

Model: 

 
 
 

To model an artificial neural network on a biological neural network, we have to take into 
account three basic components. First, the synapses: the synapses of the biological neuron are 
modeled as weights. These weights represent the strength of the connection between 
neurons. A negative weight represents an inhibitory connection, while a positive weight 
represents an excitatory connection. Next, we model the working of the neuron that maps 
the input to the output. Each input is multiplied by its respective weight, the products are 
summed, and the sum is fed into the ANN's analogue of the neuron. There the sum, together 
with an offset (bias) added depending on the requirement, is passed through a function 
usually termed the activation function, which determines the output of the neuron. That 
output is finally fed into successive neurons for further evaluation.

Fig: A single node in an artificial neural network. 
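
A minimal sketch of such a node in Python with NumPy; the specific weights, bias value, and tanh activation are illustrative assumptions, not taken from the report:

```python
import numpy as np

def node(x, w, b, activation=np.tanh):
    """One artificial node: weighted sum of inputs plus an offset,
    passed through an activation function."""
    z = np.dot(x, w) + b        # inputs times weights, summed, plus bias
    return activation(z)        # the activation determines the node's output

# Three inputs feeding a single node
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.2, -0.5])  # negative weight = inhibitory connection
print(node(x, w, b=0.1))
```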

The above discussion explained the working of only one node of the network. As the name 
suggests, the ‘network’ consists of numerous interconnected nodes. We generally have 
three kinds of layers in the network: the input, hidden, and output layers. The 
hidden portion can contain any number of layers stacked in series; the number depends 
upon the dimension of the inputs and the desired outputs.

Fig: Deep Neural Network 

Learning: 

There are two algorithms in deep learning: 

1. Forward propagation. 
2. Backward propagation. 

In forward propagation, we feed the inputs through each layer of the graph. The output 
from the previous layer becomes the input for the succeeding layer. 

$$H_j \;=\; \sum_{k=1}^{N} X_k\, W^{(1)}_{kj} \qquad\qquad Y_j \;=\; \sum_{k=1}^{N} G_k\, W^{(2)}_{kj}$$

where $W^{(i)}_{kj}$ is the weight in the $i$-th layer connecting node $k$ to node $j$, $X_k$ are the inputs, and $G_k$ are the activated outputs of the hidden nodes.
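
A minimal sketch of these two equations in Python with NumPy; the layer sizes, random weights, and tanh activation are illustrative assumptions:

```python
import numpy as np

def forward(X, W1, W2, activation=np.tanh):
    """Forward propagation through one hidden layer:
    the previous layer's output becomes the next layer's input."""
    H = X @ W1          # H_j = sum_k X_k * W1_kj (weighted sums)
    G = activation(H)   # G_k: activated outputs of the hidden nodes
    Y = G @ W2          # Y_j = sum_k G_k * W2_kj
    return Y

# Example: 4 inputs -> 5 hidden nodes -> 1 output (sizes are arbitrary)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(5, 1))
X = rng.normal(size=(1, 4))
print(forward(X, W1, W2))
```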

 

 
 

An activation function is defined for each node; it squashes the node's output into the 
range [-1, 1] or [0, 1]. A cost function is then selected for the network as a 
measure of how well the model is performing. 

$$J_{\text{total}} = \tfrac{1}{2}\,(\text{pred} - \text{expected})^2$$
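
As a small sketch of these two ingredients in Python (the function names here are illustrative):

```python
import numpy as np

# tanh squashes a node's output into [-1, 1]; the sigmoid squashes into [0, 1].
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Squared-error cost: J_total = 0.5 * (pred - expected)^2
def cost(pred, expected):
    return 0.5 * (pred - expected) ** 2

print(np.tanh(2.0), sigmoid(0.0), cost(0.8, 1.0))
```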

We need this cost function to be at a minimum. As the figure below illustrates, the 
optimization problem is multidimensional; this is due to the presence of many 
layers in the graph. An algorithm called gradient descent is applied to the cost function: 
at each point, the parameters are moved in the direction opposite to the gradient. 

Fig: Gradient Descent Algorithm 
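
A one-dimensional toy example of this rule, assuming the simple cost $J(w) = (w - 3)^2$ (an illustrative choice, not from the report):

```python
# Minimize J(w) = (w - 3)^2; its gradient is dJ/dw = 2 * (w - 3).
w, lrate = 0.0, 0.1
for step in range(50):
    w -= lrate * 2.0 * (w - 3.0)   # step opposite the gradient
print(w)                            # approaches the minimum at w = 3
```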

After we evaluate the cost for one forward pass, we need to apply the 
backpropagation algorithm. 

 

 
 

Gradient descent and backpropagation work together: to apply the update rule 
properly, we need to calculate how the cost function changes with respect to each 
parameter. 

A model is determined by the weights between its nodes, so we need to calculate the 
changes to these weights. 

This is done using the following relation: 

$$W_{ik} \;=\; W_{ik} \;-\; \text{Irate} \cdot \frac{\partial J}{\partial W_{ik}}$$

where $J$ is the cost function and Irate is the learning rate.

This step is repeated many times over the whole dataset until a satisfactory minimum is 
obtained. Each full cycle over the dataset is called an epoch. 
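
Putting the pieces together, here is a minimal sketch of gradient descent run over several epochs for a single-layer model; the data, sizes, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
y = X @ np.array([0.5, -1.0, 2.0])       # targets from a known rule

W = rng.normal(size=3)                   # weights to be learned
lrate = 0.1                              # learning rate (Irate)
for epoch in range(50):                  # one full pass over the data = one epoch
    pred = X @ W
    J = 0.5 * np.mean((pred - y) ** 2)   # cost, averaged over the dataset
    grad = X.T @ (pred - y) / len(X)     # dJ/dW
    W -= lrate * grad                    # W <- W - Irate * dJ/dW
print(J, W)                              # W approaches [0.5, -1.0, 2.0]
```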

Applications in Astronomy 

One of the major applications of deep neural networks in astronomy is galaxy 
morphology classification. Galaxy morphology is a system developed by astronomers 
to divide galaxies into groups based on their visual appearance. 

Several classification schemes are in use. One of them is the Hubble 
classification system, which divides galaxies into three broad classes based on the 
appearance of their shape. Researchers obtain deep-space photographs from highly 
sophisticated observation instruments, and images are collected across different spectral 
bands. Each image is then filtered based on the feature that needs to be extracted. 
For example, if one needs to study spiral formation, a transform is applied so that the 
spiral structure is highlighted. The result is a 2D matrix of data, which is flattened because 
the input layer of the ANN is one-dimensional. The model is then trained on the dataset; 
after training, it is able to recognize spiral patterns in new datasets of galaxies. 
Transforming the images with filters is done mathematically through convolution, which is 
why this type of neural network is called a Convolutional Neural Network (CNN). 
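
A minimal sketch of the convolution-and-flatten step, assuming NumPy; the toy image and the edge-highlighting kernel are illustrative stand-ins for real survey images and filters:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2D convolution: slide the filter over the image
    and sum the element-wise products at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# Toy 8x8 "image" containing a bright square, and an edge-highlighting filter
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0
kernel = np.array([[ 0, -1,  0],
                   [-1,  4, -1],
                   [ 0, -1,  0]], dtype=float)

filtered = convolve2d(image, kernel)   # highlights the square's edges
flat = filtered.flatten()              # flattened to 1D for the ANN input layer
print(filtered.shape, flat.shape)      # (6, 6) (36,)
```

In a real CNN the filter values are not fixed by hand as above; they are learned during training.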

