Spectral Energy Distribution Models

Stellar Intensity and Disk Excess in an SMC Star


By
David C. Petit

Observational Techniques in Astronomy


Project #1
Fall 2020
KU Leuven
Professor Hans Van Winckel

Abstract

The spectral energy distribution (SED) of a star in the Small Magellanic Cloud is analyzed in this
paper. The many steps taken to acquire, deredden, and compare the data of this star are discussed;
the use or removal of outliers and insensitive photometric measurements, as well as unit-conversion
considerations, are also presented. Normalizations of stellar models are made, on which
comparisons to the star's data are based. Error analysis is addressed qualitatively in words, and
quantitatively in a Monte-Carlo error simulation around the spectrum of best fit. Lessons learned and
acknowledgments conclude this paper.

Introduction

Since long before the dawn of our species, the stars have twinkled in the heavens above. Systematic
investigation and the use of scientific instruments to understand the nature of stars is a much newer
pursuit. Now, with massive databases and high-order photometric models available to anyone with an
internet connection and a command of astronomical nomenclature and syntax, those with a background
in general science and some adequate training in astronomy are able to construct the spectral energy
distribution of a myriad of nearby stars. That is what is accomplished in this paper.

Star identification

The star discussed in this paper was assigned in the SED Assignment ReadMe pdf. A summary of
the pertinent information is tabulated below:

Name                 | J-number, first release | Teff | logg | LMC/SMC | [Fe/H]
David Chandler Petit | J010254.90-722120.9     | 4000 | 0.0  | SMC     | -1.0

The Teff was assumed to be in kelvin [K], and the LMC/SMC entry was assumed to indicate the star's
location, in this case the Small Magellanic Cloud.

Data acquisition and processing

In the Strasbourg Astronomical Data Center's SIMBAD (the Set of Identifications, Measurements and
Bibliography for Astronomical Data) I used the VizieR photometry viewer and my star's
coordinates to gather the raw data. At first, a radius of 10 arcsec around the star's coordinates was
searched. This returned approximately 400 measurements and was deemed too wide a range; the
data were likely picking up other objects in a similar portion of the sky. A radius of 5 arcsec, with
approximately 230 measurements, was considered, but the decision was made to analyze a radius
of 3 arcsec around the star's coordinates, which yielded a raw data set of approximately 185
measurements. Images demonstrating where and how these data were obtained are available in
appendix A. The data in the VizieR photometry viewer were then downloaded. Additional investigations
into the NASA/IPAC Infrared Science Archive (powered by Caltech's Gator search engine) were
made; the apparent similarity of the recorded data (especially in the 2MASS and SAGE catalogs), a lack
of user friendliness, and pressing time constraints led to refraining from actual use of its data in this
project. The experience proved educational nevertheless.

The next step in analyzing the spectra involved the pruning of insensitive, inaccurate, and redundant
data. For a proper cleaning, this meant that of the approximately 185 data points, almost half were
discredited as coming from surveys that were not equipped with instruments, or not used with
techniques, that would provide a level of accuracy appropriate for constructing the
dereddened SED of a distant star (approximately 190,000 light years away) [1]. In particular, data
obtained from the Gaia, XMM, DENIS, and SkyMapper missions were all removed from the analysis.
Other cleaning steps removed several highly uncertain data points because of their
abnormally large errors, and a particular set of data points obtained by the POSS-II survey
for the same reason. Though not all of the filters appear in the photometric data, all of
the data coming from surveys that have calibration information in Appendix F except DENIS
(Geneva, Johnson, SAAO, Strömgren, ESO JHKLM, Near-IR 2MASS, MSX) were retained. The
remaining selected-for-accuracy data were then plotted after the necessary mathematical
manipulations, verifying the correct transfer from the source to the data used in this project. This
verification can be seen in the plots in appendix A. After considering discarding duplicates, it was
decided that differences in flux at the same wavelength through the same filter could be genuine
measurements of stellar oscillations or other asteroseismological phenomena;
near-duplicate data points were therefore retained. The next steps, unless otherwise
stated, used the remaining 87 data points.
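
As an illustration of this pruning (a sketch only: part of the cleaning in this project was done in
spreadsheet software, and the file and column names below are hypothetical stand-ins for the VizieR
export):

import pandas as pd

# Sketch of the pruning step; 'vizier_photometry_raw.csv' and the column
# names are hypothetical stand-ins for the VizieR photometry-viewer export.
raw = pd.read_csv('vizier_photometry_raw.csv')

# Drop surveys deemed too inaccurate for this SED (see text)
bad_surveys = ['Gaia', 'XMM', 'DENIS', 'SkyMapper', 'POSS-II']
survey = raw['sed_filter'].astype(str).str.split(':').str[0]
clean = raw[~survey.isin(bad_surveys)]

# Drop points without error bars or zero-point calibrations (see appendix notes)
clean = clean.dropna(subset=['sed_eflux', 'Flux_0'])

# Drop points with abnormally large errors; this particular threshold is illustrative
clean = clean[clean['sed_eflux'] < clean['sed_flux']]
print(len(clean), 'data points retained')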

To have a reference against which the dereddened data could later be compared and discussed, a
theoretical model was obtained before the dereddening procedure. This model
was obtained from F. Castelli at the National Institute for Astrophysics's Astronomical Observatory
of Trieste (INAF-OAT) [2]; in particular, I used the data with Teff/logg equal to 4000 and 0.0,
found in "fp00k0c125odfnew, [ 0.0], vturb=0.0 km/s, fluxes i/H=1.25," which is a section of "Grids
of MODELS and FLUXES." A plot of these data is shown in appendix B. Both graphs show a strong,
sharp increase in flux around wavelengths of 500 nm (0.5 µm) and a slower, gradual decrease of
flux between 2 000 nm and 20 000 nm (2-20 µm). Nothing else was noted from a visual inspection
of the two graphs, except that the product of frequency times flux on the model's y-axis is orders of
magnitude higher and scaled to a single steradian of sky. This difference was scaled away during the
chi-squared analysis.

For different E(B-V) values, the curve of dereddened data points moves about a central point
used for scaling the model's data. At first, a data point taken with the 2MASS J filter at 1240
nm (1.24 µm) was used as a reference to scale the model's large flux (in erg/cm^2/s/Hz/steradian) down
to the Jy scale seen in the photometry data. This can be seen in the four plots shown in
figure 5. From even the most cursory inspection, the dereddened flux data consistently
deviate from the model's fluxes at long wavelengths. This is known as the infrared excess and
is caused by emission from the dust that surrounds the star. Because the theory behind the model
assumes there is no surrounding dust and no infrared excess, while the photometric data are clearly
affected by a mass of dust that is near, but not part of, the star, data points in these long-wavelength
ranges are unsuitable for scientific comparison.

The question is then where wavelengths become too long for the theoretical Castelli model to
model the photometric data well. Moving to longer wavelengths, black-body radiation decays
quickly from its maximum and then asymptotes to zero. Any points in the data which break this trend
are not exhibiting the star's black-body radiation, and are thus likely the result of an infrared excess
from nearby dust. This is seen in all plots of the data at approximately 3 µm. At first, data with
wavelengths at or below 2190 nm (2.19 µm) were deemed valid, and data with
wavelengths at or above 3350 nm (3.35 µm) were deemed invalid because of the
heavy influence any dust in the vicinity of the star would have on those measurements.
These values were chosen because there were no data points between them. There is a local
minimum in the detected flux around 3 µm, which indicates that there is already some deviation from
the model at this point; thus all data at longer wavelengths are almost certainly affected. It is more
difficult to objectively determine the extent to which slightly shorter-wavelength data are affected;
that is, the flux data taken at 2190 nm and 2160 nm (2.19 and 2.16 µm), and at even shorter
wavelengths, might be artificially high because of the infrared excess. For the flux measurements at
3350 nm (3.35 µm) to be accurate, a very large E(B-V) value, on the order of 2, was needed. The
E(B-V) necessary for the 2190 nm (2.19 µm) data to be accurate was 0.5, a figure
reasonable enough to keep it in the chi-squared analysis. Despite this initially logical approach,
the initial cutoff was deemed too high, because the E(B-V) that minimized the chi-squared
was the limiting case of 0.

A second attempt discarded wavelength data above 1700 nm (1.7 µm) on
the assumption that it was dominated by the infrared excess. A third attempt kept data with wavelengths
of 1300 nm (1.3 µm) or less. Neither the second nor the third attempt changed the
optimal E(B-V): the chi-squared was still minimized in the limit of E(B-V) going to zero. The
fourth attempt was more successful. Wavelengths below 1100 nm (1.1 µm) were retained,
which meant that a new reference point was needed to scale the model's data down to the limited
photometric data; a Cousins I filter measurement at a wavelength of 800 nm (0.8 µm) was
chosen. Because two variables were changed at once in this instance, namely that the long-wavelength
data were truncated and that a new scaling reference was selected, it was recognized after
successfully obtaining a non-zero optimal E(B-V) that the initial choice of scaling reference
could have caused the earlier problems. A trial that returned some of the long-wavelength data, but
kept the new scaling reference, was made; very similar chi-squared values were
obtained. This suggests that the initial selection of the particular 2MASS J filter data point was a poor
choice for obtaining a valid chi-squared comparison between the photometric data and the
stellar model.

Where to draw this line of validity casts a lingering uncertainty over this investigation; it is addressed
and minimized via the chi-squared analysis, but is not definitively settled from first principles or with
unambiguous measurements in this report. More considerations are given in the future-works portion
of the discussion near the end of this report.

Chi-squared Analysis

The chi-squared analysis was set up by creating 1000 values of χ² over 1000 different E(B-V)
values ranging from 0 to 2; after setting up the program, all optimal χ² values were close to 0 and
the E(B-V) values of interest were concentrated between 0 and 1. For each value of E(B-V), the
reddened data were first calibrated to their zero points on the instruments' magnitude scales. A
magnitude was then calculated and dereddened (or brightened) at each wavelength using the provided
interstellar reddening law R(λ) and the current iteration's value of E(B-V):
Δm(λ) = E(B-V) · R(λ)
Finally, the dereddened magnitude was converted back into a flux. At one of the measured wavelengths
(first the fifteenth element in the flux array, corresponding to a wavelength of 1240 nm, and in the
final calculations the seventh element, corresponding to 789 nm), a scaling factor was created in the form
ScalingFactor = ModelFlux(789 nm) / DataFlux(789 nm)
which then divided all of the model's fluxes, scaling the model, with its comparatively large fluxes,
down to the data, with its small fluxes. Naturally, at the (originally fifteenth, eventually) seventh data
point, where λ = 789 nm, the value of the model is precisely equal to the value of the data; the
worthwhile analysis that follows is an investigation of the data around (but not on) this point. With the
raw data dereddened and the model data scaled, the program then executed a chi-squared computation
using the formula
χ² = (1/(N - D)) · Σ_i ((O_i - C_i) / σ_i)²
where O_i is the dereddened flux, C_i the scaled model flux, σ_i the flux error, N the number of data
points, and D the number of fitted parameters (here one, E(B-V)). This was the chi-squared obtained
for one value of E(B-V); all subsequent chi-squared values were obtained with the same procedure,
just over the range of varying E(B-V)'s.
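
In condensed form (a sketch mirroring the appendix code, with the same quantities; the function
wrapper is introduced here purely for illustration), each E(B-V) iteration amounts to:

import numpy as np

# One reduced chi-squared evaluation per E(B-V) value. flux and flux_0 are the
# measured fluxes and zero points, R is the reddening law A/E(B-V), model holds
# the model fluxes, sigma the flux errors; scale_index picks the 789 nm point.
def reduced_chi_squared(ebv, flux, flux_0, R, model, sigma, scale_index=7, D=1):
    mag = -2.5 * np.log10(flux / flux_0)            # calibrated magnitudes
    mag_dered = mag - ebv * R                       # deredden by E(B-V) * R(lambda)
    flux_dered = flux_0 * 10**(-mag_dered / 2.5)    # back to fluxes
    model_scaled = model / (model[scale_index] / flux_dered[scale_index])
    return np.sum(((flux_dered - model_scaled) / sigma)**2) / (len(flux) - D)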

The graph of χ² over the range of E(B-V) is seen in figure 1. The sharp increase in χ²
values beyond E(B-V) > 0.5 deceptively makes the function look as if it is increasing monotonically.
Zooming in, or plotting the exact same data on a logarithmic axis (figure 2), reveals
the local minimum, found at approximately E(B-V) = 0.15. Despite efforts made online
and with multiple colleagues, the abbreviation in the instructions, "χ² graph i.f.o. E(B-V),"
was interpreted (but never confirmed) to mean "in function of," i.e. as a function of E(B-V);
it is believed that the information provided covers what was requested.

With the chi-squared minimized, an optimal E(B-V) was obtained and then used to
identify and isolate the best dereddened flux calculated from the star's reddened data. This optimal
dereddened data set can be seen in figure 3, and was then used in the calculations of the star's
luminosity and circumstellar dust temperature, and in the Monte-Carlo analysis of the dereddening
model and its relative error.

Luminosity

The basic physics term power describes the quantity of energy transformed per unit time; its units
of joules per second give rise to the unit of power, the watt. Stars constantly burn
their hydrogen and radiate heat, as well as shed small amounts of mass, in all directions. If we
consider the two-dimensional spherical shell at the surface of a star, we can calculate the
luminosity of a star by, first, approximating it to be equal to the star's radiative power, ignoring
mass-loss effects; second, integrating over all of the wavelengths of light emitted by the star; and third,
accounting for the portion of the sky that a telescope can detect and generalizing that to the entire
four-pi steradians of a sphere at that star's distance. After minimizing chi-squared and using the best
E(B-V) value for a match between the data and the model, an integral of the fitted model's flux data
points was taken over wavelengths from 343 nm to 20 000 nm. This gave a flux arriving at the
photometric detectors of approximately 15 jansky (Jy), or 1.5*10^-25 W/m^2. To get the true
luminosity, this power per unit detector area is multiplied by the total area illuminated by the star
and divided by the area sampled by the telescope. That is, mathematically,
L = L_area * (A/a)
Assuming a spherical area for the radiation of the star and a circular aperture on the
telescope, this can be rewritten in terms of radii,
L = L_area * (R/r)^2
Three more assumptions were made to calculate the luminosity. The first is that the star is a point
source of light, a common approximation given the immense distance. The second is that the
star in this project is exactly 190,000 light years away [1]; though this is most certainly not
exact, it is a good approximation for the average distance from Earth to any
star in the Small Magellanic Cloud. The third is that the radius of every instrument used to collect
the photometric data is 1 m. This assumption comes from the 2MASS instrument
being a 1.3 m telescope [4], the WISE space telescope being 0.40 m in diameter [3], and the Spitzer
space telescope being 0.85 m in diameter [5]. This too is not strictly correct,
and it is likely that this assumption caused larger errors than the one assigning the star a
static and precise location within the SMC. With these approximations in place, an initial
luminosity was calculated to be 6 * 10^60 W. This is unreasonably high, on par
with the energy output of a galaxy cluster, all supernovae, or the observable
universe. Since 15 Jy seemed a reasonable integrated flux from the VizieR data, and the mathematical
relationship between the light entering a small distant area and the light emitted by the
source is fairly simple, the error was believed to lie in an unnoticed software bug. After
much painstaking review of numbers and characters, a Jansky conversion factor of ".../(10**-26)"
was discovered and corrected to "...*(10**-26)". Afterwards, a more reasonable approximation of
the luminosity was obtained: 6 * 10^18 W. This can be compared to the typical luminosity of a
post-AGB star, which ranges between thousands and tens of thousands of times that of our sun
[6][7][8]; a convenient midpoint for approximate calculations is 10 000 solar luminosities. Our
sun has a luminosity of 3.828×10^26 W [9], which is already about 60 million times brighter than
the value obtained here for the star in the Small Magellanic Cloud. This suggests an error in
the mathematics, or perhaps that this star is not in the post-AGB phase and has not brightened
beyond a cool, dim star on the main sequence.
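
As a quick numeric check of the corrected figure (a sketch under the stated assumptions: an
integrated flux of 15 Jy and a distance of exactly 190,000 light years):

import numpy as np

# Check of the luminosity estimate under the stated assumptions.
flux_integrated = 15 * 1e-26      # 15 Jy-equivalent integrated flux, in W/m^2
distance = 190_000 * 9.461e15     # light years -> metres
area_telescope = 1.0              # assumed collecting area, in m^2 (see text)

luminosity = 4 * np.pi * distance**2 * flux_integrated / area_telescope
print(f'L ~ {luminosity:.1e} W')  # ~6e18 W, matching the corrected value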

Monte-Carlo Analysis

Monte-Carlo analyses are statistical methods for approximating deterministic quantities. Like the
Monaco casino district after which they are named, they rely on chance; unlike the gambler, however,
the simulator benefits from repetition, with enough runs yielding ever more accurate insight into the
phenomenon under study. It is very difficult to measure the amount of reddening, and its margin of
error, for a distant star. In principle, we would need a hollow spherical shell of detectors close
enough to the star to catch all of the emitted photons, but not so close that it melts or
breaks. The impracticality of making such a detector gives rise to the usefulness of Monte-Carlo
simulation: we can use randomness to estimate the expected value and confidence interval of the
dereddening by perturbing the inputs to the chi-squared analysis and measuring its outputs. One
thousand arrays of 87 elements each were generated, with elements drawn randomly from a Gaussian
normal distribution with μ = 0 and σ = 0.05. These zero-centred arrays were then normalized to the
standard deviation of the flux errors in the selected-for-accuracy data points originally obtained from
VizieR, and added to 1000 copies of the best dereddened flux data (with its 87 elements). A comparison
between the 87 elements of the original and perturbed data sets led to the creation of 1000 χ² values.
These values could then be investigated with statistical tools to gain some insight into the nature of
the dereddening of the distant star. The average value of the generated χ²s was slightly less than
0.01. During software testing, I ran this simulation many times and noticed values that changed
constantly (due to the new random numbers generated each run) but never strayed far
from 0.009. An example can be seen in figure 4. Before a break one afternoon, the
code was modified to generate orders of magnitude more perturbations and was left to run
unattended. The resulting average was 0.008705518808426798. Believing this to
be the most accurate number obtained, I rounded it to 0.0087 and continued. A similar approach
was used for the standard deviation of the χ² values; the longest run gave
0.001318597301548488, which was rounded to 0.0013. This average and standard deviation of
χ² are related to the reddening, and to the error on the reddening, in the data and model.
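
Condensed from the appendix code (the Gaussian width of 0.05 and the normalization to the flux-error
spread, sigmadata, are taken from there), one run of the simulation looks like:

import numpy as np

# Sketch of the Monte-Carlo loop; flux_best is the optimal dereddened flux
# array (87 elements) and sigmadata the spread of the flux errors, in Jy.
rng = np.random.default_rng()

def monte_carlo_chi_sq(flux_best, sigmadata, n_trials=1000, dof=86):
    chi_sq = np.empty(n_trials)
    for i in range(n_trials):
        noise = rng.normal(0.0, 0.05, size=len(flux_best)) * sigmadata
        perturbed = flux_best + noise
        # reduced chi-squared of the perturbed data against the best fit
        chi_sq[i] = np.sum(((perturbed - flux_best) / sigmadata)**2) / dof
    return chi_sq.mean(), chi_sq.std()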

Dust temperature estimate

Many stars have dust that is close to, but not a physically connected part of, the star. This can be
seen in the large difference between the data and the stellar model in long-wavelength (>1000 nm)
flux. Whereas an isolated star exhibits black-body-like emission that naturally decays
at longer wavelengths, the data show a star situated within a dusty medium, one that
absorbs the star's energy and re-emits it at longer wavelengths. There are few constants in the
universe, and we might expect the temperature of the dust grains to decrease as we move
farther from the surface of the star. In order to simplify the star-dust system, a sweeping
approximation is made: the dust radiates uniformly, that is, all of the dust emits a constant
amount of flux. As before, we are looking at a large and complex astronomical system, so this
assumption is clearly inaccurate; but also as before, it is useful for getting a sense of the star-dust
system. If the bulk of the dust is considered, then a useful approximation of constant-flux dust
emission follows: all of the dust emits light between 2 160 nm and 22 100 nm,
and the flux of all such light arrives at Earth at 0.0035 Jy. This comes from figure 3,
which shows the large difference between the photometric data and the Castelli model. If the dust acts
as a black body, then we can derive the range of temperatures the dust must have to satisfy the
constant-flux assumption. Planck's law states
B_ν(ν, T) = (2hν³/c²) · 1/(exp(hν/kT) - 1)
which can be solved for temperature and rewritten in wavelength form as
T = (h·c) / (λ·k·ln(1 + (2·h·c)/(λ³·B)))
where all terms are constants except the wavelength λ, which spans the range above, and B is the
assumed constant flux. Substituting the terms (with the wavelength entered twice, as a minimum and
a maximum) yields a range of temperatures from 16 K to 140 K: the longest-wavelength, lowest-energy
infrared light corresponds to dust at 16 K, and the shortest-wavelength light to dust at 140 K. If a
single round figure is wanted, 100 K is the best estimate of the bulk dust temperature.
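
The quoted 16 K to 140 K range can be verified by inverting Planck's law at the two wavelength
limits (a short check using the constants above):

import numpy as np

# Invert Planck's law for T at the two wavelength limits, with the
# assumed constant flux of 0.0035 Jy.
h = 6.62607015e-34   # J s
c = 299792458.0      # m/s
k = 1.380649e-23     # J/K
B = 0.0035e-26       # assumed constant flux, W/m^2/Hz

def planck_temperature(wavelen):
    # temperature at which a black body emits spectral flux B at wavelen [m]
    return (h * c) / (wavelen * k * np.log(1 + 2 * h * c / (wavelen**3 * B)))

print(planck_temperature(22100e-9))  # ~16 K, long-wavelength end
print(planck_temperature(2160e-9))   # ~140 K, short-wavelength end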

A second approach was used to check the results of the first. There is a relationship between the
temperature of dust around post-AGB stars with a near-IR excess and the geometry of the system,
T_d = T_eff * sqrt(R / (2d))
where T_eff is the effective temperature of the star, R is the stellar radius, and d is the distance from
the star to the dust. Also available are the luminosity relation,
L = 4π * R² * σ * T⁴
and Wien's law,
λ_max = b / T, where b = 2.897771955 × 10^-3 m·K
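
This second calculation was not carried through in this report. As an illustrative sketch only, with a
hypothetical stellar radius and star-dust distance (placeholders, not values determined here), the
relations above would be evaluated as:

import numpy as np

# Illustrative only: R_star and d_dust are hypothetical placeholder values.
T_eff = 4000.0           # K, from the star-identification table
R_star = 100 * 6.957e8   # m; hypothetical radius of 100 solar radii
d_dust = 400 * 1.496e11  # m; hypothetical star-dust distance of 400 au

T_dust = T_eff * np.sqrt(R_star / (2 * d_dust))  # dust temperature relation
lam_max = 2.897771955e-3 / T_dust                # Wien's law, in metres
print(T_dust, lam_max)

With these placeholder inputs the relation happens to give roughly 96 K, with peak emission near
30 µm, the same order as the ~100 K bulk estimate above.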



Comparing these temperatures to the average temperature on Earth shows that the dust is very cold;
comparing them to the ~3 K temperature of the vacuum of space shows, interestingly, that this dust
is simultaneously much hotter, and thus more radiant, than the vacuum or typical interstellar matter.

Discussion

In the many steps taken to obtain astronomical data, to examine the scientific instruments and
calibrations used in the data acquisition, and to create and evaluate a spectral energy distribution from
those data, a whole host of new knowledge and familiarity was obtained. Though numerous
assumptions and approximations were made in this work, reasonable quantities have been
obtained for the dereddened flux emissions, the χ² of the color excess, the average and deviation of the
dereddening from Monte-Carlo simulations, and the circumstellar dust temperatures. If the goal of this
project was to create an accurate and academically publishable description of the spectral energy
distribution of a particular star in the Small Magellanic Cloud, then this report has a very long way to
go before meeting that objective. On the other hand, if the goal of this project has been to learn
about astronomical databases and how to access them, useful literature on models of stellar
emission, mathematical software packages like Python, the
formatting and writing of academic reports, and an ever deeper appreciation for the work of
other and earlier astronomers, then this has undoubtedly been a hugely successful endeavor.

Future works and investigations

Despite being the most interesting and educational project that I have had the fortune to work on
this semester, this has also been by far the most difficult, and it is time, rather than absolute completion
of the work, that is finishing this report. There are many unfinished parts of this
project, most small, some less so. Perhaps the largest task I would undertake,
given additional time, is the expansion, reorganization, and refactoring into functions of my Python
code. Superior code defines functions for repetitive commands and then calls them in the
places they are used; despite having a moderately robust familiarity with Python now, it was a
very unfamiliar language at the start of the semester. A familiarity with spreadsheets (both
Microsoft's and LibreOffice's) led me to do some basic reordering, filtering, and plotting
of the raw data outside of Python. This would be another area in which to expand the Python code, in
order to cut out the use of, and the possible issues that arise from, additional software packages.

The selection of which data point to normalize the Castelli model on was made quite arbitrarily.
Contemplating this weakness led to the idea of building a for loop around the entire code: one
that would repeat everything done once in this report as many times as there
are data points, in each iteration calculating the distribution of χ², finding its minimum, and
then comparing the minima (as well as the Monte-Carlo averages and standard deviations),
with the intention of optimizing this aspect of the project instead of settling for a functional but
likely suboptimal scaling point.

Integration of the NASA/IPAC Infrared Science Archive, via Caltech's Gator search engine,
would help bolster the content of this report. It is also possible, though unlikely, that significant
changes, or even improvements, to the values obtained throughout this report could result.
Learning proposal writing and acquiring telescope time for more and newer photometric data would
also prove useful, though that will be the subject of a much-anticipated class in semesters to come.

In terms of appearance, this report could be polished in a few ways. It would be very worthwhile to
go back and recreate the earliest plots, because they lack the uniformity (namely dotted grids
and best PNG resolutions) that high-quality reports demand. Though it was not required, our nearby
sun provides a wealth of highly accurate data and could have been used to generate more plots and
graphs comparing this slightly cooler SMC star with an object for which not only accurate data, but
also a deep familiarity and intuition, exist, even for lay readers.

Appendix A: Plots, Figures, and Diagrams


Figure 1: Chi-squared as a function of E(B-V)
Figure 2: Logarithmic plot of chi-squared as a function of E(B-V)
Figure 3: Dereddened data and scaled model
Figure 4: Monte-Carlo simulations of chi-squared


Appendix B: Python code and csv spreadsheet


References

[1] Hodge, Paul W. (Professor Emeritus, Department of Astronomy, University of Washington,
Seattle). "Magellanic Cloud." Encyclopædia Britannica, June 27, 2017.
https://www.britannica.com/topic/Magellanic-Cloud. Accessed December 23, 2020.

[2] Castelli, Fiorella. Complete database for ATLAS and SYNTHE. May 22, 2015
http://wwwuser.oats.inaf.it/castelli/grids/gridp00k0odfnew/fp00k0tab.html

[3] Greicius, Tony & Dunbar, Brian. National Aeronautics and Space Administration. Aug. 4, 2017
https://www.nasa.gov/mission_pages/WISE/spacecraft/index.html

[4] Skrutskie et al. (2006). This publication makes use of data products from the Two Micron All
Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing
and Analysis Center/California Institute of Technology, funded by the National Aeronautics and
Space Administration and the National Science Foundation.
https://irsa.ipac.caltech.edu/Missions/2mass.html

[5] Greicius, Tony & Dunbar, Brian. National Aeronautics and Space Administration. Aug. 4, 2017
https://www.nasa.gov/mission_pages/spitzer/infrared/index.html

[6] Oomen, Glenn-Michael; Van Winckel, Hans; Pols, Onno; and Nelemans, Gijs. Modelling
Depletion in post-AGB star by re-accretion of gas from a dusty disk. (2019). IvS KU Leuven &
Radboud University. https://fys.kuleuven.be/ster/education/seminars/glennmichaeloomen-
seminar080319.pdf

[7] Leiden University. Chapter 11 – Late Evolution of M < 8 M_sun.
https://home.strw.leidenuniv.nl/~nielsen/SSE17/lectures/Stellar_lecture_10.pdf

[8] Kwok, Sun (2008). "Stellar Evolution from AGB to Planetary Nebulae." Proceedings of the
International Astronomical Union, 252. doi:10.1017/S1743921308022771.

[9] Gregersen, Erik. "Luminosity." Encyclopædia Britannica, August 07, 2015.
https://www.britannica.com/science/luminosity. Accessed December 23, 2020.

Acknowledgments

This research has made use of the VizieR photometry viewer, and of the NASA/IPAC Infrared Science
Archive, which is funded by the National Aeronautics and Space Administration and operated by the
California Institute of Technology.

Appendix A: SMC Star Photometric Data



Appendix B: Theoretical Models for Chi-squared comparisons from Castelli


Error bars were calculated. One data point was ignored because its error is so large that it breaks
the plot's formatting:

-c=015.72889922672 -72.35585346729,eq=J2000&-c.rs=0.004 15.72889922672 -72.35585346729 IV/38/tic 5.41E+05 5.54E-01 6.28E-04 7.82E-04 Johnson:V 6.28E-30 3.4001804E-15 4.2339826E-15

All data without error bars were deleted (otherwise the chi-squared goes to infinity); this dropped most
of the data that lacked Flux_0 calibration values in the process. The rest of the data without Flux_0
values were then deleted (otherwise F = 0*10^mag = 0).

# Observational Techniques in Astronomy Project #1 - Spectral Energy Distribution


# Import data analysis and plotting features
from astropy.io.votable import parse
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
plt.style.use('seaborn-poster') # see print(plt.style.available); candidates: bmh, fivethirtyeight, seaborn, seaborn-dark, seaborn-darkgrid, seaborn-poster, seaborn-ticks
# Use pandas command library to turn the csv data (with only good accurate measurements) into one large DataFrame (matrix)
star_and_model_data_frame = pd.read_csv('obs_tech_star_&_model_data.csv')
model_data_frame = pd.read_csv('obs_tech_model_of_stellar_output_by_frequency_castelli_python.csv') #860 wavelengths
#star_data_frame = pd.read_csv('obs_tech_star_data_accurate_python.csv')
#star_df = pd.read_csv('TestingFilePythonStar_Data.csv', index_col=0) #this makes the LHS/counting column 0 and vanish
#print('\n\nLets print the SMC star\'s table: ')
#print(star_and_model_data_frame)
# Make arrays from the Data Frame
wavelength = np.array(star_and_model_data_frame['wavelength (nm)'])
flux = np.array(star_and_model_data_frame['_sed_flux ( Jy )'])
fluxerror = np.array(star_and_model_data_frame['_sed_eflux ( Jy )'])
flux_0 = np.array(star_and_model_data_frame['Flux_0 ( Jy )'])
reddening = np.array(star_and_model_data_frame['A/(E(V-B)) [mag]'])
C = np.array(star_and_model_data_frame['Model_Flux_Hnu_(erg/cm^2/s/Hz/ster)']) # the model's (eddington) flux
ism_reddening = np.array(star_and_model_data_frame['A/(E(V-B)) [mag]'])
# Calculate the frequency * flux and put it in SI units, 1 Jy = 10^-26 watts per square metre per hertz.
c = 299792458 #m/s
frequencyflux = ( (c/wavelength) * flux ) / (10**17)
# calculate the frequency*flux error and its maximum
frequencyfluxerror = (c/wavelength) * fluxerror / (10**17)
frequencyfluxerrormax = max(frequencyfluxerror)
wavelength_um = wavelength/1000
# Set up the model's data for stellar emissions (castelli)
wavelengths_all_model = np.array(model_data_frame['wavelength [nm]'])
fluxes_all_model = np.array(model_data_frame['Hnu (erg/cm^2/s/Hz/ster)'])
model_wavelength = star_and_model_data_frame['model_wavelength (nm)']
model_eddington_flux = star_and_model_data_frame['Model_Flux_Hnu_(erg/cm^2/s/Hz/ster)']

# Let's run a chi-squared analysis. O is dereddened flux, C is the model's flux, sigma is data's error,
# N is the # of data points or len(O or sigma), and D is # of parameters (so N-D is the degrees of freedom)
# Run it over the data that is valid (0.3 - 2.2 um wavelengths)
chi_squared_data_frame = pd.read_csv('obs_tech_star_&_model_data_small_wavelength.csv')
chi_wavelength = np.array(chi_squared_data_frame['wavelength (nm)'])
chi_flux = np.array(chi_squared_data_frame['_sed_flux ( Jy )'])
chi_fluxerror = np.array(chi_squared_data_frame['_sed_eflux ( Jy )'])
chi_flux_0 = np.array(chi_squared_data_frame['Flux_0 ( Jy )'])
chi_reddening = np.array(chi_squared_data_frame['A/(E(V-B)) [mag]'])
chi_C = np.array(chi_squared_data_frame['Model_Flux_Hnu_(erg/cm^2/s/Hz/ster)']) #the model's (eddington) flux
chi_ism_reddening = np.array(chi_squared_data_frame['A/(E(V-B)) [mag]']) # few data points
chi_model_wavelength = chi_squared_data_frame['model_wavelength (nm)']
chi_model_eddington_flux = chi_squared_data_frame['Model_Flux_Hnu_(erg/cm^2/s/Hz/ster)']
# make arrays to fill in the for loop
chi_flux_mag = np.array([])
chi_flux_mag_dered = np.array([])
chi_flux_dered = np.array([])
chi_model_flux_scaled = np.array([])
chi_evb_array = []
#evb_array = np.array([])
#chi_sq_array = np.array([])
chi_sq_array = []
chi_model_flux_scaled = np.array([])
chi_sq_error = 0
# set up the for loop's parameters
scale_index = 7
last_step_value = 1
steps = 1000
step_size = last_step_value/steps
# chi-squared constants
N = len(chi_flux)
D = 1 # the chi-squared analysis is a function of the reddening only, i.e. 1 parameter
for i in list(range(1, steps+1)):
    evb = i*step_size # the current E(B-V) value, stepping over (0, 1]
    chi_evb_array.append(evb)
    chi_flux_ratios = chi_flux / chi_flux_0 # calibrate the fluxes against their zero points
    chi_flux_mag = -2.5 * np.log10(chi_flux_ratios) # convert to magnitudes
    chi_mag_gain = evb*chi_ism_reddening # reddening in magnitudes for this E(B-V)
    chi_mag_dered = chi_flux_mag - chi_mag_gain # deredden the magnitudes
    chi_flux_dered = (chi_flux_0 * (10**(-chi_mag_dered/2.5))) # convert back into fluxes
    chi_scaling_ratio = chi_model_eddington_flux[scale_index]/chi_flux_dered[scale_index] # scaling ratio at the reference point
    chi_model_flux_scaled = chi_model_eddington_flux/chi_scaling_ratio # normalize the model fluxes
    chi_sq_error = sum( ((chi_flux_dered - chi_model_flux_scaled)/chi_fluxerror)**2 ) # sum of ((O-C)/sigma)^2
    chi_sq_array.append(chi_sq_error/(N-D)) # store the reduced chi-squared

print('The ' + str(len(chi_evb_array)) + ' E(B-V)\'s are: ', chi_evb_array)
print('The ' + str(len(chi_sq_array)) + ' chi-squared\'s are: ', chi_sq_array)
chi_sq_array = np.array(chi_sq_array) # convert to an array for np.amin/np.argmin
chi_sq_min = np.amin(chi_sq_array)
evb_min_index = np.argmin(chi_sq_array)
evb_min = chi_evb_array[evb_min_index]
print('The minimum chi-squared is: ' + str(chi_sq_min))
print('and the E(B-V) index position that minimizes it is: ')
print(evb_min_index)
print('The E(B-V) value that minimizes it is: ', evb_min)
# Show the plot of Chi-Squares as a function of E(V-B)
plt.scatter(chi_evb_array, chi_sq_array)
plt.title("Chi-Squared Analysis of E(B-V)s")
# plt.xlim(0, 1); plt.ylim(-10000, 90000)
plt.xlabel("E(B-V)") # dimensionless
plt.ylabel("Chi-squared values") # dimensionless
plt.grid(linestyle='dotted')
plt.show()
# Show the logarithmic plot of Chi-Squares as a function of E(V-B)
plt.scatter(chi_evb_array, chi_sq_array)
plt.title("Chi-Squared Analysis of E(B-V)s, Logarithmic Axis")
plt.yscale('log')
#plt.xscale('log')
plt.xlabel("E(B-V)") # dimensionless
plt.ylabel("Log chi-squared values") # dimensionless
plt.grid(linestyle='dotted')
plt.show()
# Show the plot of the SED with ALL OF THE (not only chi) DATA with the complete model set
# at the evb that minimizes the chi-square. Compare to the scaled model data
print('the best E(B-V) index position is: ' + str(evb_min_index))
flux_ratios_best = flux / flux_0 # calibrate the fluxes
flux_mag_best = -2.5 * np.log10(flux_ratios_best) # convert to magnitudes
mag_gain_best = evb_min*ism_reddening # use the E(B-V) value that minimizes chi-squared
mag_dered_best = np.subtract(flux_mag_best, mag_gain_best)
flux_dered_best = np.multiply(flux_0 , (10**(-mag_dered_best/2.5)))
scaling_ratio_best = model_eddington_flux[scale_index]/flux_dered_best[scale_index] # scaling ratio at the Cousins I reference point (789 nm)
model_flux_scaled_best = np.divide(fluxes_all_model, scaling_ratio_best)

plt.scatter(wavelengths_all_model, model_flux_scaled_best, label='Model by Castelli') #, s=20 , marker='o'


plt.scatter(wavelength, flux_dered_best, label='dereddened data') #, s=20 , marker='o'
plt.title("Best dereddened data vs. scaled model fluxes (log wavelengths)")
plt.xscale('log')
plt.xlabel("Wavelength (nm)")
plt.ylabel("Flux (Jy)")
plt.legend()
plt.grid(linestyle='dotted')
plt.show()
# LUMINOSITY: Take the integral of the data (the fitted model's data) over all wavelengths [Jy/m^2]
luminosity_measured_Jy = np.trapz(model_flux_scaled_best, x=wavelengths_all_model, dx=0.1)
luminosity_measured = luminosity_measured_Jy*(10**-26) # in W/m^2
distance_to_smc_LY = 190000 # in Lightyears
distance_to_smc = distance_to_smc_LY * 9461000000000000 # in m = 1,797,590,000,000,000,000,000
print(distance_to_smc)
surface_area_telescope = 1 # in m^2
luminosity = 4*np.pi*(distance_to_smc**2)*luminosity_measured/surface_area_telescope # scale the received flux up to the full sphere
luminosity_small = 4*np.pi*luminosity_measured
print('The luminosity was estimated (on the back of an envelope) by hand to be: 12.3 Jy')
print('An integral of the flux over the wavelengths gives the power arriving: ' + str(luminosity_measured_Jy) + ' Jy; now scale over the rest of the sky to get the total luminosity')
print('The luminosity of the star (over all frequencies and directions) in W is: ' + str(luminosity))
# TEMPERATURE, DUST: Assume the dust is a black-body, solve Planck's law for T, and solve over the range of wavelengths
h = 6.62607015*(10**-34) # J s
k = 1.380649*(10**-23) # J/K
FluxBB = 0.0035*(10**-26) # W/m^2 (Hz is canceled out)
wavelen_min = 2160*(10**-9) # m
wavelen_max = 22100*(10**-9)# m
T_max = (h*c)/(wavelen_min*k*np.log(1 + (2*h*c)/((wavelen_min**3)*FluxBB) ))
T_min = (h*c)/(wavelen_max*k*np.log(1 + (2*h*c)/((wavelen_max**3)*FluxBB) ))
print('The temperature of the dust goes from: ' + str(T_min) + ' K to ' + str(T_max) )
# Do a Monte-Carlo analysis: probe the error on the reddening determination
# For 1000 tests: generate equivalent data-arrays in which the data is given by a normal distribution of random numbers
# with 1 StdDev the error on the data.
difference = []
monte_carlo_chi_sq_array = []
location = 0
scale = 0.05
size = len(flux_dered_best)
sigmadata = 0.000167640600716 # in Jy, from the standard deviation of the flux errors on the data
array1000 = np.array(list(range(1000)))
for i in array1000:
    # generate one perturbed data set: Gaussian noise normalized to the flux-error spread
    random = np.random.normal(location, scale, size)
    datanew = flux_dered_best + ( random * sigmadata )
    # use the perturbed data set to calculate a new reduced chi-squared against the best fit
    monte_carlo_chi_squared = sum( ((datanew - flux_dered_best)/sigmadata)**2 )
    monte_carlo_chi_sq_array.append(monte_carlo_chi_squared / (N - D))
# Find the mean and StdDev of the chi-squared array
monte_carlo_mean = np.average(monte_carlo_chi_sq_array)
monte_carlo_stdev = np.std(monte_carlo_chi_sq_array)
# The average corresponds to the reddening, and the StdDev to the error on the reddening
print('The Monte-Carlo analysis indicates the reddening (or E(B-V) ) is: ' + str(monte_carlo_mean))
print('The Monte-Carlo analysis indicates the error (on reddening) is: ' + str(monte_carlo_stdev))
# The mean is then a good estimator of E(B-V);
# the standard deviation is a good estimator of the 1-sigma error on E(B-V).
#print, and show the distribution of the monte-carlo results
print('From a monte carlo analysis, the E(V-B) is estimated as: ')
print(monte_carlo_mean)
print('and the E(V-B) error (of 1 sigma) is estimated as: ')
print(monte_carlo_stdev)
# print('the number of elements in array1000 is: ' + str(len(array1000)))
# print('the array1000 is: ') print(array1000)
# print('the difference is: ') print(difference) print('the # of elements in difference: '+str(len(difference)))
plt.clf() # clears the entire current figure with all axes
plt.scatter(array1000, monte_carlo_chi_sq_array)# , s=20, marker='o')
plt.title("Monte-Carlo distribution")
plt.xlabel("The simulation number in the 1000 monte-carlo perturbations")
plt.ylabel("Chi-squared values from the 1000 simulations") #plt.ylabel("Squared Differences (between from best fit's perturbed data
and scaled model)")
plt.grid(linestyle='dotted')
plt.show()
--------------------------------------------------------------------------------------
Latex Code:
--------------------------------------------------------------------------------------
\documentclass[a4paper,12pt]{article}
%\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{graphicx}
\usepackage[colorlinks, linkcolor=black, citecolor=black, urlcolor=black]{hyperref}
\usepackage{amsmath,amssymb, gensymb, float}
\usepackage{caption}%, wrapfig
\usepackage{geometry}
%\geometry{tmargin=2.5cm, bmargin=2.5cm, lmargin=2.5cm, rmargin=2.5cm}
\usepackage{todonotes} %Used for the figure placeholders
\usepackage{ifthen}
\usepackage{apacite}
\usepackage{appendix}
\usepackage{bibtopic}
\bibliographystyle{apacite}
\setlength\parindent{0pt}
\usepackage{afterpage}
\makeatletter
\newcommand*{\centerfloat}{%
\parindent \z@
\leftskip \z@ \@plus 1fil \@minus \textwidth
\rightskip\leftskip
\parfillskip \z@skip}
\makeatother

%\title{Observational Techniques in Astronomy\newline Project 1


%\\Spectral Energy Distribution Model\newline Stellar Intensity and Disk Excess in a SMC Star}
%\author{David C. Petit - r0818065\newline\newline KU Leuven}
%\date{Autumn 2020}

\begin{document}

\begin{titlepage}
\begin{center}
\vspace*{1cm}
\textbf{Spectral Energy Distribution Model}\\
\vspace{0.5cm}
Stellar Intensity and Disk Excess in a SMC Star\\
\vspace{1.5cm}
\textbf{David C. Petit\\r0818065}
\vfill
Project \#1 for\\
Observational Techniques in Astronomy\\
\vspace{0.8cm}
\includegraphics[width=0.3\textwidth]{KUL.png}
\\Institute of Astronomy\\
KU Leuven\\
Belgium\\
Autumn 2020
\end{center}
\end{titlepage}

%\maketitle

% Your name and student number must be filled in on the title page found in
% titlepage.tex.

%\newboolean{anonymize}
% Uncomment to create an anonymized version of your report
%\setboolean{anonymize}{true}

%\input{titlepage}

\pagenumbering{roman}
\tableofcontents
\newpage
\pagenumbering{arabic}

\section*{Abstract}
The spectral energy distribution (SED) of a star in the Small Magellanic Cloud is analyzed in this
paper. The many steps taken to acquire, deredden, and compare the data of this star are discussed;
use or removal of outliers and insensitive photometric measurements, as well as, unit conversion
considerations are also presented. Normalizations of stellar models are made, from which
comparisons to the star’s data are based. Error analysis is addressed qualitatively in words, and
quantitatively in a Monte-Carlo error simulation around the spectra of best fit. Lessons learned and
acknowledgments conclude this paper.
\newline

\section{Introduction}
Since long before the dawn of our species the stars have twinkled in the heavens above. Systematic
investigation and the use of scientific instruments to understand the nature of stars is a much newer
pursuit; now, with massive databases and high-order photometric models, available to any and all
with an internet connection and astronomical nomenclature and syntax, those with a familiarity in
general science and some adequate training in astronomy are able to create the spectral energy
distribution of a myriad of nearby stars. That is what is accomplished in this paper.

\section{Star Identification}
The star discussed in this paper was assigned in the SED Assignment ReadMe pdf. A summary of
the pertinent information is tabulated below:

\begin{center}
\begin{tabular}{ c c c c c c }
Name & J-number, First release & Teff & logg & LMC/SMC & [Fe/H] \\
David Chandler Petit & J010254.90-722120.9 & 4000 & 0.0 & SMC & -1.0
\end{tabular}
\end{center}
The $T_{eff}$ was assumed to be in Kelvin [K], and LMC/SMC was assumed to be the star’s
location, in this case, the Small Magellanic Cloud.

\section{Data Acquisition and Processing}

In Strasbourg astronomical Data Center’s SIMBAD (the Set of Identifications, Measurements and
Bibliography for Astronomical Data) I used the VizieR photometry viewer and my star’s
coordinates to gather the raw data. At first a radius of 10 arcsec around the center of the star was
generated. This led to approximately 400 measurements, and was deemed too wide of a range and
my data was likely picking up other objects in a similar potion of the sky. A range of 5 arcsec, with
approximately 230 measurements was considered, but the decision was made to analyze the range
of 3 arcsec around the star’s coordinates, which entailed a raw data set of approximately 185
measurements. Images demonstrating where and how this data was obtained are available in
appendix A. This data in the VizieR Photometry Viewer was downloaded. Additional investigations
into the NASA/IPAC Infrared Science Archive (powered by Caltech’s Gator search engine) were
made. Apparent similarity of recorded data, especially regarding the 2mass and sage catalogs; a lack
of user friendliness; and pressing time constraints led to refraining from actual use of its data in this
project. The experience proved educational never the less. \\
\\
The next step in analyzing the spectra involved the pruning of insensitive, inaccurate, and redundant
data. For a proper cleaning, this meant that of the approximately 185 data points, almost half were
discredited as coming from a survey that was not equipped with instruments or used with
techniques that would provide a level of accuracy that would be appropriate for making the
dereddened SED in a distant star (approximately 190,000 light years away) [1]. In particular, data
obtained from the Gaia, XMM, DENIS, and SkyMapper missions were all removed from analysis.
Other cleaning techniques led to the removal of several highly uncertain data points because of their
abnormally large error sizes, and of a particular set of data points obtained by the POSS-II survey
because of the same reason. Though not all of the filters were used in the photometric data, all of
the data coming from surveys that have calibration information in Appendix F except DENIS
(Geneva, Johnson, SAAO, Strömgren, ESO JHKLM, Near-IR 2MASS, MSX) were retained. The
remaining selected-for-accuracy data was then plotted after the necessary mathematical
manipulations, verifying the correct transfer from the source to the data for use in this project. This
verification can be seen in the plots in appendix A. After considering discarding duplicates, it was
believed that the differences in flux at the same wavelength for the same filter could be
measurements of stellar oscillations or other astroseismological phenomena that are indeed accurate
measurements; near-duplicate data points were also retained. The next steps, unless otherwise
stated, used a remaining 87 data points. \\
\\
In order to evaluate the accuracy of the dereddened model that will be compared to and discussed in
a future paragraph, a theoretical model was obtained, before the dereddening procedure. This model
was obtained from Castelli, F. at the National Institute for Astrophysics’s Astronomical Observatory
of Trieste (INAF-OAT) [2]; in particular, I used the data within Teff/logg equal to 4000 and 0.0,
found in
\begin{center}
fp00k0c125odfnew, [ 0.0], vturb=0.0 km/s, fluxes i/H=1.25
\end{center}
which was a section of "Grids of MODELS and FLUXES"
A plot of this data is seen in appendix B. The similarity of a strong sharp increase in flux around
wavelengths of 500 nm ($0.5 \mu m$) and slower gradual decrease of flux between 2 000 nm and
20 000 nm ($2-20 \mu m$) is seen in both graphs. Nothing else was noted from a visual inspection
of the two graphs, except the product of the frequency times the flux on the y-axis of the model is
noted to be orders of magnitude higher, and scaled to a single steradian of the sky. This difference
was scaled during the chi-squared analysis. \\
\\
For different E(V-B) values, the curve of dereddened data points will move about a central point
used for scaling the model’s data. At first, a data point taken from the 2Mass mission J filter at 1240
nm ($1.24 \mu m$) was used as a reference to scale the models large flux in ergs/steradian down to
the normalized Jy that is seen in the photometry data. This can be seen in the four plots shown in
figure 5. From the most cursory inspections, we find that the dereddened flux data consistently
deviates from the models fluxes in high wavelength light. This is known as the infrared excess and
is caused by the emissions from the dust that surrounds the star. Because theory behind the model
assumes there is no surrounding dust nor infrared excess and the photometric data is clearly affected
by a mass of dust that is near, but not a part of the star, data points in these high wavelength ranges
are incompatible for scientific comparisons. \\
\\
The question is then where are wavelengths too high for the theoretical Castelli model to efficiently
model the photometric data. In moving to larger wavelengths, black-body radiation will decay
quickly from its maximum and then asymptote to zero. Any points in the data which break this trend
are not exhibiting black-body radiation, and would thus likely be the result of an infrared excesses
of nearby dust. This is seen in all plots of the data at approximately 3 um. At first, the data with
wavelengths less than and equal to 2190 nm ($2.19 \mu m$) was deemed valid; the data with
wavelengths greater than and equal to 3350 nm ($3.35 \mu m$) was deemed invalid because of the
heavy influence any and all dust in the vicinity of the star would have on these measurements.
These values were chosen because there was no data points in between them. There is a local
minimum in flux detections around 3 um. This indicates that there is already some deviation from
the model at this point; thus all data at higher wavelengths is most certainly affected. It is more
difficult to objectively determine the extent that slightly smaller wavelength data is affected;
meaning, the flux data taken at 2190 nm, 2160 nm ($2.19 and 2.16 \mu m$), and even smaller
nanometers might be artificially high because of the infrared excess. For the flux measurements at
3350 nm ($3.35 \mu m$) to be accurate, a very large E(V-B) value, on the order of 2, was needed.
The E(V-B) necessary for the 2190 nm ($2.19 \mu m$) data to be accurate was 0.5, which was a
reasonable figure enough to keep it in the chi squared analysis. Despite a logical approach at first,
the initial value chosen was deemed too high because the E(B-V) that minimized the chi-squared
was the limiting case of 0.\\
\\
A second attempt was made where wavelength data above 1700 nm ($1.7 \mu m$) was discarded
with the assumption that it was caused by infrared excess. The third attempt kept data that was 1300
nm ($1.3 \mu m$) in wavelength as less. Both second and third attempts provided no change in the
optimal E(B-V) and the chi-squared was minimized in the limit that E(B-V) went to zero. The
fourth attempt was more successful. Wavelengths less than 1100 nm ($1.1 \mu m$) were retained,
which meant that a new reference point was needed to scale the model’s data down to the limited
photometric data; a Cousins I filter measurement at a wavelength of 800nm ($0.8 \mu m$) was
chosen. Because two variables were changed at once in this instance, namely, that the upper
wavelength data were truncated, and a new scaling reference was selected, it was recognized after
successfully obtaining a non-zero chi-squared value, that the initial choice of scaling reference
could have caused the problems before. A trial of returning some of the large wavelength data, but
keeping the new scaling reference the same was made. Very similar chi-squared values were
obtained. This suggests that an initial selection of the particular 2Mass J filter data point was a poor
choice for the acquisition of a valid chi-squared and comparison between the photometric data and
stellar model. \\
\\
Where to draw this line of validity casts a lingering uncertainty on this investigation; it is addressed
and minimized via a chi-squared analysis, but is not definitively solved from first principles or with
unambiguous measurements in this report. More considerations are given in the discussion’s future
works near the end of this report.

\section{$\chi^2$ Analysis}
The chi-squared analysis was set up by creating 1000 values of $\chi^2$ over 1000 different E(B-V)
values ranging from 0 to 2. All optimal $\chi^2$ values were close to 0 and the values of E(B-V)
were concentrated from 0 to 1 after setting up the program. For each of the values of E(B-V) the
reddened data was calibrated to its zero point on the instrument’s magnitude scale. Then a
magnitude was calculated and dereddened (or increased) at each wavelength by using the provided
interstellar reddening law and the current iteration value of E(B-V) as:
$$\Delta m = E(V-B) \cdot R$$
Where R is the interstellar medium's reddening (effect). Finally, the dereddened magnitude was then
converted back into a flux. At one of the wavelengths that data was taken (first at the fifteenth
element in the flux array, which corresponded to a wavelength of 1240 nm, and then in the final
calculations) at the seventh element in the flux array, which corresponded to the wavelength of 789
nm, a scaling factor was created in the form:
$$ScalingFactor = \frac{Flux_{model}(789 nm)}{Flux_{data}(789 nm)}$$
this then divided all of the terms in the model’s flux which allowed a scaling of the model with
comparatively large fluxes, to the data with its small fluxes. Naturally at the (originally fifteenth,
and eventually) seventh data point where $\lambda = 789$ nm, the value of the model was precisely
equal the value of the data. The worthwhile analysis that follows is an investigation of the data
around (but not on) this data point. With the raw data dereddened and the model data scaled, the
program then executed a chi-squared computation using the formula:
$$\frac{1}{N - D} \sum_{n=1}^{n} (\frac{O_i - C_i}{\sigma})^2$$
this was the chi-squared obtained for one value of E(V-B); and all sequential chi-squared values
were obtained with the same procedure, just over the range of varying E(V-B)’s.

The graph of $\chi^2$ over the range of E(B-V) is seen in figure 1. The sharp increase in $\chi^2$
values beyond E(B-V) > 0.5 deceptively makes the function look like it is increasing monotonically.
Zooming in, or plotting the exact same data on a logarithmic plot as in figure 2, reveals the local
minimum; this was found at approximately E(B-V) = 0.15. Despite efforts made online and with
multiple colleagues, the abbreviation in the instructions, “$\chi^2$ graph i.f.o. E(B-V)”, was never
definitively resolved; it was taken to mean “in function of”, that is, as a function of E(B-V), and it is
believed that the information provided covers the realm that was requested.

With chi-squared minimized, an optimal E(B-V) was obtained and used to identify and isolate the
best dereddened flux calculated from the star's reddened data. This optimal dereddened data can be
seen in figure 3, and was then used in the calculations of the star's luminosity, the circumstellar dust
temperature, and the Monte-Carlo analysis of the dereddening model and its relative error.

\section{Luminosity}
The basic physical term of power describes the quantity of energy transformed per unit time; its unit
of joules per second is given its own name, the watt. Stars constantly fuse their hydrogen and radiate
energy in all directions, while also losing some mass. If we consider the two-dimensional spherical
shell at the surface around a star, we can calculate the star's luminosity by, first, approximating it to
be equal to the star's radiative power, ignoring mass-loss effects; second, integrating over all of the
wavelengths of light emitted by the star; and third, accounting for the small portion of the sky that a
telescope aperture covers and generalizing that to the full four-pi steradians of the sphere at that
telescope's distance. After minimizing chi-squared and using the best E(B-V) value for a match
between the data and the model, an integral of the fitted model's flux data points was taken over
wavelengths from 343 nm to 20 000 nm. This led to an estimated flux arriving at the photometric
detectors of approximately 15 Jansky (Jy), or $1.5 \cdot 10^{-25}\ \mathrm{W\,m^{-2}\,Hz^{-1}}$. To
get the true luminosity, this quantity of illumination per unit area is multiplied by the area being
illuminated by the star at that distance and divided by the area sampled by the telescope. That is,
mathematically,

\[L = \left(\frac{A}{a}\right)L_{area}\]

Assuming a spherical area for the radiation of the star and a circular area for the telescope's aperture,
this can be rewritten in terms of radii, with the factor of four coming from the ratio
$4\pi R^2 / \pi r^2$,

\[L = 4\left(\frac{R}{r}\right)^2L_{area}\]

Three more assumptions were made to calculate the luminosity. The first was that the star is a point
source of light; this is a common approximation because of the immense distance. The second was
that the star in this project is exactly 190,000 light years away [1]. Though this is certainly not exact,
it is deemed a good approximation for the average distance from Earth to any star in the Small
Magellanic Cloud. Third, the radius of all of the instruments used to collect the photometric data was
assumed to be 1 m. This assumption comes from the 2MASS instrument being a 1.3 m telescope [4],
the WISE space telescope being 0.40 m in diameter [3], and the Spitzer space telescope being 0.85 m
in diameter [5]. This is again not a true and exact determination, and it is likely that this assumption
has caused larger errors than those from treating the star as a point source with a static and precise
location within the SMC. With these approximations in place, an initial
luminosity was calculated, at first, to be $6 \cdot 10^{60}$ W. This seems unreasonably high, on par
with the energy output of a galaxy cluster, all supernovae combined, or the observable universe.
Since 15 Jy seemed a reasonable flux estimate from VizieR, and the mathematical relationship
between the light entering a distant small area and the light emitted by the source is fairly simple, it
was believed that the error lay in an unnoticed bug in the software. After much painstaking review of
numbers and characters, a Jansky conversion factor of “$.../(10**-26)$” was discovered and corrected
to “$...*(10**-26)$.” Afterwards, a more reasonable, though still wrong, approximation of the
luminosity was obtained: $6 \cdot 10^{18}$ W. This can be compared to the typical luminosity of a
post-AGB star, which ranges between thousands and tens of thousands of times that of our sun
[6][7][8]; a convenient mid-point for approximate calculations is 10 000 solar luminosities. Our sun
has a luminosity of $3.828 \times 10^{26}$ W [9], so the sun alone is about $6 \times 10^7$, or 60
million, times brighter than the value calculated for the star studied in the Small Magellanic Cloud.
This points to a remaining error in the mathematics or code: it is extremely unlikely that this star is
not in the post-AGB phase, and that it has not brightened far beyond a cold, dim star on the main
sequence.
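To make the unit bookkeeping explicit, a short Python sketch of the corrected calculation is given
below. It assumes the report's round numbers (15 Jy, 190,000 ly, a 1 m aperture) and reproduces the
still-puzzling $6 \cdot 10^{18}$ W figure.
\begin{verbatim}
import numpy as np

LY = 9.461e15            # metres per light year
L_SUN = 3.828e26         # W, solar luminosity [9]

flux_jy = 15.0           # integrated flux estimate from the model fit
flux_si = flux_jy * 1e-26    # the corrected conversion ('*', not '/', 1e-26)
d = 190_000 * LY             # assumed SMC distance [1]

# Equivalent to L = 4*(R/r)^2 * L_area with a 1 m aperture: the aperture
# area cancels, leaving the radiating sphere of radius d times the flux.
L = 4.0 * np.pi * d**2 * flux_si
print(f"L ~ {L:.1e} W  ({L / L_SUN:.1e} L_sun)")
# -> about 6e18 W, i.e. ~2e-8 L_sun, reproducing the report's puzzle
\end{verbatim}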

\section{Monte-Carlo Analysis}

Monte-Carlo analyses are statistical methods that approximate deterministic quantities through
repeated random sampling. Like the casino in Monaco after which they are named, running a
Monte-Carlo simulation enough times lets the statistics win out: the house's edge pays off reliably
there, and random sampling converges on more accurate insight into the phenomena under study
here. It is very difficult to calculate the amount of reddening, and its margin of error, for a distant star
directly. In principle, we would need a hollow spherical shell of detectors close enough to the star to
catch all of the emitted photons, but not so close that it melts or breaks. The impracticality of making
such a detector gives rise to the usefulness of Monte-Carlo simulation: we can use randomness to
estimate the expected value and confidence interval of the star's dereddened emission by varying the
inputs to our chi-squared analysis and measuring its outputs. Arrays of 87 elements, drawn randomly
from a Gaussian distribution with $\mu = 0$ and $\sigma = 0.05$, were generated 1000 times. These
arrays of perturbations around zero were then normalized to the standard deviation of the flux errors
in the accuracy-selected data points originally obtained from VizieR, and added to 1000 copies of the
best dereddened flux data (with its 87 elements). A comparison between the 87 elements in the
original and perturbed data sets led to 1000 $\chi^2$ values. These values could then be investigated
with statistical tools to gain insight into the nature of the dereddening of the distant star. \\
\\
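One possible reading of this procedure is sketched below in Python, with hypothetical stand-in
arrays; the exact normalization of the perturbations to the flux errors is the part of the description
most open to interpretation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

# Hypothetical stand-ins for the 87 accuracy-selected points; the real
# arrays are the best dereddened fluxes and their VizieR errors.
n_points = 87
best_flux = rng.uniform(0.001, 0.02, n_points)   # Jy, dummy values
flux_err  = 0.05 * best_flux                     # dummy per-point errors

n_trials = 1000
chi2 = np.empty(n_trials)
for i in range(n_trials):
    # Gaussian perturbations (mu = 0, sigma = 0.05), rescaled to the
    # spread of the flux errors, added to the best dereddened fluxes.
    noise = rng.normal(0.0, 0.05, n_points) * (np.std(flux_err) / 0.05)
    perturbed = best_flux + noise
    chi2[i] = np.mean(((perturbed - best_flux) / flux_err) ** 2)

print(f"mean chi^2 = {chi2.mean():.4f}, std = {chi2.std():.4f}")
\end{verbatim}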
The average of the generated $\chi^2$ values was slightly less than 0.01. During software testing, this
simulation was run many times; the values changed constantly (due to the new random numbers
generated each run) but never strayed far from 0.009. An example can be seen in figure 4. When
deciding to take a break one afternoon, the code was modified to generate orders of magnitude more
perturbations and was left to run unattended. The resulting average was 0.008705518808426798.
Believing this to be the most accurate number obtained, it was rounded to 0.0087. A similar approach
was taken for the standard deviation of the $\chi^2$ values; the longest run gave
0.001318597301548488, which was rounded to 0.0013. This average and standard deviation of
$\chi^2$ are related to the reddening, and the error on the reddening, in the data model.

\section{Dust temperature estimate}

Many stars have dust that is close to, but not physically part of, the star. This can be seen in the large
difference between the data and the stellar model in long-wavelength ($>1000$ nm) flux. Whereas an
isolated star exhibits black-body-like emission that naturally decays at longer wavelengths, the data
show a star situated within a dusty medium, one that absorbs the star's energy and re-emits it at
longer wavelengths (lower energies). There are few constants in the universe, and the temperature of
the dust grains presumably decreases with distance from the surface of the star. In order to simplify
the star-dust system, a sweeping approximation is made: the dust radiates uniformly, that is, all of the
dust emits a constant amount of flux. As before, we are looking at a large and complex astronomical
system, so this assumption is clearly inaccurate; but also as before, it is useful for getting a sense of
the star-dust system. If the bulk of the dust is considered, then a useful approximation of
constant-flux dust emission follows: all of the dust emits light between 2 160 nm and 22 100 nm, and
all such light arrives at Earth with a flux density of roughly 0.0035 Jy. This comes from figure 3,
which shows the large difference between the photometric data and the Castelli model. If the dust
acts as a black body, then we can derive the range of temperatures the dust must have to satisfy the
constant-flux assumption. Planck's law states,
$$B_{\nu}(\nu,T) = \frac{2h\nu^3}{c^2}\frac{1}{e^\frac{h\nu}{kT} - 1 }$$
\\which can be rewritten in wavelength form and then solved for temperature as: \\
$$T = \frac{hc}{\lambda k \ln\left(1 + \frac{2hc^2}{\lambda^5 B_\lambda}\right)}$$
\\All of the terms are constants except the wavelength, which spans the range above. Substituting the
constants in (and each of the two wavelengths as minimum and maximum) yields a range of
temperatures from 16 K to 140 K: the longest-wavelength (lowest-energy) end of the infrared excess
is re-emitted by dust at about 16 K, and the shortest-wavelength end by dust at about 140 K. If a
single round figure is to be quoted, 100 K is the best estimate.
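The inversion can be sketched in a few lines of Python. Converting the 0.0035 Jy excess into a
spectral radiance $B_\lambda$ requires an assumed solid angle for the dust, which is not reproduced
here; the radiance below is therefore a hypothetical input chosen only to exercise the formula.
\begin{verbatim}
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI: J s, m/s, J/K

def planck_temperature(wl_m, B_lam):
    """Invert the wavelength form of Planck's law for temperature.

    wl_m  : wavelength in metres
    B_lam : spectral radiance B_lambda in W m^-3 sr^-1 (assumed known)
    """
    return h * c / (wl_m * k * np.log1p(2.0 * h * c**2 / (wl_m**5 * B_lam)))

# Evaluate at the two ends of the excess, 2 160 nm and 22 100 nm, for an
# assumed radiance (hypothetical value, W m^-3 sr^-1).
B_assumed = 1.0e-3
for wl in (2160e-9, 22100e-9):
    print(f"{wl*1e9:.0f} nm -> T = {planck_temperature(wl, B_assumed):.0f} K")
\end{verbatim}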

A second approach was used to check the results of the first. Here we use a relationship for the
temperature of dust around post-AGB stars with a near-IR excess,
$$T_d = T_{eff}\sqrt{\frac{R_*}{2d}}$$
where $T_{eff}$ is the effective temperature of the star, $R_*$ is the radius of the star, and $d$ is the
distance from the star to the dust. We also use the Stefan-Boltzmann relationship for luminosity,
\[L = 4\pi R_*^2 \sigma T_{eff}^4\]
and Wien's displacement law,
\[\lambda_{max} = \frac{b}{T}\]
where $b$ is approximately
\[b = 2.898 \cdot 10^{-3}\ \mathrm{m\,K}\]
We find that\\
…\\
…\\
…\\
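Although the numerical result of this second approach is left incomplete above, the first relation can
be exercised with purely illustrative inputs; the stellar radius and dust distance below are hypothetical
choices, not fitted values from this project.
\begin{verbatim}
import numpy as np

R_SUN = 6.957e8   # m
AU    = 1.496e11  # m

def dust_temp(t_eff, r_star, d):
    """Equilibrium dust temperature: T_d = T_eff * sqrt(R_* / (2 d))."""
    return t_eff * np.sqrt(r_star / (2.0 * d))

# Hypothetical inputs: a 4000 K giant of 100 solar radii, dust ~200 AU out.
print(f"T_d ~ {dust_temp(4000.0, 100 * R_SUN, 200 * AU):.0f} K")
# -> about 136 K, near the upper end of the first approach's range
\end{verbatim}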

Comparing this to average temperatures on Earth shows that the dust is very cold; interestingly,
comparing it to the temperature of the vacuum of space ($\sim$3 K) simultaneously shows that this
dust is much hotter, and thus more radiant, than the vacuum or typical interstellar matter.

\section{Discussion}

In the many steps taken to obtain astronomical data, to examine the scientific instruments and
calibrations used in its acquisition, and to create and evaluate a spectral energy distribution from that
data, a whole host of new knowledge and familiarity was obtained. Though numerous assumptions
and approximations were made in this work, reasonable quantities have been obtained for the
dereddened flux emission, the $\chi^2$ of the color excess, the average and deviation of the
dereddening from Monte-Carlo simulations, and the circumstellar dust temperature. If the goal of this
project was to create an accurate and academically publishable description of the spectral energy
distribution of a particular star in the Small Magellanic Cloud, then this report has a very long way to
go before meeting that objective. On the other hand, if the goal of this project has been to learn about
astronomical databases and how to access them, useful literature on models of stellar emission,
mathematical software packages like Python, the formatting and writing of academic reports, and an
ever deeper appreciation for the work of other and earlier astronomers, then this has undoubtedly
been a hugely successful endeavor.

\subsection{Error Analysis}

Regarding the improvement of the data and the reduction of errors: outside the photometric errors
obtained from VizieR, the lack of error bars and formal error propagation shows that this project is
not at its fullest stage of completion. This is indeed one of the most important parts of the project not
included in this report.

\subsection{Future Works and Investigations}

Every project can be seen as a work in progress. This research has led to a vast reservoir of
knowledge on the nature of SEDs, photometry, astronomical databases, and more. Despite
accomplishing the goals of this class, learning about the above topics and practicing the writing of
research reports and presentations, there remain vast unexplored domains within this line of research
that could only be taken up with more time. \\
\\
Despite being the most interesting and educational project that I have had the fortune to work on this
semester, this has also been by far the most difficult, and it is time, rather than an absolute completion
of the work, that finishes this report. There are many unfinished parts of this project; most small,
some less small. Perhaps the largest task to undertake, given additional time, would be the expansion,
reorganization, and refactoring of my Python code into functions. Superior code defines functions for
repetitive commands and then calls them where they are needed; despite a now moderately robust
familiarity with Python, it was a very unfamiliar language at the start of the semester. A familiarity
with spreadsheets (both Microsoft's and LibreOffice's) led to some basic reordering, filtering, and
plotting of the raw data outside of Python. This would be another area in which to expand the Python
code, in order to cut out the use of, and the possible issues arising from, additional software
packages. \\
\\
The selection of the data point on which to normalize the Castelli model was made quite arbitrarily.
Contemplation of this weakness led to the idea of building a for loop around the entire code, one that
would do everything that has already been done once in this report, but do it as many times as there
are data points; each iteration would calculate the distribution of $\chi^2$, find its minimum, and
then compare the minima (as well as the Monte-Carlo averages and standard deviations) across
iterations, with the intention of optimizing this aspect of the project instead of settling for a
functional but likely suboptimal scaling point. A sketch of this outer loop is given below. \\
\\
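The proposed loop might be structured as follows; \texttt{chi\_squared\_scan} is a stand-in name for
a function wrapping the scaling, dereddening, and $\chi^2$ scan described earlier, and is stubbed out
with a dummy here so that only the loop's structure is illustrated.
\begin{verbatim}
import numpy as np

def chi_squared_scan(ref_index):
    """Stub for the full pipeline above: rescale the model at the data
    point with index ref_index, scan E(B-V), and return the pair
    (best_ebv, min_chi2). Replaced by a dummy so the loop runs standalone."""
    rng = np.random.default_rng(ref_index)
    return rng.uniform(0.0, 0.5), rng.uniform(0.005, 0.05)

n_points = 87  # one iteration per photometric data point
results = [(i, *chi_squared_scan(i)) for i in range(n_points)]
best = min(results, key=lambda r: r[2])
print(f"best reference point: index {best[0]}, E(B-V) = {best[1]:.3f}")
\end{verbatim}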
Integration of the NASA/IPAC Infrared Science Archive, with Caltech's Gator search engine, would
help to bolster the content of this report. It is also possible, though unlikely, that significant changes,
and even improvements, to the values obtained throughout this report could result. Learning proposal
writing and acquiring telescope time for more and newer photometric data would also prove useful,
though that will be the subject of a much anticipated class in semesters to come. \\
\\
In terms of appearance, this report could be polished in a few ways. It would be very worthwhile to
go back and recreate the earliest plots, because they lack the uniformity (namely, dotted grids and the
best PNG resolutions) that high-quality reports demand. Though it was not required, our nearby sun
provides a wealth of highly accurate data and could have been used to generate further plots
comparing this slightly cooler SMC star with an object for which not only accurate data, but also
deep familiarity and intuition, exist, even among lay readers.

\section{Conclusion}
This has been an exciting and educational endeavor. From nascent master's degree candidates, we
have become an ensemble of basic astronomers in the past few months. This paper shows the depth
to which we have immersed ourselves in the pursuit of understanding spectral energy distributions,
the role of observations in the creation of models and theories of astrophysical behavior, and the
associated scientific literature. The newest technologies are always improving; I have been delighted
to begin to understand how they work and to use them for academic purposes, and I will relish the
opportunities to continue to do so.

\section{References}
\begin{enumerate}
\item Hodge, Paul W. Professor Emeritus, Department of Astronomy, University of Washington,
Seattle. Magellanic Cloud. Encyclopædia Britannica. June 27, 2017.
https://www.britannica.com/topic/Magellanic-Cloud. Accessed: December 20, 2020
\item Castelli, Fiorella. Complete database for ATLAS and SYNTHE. May 22, 2015\\
http://wwwuser.oats.inaf.it/castelli/grids/gridp00k0odfnew/fp00k0tab.html
\item Greicius, Tony \& Dunbar, Brian. National Aeronautics and Space Administration. Aug. 4,
2017\\ https://www.nasa.gov/mission\_pages/WISE/spacecraft/index.html
\item Skrutskie et al. (2006). This publication makes use of data products from the Two Micron All
Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing
and Analysis Center/California Institute of Technology, funded by the National Aeronautics and
Space Administration and the National Science Foundation.\\
https://irsa.ipac.caltech.edu/Missions/2mass.html
\item Greicius, Tony \& Dunbar, Brian. National Aeronautics and Space Administration. Aug. 4,
2017\\ https://www.nasa.gov/mission\_pages/spitzer/infrared/index.html
\item Oomen, Glenn-Michael; Van Winckel, Hans; Pols, Onno; and Nelemans, Gijs. Modelling
Depletion in post-AGB star by re-accretion of gas from a dusty disk. (2019). IvS KU Leuven \&
Radboud University. \\fys.kuleuven.be/ster/education/seminars/glennmichaeloomen-
seminar080319.pdf
\item Leiden University. Chapter 11 – Late Evolution of M $<$ 8 M\_sun. \\
https://home.strw.leidenuniv.nl/~nielsen/SSE17/lectures/Stellar\_lecture\_10.pdf
\item Kwok, Sun. (2008). Stellar Evolution from AGB to Planetary Nebulae. Proceedings of the
International Astronomical Union. 252. 10.1017/S1743921308022771.
\item Gregersen, Erik. Luminosity. Encyclopædia Britannica. August 07, 2015. \\
https://www.britannica.com/science/luminosity accessed: December 23, 2020
\item VizieR. "This research has made use of the VizieR catalogue access tool, CDS, Strasbourg,
France (DOI: 10.26093/cds/vizier). The original description
of the VizieR service was published in 2000, A\&AS 143, 23."
\item NASA/IPAC. This research has made use of the NASA/IPAC Infrared Science Archive,
which is funded by the National Aeronautics and Space Administration and operated by the
California Institute of Technology
\end{enumerate}

\section{Appendix A: Plots, Figures, and Graphs}

\section{Appendix B: Programming Codes}


\end{document}
