Experiment: - 1: AIM: To Calibrate and Measure Temperature Using Thermocouple.
1. Thermocouple
➢ Principle:
Thermocouples, used to measure temperature, are composed of two dissimilar
metals that produce a small voltage when joined together; one end of the
thermocouple joins each metal. A thermocouple thermometer then reads the
voltage produced. Thermocouples can be manufactured from a range of metals and
typically register temperatures between about −200 and 2,600 degrees Celsius (°C).
The specific temperature range varies with the types of metals in the
thermocouple.
➢ Construction:
In order to achieve accurate readings from a thermocouple, it’s essential to
calibrate the device accordingly.
Typically, thermocouples are standardized by using 0 degrees C as a reference
point, and many devices can adjust to compensate for the varying temperatures at
thermocouple junctions.
Sheathed thermocouples are available in three different junctions: grounded,
ungrounded and exposed.
Grounded junctions feature wire junctions that are attached to the inner probe
wall, enabling effective heat transfer from the outside of the probe wall to the
junction.
Ungrounded probes feature unattached wire junctions, which promotes
electrical isolation.
Exposed junctions feature a junction that extends beyond the sheath, enabling
a quick response time but limiting their use to non-corrosive and
non-pressurized environments.
Figure 1.1 Thermocouple circuit
➢ Working:
The working principle of the thermocouple is based on three effects, discovered
by Seebeck, Peltier and Thomson. They are as follows:
1) Seebeck effect: The Seebeck effect states that when two different or unlike
metals are joined together at two junctions, an electromotive force (emf) is
generated at the two junctions. The amount of emf generated is different for
different combinations of the metals.
2) Peltier effect: As per the Peltier effect, when two dissimilar metals are joined
together to form two junctions, emf is generated within the circuit due to the
different temperatures of the two junctions of the circuit.
3) Thomson effect: As per the Thomson effect, when two unlike metals are joined
together forming two junctions, the potential exists within the circuit due to
temperature gradient along the entire length of the conductors within the circuit.
➢ Calibration:
To calibrate a thermocouple, various types of measuring equipment,
standards, and procedures must be in place.
First, a control temperature must be established that is stable and provides a
constant temperature; it must be uniform and cover a large enough area that the
thermocouple can adequately be inserted into it (such as an ice bath). Sources of
controlled temperatures are called fixed points.
A fixed-point cell is composed of a metal sample within a graphite crucible,
with a graphite thermometer submerged in the metal sample. When this metal
sample reaches the freezing point, it maintains a very stable temperature.
The freezing point occurs when a material reaches the point between the solid and
liquid phase. A reference junction temperature must also be established; typically,
0 degrees C is used. A measuring instrument, such as a Fluke 702 calibrator, can
be used to measure the thermocouple output.
A simple calibration process can be done by following a few basic
instructions. A basic calibration process involves heating water to 30 degrees C
in a thermo bath.
Next, the thermocouple is turned on and each of the two multimeter leads is
attached to one end of the thermocouple. At this point, the multimeter should
register one microvolt. One junction of the thermocouple is then placed into the
thermo bath.
The voltage can be recorded once the multimeter reading becomes stable. The
water temperature is increased to 35 degrees C, and again the voltage is
recorded. This process is repeated by increasing the temperature by five-degree
increments and recording the voltage, until 60 degrees C is reached.
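The bath-and-multimeter procedure above amounts to collecting (temperature, voltage) pairs and fitting a straight line. A minimal sketch, assuming illustrative millivolt readings (real values come from the multimeter at each stable bath temperature):

```python
# Hypothetical sketch of the calibration bookkeeping described above:
# record the thermocouple voltage at each bath temperature (30-60 degrees C
# in 5-degree steps) and fit a line V = slope*T + intercept by least squares.

def fit_line(temps, volts):
    """Ordinary least-squares fit of voltage against temperature."""
    n = len(temps)
    mean_t = sum(temps) / n
    mean_v = sum(volts) / n
    sxx = sum((t - mean_t) ** 2 for t in temps)
    sxy = sum((t - mean_t) * (v - mean_v) for t, v in zip(temps, volts))
    slope = sxy / sxx                 # sensitivity, mV per degree C
    intercept = mean_v - slope * mean_t
    return slope, intercept

# Illustrative readings in millivolts (assumed, not measured data).
temps = [30, 35, 40, 45, 50, 55, 60]
volts = [1.20, 1.41, 1.61, 1.82, 2.02, 2.23, 2.44]

slope, intercept = fit_line(temps, volts)
# A later voltage reading is converted back to temperature with the fit:
t_unknown = (1.90 - intercept) / slope
```

The fitted slope is the sensitivity of the thermocouple over the calibrated range; inverting the fit converts subsequent voltage readings into temperatures.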
➢ Applications of thermocouple:
1. Thermocouples are the most popular type of temperature sensors.
2. They are used as hospital thermometers, and in diagnostic testing for vehicle
engines.
3. Some gas appliances such as boilers, water heaters, and ovens use them as
safety features; if the pilot light is out, the thermocouple stops the gas valve
from operating.
4. They are also used as an aid in milk pasteurization, and as food thermometers.
5. In industry, they are valuable as probes and sensors.
is undesirable. So, we do not want the resistance of the wire to change due to
anything other than temperature.
This also aids RTD maintenance while the plant is in operation. Mica is
placed between the steel sheath and the resistance wire for better electrical
insulation. To keep strain in the resistance wire low, it should be carefully
wound over the mica sheet.
➢ Working
RTDs are readily available on the market, but we must know how to use them
and how to build the signal-conditioning circuitry so that lead-wire errors and
other calibration errors can be minimized. In an RTD, the change in resistance
with temperature is very small, so the RTD is measured using a bridge circuit.
By supplying a constant electric current to the bridge circuit and measuring the
resulting voltage drop across the resistor, the RTD resistance can be calculated,
and from it the temperature can be determined by converting the RTD resistance
value using a calibration expression. The different modules of the RTD are shown
in the figures below.
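The resistance-to-temperature conversion described above can be sketched with a simple linear calibration expression. A hedged sketch, assuming nominal Pt100 constants (R0 = 100 Ω, α = 0.00385 /°C) rather than a specific probe's calibration certificate:

```python
# Hedged sketch: a constant current is forced through the RTD, its
# resistance follows from Ohm's law, and temperature is recovered from
# the linear calibration expression R(T) = R0 * (1 + alpha*T).
# Real probes use their own certificate or the Callendar-Van Dusen equation.

R0 = 100.0        # ohms at 0 degrees C (nominal Pt100)
ALPHA = 0.00385   # 1/degree C (nominal IEC-style coefficient)

def rtd_resistance(current_a, voltage_v):
    """Resistance from the measured voltage drop at a known current."""
    return voltage_v / current_a

def rtd_temperature(resistance_ohm):
    """Invert the linear calibration expression for temperature."""
    return (resistance_ohm / R0 - 1.0) / ALPHA

r = rtd_resistance(0.001, 0.11935)   # 1 mA excitation, 119.35 mV drop (assumed)
t = rtd_temperature(r)               # about 50.3 degrees C
```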
• These cells allow the user to reproduce actual conditions of the ITS-
90 temperature scale. Fixed-point calibrations provide extremely accurate
calibrations (within ±0.001 °C). A common fixed-point calibration method for
industrial-grade probes is the ice bath.
Figure 1.3 Three-wire RTD bridge
Figure 1.4 Four-wire RTD bridge
➢ Comparison calibrations
• Comparison calibration is commonly used with secondary SPRTs and industrial
RTDs.
• The thermometers being calibrated are compared to calibrated thermometers
by means of a bath whose temperature is uniformly stable.
• Unlike fixed-point calibrations, comparisons can be made at any temperature
between −100 °C and 500 °C (−148 °F to 932 °F). This method might be more
cost-effective, since several sensors can be calibrated simultaneously with
automated equipment.
• These electrically heated and well-stirred baths use oils and molten salts as the
medium for the various calibration temperatures.
➢ Advantages:
• The RTD can be easily installed and replaced.
• It is available in a wide range.
• The RTD can be used to measure differential temperature.
• They are suitable for remote indication.
• Stability is maintained over a long period of time.
• No temperature compensation is necessary.
➢ Disadvantages:
• The RTD requires a more complex measurement circuit.
• It is affected by shock and vibration.
• A bridge circuit with a power supply is needed.
• Slower response time than a thermocouple.
• Large bulb size.
• Possibility of self-heating.
• Higher initial cost.
• Sensitivity is low.
EXPERIMENT: - 2: AIM: To Calibrate a Pressure Gauge Using a Dead Weight Tester (DWT).
➢ Construction:
The construction of a DWT is basically in the form of an oil-filled chamber
fitted with a cylinder-piston combination above it.
It also features a plunger with a handle and a weighting platform or pan
attached to the top of the piston, used to apply varying degrees of pressure to
the oil in the chamber. In addition, it also has a reservoir to collect displaced
oil, an adjusting piston and a port where the pressure gauge is connected during
calibration tests.
Figure 2.1 Pressure gauge calibrating device
➢ Working:
Using a dead weight tester, pressure gauges are calibrated through the
application of known weights to the DWT’s piston, the cross-sectional area of
which is also known. This creates a sample of known pressure, which is then
introduced to the pressure gauge being tested to observe its response.
Set up the device being tested on a firm, stable and level surface, and follow
these 7 steps for pressure gauge calibration:
1. Check whether the test device is reading zero, by connecting it to the test port
on the DWT. If it isn’t, correct the error before moving on to the next step.
2. Note the cross-sectional area of the piston and rotate the handle of the adjusting
piston until its rod comes out fully. Fill oil into the reservoir up to its halfway
level.
3. Open the oil reservoir’s shutoff valve and let the DWT fill completely with oil
by manually lifting the vertical piston to its maximum position. Do this gently
to avoid air bubbles.
4. Close the shutoff valve and place the first known weight on the platform of the
vertical piston.
5. Turn the handle of the adjusting piston to ensure that both it and the sample
weight are supported by the oil in the chamber.
6. Spin the vertical piston to make sure it is floating freely and allow the system
to stabilize for a few moments.
7. After the system has stabilized, make note of the sample weight, DWT reading
and reading on the pressure gauge being tested, as well as error.
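Steps 2-7 produce a known pressure from a known mass and piston area, p = mg/A. A small sketch with illustrative (assumed) mass and area values:

```python
# Illustrative numbers only: the pressure generated by a dead weight tester
# is the known weight divided by the known piston area, p = m*g / A.
# The gauge error is the gauge reading minus this reference pressure.

G = 9.80665          # m/s^2, standard gravity

def dwt_pressure_pa(mass_kg, piston_area_m2):
    """Reference pressure produced by the weights on the piston."""
    return mass_kg * G / piston_area_m2

mass = 10.0            # kg of calibrated weights (assumed)
area = 1.0e-4          # m^2 piston cross-section (assumed, 1 cm^2)
p_true = dwt_pressure_pa(mass, area)   # about 980 665 Pa (~9.81 bar)

gauge_reading = 9.79e5                 # Pa shown by the gauge under test (assumed)
error = gauge_reading - p_true         # negative means the gauge reads low
```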
➢ Calibration:
• To start with, check that the calibrator or standard you’re using has been
calibrated in accordance with the manufacturer’s recommendations. If it is
already out of calibration, the results of the procedure would be unreliable.
• Connect the pressure gauge that is to be calibrated to the pressure source. Make
sure there is a block valve to isolate the pressure source from the rest of the
system and a bleeding valve for releasing pressure.
• Set the pointer so that it reads zero on the pressure scale.
• Apply the maximum pressure the gauge can measure and make adjustments till
the gauge being calibrated indicates the right pressure.
• Isolate the pressure source and completely depressurize the system using the
bleed valve.
• Verify that the gauge reads zero, or adjust it as needed.
• Repeat steps 4 to 6 till both the readings are accurate.
• If the gauge includes a linearizing adjustment, adjust the pressure source to 50%
of the maximum pressure the gauge can measure and check the reading.
• Check if the gauge readings are correct at zero, 50%, and maximum pressure,
and adjust it each time till all of them are accurate. This step requires a lot of
care and patience.
• After all the readings are correct, write down the gauge’s readings at the applied
pressures onto a calibration sheet.
• If you are performing a bench calibration and need to issue a calibration
certification, draw a graph plotting the increasing and decreasing applied
pressures against the gauge readings.
➢ CALIBRATION:
• Place the Askania manometer and the inclined-tube manometer on a firm
surface and level them. Take the initial readings of these instruments.
• Blow air into the glass bottle through the tube provided and tighten the
stopcock.
• Take the pressure observation from the Askania manometer as well as the
inclined tube.
• Release a little pressure from the glass bottle and take another set of
observations.
• Repeat the previous step to get about 10-12 sets of pressure values.
• Plot the calibration graph of the inclined-tube readings against the Askania
manometer readings. Fit a straight line passing through these points.
EXPERIMENT: - 3
➢ Abstract:
Determining the physical properties of substances is an important subject in
many advanced engineering applications. The physical properties of fluids
(liquids and gases), such as thermal conductivity, play an important role in the
design of a wide variety of engineering applications, such as heat exchangers. In
this article, the authors describe an undergraduate junior-level heat transfer
experiment designed for students in order to determine the thermal conductivity
of fluids. Details of the experimental apparatus, testing procedure, data reduction
and sample results are presented. One of the objectives of this experiment is to
strengthen and reinforce some of the heat transfer concepts, such as conduction,
covered in the classroom lectures. The experimental set-up is simple, the
procedure is straightforward and students’ feedback has been very positive.
➢ Experimental apparatus:
The experimental apparatus, shown schematically in Figure 3.1, consists of
two parts, namely the test module and control panel. These two components are
elaborated on below.
➢ Test module:
The test module is a plug and jacket assembly that consists of a cylindrical
heated plug and cylindrical water-cooled jacket. The fluid (liquid or gas), whose
thermal conductivity is to be measured, fills a small radial clearance between the
heated plug and the water-cooled jacket. It should be noted that the clearance is
made small in size so as to prevent natural convection in the fluid.
The cylindrical plug is made of aluminium (to reduce thermal inertia and
temperature variation) with a built-in cylindrical heating element and temperature
sensor (thermocouple). The temperature sensor is inserted into the plug close to
its external surface. The plug also has ports for the introduction and venting of the
fluid (liquid or gas) whose thermal conductivity is to be measured.
The plug is placed in the middle of the cylindrical water jacket. The water
jacket is constructed from brass and has a water inlet and drain connections. A
thermocouple is also fitted to the inner sleeve of the water jacket.
➢ Control Panel:
The test module is connected to the control panel (a small console) by flexible
cables for the voltage supplied to the heating element. The control panel includes
all the necessary electrical wiring with variable transformer, power transducer,
temperature controller/indicator, digital displays for temperature, analogue meter
for voltage and a thermocouple selector switch.
➢ Calibration:
Before utilising the unit in order to measure the thermal conductivity of a fluid
(liquid or gas), the unit must be calibrated. This is because not all the power input
is transferred by conduction through the test fluid, some energy (incidental heat
transfer) will be lost to the surroundings and some will be radiated across the
annulus. In this calibration process, students generate a curve that characterises
this incidental heat loss. The incidental heat transfers in the unit are determined
by using air (whose thermal conductivity is well known and documented) in the
radial space.
➢ Procedure:
The following is a brief summary of the procedure to carry out the calibration
of the unit:
1. Set up the equipment and make the necessary connections;
2. Pass water through the jacket at about 3 litre per minute;
3. Connect the small flexible tubes to the charging and vent unions;
4. Close off the tubing with a pure air sample trapped in the device;
5. Switch on the electrical supply;
6. Adjust the variable transformer to give about 10V;
7. At intervals, check the temperature of the plug, T1, and jacket, T2, and when
they are stable, record their values and also the voltage;
8. Repeat steps 6 and 7 for 20V, 30V, 40V, 50V and 60V.
➢ Calculations:
The calculations are determined as follows:
1) Find the thermal conductivity of the air, k_air, at the average temperature,
Tavg = (T1 + T2)/2. Temperature-dependent thermal conductivity values for air
are found in any heat transfer textbook, such as Incropera and DeWitt, as
well as Özisik.
2) Calculate the rate of heat conducted through the air lamina, Qc, from
Fourier's Law, i.e.:
Qc = kA (ΔT/Δr) … (1)
where the area is A = 0.0133 m2, the radial clearance is Δr = 0.34 mm, and
the temperature difference is ΔT = T1 − T2.
3) Calculate the rate of electrical heat input, Qe, from:
Qe = V²/R … (2)
where V is the voltage and R is the resistance of the element, R = 54.8 Ω.
4) Find the incidental heat transfer, Qi, (loss, radiation, etc). The incidental
heat transfer is the difference between the electrical heat input and the heat
conducted through the fluid in the radial clearance, i.e.:
Qi = Qe − Qc … (3)
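Equations (1)-(3) can be worked through numerically. A sketch using the apparatus constants quoted above; the temperatures, voltage, and the air conductivity value are illustrative stand-ins for measured and tabulated data:

```python
# Worked sketch of Equations (1)-(3) using the constants quoted above
# (A = 0.0133 m^2, dr = 0.34 mm, R = 54.8 ohm). The temperatures, voltage
# and k_air below are illustrative, not measured data.

A = 0.0133        # m^2, conduction area
DR = 0.34e-3      # m, radial clearance
R = 54.8          # ohm, heater element resistance

def q_conducted(k_air, t1, t2):
    """Eq. (1): heat conducted through the air lamina."""
    return k_air * A * (t1 - t2) / DR

def q_electrical(volts):
    """Eq. (2): electrical heat input."""
    return volts ** 2 / R

k_air = 0.0271            # W/m.K, textbook value near the assumed Tavg
t1, t2, v = 85.0, 35.0, 60.0

qc = q_conducted(k_air, t1, t2)    # W conducted through the air
qe = q_electrical(v)               # W supplied electrically
qi = qe - qc                       # Eq. (3): incidental heat transfer
```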
➢ Results:
From the measured data and the results obtained above, a calibration curve of
the incidental heat transfer, Qi, against the average temperature, Tavg, can be
generated. Figure 2 presents the calibration curve of the unit that was obtained by
one of the laboratory student groups. As can be seen from the figure, the incidental
heat transfer increases linearly as the average temperature increases.
➢ Determination of the thermal conductivity of a fluid:
Once the calibration curve is obtained and the unit is cleaned and reassembled,
students can then introduce the fluid (liquid or gas) to be tested into the radial
clearance. It should be noted that it is important to ensure that no bubbles exist if
the test fluid is liquid. Water is then passed through the jacket and the variable
transformer is adjusted to the desired voltage. The voltage value is chosen to give
a reasonable temperature difference and heat transfer rate. When stable, the plug
and jacket temperatures, as well as the voltage, are recorded.
The incidental heat transfer rate, Qi, is then found from Figure 2 at the average
temperature, Tavg. Once the rate of the incidental heat transfer is determined, the
rate of heat conducted through the test fluid (liquid or gas) is then found from
Equation (3) as Qc = Qe − Qi, and the thermal conductivity of the test fluid
(liquid/gas) can be calculated from Equation (1), i.e. k = QcΔr/(AΔT). It is
recommended that this procedure is repeated at another voltage value in order to
ensure the consistency of the measurements.
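The measurement step can likewise be sketched, assuming an illustrative incidental heat transfer read off the calibration curve at Tavg:

```python
# Sketch of the measurement step: Qe from the set voltage, Qi read off the
# calibration curve at Tavg, then k from Eq. (1) rearranged. All numeric
# inputs are illustrative assumptions.

A, DR, R = 0.0133, 0.34e-3, 54.8   # apparatus constants quoted above

def conductivity(volts, t1, t2, q_incidental):
    """k = Qc*dr/(A*dT) with Qc = Qe - Qi."""
    qe = volts ** 2 / R
    qc = qe - q_incidental
    return qc * DR / (A * (t1 - t2))

k = conductivity(volts=40.0, t1=52.0, t2=40.0, q_incidental=7.5)
```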
In addition, an uncertainty analysis of the measured thermal conductivity of
the test fluid is performed. The calibration process can be used to estimate the
uncertainty in the heat conduction across the sample. The conduction across the
sample is found from Fourier’s Law, which, for this situation, is given in Eq. 1 as
follows:
Qc = kA(ΔT/Δr) … (4)
The application of the standard uncertainty procedure, as described in Ref. [6],
to Eq. (4) yields:
U_Qc/Qc = [(U_k/k)² + (U_A/A)² + (U_Δr/Δr)² + (U_T1² + U_T2²)/(ΔT)²]^(1/2) … (5)
where U represents the uncertainty in the quantity indicated by the subscript,
and the uncertainties in the temperature measurements are assumed equal, i.e.
UT1 ≈ UT2 ≈ UT. The radial clearance and the area are provided by the
manufacturer and assumed to be very accurate, so that UΔr ≈ UA ≈ 0.
The thermal conductivity of the air at a given temperature is assumed to be known
within 2.5%, and the uncertainties in the temperature measurements are assumed
to be less than 1° C. Thus, the uncertainty in the heat conduction across the sample
is estimated to be less than 4%.
The uncertainty in the thermal conductivity can be found with a similar procedure.
First, Eq. (4) is rearranged to solve for the thermal conductivity, i.e.:
k = QcΔr/(AΔT) … (6)
and the procedure outlined in Ref. [6] yields:
U_k/k = [(U_Qc/Qc)² + (U_Δr/Δr)² + (U_A/A)² + (U_T1² + U_T2²)/(ΔT)²]^(1/2) … (7)
As previously, the radial dimension and the area are assumed to be very
accurate, so that UΔr ≈ UA ≈ 0, and the uncertainties in the temperature
measurements are assumed to be equal, UT1 ≈ UT2 ≈ UT, so that Eq. (7) can be
simplified to:
U_k/k = [(U_Qc/Qc)² + 2U_T²/(ΔT)²]^(1/2) … (8)
The uncertainty in the heat conduction across the sample was estimated to be
4% and the level of uncertainty in the temperature measurement was
conservatively estimated to be 1 °C, so that the relative uncertainty in the
thermal conductivity is less than 5% for typical experimental conditions.
The uncertainty in the measured results is estimated (at the 95% confidence
level) according to the procedure outlined by Moffat.
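The quadrature combinations described above can be checked numerically. A sketch assuming the stated 2.5% conductivity uncertainty, 1 °C temperature uncertainties, and an illustrative 50 °C temperature difference:

```python
import math

# Sketch of the relative-uncertainty estimates, combining terms in
# quadrature with U_dr = U_A = 0 and UT1 = UT2 = UT, as assumed above.
# The 50-degree temperature difference is an illustrative assumption.

def rel_unc_qc(rel_unc_k, u_t, dT):
    """Relative uncertainty in conducted heat (simplified Eq. (5))."""
    return math.sqrt(rel_unc_k ** 2 + 2.0 * (u_t / dT) ** 2)

def rel_unc_conductivity(rel_unc_qc_val, u_t, dT):
    """Relative uncertainty in measured conductivity (simplified Eq. (8))."""
    return math.sqrt(rel_unc_qc_val ** 2 + 2.0 * (u_t / dT) ** 2)

u_qc = rel_unc_qc(0.025, 1.0, 50.0)          # ~3.8%, under the 4% quoted
u_k = rel_unc_conductivity(u_qc, 1.0, 50.0)  # ~4.7%, under the 5% quoted
```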
➢ Conclusion:
A heat transfer laboratory experiment in which undergraduate mechanical
engineering students measure the thermal conductivity of a liquid or a gas is
presented in this article and includes the procedures and relevant calculations.
In this experiment, students perform the calibration of the experimental apparatus
and then employ the apparatus to determine the thermal conductivity of a liquid
or a gas. This kind of experience serves to enhance the level of understanding of
the transfer of thermal energy by undergraduate mechanical engineering students,
while also exposing them to several important concepts involved in heat transfer.
EXPERIMENT: - 4
1) Orifice meter
➢ Principle:
When an orifice plate is placed in a pipe carrying the fluid whose rate of flow
is to be measured, the orifice plate causes a pressure drop which varies with the
flow rate. This pressure drop is measured using a differential pressure sensor and
when calibrated, this pressure drop becomes a measure of the flow rate. The flow
rate is given by:
Qa = Cd·A1·A2·√(2(P1 − P2)/ρ) / √(A1² − A2²)
Where,
Qa = flow rate
Cd = discharge coefficient
A1 = cross-sectional area of the pipe
A2 = cross-sectional area of the orifice
P1, P2 = static pressures
ρ = density of the fluid
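A hedged sketch of the standard orifice equation, Qa = Cd·A1·A2·√(2(P1 − P2)/ρ)/√(A1² − A2²), with illustrative water numbers; the pipe and orifice diameters, pressures and Cd are all assumptions:

```python
import math

# Sketch of the orifice flow-rate formula with illustrative values for
# water. Cd, the diameters and the pressures are assumed, not measured.

def orifice_flow(cd, a1, a2, p1, p2, rho):
    """Qa = Cd*A1*A2*sqrt(2*(P1-P2)/rho) / sqrt(A1^2 - A2^2), SI units."""
    return (cd * a1 * a2 * math.sqrt(2.0 * (p1 - p2) / rho)
            / math.sqrt(a1 ** 2 - a2 ** 2))

q = orifice_flow(cd=0.61,
                 a1=math.pi * 0.05 ** 2 / 4,    # 50 mm pipe (assumed)
                 a2=math.pi * 0.025 ** 2 / 4,   # 25 mm orifice (assumed)
                 p1=120e3, p2=100e3,            # Pa (assumed)
                 rho=1000.0)                    # kg/m^3, water
# q is roughly 2 litres per second for these numbers
```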
➢ Construction:
The main parts of the orifice meter are as follows:
A stainless-steel orifice plate which is held between flanges of a pipe
carrying the fluid whose flow rate is being measured.
It should be noted that for a certain distance before and after the orifice
plate fitted between the flanges, the pipe carrying the fluid should be straight
in order to maintain laminar flow conditions.
Openings are provided at two places 1 and 2 for attaching a differential
pressure sensor (U-tube manometer, differential pressure gauge etc.) as shown in
the diagram.
Figure 4.1 Orifice meter
➢ Working:
The detail of the fluid movement inside the pipe and orifice plate has to be
understood. The fluid having uniform cross section of flow converges into the
orifice plate’s opening in its upstream. When the fluid comes out of the orifice
plate’s opening, its cross section is minimum and uniform for a particular distance
and then the cross section of the fluid starts diverging in the downstream.
At the upstream of the orifice, before the converging of the fluid takes place,
the pressure of the fluid (P1) is maximum. As the fluid starts converging to
enter the orifice opening, its pressure drops. When the fluid comes out of the
orifice opening, its pressure is minimum (P2), and this minimum pressure remains
constant in the minimum cross-section area of fluid flow at the downstream. This
minimum cross-sectional area of the fluid obtained downstream from the orifice
edge is called the vena contracta.
The differential pressure sensor attached between points 1 and 2 records the
pressure difference (P1 − P2) between these two points, which becomes an
indication of the flow rate of the fluid through the pipe when calibrated.
➢ Material of construction:
The orifice plates in the orifice meter, in general, are made up of stainless
steel of varying grades.
➢ Shape & Size of Orifice meter:
Orifice meters are built in different forms depending upon the
application-specific requirement; the shape, size and location of holes on the
orifice plate define the orifice meter specifications, as per the following:
• Concentric Orifice Plate
• Eccentric Orifice Plate
• Segmental Orifice Plate
• Quadrant Edge Orifice Plate
2) Rotameter
➢ Principle:
The rotameter's operating principle is based on a float of given density
establishing an equilibrium position where, at a given flow rate, the upward
force of the flowing fluid equals the downward force of gravity. It does this,
for example, by rising in the tapered tube with an increase in flow until the
increased annular area around it creates a new equilibrium position. By design,
the rotameter operates in accordance with the formula for all variable-area
meters, directly relating flow rate to the area available for flow.
meters. The materials of construction include stainless steel, glass, metal, and
plastic.
The tapered tube's gradually increasing diameter provides a related increase in
the annular area around the float, and is designed in accordance with the basic
equation for volumetric flow rate:
Q = kA√(gh)
Q = volumetric flow rate, e.g., gallons per minute
k = a constant
A = annular area between the float and the tube wall
g = acceleration due to gravity
h = pressure drop (head) across the float
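The variable-area relation can be sketched numerically; the constant k, annular area and head below are illustrative values, since in practice k is found by calibration:

```python
import math

# Sketch of Q = k*A*sqrt(g*h) with made-up numbers. The constant k lumps
# discharge and unit effects and is determined by calibration, as the
# text describes; A and h here are illustrative assumptions.

def rotameter_flow(k, annular_area, head):
    """Volumetric flow rate for a variable-area meter, SI units."""
    return k * annular_area * math.sqrt(9.81 * head)

q = rotameter_flow(k=0.75, annular_area=2.0e-5, head=0.05)  # m^3/s
```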
➢ Calibration:
Linear scale graduations can be an arbitrary 0%–100% for the meter range.
Calibration can be direct reading in terms of a specific gas or liquid, or a graph that
plots meter readings vs. flow rates in terms of the fluid being measured. Such
graphs make it easy to adapt a meter to handle fluids other than those for which it
was bought; changeover is simply a matter of having a different conversion chart
designed for the new fluid.
➢ Advantages:
• The cost of a rotameter is low.
• It provides a linear scale.
• It has good accuracy for low and medium flow rates.
• The pressure loss is nearly constant and small.
• It is usable with corrosive fluids.
➢ Disadvantages:
• When an opaque fluid is used, the float may not be visible.
• It does not perform well in pulsating service.
• Glass-tube types are subject to breakage.
• It must be installed in a vertical position only.
➢ Applications:
• The rotameter is used in process industries.
• It is used for monitoring gas and water flow in plants or labs.
• It is used for monitoring filtration loading.
EXPERIMENT: - 5
1) Ultraviolet Measurements
For the measurement of sun and sky ultraviolet radiation in the wavelength
interval 0.295 to 0.385 µm, which is particularly important in environmental,
biological, and pollution studies, the Total Ultraviolet Radiometer was
developed. This instrument utilizes a photoelectric cell protected by a quartz
window. A specially designed Teflon diffuser not only reduces the radiant flux
to acceptable levels but also provides close adherence to the Lambert cosine
law. An encapsulated narrow bandpass (interference) filter limits the spectral
response of the photocell to the wavelength interval 0.295-0.385 µm.
A thermistor is also included if one wishes to measure the dome temperature as
compared to the case temperature to make any “corrections” to the final result.
4) Albedo/Reflection Measurements
Albedo is the ratio of the reflected shortwave to the incoming shortwave
radiation. This arrangement allows for better calibration results and prevents
the cold junctions of the two sensors from affecting each other.
❖ DEVICE
1. Pyranometer
A pyranometer is used to measure the energy from the sun. When levelled in the
horizontal plane, this is called the Global Shortwave Irradiance (GLOBAL) and
when positioned in a plane of a PV Array, it is called the Total Irradiance in the
plane of array (TPA). Inverted, a pyranometer is used to measure the Reflected or
Albedo Irradiance (ALBEDO). A pyranometer can also be shaded from the direct
beam of the sun to measure the Diffuse Shortwave Irradiance (DIFFUSE).
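The conversion from pyranometer output to irradiance is the output voltage divided by the thermopile sensitivity. A sketch assuming an approximate 8 μV per W/m² sensitivity and illustrative voltages; an upward-facing/inverted pair then gives the albedo:

```python
# Sketch: convert thermopile output to irradiance, then form the albedo
# from an upward-facing and an inverted pyranometer. The sensitivity and
# the voltages are illustrative assumptions; a real instrument uses the
# sensitivity on its calibration certificate.

SENSITIVITY = 8e-6   # V per W/m^2 (assumed, approximate)

def irradiance(volts):
    """Irradiance in W/m^2 from the thermopile output voltage."""
    return volts / SENSITIVITY

global_sw = irradiance(6.4e-3)     # upward-facing: 800 W/m^2
reflected_sw = irradiance(1.6e-3)  # inverted: 200 W/m^2
albedo = reflected_sw / global_sw  # 0.25 for these numbers
```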
➢ Specification of Pyranometer
• Application Network Measurements (Global)
• Classification Secondary Standard / High Quality
• Traceability World Radiation Reference (WRR)
• Spectral Range 295-2800 nm
• Output 0-10 mV analog
• Sensitivity approx. 8 μV / Wm-2
• Impedance approx. 700 Ω
• 95% Response Time 5 seconds
• Zero Offset a) 5 Wm-2
• Zero Offset b) 2 Wm-2
• Non-Stability 0.5%
• Non-Linearity 0.5%
• Directional Response 10 Wm-2
• Operating Temperature -50°C to +80°C
Figure 5.1 SPP Pyranometer (Eppley Laboratory)
• Temperature Response 0.5% (-30°C to +50°C)
• Tilt Response 0.5%
• Calibration Uncertainty* < 1%
• Measurement Uncertainty*
• Single Point < 10 Wm-2
• Hourly Average approx. 2%
• Daily Average approx. 1%
2. Pyrheliometer
A pyrheliometer mounted on a solar tracker is used to measure the Direct
Beam Solar Irradiance (DNI) from the sun. Historically, the preferred field of
view for pyrheliometers was based on a 10:1 ratio, which equated to
approximately 5.7°. To officially be considered a Secondary Standard, the pyrheliometer in
question must be calibrated with WRR traceability through a Primary Standard
Pyrheliometer such as the AHF Cavity Radiometer. EPLAB Calibrations are
typically performed against a Secondary Standard Pyrheliometer.
➢ Specification
• Application Standard/Network Measurements
• Classification Secondary Standard* / High Quality
• Traceability World Radiation Reference (WRR)
• Spectral Range 250-3500 nm
• Field of View 5º
• Output 0-10 mV analog
• Sensitivity approx. 8 μV / Wm-2
• Impedance approx. 200 Ω
• 95% Response Time 5 seconds
• Zero Offset 1 Wm-2
• Non-Stability 0.5%
• Non-Linearity 0.2%
• Spectral Selectivity 0.5%
• Temperature Response 0.5%
Figure 5.2 Pyrheliometer (Eppley Laboratory)
• Calibration Uncertainty** < 1%
• Measurement Uncertainty**
• Single Point < 5 Wm-2
• Hourly Average approx. 1%
• Daily Average approx. 1%
➢ Calibrations
All calibrations at Eppley are performed according to internationally accepted
techniques and procedures with traceability to the proper World Standards.
Pyrheliometers (sNIP, NIP) are compared on Eppley’s Research Building Roof
Platform according to procedures described in ISO 9059 and Technical
Procedure, TP04 of The Quality Assurance Manual on Calibrations and are
traceable to the World Radiation Reference (WRR) through comparisons with
AHF standard self-calibrating cavity pyrheliometers which participate at the
International Pyrheliometric Comparisons (IPC).
Pyranometers are compared in Eppley’s Integrating Hemisphere according to
procedures described in ISO 9847 and Technical Procedure, TP01 of The Quality
Assurance Manual on Calibrations and are traceable to the World Radiation
Reference (WRR) through comparisons with AHF standard self-calibrating cavity
pyrheliometers which participate at the International Pyrheliometric Comparisons
(IPC).
Pyrgeometers (PIR) are compared in the Blackbody Calibration System according
to the Technical Procedure of the Quality Assurance Manual on Calibrations and
are traceable to the International Practical Temperature Scale (IPTS) and to
the World Infrared Standard Group (WISG).
Total Ultraviolet Radiometers (TUVR) are compared according to procedures
described in the Technical Procedure of the Quality Assurance Manual on
Calibrations and are traceable to the National Institute of Standards and
Technology (NIST).
The company recommends a minimum calibration cycle of five (5) years but
encourages annual calibrations for highest measurement accuracy.
➢ Applications
• Meteorology: Climate Study and Long-Term Monitoring / Modelling
The Earth’s radiation budget is a critical component of our weather and
climate, atmospheric circulation and ocean currents. Therefore, reliable and
accurate long-term measurements of shortwave and longwave irradiance are
essential for detecting climate change trends. Measurements are made from
grassy plains, rain forests, deserts, remote mountains, polar regions and
equatorial regions, on aircraft and balloons, and on ships and ocean buoys.
Universities and government institutions on every continent, often working in
cooperation with other institutions, create networks of stations that measure
and study accurate, reliable, long-term data sets of solar and atmospheric
conditions.
(DNI). Often though, the researchers will prefer to install a complete solar
monitoring station to measure Global, Diffuse and Direct (and TPA).
• Reference Cells:
Solar Reference (PV) Cells, made of the same materials used in PV panels, are
common for evaluating the performance of PV. However, different designs and
constructions of reference cells produce different performance results due to
temperature and spectral selectivity. Therefore, the SPP Pyranometer is used as
a thermopile-based standard against which different reference cells can be
compared, with traceability to the World Radiation Reference (WRR).
• Material Testing:
Testing of materials and systems of all types are performed with solar, UV
and infrared measurements playing a critical role. These tests vary widely over
many industries. Examples include colour or material degradation due to UV
exposure; performance testing on heating & cooling (AC) systems in buildings,
automobiles, military vehicles, and aircraft; reflectance tests of low angled
roofs or paving materials, improving bottling for soda, milk and other liquids.
These tests can be done outdoors using the sun as the source or in
Solar/Temperature Chambers in the lab and allow for repeating tests in multiple
locations.
ISO 9060 Pyranometer Classification
SPP: Standard Precision Pyranometer
GPP: Global Precision Pyranometer
PSP: Precision Spectral Pyranometer
Zero Off-Set A:
Test (a) is for cases when the net thermal radiant flux density is 200 W/m²,
such as when the instrument is at 30 °C and the sky temperature is -10 °C.
Eppley performs this test in our Blackbody Calibration System and by monitoring
nighttime offsets.
SPP ±2 W/m²
GPP ±2 W/m²
PSP ±2 W/m²
8-48 ±2 W/m²
Zero Off-Set B:
Test (b) is the result of a 5 degree change in temperature over one hour and is
performed in Eppley’s temperature chamber.
SPP ± 0.5%
GPP ± 0.5%
PSP ± 0.5%
8-48 ± 1.0%
Spectral:
Eppley has independently tested the Schott Glass WG295 hemispheres as
well as the Black Optical Lacquer to assure uniform spectral transmittance from 0.3
to 2.8 microns.
Temperature:
Temperature Dependence Tests are performed in Eppley’s Temperature
Chambers. Note that while the tests are often -30°C to +50°C, these are not the
operational limits of the instruments. These instruments can be used in hotter (or
colder) climates but you may wish to contact Eppley for a special temperature
dependence test in these extreme climate areas.
SPP ±0.5%
GPP ±0.5%
PSP ±1.0%
EXPERIMENT: - 6
AIM: To carry out exhaust gas analysis with a gas chromatograph
• Principle of chromatography:
A gas chromatograph (GC) is an analytical instrument that measures the
content of various components in a sample. The analysis performed by a gas
chromatograph is called gas chromatography. Principle of gas chromatography:
The sample solution injected into the instrument enters a gas stream which
transports the sample into a separation tube known as the "column." (Helium or
nitrogen is used as the so-called carrier gas.) The various components are
separated inside the column. The detector measures the quantity of the
components that exit the column. To measure a sample with an unknown
concentration, a standard sample with known concentration is injected into the
instrument. The standard sample peak retention time (appearance time) and area
are compared to the test sample to calculate the concentration.
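The standard-sample comparison described above can be sketched as a simple linear scaling: the unknown concentration is the known concentration multiplied by the ratio of peak areas. The function name and the numbers below are hypothetical, for illustration only.

```python
def concentration_from_peaks(area_sample, area_standard, conc_standard):
    """Single-point external-standard quantification: the unknown
    concentration scales linearly with the detector peak area."""
    return area_sample / area_standard * conc_standard

# A standard injection of 100 ppm gives a peak area of 5000 counts;
# the test sample's peak (at the same retention time) has area 3250.
c = concentration_from_peaks(3250, 5000, 100.0)
print(round(c, 1))  # 65.0 ppm
```

In practice a multi-point calibration curve is preferred, but the single-point ratio captures the principle stated in the text.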
(3) Detectors:
After the components have been separated by the chromatograph columns,
they pass over the detector. The most common detector used for most
hydrocarbon gas measurements is the thermal conductivity detector (TCD).
The TCD uses two thermistors whose resistance falls as their temperature rises.
The thermistors are connected on either side of a Wheatstone bridge with a
constant-current power supply. As the carrier gas passes over the thermistors,
it removes heat from the thermistor bead at a rate that depends on the carrier
gas’s thermal conductivity. Helium is a commonly used carrier gas because it
has a very large thermal conductivity and therefore cools the thermistor bead
considerably.
On the reference side of the detector, only pure carrier gas will pass over the
thermistor bead, so the temperature and resistance will remain relatively constant.
On the measure side of the detector, the carrier gas and each component, in
order of elution from the columns, pass over the thermistor bead, removing heat
from the bead depending on the thermal conductivity. When there is only carrier
gas passing over the detector bead, the temperature of the bead will be similar to
the reference detector (any difference is compensated for using the bridge
balance).
However, the gas components will have different thermal conductivities than
the carrier gas. As the component flows across the thermistor bead, less heat is
removed from the bead, so the temperature of the thermistor increases, reducing
the resistance. This change in resistance imbalances the electrical bridge and
results in a milli-voltage output. The amount of difference and, therefore, the
output signal is dependent on the thermal conductivity and the concentration of
the component.
The detector output will then be amplified and passed to the gas
chromatograph controller for processing.
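The bridge behaviour described above can be sketched with a deliberately idealized model: with a constant-current supply and equal fixed arms, the differential output is roughly proportional to the resistance imbalance between the reference and measure thermistors. The function and the numbers below are illustrative assumptions, not a calibrated TCD model.

```python
def bridge_output_mv(r_ref, r_meas, i_supply_ma):
    """Idealized Wheatstone half-bridge with a constant-current supply:
    each thermistor carries half the supply current, so the differential
    output (in mV, since mA * ohm = mV) tracks the resistance imbalance."""
    return (r_meas - r_ref) * (i_supply_ma / 2.0)

# Pure carrier gas on both sides: bridge balanced, no output.
print(bridge_output_mv(100.0, 100.0, 10.0))   # 0.0
# A component with lower thermal conductivity than helium warms the
# measure-side thermistor, lowering its resistance to 98 ohm.
print(bridge_output_mv(100.0, 98.0, 10.0))    # -10.0
```

The sign and magnitude of the output thus encode which way, and by how much, the eluting component shifts the bead temperature, matching the qualitative description in the text.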
➢ Applications of Gas Chromatography:
• GC analysis is used to determine the content of a chemical product, for
example in assuring the quality of products in the chemical industry; or measuring toxic
substances in soil, air or water.
• Gas chromatography is used in the analysis of:
  - air-borne pollutants
  - performance-enhancing drugs in athletes’ urine samples
  - oil spills
  - essential oils in perfume preparation
EXPERIMENT: - 7
AIM: To study and become familiar with data logging and acquisition systems.
➢ Introduction
1) Data Logging System:
Temperature and relative humidity can affect many types of measurements
recorded in many fields. Hence, temperature and humidity must be maintained
within certain limits [1] to achieve repeatable results, reduce the cost of
tedious corrections and meet regulatory and correctness requirements. It has
been found that chart recorders cannot record temperature and humidity
accurately enough to meet quality and regulatory requirements. Chart recorders
are difficult to calibrate and maintain; many are prone to sensor drift, which
tends to get worse over time and may not be fully corrected. As chart recorders
use moving parts, they gradually deteriorate and require increasing amounts of
maintenance and calibration to keep them accurate. Data loggers use digital
technologies, such as advanced microprocessors, solid-state sensors and fully
featured software, which maximize accuracy. With no moving parts to wear
out and with powerful software compensation, data loggers can deliver greater
accuracy over longer periods of time. Due to their small size and portability, they
can also be moved closer to the critical areas where calibrations take place,
providing greater accuracy for each calibration.
voltage or pulse is to be recorded; therefore it can automatically measure the
electrical output from any type of transducer and log the value. A data logger
works with sensors to convert physical phenomena and stimuli into electronic
signals such as voltage or current. These electronic signals are then converted
into binary data, which is easily analyzed by software and stored for
post-process analysis. Data loggers are based on digital processors. A data
logger is an electronic device that records data over time, in relation to
location, either with a built-in instrument or sensor or via external
instruments and sensors. A data logger can automatically collect data on a
24-hour basis; this is the primary and most important benefit of using
data loggers.
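The capture-timestamp-store cycle described above can be sketched in a few lines. The read_sensor callable and the temperature value below are hypothetical stand-ins for a real transducer; real loggers would write to flash or disk rather than keep records in memory.

```python
import time

def log_samples(read_sensor, n_samples, interval_s=0.0):
    """Minimal logging loop: timestamp each reading and keep it for
    post-process analysis (a stand-in for writing to storage)."""
    records = []
    for _ in range(n_samples):
        records.append((time.time(), read_sensor()))
        if interval_s:
            time.sleep(interval_s)
    return records

# Hypothetical sensor returning a fixed 23.5 degC reading.
log = log_samples(lambda: 23.5, 3)
print(len(log), log[0][1])  # 3 23.5
```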
• Bluetooth (BLE) Loggers
Wireless data access via mobile devices
• Web-based Systems
Long-range wireless internet access
• Wireless Sensors
Short-range centralized data collection
1. USB data loggers are compact, reusable, and portable, and offer low cost and
easy setup and deployment. Internal-sensor models are used for monitoring at
the logger location, while external-sensor models (with flexible input channels
for a range of external sensors) can be used for monitoring at some distance
from the logger. USB loggers communicate with a computer via a USB
interface, but for greater convenience, a data shuttle device can be used to
offload data from the logger for transport back to a computer.
2. BLE-enabled loggers are also compact, reusable, portable, easy to set up and
deploy, and offer the added benefit of being able to measure and transmit data
wirelessly to mobile devices over a 100-foot range. These loggers are
particularly useful in applications where deployments are in hard-to-reach or
limited-access areas. Without having to disturb the logger, you can use a cell
phone or tablet to view data in graphs, check the operational status of loggers,
share data files, and store data in the cloud.
3. Web-based data logging systems enable remote, around-the-clock, internet-
based access to data via cellular, Wi-Fi, or Ethernet communications. These
systems can be configured with a variety of external plug-in sensors and
transmit collected data to a secure web server for accessing the data.
4. Wireless sensors, or data nodes, transmit real-time data from dozens of points
to a central computer or gateway, eliminating the need to manually retrieve and
offload data from individual data loggers.
translates the analog signal to a form acceptable by the analog-to-digital
converter, like an amplifier used for amplifying low-level voltages generated
by thermocouples or strain gauges.
The analog-to-digital converter (ADC) converts the analog voltage to its
equivalent digital form. The output of the ADC may be displayed visually and is
also available as voltage outputs in discrete steps for further processing or
recording on a digital recorder. The auxiliary section contains instruments for
system programming and digital data processing, such as linearizing and limit
comparison. These functions may be performed by individual instruments or by a
digital computer. The digital recorder records digital information on punched
cards, perforated paper tape, magnetic tape, typewritten pages or a combination
of these systems. The digital recorder may be preceded by a coupling unit that
translates the digital information to the proper form for entry into the
particular digital recorder selected.
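The discrete-step conversion performed by the ADC can be sketched as an ideal quantizer. The 5 V reference and 12-bit depth below are assumptions chosen for illustration, not values from the text.

```python
def adc_code(voltage, v_ref=5.0, bits=12):
    """Ideal ADC: map the 0..v_ref input range onto 2**bits discrete codes."""
    code = int(voltage / v_ref * (2 ** bits))
    return max(0, min(code, 2 ** bits - 1))  # clamp to the valid code range

print(adc_code(2.5))   # 2048  (mid-scale of a 12-bit converter)
print(adc_code(5.0))   # 4095  (full scale clamps to the top code)
```

Each code step here corresponds to v_ref / 2**bits, about 1.2 mV, which is why low-level thermocouple signals must be amplified before conversion.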
DAS are now used in many different fields, from industrial production to scientific
experiments, and the type of system used is different depending on each
application.
In general, however, any DAS can be broken down into three components — the
sensors used to collect data from the physical systems, the circuitry used to pass
this data to a computer, and the computer system on which it can be viewed and
analyzed.
If you are setting up a DAS, these are also the three factors that should be
considered. Time spent thinking about exactly which data you need to collect, and
how you want to work with the data once it is collected, can save significant time
and money further down the line.
Let’s take a look at some of the most common options in all three of these fields.
➢ Sensors
The design of any DAS must start with the physical system which is being
measured. With the range of sensors available today, it is possible to measure
almost any physical property of the system you are interested in. Careful
consideration must be made, therefore, of exactly the type of data you need to
collect. It might be nice to be able to track the temperature of your industrial
printer, for instance, but you need to think about whether this information will
actually be useful for you.
Examples of common phenomena measured by DAS are temperature,
light intensity, gas pressure, fluid flow, and force.
For each variable to be measured, there exists a particular type of sensor.
Sensors, in this sense, are essentially transducers, transforming physical energy
into electrical energy. For instance, a basic pressure sensor will be activated and
driven by the pressure it is measuring, and pass this information as an electronic
signal to the DAS.
For this reason, it is important to recognize that it is not possible to measure
every variable you want without affecting the system itself. This is because any
sensor will affect the system it is designed to measure, and remove energy from it.
This is especially important if the system being measured works on small
tolerances, because the addition of even a small sensor to these systems can drain
too much energy from them for effective operation.
In short, though there is likely a sensor available to measure almost any aspect
of your systems, it is not always wise to try and measure every variable. Instead,
think carefully about the data you actually need, and use the minimum number of
sensors that will achieve this.
➢ Signal Processing
Typically, DAS use dedicated hardware to pass signals from sensors to the
computer systems that will collect and analyze the data. Converting a messy,
sometimes noisy, signal from a physical system into a format that can be used and
manipulated on a computer can be a tricky business.
One of the first obstacles to be overcome in this regard is signal strength. As
outlined above, typically sensors are designed to take the smallest amount of
energy possible from the system they are being used to measure. In practice, this
also means that the signal they output is of a very low intensity, and must be
amplified to be of any use.
It is therefore critical to use an amplifier that is able to amplify the signal
cleanly. A noisy amplifier will ultimately warp and color the data collected, which
in some cases can render it useless.
Another thing to think about when designing a DAS is the type of signal that
you will use to pass data between the various parts of your system. Most sensors
will output a single ended analog signal. Whilst this type of signal is good at
capturing the raw state of the system being measured, it is also very susceptible to
noise and distortion. A common fix for this problem is to convert the signal coming
from the sensors into a differential signal, which is much more stable and easier to
work with.
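The noise-rejection advantage of differential signaling described above can be shown with a toy model: common-mode noise couples equally into both lines and cancels on subtraction, while a single-ended line absorbs it directly. The numbers are illustrative only.

```python
def receive_single_ended(signal, noise):
    """Single-ended: any noise picked up on the one line adds
    directly to the measured value."""
    return signal + noise

def receive_differential(signal, noise):
    """Differential: the signal is carried as +s/2 and -s/2; noise
    couples equally into both lines and cancels on subtraction."""
    plus = signal / 2 + noise
    minus = -signal / 2 + noise
    return plus - minus

print(receive_single_ended(1.0, 0.3))  # 1.3
print(receive_differential(1.0, 0.3))  # 1.0
```

This cancellation is why converting the sensor output to a differential signal, as the text suggests, makes the link far more stable against pickup.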
➢ Advantages of DAQifi Devices
DAQ cards typically output data using a dedicated hard link, and in years past
this often meant having a separate PC workstation for every data acquisition
process. Not only did this mean extra expense in terms of hardware, it often meant
that bringing data from several processes together was a manual, painful business.
DAQifi cards send the collected data over a WiFi network — either an existing one,
or one generated by the device itself — to custom software.
What this means in practice is that a single PC, tablet, or even smart phone can
be used to aggregate all the data being collected, bringing it all together for easy
analysis and manipulation. This also means that the computer being used to collect
and manipulate data does not need any additional hardware to be used for this
purpose.
In addition, DAQifi devices represent better value than many DAQ card
solutions. This is because DAQ cards are often made to be used to collect one type
of data only, and in many cases, this means that a bank of cards must be used in
order to collect even quite basic data. The flexibility of DAQifi devices makes
them cheaper to implement in many situations.
This is especially true in situations where portability is paramount. The fact
that DAQifi devices run on their own power makes them ideal for situations where
having a dedicated PC workstation is simply impossible. This is the case in many
industrial processes, where the environment is not conducive to the health of
computer hardware, and also in situations where the system under study is
inherently mobile, such as in automotive engineering.
Lastly, the user interface which comes as standard on DAQifi devices means
that using them is incredibly simple in comparison to many DAQ card solutions.
Often, even in high-end scientific applications, all that is needed from a data
acquisition system is for it to feed data to a centralized device, in a format which
is easy to work with, for later analysis.
This is exactly what DAQifi devices achieve, and it is therefore not surprising
that they are eclipsing DAQ card solutions in many situations.
EXPERIMENT: - 8
➢ Introduction
Temperature measurement in today’s industrial environment encompasses a
wide variety of needs and applications. To meet this wide array of needs the process
controls industry has developed a large number of sensors and devices to handle this
demand. In this experiment you will have an opportunity to understand the concepts
and uses of many of the common transducers, and actually run an experiment using
a selection of these devices. Temperature is a very critical and widely measured
variable for most mechanical engineers.
Many processes must have either a monitored or controlled temperature. This can
range from the simple monitoring of the water temperature of an engine or load
device to something as complex as the temperature of a weld in a laser welding application.
More difficult measurements, such as the temperature of smoke stack gas from a
power generating station or blast furnace, or the exhaust gas of a rocket, may need
to be monitored. Much more common are the temperatures of fluids in processes or
process support applications, or the temperature of solid objects such as metal plates,
bearings and shafts in a piece of machinery.
1. PWM Control:
The PWM or Pulse Width Modulation control is used to control higher end
devices. The PWM signal is a square wave output of a fixed frequency that varies
the on duration of the signal or the duty cycle. This signal is typically a low-level DC
voltage signal in the range of 0 to 5 volts or 0 to 24 volts. It can also be done in a
current output such as 4 to 20 milliamps. In each of these cases the minimum value
represents the off state and the high value represents the on state of the signal.
This type of a signal is normally used to control valves or positioners. Typically,
the base frequency of this type of control is in the range of a few hundred hertz, but
can be as high as ten or twenty thousand hertz. This frequency is dependent on the
particular controller and the needs of the device under control. The on percentage of
the PWM signal generates the desired valve opening, closing or position.
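The duty-cycle arithmetic above can be sketched as follows. The 200 Hz base frequency is an assumed example within the "few hundred hertz" range mentioned in the text.

```python
def pwm_times(duty_pct, freq_hz):
    """On and off durations (seconds) of one PWM period for a given
    duty cycle and base frequency."""
    period = 1.0 / freq_hz
    t_on = period * duty_pct / 100.0
    return t_on, period - t_on

on, off = pwm_times(25.0, 200.0)   # 200 Hz carrier, 25 % duty cycle
print(round(on * 1000, 3), round(off * 1000, 3))  # 1.25 3.75 (ms)
```

The fixed frequency sets the 5 ms period; only the split between on-time and off-time changes with the command, which is exactly what "varying the duty cycle" means.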
Figure 8.1 PWM controller
➢ Analog Output:
The analog output control method uses a variable analog signal, such as a 0 to
10-volt DC signal, a -10 to +10-volt signal or a current signal (0 to 20 mA or 4 to
20 mA), as the control output. This signal is generated by the controller and,
similar to the PWM control, its level is proportional to the controller’s command
signal. As an example, if the controller was generating a 0 to 10-volt control
signal, a 25% output would be 2.5 volts, and a 50% control output would be 5 volts.
This signal is very commonly used in a 4-20 milliamp output configuration, since a
signal below 4 milliamps indicates a line failure and a definite control action can
be taken to put the system in a fail-safe mode. This signal output is always a very
low power signal, and additional power amplification is required at the control
device end to make an actual control move.
➢ Relay Output:
The relay output control generally consists of a form C or form A relay contact.
The relay contact generally has a current rating of ten amps or less, and many
times less than one amp. This type of control is the least expensive of the control
outputs and is only useful in an ON/OFF controller. The cycle time from ON to OFF
usually needs to be longer than five seconds to prevent premature failure of the
relay. There are two ways in which the relay contact can be shown. The graphic
below shows both methods for both a form A and a form C contact.
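The analog scaling described earlier (a 25% command giving 2.5 volts on a 0-10 volt output, and a 4-20 mA live zero) can be sketched directly:

```python
def volts_out(percent, v_min=0.0, v_max=10.0):
    """Scale a 0-100 % controller command onto a voltage range."""
    return v_min + (v_max - v_min) * percent / 100.0

def milliamps_out(percent):
    """4-20 mA live-zero loop: 0 % maps to 4 mA, so any reading below
    4 mA signals a broken line rather than a zero command."""
    return 4.0 + 16.0 * percent / 100.0

print(volts_out(25.0))      # 2.5
print(volts_out(50.0))      # 5.0
print(milliamps_out(0.0))   # 4.0
print(milliamps_out(100.0)) # 20.0
```

The offset zero of the current loop is the design choice that makes line-failure detection possible: a healthy loop never legitimately reads below 4 mA.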
➢ DC Pulse output:
This method of control output generates a DC signal that is of low power. The
low power signal is fed to a control device that has the ability to turn the low power
switching signal into either a high-power signal or into an actual control value. For
instance, using a pulse output signal for an on/off control, wired to a solid-state relay,
can allow a single controller to drive hundreds of thousands of watts of heating
capacity. If this same signal is used in a PWM system, it can be used to control the
position of valves the size of small cars. The signal itself tells the control device what
to do, and the control device uses additional power to amplify this signal to a physical
change.
➢ SSR Output:
The solid-state relay output is an AC semiconductor version of a form A contact;
that is, it is either on or off. The solid-state relay output will switch ONLY
alternating current loads and will typically be limited to a maximum current of 5
amps. If larger currents are required, an external SSR is recommended. One caution
to note: solid state relays will switch only an alternating current load, and will
only turn off as the voltage on the line side of the relay crosses zero. This
happens only twice in each cycle. For this reason, setting an on/off time of less
than 1/60th of a second will produce unexpected results. It also means that if you
select a longer time and are using a PWM method of control, the pulse width time
(T2) will always be in 16-millisecond increments. This holds even if you are using
a DC pulse width system to control an external SSR. In general, it is a good idea
to set the T1 time of any PWM or ON/OFF system driving an SSR to not less than one second.
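The zero-cross quantization described above can be sketched as follows, assuming a 60 Hz line and, as the text states, whole-cycle (roughly 16 ms) increments; the exact snapping behaviour of a real SSR driver may differ.

```python
LINE_CYCLE_MS = 1000.0 / 60.0   # one full 60 Hz line cycle, about 16.7 ms

def ssr_on_time_ms(requested_ms):
    """An SSR only switches at line zero crossings, so the realized
    on-time snaps to whole-cycle increments (nearest whole cycle)."""
    cycles = round(requested_ms / LINE_CYCLE_MS)
    return cycles * LINE_CYCLE_MS

print(round(ssr_on_time_ms(40.0), 1))  # 33.3  (snaps to 2 cycles)
print(round(ssr_on_time_ms(5.0), 1))   # 0.0   (too short for even one cycle)
```

This is why a requested pulse shorter than one line cycle can vanish entirely, and why keeping T1 at one second or more makes the quantization error negligible.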
2. Proportional control:
The most basic control algorithm for control of any device is to measure a
command signal and subtract a feedback signal from it, creating an error signal.
This error signal is amplified by a certain amount, known as the GAIN. As the
feedback signal varies farther from the command signal, the error x GAIN signal
grows proportionally larger. This is the signal that generates the control output. In
the case of ON/OFF control, when the proportional signal grows higher than a
specified limit, the output is turned off. When the signal grows smaller than a certain
amount, it turns the output on. This is a typical control method for a heater system.
Using a Proportional control with a PWM or analog signal makes a more efficient
system. In this control mode the amount of deviation from the set point changes the
pulse width or analog output. The higher the error signal, the more the output signal
is changed. This is the essence of proportional control. The output is changed
proportionally to the error signal.
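The proportional law above (output changes in proportion to error x GAIN) can be sketched in a few lines; the gain, setpoint and clamp range below are illustrative assumptions.

```python
def proportional_output(command, feedback, gain, out_min=0.0, out_max=100.0):
    """Proportional control: output scales with error = command - feedback,
    clamped to the actuator's usable range."""
    error = command - feedback
    return max(out_min, min(error * gain, out_max))

# Heater example: setpoint 100 degC, gain of 5 % output per degC of error.
print(proportional_output(100.0, 90.0, 5.0))   # 50.0
print(proportional_output(100.0, 99.0, 5.0))   # 5.0
print(proportional_output(100.0, 60.0, 5.0))   # 100.0 (saturated)
```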
3. PD (Proportional – Derivative control):
If you change the output signal quickly with a smaller change in the error
signal, the system will hold the temperature somewhat better. The problem
is that with this method the control has a tendency to overshoot, or raise the
temperature higher than desired, because it is heating faster to get to the set
point faster. The rate of change of the feedback signal is known as the derivative
of the signal. If the feedback signal changes too quickly, there is a chance we
will overshoot the desired value. By taking the rate of change of the signal into
account, we know we need to slow the control output down somewhat to reduce this.
The derivative of the feedback is subtracted from the error to minimize this. The
new control algorithm looks something like:
output = (Command - Feedback) x PropGain - Derivative(Feedback) x DGain
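The PD algorithm above can be sketched with the derivative approximated by a finite difference between successive feedback samples; all numbers below are illustrative.

```python
def pd_output(command, feedback, prev_feedback, dt, p_gain, d_gain):
    """PD control: the derivative of the feedback is subtracted so a
    fast-rising temperature throttles the output before overshoot."""
    error = command - feedback
    d_feedback = (feedback - prev_feedback) / dt   # finite-difference derivative
    return error * p_gain - d_feedback * d_gain

# Temperature rising 2 deg/s toward a 100 deg setpoint:
# proportional term 10 * 5 = 50, derivative term 2 * 10 = 20.
print(pd_output(100.0, 90.0, 88.0, 1.0, 5.0, 10.0))  # 30.0
```

Compared with pure proportional control, the derivative term has trimmed the output from 50 to 30 because the temperature is already rising quickly.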
EXPERIMENT: - 9
1. Interferometers
Interferometers are investigative tools used in many fields of science and
engineering. They are called interferometers because they work by merging two
or more sources of light to create an interference pattern, which can be measured
and analyzed; hence 'Interfere-meter'. The interference patterns generated by
interferometers contain information about the object or phenomenon being
studied. They are often used to make very small measurements that are not
achievable any other way. This is why they are so powerful for detecting
gravitational waves--LIGO's interferometers are designed to measure a distance
1/10,000th the width of a proton!
Widely used today, interferometers were actually invented in the late 19th
century by Albert Michelson. The Michelson Interferometer was used in 1887 in
the "Michelson-Morley Experiment", which set out to prove or disprove the
existence of "Luminiferous Aether"--a substance at the time thought to permeate
the Universe. All modern interferometers have evolved from this first one since it
demonstrated how the properties of light can be used to make the tiniest of
measurements. The invention of lasers has enabled interferometers to make the
smallest conceivable measurements, like those required by LIGO.
➢ Construction
Remarkably, the basic structure of LIGO's interferometers differs little from
the interferometer that Michelson designed over 125 years ago, but with some
added features, described in LIGO's Interferometer. Because of their wide
application, interferometers come in a variety of shapes and sizes. They are used
to measure everything from the smallest variations on the surface of a microscopic
organism, to the structure of enormous expanses of gas and dust in the distant
Universe, and now, to detect gravitational waves. Despite their different designs
and the various ways in which they are used, all interferometers have one thing in
common: they superimpose beams of light to generate an interference pattern. The
basic configuration of a Michelson laser interferometer is shown at right. It
consists of a laser, a beam splitter, a series of mirrors, and a photodetector (the
black dot) that records the interference pattern.
But what happens if the distance traveled by the lasers does change while they
are making their way through the interferometer? If one arm gets longer than the
other, one laser beam has to travel farther than the other and it takes longer to
return to the beam splitter. Though the beams entered the interferometer at the
same time, they don't return to the beam splitter at the same time, so their light
waves will be offset when they recombine. This changes the nature of the
interference they experience. Rather than totally destructively interfering,
resulting in no light coming out of the interferometer, a little light will 'leak' out
and be seen by the photodetector. If the arms change length over a period of time
(say with the passage of a gravitational wave), the pattern of light coming out of
the interferometer will also change in-step with the movement of the arms.
Basically, a flicker of light emerges. In an interferometer, any change in light
intensity indicates that something happened to change the distance traveled by
one or both laser beams. Critically, the shape of the interference pattern emerging
from the interferometer over a period of time can be used to calculate precisely
how much change in length occurred over that period. LIGO looks for very
specific characteristics (how the interference pattern changes over time) to
determine if it has caught the passage of a gravitational wave.
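The relationship between arm-length change and output light can be sketched with a highly simplified Michelson model: an arm-length difference delta_l adds a round-trip path difference of 2*delta_l, and the recombined intensity varies as the cosine squared of half the resulting phase. This is a textbook idealization, not a model of LIGO's actual readout chain.

```python
import math

def output_intensity(delta_l, wavelength, i_in=1.0):
    """Idealized Michelson output: round-trip path difference 2*delta_l
    sets the relative phase of the recombining beams."""
    phase = 2.0 * math.pi * (2.0 * delta_l) / wavelength
    return i_in * math.cos(phase / 2.0) ** 2

lam = 1064e-9   # Nd:YAG laser wavelength, in metres
print(round(output_intensity(0.0, lam), 3))        # 1.0  (equal arms, bright)
print(round(output_intensity(lam / 4.0, lam), 3))  # 0.0  (dark fringe)
```

A quarter-wavelength change in one arm is enough to swing the output from fully bright to fully dark, which is why even tiny length changes produce a measurable "flicker" of light.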
➢ Constructive and Destructive Interference
In nature, the peaks and valleys of one wave will not always perfectly meet
the peaks or valleys of another wave like the illustration shows. Regardless of how
they merge, the height of the wave resulting from the interference always equals
the sum of the heights of the merging waves. When the waves don't meet up
perfectly, partial constructive or destructive interference occurs. The animation
below illustrates this effect. If you watch closely, you will see that the black wave
goes through a full range of heights from twice as high and deep (where total
constructive interference occurs) to flat (where total destructive interference
occurs) as the red and blue waves pass 'through' each other (interfere). In this
example, the black wave is the interference pattern! Note how it continues to
change as long as the red and blue waves continue to interact.
EXPERIMENT: - 10
➢ Introduction:
The choice of an appropriate installation method plays an important role in
accurate temperature measurement. In a cryogenic, high-vacuum environment,
poor contact between the cryogenic temperature sensor and the surroundings it
is installed in and intended to measure means that self-heating from the sensor
measuring current produces a temperature difference and creates a potential
temperature measurement error. The self-heating temperature difference is
directly proportional to the thermal resistance of a mounted sensor, which means
that a lower installation thermal resistance is advantageous for obtaining better
measurement results.
A measurement model for the installation thermal resistance of a sensor is built
in terms of the two-current method, which is commonly used to measure the
self-heating effect. A cryostat that can provide variable temperatures for
accurate temperature measurement and control experiments was designed and
manufactured. This cryostat can reach 3 K in a few hours and the sample
temperature can reach as high as 20 K. Based on the experimental results, the
measurement uncertainty of the thermal resistance is also analyzed and
calculated. To obtain the best measurement results in our cryostat, the thermal
resistances of sensors with two installation methods are measured and compared.
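The two-current method mentioned above can be sketched as follows: self-heating gives a temperature rise dT = R_th * P, so readings at two known excitation currents yield the mounted thermal resistance from the slope. The sensor resistance, temperatures and currents below are invented for illustration and are not measured values from this experiment.

```python
def thermal_resistance(t1, t2, i1, i2, r_sensor):
    """Two-current method: dT = R_th * P with P = I**2 * R_sensor,
    so R_th = (T2 - T1) / (P2 - P1)."""
    p1 = i1 ** 2 * r_sensor
    p2 = i2 ** 2 * r_sensor
    return (t2 - t1) / (p2 - p1)

# Illustrative numbers only: a 2 kohm sensor read at 35 uA and 70 uA,
# showing an 11 mK apparent shift between the two excitations.
r_th = thermal_resistance(4.2000, 4.2110, 35e-6, 70e-6, 2000.0)
print(round(r_th, 1))  # 1496.6 K/W
```

Once R_th is known, the zero-power temperature can be recovered by subtracting R_th * P from any single-current reading.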
➢ Experimental Setup
In order to obtain the cryogenic temperature, a new cryostat was designed, as
shown in fig. This cryostat has a simple structure that consists of a two-stage GM
cryocooler (Cryogenics of America, Inc., RDK-415D), cryostat wall, radiation
shield, thermal damper, sample holder, Cernox temperature sensors, temperature
controller and other measuring instruments. All the measurement devices used for
acquiring data were connected by IEEE-488 cables and controlled by a personal
computer running a program written in LabVIEW.
The radiation shield was made of copper and is 1 mm thick. It was
connected to the first stage of the cryocooler by a flange. The sample holder and
heat sink are made of oxygen-free high conductivity copper (OFHC) in order to
get a good heat conduction. The thermal damper with 1.2 mm of thickness and 45
mm of diameter is made of PTFE and located between the sample holder and the
cold head to reduce the temperature fluctuations.
A larger excitation current will result in a decreasing temperature measurement
standard deviation. A larger measurement current will also lead to a more
pronounced temperature difference between the two measurement currents I1 and I2.
Nevertheless, a larger excitation current dissipates more power in the
temperature sensor, raising its temperature above that of its mounting
environment. Choosing an appropriate measurement current that balances the
standard deviation against the self-heating effect is therefore a significant
problem. In our experiment, we chose 35 μA as the measurement current for 4.2 K
and 6 K, and 65 μA for 8 K, 10 K and 14 K.
➢ Conclusion
Cernox thermometer self-heating is a significant factor in high-accuracy
cryogenic temperature measurement that cannot be eliminated. The temperature
difference caused by self-heating depends on the thermal resistance between the
sensor and its environment. In this paper, the thermal resistance between the
Cernox temperature sensor and its surroundings from 4.2 K to 14 K is calculated
by the two-current method. The results show that the thermal resistances of the
two mounting methods (VGE-7031 Varnish and Apiezon N Grease) are roughly
equivalent. It can be observed that the effective thermal resistance gradually
decreases with increasing temperature. The uncertainty of the thermal resistance
is also analyzed in this paper. The uncertainty of the thermal resistance
decreases with increasing temperature, while the relative uncertainty at
different temperatures is equal and less than 3%.