RNEW 320 Second Midterm Exam Student Name
1. Computer modeling of Transistors
Please read the following article in order to properly assess transistors. For us, the models
must be established with a solid understanding of the device physics, performance parameters,
and manufacturing processes.
Leading Experts Suggest Guidelines for Assessing Emerging Transistor Performance
NIST, global collaborators propose universal ways to measure and evaluate next-
generation field-effect devices for smartphones and advanced electronics.
July 29, 2022
Typical design for an emerging field-effect transistor made with nanomaterials. The
movement of current from the source electrode (gold, left top) across an ultrathin
channel (blue) to the drain electrode (gold, top right) is controlled by the source voltage
and the electric field produced by the gate electrode (gold, top center) that is separated
from the channel by an insulating layer (light gray). At left: Atomic-thickness channel
materials can be one-dimensional, such as carbon nanotubes, or two-dimensional
layers.
To continue making smartphones, laptops, and other devices more powerful and energy
efficient, industry is intensely focused on identifying promising next-generation designs
and materials for the principal building blocks of modern electronics: the tiny electrical
on-off switches known as field-effect transistors (FETs). When deciding how to direct
billions of funding dollars for next-generation transistors, investors will base many of
their decisions on published research results.
But a dismaying amount of research on FETs currently suffers from inconsistent
reporting and benchmarking of results, increasing risks of misleading conclusions and
inaccurate claims that set false expectations for the field. This problem and possible
solutions are outlined in an article published today by an international group of leading
experts on semiconductor devices.
“Industry is trying to determine the right materials and designs to use,” said Curt Richter,
a physicist at the National Institute of Standards and Technology (NIST) and a co-
author of the new article. “They want to know exactly what to make and how to make it.
But the industry is getting terribly frustrated, they tell us, because they see a promising
piece of information in one publication and another promising piece in another
publication, but they're incompatible. They have no way to compare them. Given the
enormous cost of adopting design innovations, the industry can't afford to make a
mistake. What they want is uniform benchmarking.”
Richter, former NIST associate Zhihui Cheng (now at Intel), and Aaron Franklin of Duke
University are leading an effort to create and promote guidelines for uniform test
methods and reporting standards. They and more than a dozen colleagues from
industry, academia, and government labs describe their recommendations in an article
published today in Nature Electronics.
The paper provides specific criteria for evaluating and describing each of eight key
parameters critical to emerging designs for field-effect transistors (see illustration).
Alternate design for an emerging field-effect transistor made with nanomaterials. The
movement of current from the source electrode (gold, left top) across an ultrathin
channel (blue spheres in a 2D layer) to the drain electrode (gold, top right) is controlled
by the source voltage and the electric field produced by the gate (dark gray layer) that is
separated from the channel by an insulating layer (light gray). Channel material can also
be one-dimensional, such as carbon nanotubes or nanowires.
The research community at work on emerging FET designs includes physicists,
materials scientists, chemists, electrical engineers and more — each approaching the
subject a bit differently.
“At present, each group frequently has its own techniques and measurement methods,”
Cheng said. “There are no uniform guidelines or metrics about how to measure and
report a particular parameter. So it is often very difficult to evaluate the significance of a
reported result, and it is hard to tell whether the results are biased or incomplete.”
Inaccuracies in reporting are “not necessarily intentional,” said Franklin, the Addy
Professor of Electrical and Computer Engineering at Duke. “But the impact that
misreporting has on the field cannot be overstated. In addition to the negative effect on
industry, it also affects the decisions made by funding agencies, program managers and
others who influence research direction in academic and government labs. Properly
extracting and then keeping new findings in the proper context is critical to making true
progress.

“It's really a matter of providing education that is currently lacking. There's no textbook
out there about how to properly extract these parameters for emerging devices. You
could think of our paper as a sort of abstract for such a textbook.”
Absent universal guidelines, the authors explain, it is too easy to deliver misleading
results. For example, one of the key parameters to device performance is the
relationship between the ramp-up of applied voltage it takes to turn the transistor “on” —
that is, to get current moving through the channel between the source electrode and the
drain electrode — and the amount of increase in current from the ramped up voltage.
“There is a transition voltage as the current goes up from the lowest to the maximum,
and it’s not a straight line,” Richter said. “It has little variations in curvature. You want
the slope of that curve to be as steep as possible so that you can work with smaller
voltages to turn the current on. Some researchers will report the one spot where the
slope is steep instead of reporting the entire voltage span. That misleads people into
believing that you can operate at lower power.”
“It's like you're running a 100-meter race and you only report the last 10 meters where
you run the fastest,” Cheng said.
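The point about reporting the subthreshold slope over the entire voltage span, rather than the one steepest spot, can be sketched numerically. Everything below is illustrative: the transfer curve is synthetic, not measured data.

```python
import numpy as np

# Synthetic Id-Vg transfer curve for a hypothetical FET (illustrative only).
# A Gaussian bump in the exponent creates a locally steep region, mimicking
# the "little variations in curvature" described in the article.
vg = np.linspace(0.0, 1.0, 101)                    # gate voltage, V
log_id = -12 + vg / 0.120 + 0.2 * np.exp(-((vg - 0.3) / 0.05) ** 2)
id_ = 10.0 ** log_id                               # drain current, A

# Local subthreshold swing SS = dVg / d(log10 Id), in mV/decade.
ss = np.gradient(vg, np.log10(id_)) * 1000.0

# Cherry-picking the single steepest point makes the device look better
# (a smaller swing) ...
print(f"steepest point: {ss.min():.0f} mV/decade")

# ... than the honest average over the full swing from minimum to
# maximum current.
decades = np.log10(id_[-1] / id_[0])
ss_avg = (vg[-1] - vg[0]) / decades * 1000.0
print(f"full-span average: {ss_avg:.0f} mV/decade")
```

With this synthetic curve the steepest-point figure is noticeably smaller than the span-wide average, which is exactly the kind of gap the guidelines ask authors to disclose.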
Or researchers may attribute a positive result to novel channel characteristics “when in
fact it is actually determined by the geometry of the transistor and the non-
semiconducting materials," Franklin said. “Reporting must be done in the proper context
of the dimensions and materials of the transistor, rather than simply attributing
everything, by default, to the semiconductor channel.”
Unless the scientists make enough tests with enough variations to account for all
factors, the results may be deceptive. That poses difficulties for many labs. It can take
many months to create and characterize a new material or emerging design and make
even one or two samples. So constructing enough variations on a device to enable
reliable comparisons requires considerable time and resources.
But the effort must be made, the authors say, to avoid the downside of misreporting.
“It's often the case that once a paper is published, everybody believes it,” Richter said.
“It becomes gospel. And if your research gets a different answer, you have to work ten
times harder to overcome the effect of the first publication.”

Structure of a carbon nanotube. Typical diameters range from 1 to 40 billionths of a
meter. The spheres are carbon atoms. The rods represent atomic bonds.
Too many inadequate or misleading reports can prompt a sort of “gold rush” mentality
similar to what occurred in the late 1990s and early 2000s with a then-emerging
technology: carbon nanotubes (CNTs). Based on wildly enthusiastic early reports, many
believed they would become the successor to silicon semiconductor elements in
microelectronics. But when initial claims proved overstated, interest dried up along with
funding.
“CNTs are a hugely instructive example,” Franklin said. “So much hype and overstated
claims led to disillusionment and frustration rather than steady, collaborative and
accurate progress. An entire field of research was negatively impacted by overstated
claims. After a frenzy of activity, the eventual distaste resulted in a massive shift of
research funding and attention going to other materials. It has taken a decade to bring
deserved attention back to CNTs, and even then many feel it is not enough.”
Uniform benchmarking and reporting can minimize those sorts of convulsive effects and
help scientists convince the research community that they have made genuine
progress. “By using these guidelines,” the authors conclude, “it should be possible to
comprehensively and consistently reveal, highlight, discuss, compare, and evaluate
device performance.”
Paper: Cheng, Z., Pang, C.S., Wang, P. et al., How to report and benchmark emerging
field-effect transistors. Nature Electronics. Published 29 July 2022.
DOI: 10.1038/s41928-022-00798-8
.gov/news-events/news/2022/07/leading-experts-suggest-guidelines-assessing-

2. (20% each)
Please understand the transformer standards and codes.
The following technical codes and standards are applicable to and should be
considered when selecting and evaluating the overall performance of a transformer,
specifically dry-type power transformers:
+ NEMA ST-20: Dry-Type Transformers for General Applications.
+ IEEE Standard C57.98: IEEE Guide for Transformer Impulse Tests.
+ IEEE Standard C57.12.80: IEEE Standard Terminology for Power and Distribution Transformers.
+ IEEE Standard C57.12.70: IEEE Standard for Standard Terminal Markings and Connections for Distribution and Power Transformers.
+ IEEE Standard C57.12.59: IEEE Guide for Dry-Type Transformer Through-Fault Current Duration.
+ IEEE Standard C57.96: IEEE Guide for Loading Dry-Type Distribution and Power Transformers.
+ NFPA 70, National Electrical Code: Article 450, Transformers and Transformer Vaults.
+ IEEE Standard C57.12.01: General Requirements for Dry-Type Distribution and Power Transformers, Including Those with Solid-Cast and/or Resin-Encapsulated Windings.
+ IEEE Standard C57.12.51: Requirements for Ventilated Dry-Type Power Transformers, 501 Kilovolt-Amperes and Larger, Three-Phase, with High-Voltage 601 to 34 500 Volts, Low-Voltage 208Y/120 to 4160 Volts.
+ IEEE Standard C57.12.55: Transformers - Used in Unit Installations, Including Unit Substations - Conformance Standard.
+ IEEE Standard C57.12.56: Standard Test Procedure for Thermal Evaluation of Insulation Systems for Ventilated Dry-Type Power and Distribution Transformers.
+ IEEE Standard C57.12.58: Guide for Conducting a Transient Voltage Analysis of a Dry-Type Transformer Coil.
+ IEEE Standard C57.12.60: Guide for Test Procedures for Thermal Evaluation of Insulation Systems for Solid-Cast and Resin-Encapsulated Power and Distribution Transformers.
+ IEEE Standard C57.12.00: IEEE Standard for General Requirements for Liquid-Immersed Distribution, Power, and Regulating Transformers.
+ IEEE Standard C57.12.91: Standard Test Code for Dry-Type Distribution and Power Transformers.
+ IEEE Standard C57.110: IEEE Recommended Practice for Establishing Liquid-Filled and Dry-Type Power and Distribution Transformer Capability When Supplying Nonsinusoidal Load Currents.
https://www.esemag.com/articles/codes-and-standards-for-transformers/

DOE Proposes New Efficiency Standards For Distribution Transformers
New Rule Would Strengthen Grid Resiliency, Cut Carbon Emissions, and Deliver up to
$15 Billion in Savings to the Nation
WASHINGTON, D.C. — The U.S. Department of Energy (DOE) today proposed new
energy-efficiency standards for three categories of distribution transformers to
improve the resiliency of America’s power grid, lower utility bills, and significantly
reduce domestic carbon-dioxide (CO2) emissions. DOE's proposal represents a
strategic step to advance the diversification of transformer core technology, which
will conserve energy and reduce costs. Almost all transformers produced under the
new standard would feature amorphous steel cores, which are significantly more
energy efficient than those made of traditional, grain-oriented electrical steel. If
adopted within DOE's proposed timeframe, the new rule will come into effect in
2027.
“The Biden-Harris Administration continues to use every means available to reduce
America’s carbon footprint while strengthening our security posture and lowering
energy costs,” said U.S. Secretary of Energy Jennifer M. Granholm. “Efficient
distribution transformers enhance the resilience of our nation’s energy grid and
make it possible to deliver affordable electrical power to consumers in every corner of
America. By modernizing their energy-conservation standards, we’re ensuring that
this critical component of our electricity system operates as efficiently and
inexpensively as possible.”
DOE estimates that the proposed standards, if finalized, would reduce U.S. CO2
emissions by 340 million metric tons over the next 30 years—an amount roughly
equal to the annual emissions of 90 coal-fired power plants. DOE also expects the
proposed rule to generate over 10 quads of energy savings and approximately $15
billion in savings to the nation from 30 years of shipments.
The Administration is also working to address near-term supply chain challenges
and strengthen domestic manufacturing of key components in the electric grid. In
June, President Biden invoked the Defense Production Act to accelerate the
domestic production of clean energy technologies, including distribution
transformers and grid components. In October, DOE issued a Request for
Information to gather additional public input to determine how to maximize the
impact of these new authorities. The comment period closed on November 30th and
DOE is carefully considering the information submitted.
Additionally, as the supply of traditional, grain-oriented steel tightens, DOE is focused
on diversifying domestic steel production where capacity can be expanded, such as
in the production of amorphous steel used in advanced transformers. In support of
these efforts, DOE is also finalizing the implementation guidance for the distribution
transformer and extended product system rebate programs established by the
Energy Act of 2020 and funded by President Biden's Bipartisan Infrastructure Law.
This rebate program encourages the replacement of energy-inefficient distribution
transformers and extended product systems with more-efficient replacements.
A distribution transformer is a device used to change the voltage of electrical power.
A common sight on utility poles in neighborhoods throughout the country, these
transformers lower the voltage of electrical power before distribution to the
customer. Purchasers of distribution transformers are primarily electric utilities and
commercial or industrial entities.
Current efficiency standards apply to liquid-immersed, low-voltage dry-type, and
medium-voltage dry-type distribution transformers. DOE's proposed rule would
amend the energy conservation standards for all three categories.
On Thursday, February 16, 2023, DOE will host a public meeting to solicit feedback on
the proposed rulemaking from stakeholders.
DOE's Appliance and Equipment Standards Program implements minimum energy
conservation standards for more than 60 categories of appliances and equipment. As
a result of these standards, American consumers saved $63 billion on their utility
bills in 2015 alone. By 2030, cumulative operating cost savings from all standards in
effect since 1987 will reach nearly $2 trillion. Products covered by standards represent
about 20% of home energy use, 60% of commercial building use, and 30% of
industrial energy use.

4. Impedance match and maximum power (Please look at speaker standards here.)
Speaker Measurements: Replacing IEC 60268-5 (Part 1) - What Comes Next and Why
2023
Loudspeaker standards are and have been central to my working life, though for
many years I—and probably many people — did not realize why they were and are
so important. To explain, | need to go back many years to when | first got into
loudspeakers — in my case initially through electronics. | first came across
loudspeaker standards in my work at Impulse, Bowers & Wilkins, and Goodmans.
The key requirements almost inevitably came from customers, and this was
especially so when working in programming the test equipment used by our
automotive customers.
As a general rule, when you are making one-offs, precision and repeatability are not
(or should I say were not) as highly regarded as they are now. And that is the link
between standards and calibration. In order to calibrate something, you have to
have adequately defined standards, so that when you measure it, you can refine the
measurement by defining its measurement uncertainty.
Calibration Systems - Links to Metrology
Arguably some of the first defining characteristics of civilization are those of (1) record
keeping and (2) measurement. It's difficult to imagine one without the other,
so in all probability they both emerged together.
When you have a measurement, you need at some point to deal with uncertainty.
As a simple example, it is easy to count 10 coins, but what about counting grains of
rice? You can count 10 individual grains but what about the number of grains of rice
in a bag? A barn or a silo of grain? Taking another physical example, a length of a
beam? The distance between aircraft in the sky? Inherently there will be some
uncertainty in these (if only because the aircraft are moving), but how much, and how
can we define it so that we can have confidence in our measurements?
Even within “simple” measurements, uncertainty can quickly exceed the
measurement tolerance for many reasons; often these come down to the
measurement environment. The science of this is called metrology and was defined
by the International Bureau of Weights and Measures (BIPM, 2004) as “the science
of measurement, embracing both experimental and theoretical determinations at
any level of uncertainty in any field of science and technology.” I believe the key
words here are “determination of uncertainty.” We don’t know and can’t know a
measurement value absolutely, only within a tolerance of that value.
And how do we make reliable, consistent, and accurate measurements? The
obvious thing is first to determine why a measurement may suffer from variation(s)
and then to strive to eliminate as many of these cause(s) as possible. We then need
to consider how accurate we need to be. This largely depends upon the individual
usage of a measurement. With loudspeakers, it probably does not matter much if
the sensitivity of a speaker is off by 0.1dB; however, if we were calculating a
trajectory for a spacecraft, we would need a much higher degree of certainty in our
measurements.
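The first step named above, determining why a measurement varies, is often approached by simply repeating the measurement and reporting the spread. A minimal sketch of reporting a mean with a standard uncertainty; the five repeated sensitivity readings are invented for illustration:

```python
import statistics

# Five repeated sensitivity measurements of the same loudspeaker,
# in dB SPL at 1 W / 1 m. The readings are invented for illustration.
readings = [87.1, 87.3, 86.9, 87.2, 87.0]

mean = statistics.mean(readings)
# Standard uncertainty of the mean: sample standard deviation / sqrt(n).
u = statistics.stdev(readings) / len(readings) ** 0.5

print(f"sensitivity = {mean:.2f} dB +/- {u:.2f} dB")
```

Quoting the result as a value plus an uncertainty, rather than a bare number, is exactly the "determination of uncertainty" the BIPM definition emphasizes.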
So how do we guarantee any measurement? The first thing to say is that there is no
such thing as an absolutely accurate measurement—only one that is known to a
degree of uncertainty. The primary standards are known to a higher degree of
accuracy and have a smaller range of uncertainty. A “Hierarchy of Standards” is
defined as:
+ Primary standards
+ Secondary reference standards
+ Working standards
+ Laboratory standards

Primary standards are held in only a very few selected locations and organizations
throughout the world—these are the internationally agreed standards and are held
by the International Bureau of Weights and Measures, which is an
intergovernmental organization through which its 59 member-states act together
on measurement standards. An example was the standard meter, an example of
which is shown in Photo 1. The meter is now defined as the distance light travels in
a vacuum in 1/299 792 458 of a second...
Photo 1: At 36 rue de Vaugirard, in Paris, a mètre étalon (or standard meter) from
the 18th century is embedded in the exterior of a building.
A continuing trend in metrology is to eliminate as many as possible of the artifact
standards and instead define practical units of measure in terms of fundamental
physical constants, as demonstrated by standardized techniques. One advantage of
elimination of an artifact standard is that inter-comparison of artifacts is no longer
required. Another advantage is that the loss or damage of an artifact standard will
not disrupt the system of measures. Secondary reference standards are held by
individual countries’ primary calibration organizations (e.g., NPL in the UK and NIST
in the US).
Finally, working standards are held by calibration labs, usually commercial or
academic institutions, that reference the national standards organization to
calibrate their standards. Individual instrument companies in turn rely upon these
working standards for their calibration. Thus, in theory and in practice, an unbroken
calibration chain can be realized. The point to note is that at every stage there is a
measurement uncertainty; one of the prime aims of metrology is to determine it
and, of course, to minimize it at every stage.
Sound Standards
Sound pressure level (SPL) is the pressure level of a sound, measured in decibels
(dB). It is equal to 20 times the log10 of the ratio of the root mean square (RMS)
sound pressure to the reference sound pressure (the reference sound pressure in
air is 2 x 10^-5 N/m^2, or 0.00002 Pa). The calculation of SPL thus relies upon the
newton (N), a measure of force, and the square meter (m^2), a measure of area. SPL
is referenced to the nominal threshold of human hearing at 1 kHz, taken to be
0 dB SPL, or 20 µPa.
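The SPL relation above is simple enough to compute directly. A minimal sketch, using the 20 µPa reference from the text and, as a second point, a 1 Pa RMS tone (the level produced by a common acoustic calibrator):

```python
import math

P_REF = 20e-6  # reference sound pressure in air, Pa (20 µPa)

def spl_db(p_rms: float) -> float:
    """Sound pressure level in dB re 20 µPa: 20 * log10(p_rms / P_REF)."""
    return 20.0 * math.log10(p_rms / P_REF)

print(spl_db(20e-6))  # 0.0 dB: the nominal threshold of hearing at 1 kHz
print(spl_db(1.0))    # ~94 dB: a 1 Pa RMS tone
```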
In high volume applications — be that automotive, mobile, telecom, or consumer
products — it is essential that designs are safe and compatible between competing
products, and also that the same item is being made in the same way every time.
That is where standards and in particular international standards come in.
Standards and standards organizations enable people from individual companies
to contribute freely to the common good, so a standard represents a balanced,
consensual agreement by experts in that subject and the best implementation at a
given time.
A standard clearly outlines the overall results that can be expected by following a
general outline of a methodology. In this respect standards are like patents, which
grant a time-limited monopoly in return for the open disclosure of an idea: both
build upon and refer to earlier inventions and techniques.
A natural result of this is that standards need to evolve and they enable any
discipline to continually update itself to current best practice. As an example,
individual authors publish books or issue papers about loudspeakers through
journals in our field of interest, by the Acoustical Society of America (ASA), the Audio
Engineering Society (AES), and the Institute of Acoustics. However, these books and
papers have not and cannot have been fully peer reviewed, while the process of
publication as a standard requires a full peer review together with a public call for
comment. This ensures that a standard can be relied upon to achieve what it sets
out to do and, importantly, that the underlying science is correct.

I first came to loudspeaker standards around 1980, via IEC 60268-5 (which was first
issued in 1972). By 2003, the total Q-factor (Qt), the other Thiele-Small parameters, and
the compliance of a loudspeaker drive unit had been defined, along with the basic
measurement setup of an anechoic chamber, the IEC baffle, and various test boxes,
together with harmonic distortion, noise signals, and so forth.
The Abstract of the first edition summarizes it as: “Applies to sound system
loudspeakers, treated as entirely passive elements. Gives the characteristics to be
specified and the relevant methods of measurement for loudspeakers using
sinusoidal or specified noise signals.”
To better understand it, let us briefly look at the contents of IEC60268-5:2003,
which is the last major incarnation.
We had just four signals defined: Sine, Broadband and Narrow band noise, and
Impulsive. The measurement environment was characterized as free-field and half
space, diffuse, and simulated free-field, and the document specifies standard baffle
or enclosures, impedance measurement, total Q-factor (Qt), Vas, rated voltage
(Power), frequency range, and sensitivity, radiation angle and directivity index,
amplitude nonlinearity, THD, and stray magnetic fields.
However, loudspeaker theory and practice in 2021 are much more advanced
compared to the 2000s—let alone the 1980s! And a major update was arguably
overdue to make those standards applicable to all kinds of modern audio devices
whether active or passive.
There's also the need to cope with any input signal (analog, digital, wireless, or
something new), to include new measurement techniques (e.g., the Rub & Buzz test),
and to bring together manufacturing (QC) and system development (R&D).
Updates were also required to provide comprehensive information in a shorter
measurement time, simplify interpretation (e.g., root cause analysis), and to
increase flexibility to consider particularities of the applications (e.g., home,
automotive, personal, and professional).
These new considerations are approached in the updated standards:

+ IEC 60268-21: Output Based Acoustical Measurements
+ IEC 60268-22: Electrical and Mechanical Measurements on Transducers
+ IEC 63034: Micro-speakers
These multiple standards are necessary as increased use of loudspeakers in many
areas shows that a single standard is no longer enough.
Acoustical (Output Based) Measurements
The scope of the IEC 60268-21:2018 International Standard applies to passive and
active sound systems, such as loudspeakers, headphones, TV sets, multimedia
devices, personal portable audio devices, automotive sound systems, and
professional equipment. It applies to those loudspeakers that are best described as
“Black Boxes" in that they accept an input—analog, digital, radio, or something
else—and then produce an acoustic output. It is this acoustic output which is the
subject of the standard.
The evaluation of the Output is of course dependent upon the Black Box (Figure 1).
Figure 1: Acoustical output-based measurements are shown as defined in IEC
60268-21.
Here, a “Black Box” describes a system whose internal workings are (1) unknown,
or (2) whose internal status at any time is unknown; all that is known are the inputs
and the overall output. As in previous standards, measurements can be made in the
near field or the far field.
The outputs of concern are generally those that a user will directly notice:
amplitude (SPL) versus frequency (Hz) and distortion (% or dB), together with
maximum level.
The key is that specifications and measurements made under this standard are
relevant user output-based ones. The users have no knowledge of the internal
workings of the system. The specifications therefore are purely input to output
based, or ones that rely only upon measurable acoustic outputs such as:
+ SPL vs. Frequency
+ THD vs. Frequency
+ Output level vs. Distortion
So, IEC 60268-21:2018 is primarily focused on output-based measurements. It
updates the measurement techniques to include stimuli (chirp, multi-tone complex,
and burst), includes comprehensive physical evaluation of the acoustical output,
and assesses large signal performance (considering heating and nonlinearities).
Umax or SPLmax are rated by the manufacturer to calibrate the root mean square
(RMS) value of the stimuli.
The updated standard also extends acoustical measurements of loudspeakers
beyond the IEC baffle to include relative and end-of-line test chambers based upon
the tetrahedral test method, and considers a complete assessment of the 3D sound
field radiated by the loudspeaker in an anechoic environment (near and far field).
It also describes physical measurement of higher-order harmonics and impulsive
distortion in the time domain to assess Rub & Buzz and other loudspeaker defects.
It effectively forms a bridge between manufacturing (QC) and research and
development (R&D) measurement procedures.
Maximum Input and Output Signal
These measurements are required to allow effective measurements of both
conventional passive and more commonly of active loudspeaker systems (Figure 2).
However, to make such measurements we need first to define and calibrate our
input signals. The updated standard considers the new sinusoidal chirp, steady-state
two-tone signal, sparse multi-tone complex, and Hann-burst as input signals for
that purpose.
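As a sketch of one of these stimuli: a Hann-burst is a sine tone shaped by a Hann window so that it starts and ends at zero, avoiding switching transients. The sample rate, tone frequency, and burst length below are illustrative choices, not values taken from the standard.

```python
import numpy as np

fs = 48_000   # sample rate, Hz (illustrative)
f0 = 1_000    # burst tone frequency, Hz (illustrative)
cycles = 10   # burst length in cycles of f0

n = int(fs * cycles / f0)                 # number of samples in the burst
t = np.arange(n) / fs
burst = np.hanning(n) * np.sin(2 * np.pi * f0 * t)

# The Hann envelope is zero at both ends, so the burst switches on and
# off without a discontinuity.
print(burst[0], burst[-1])
```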
Figure 2: This graph shows the maximum input and output signal. The rated
maximum input value umax is good for DUTs with a single input and a constant
transfer function between input and output. And, the rated Maximum (output)
SPLmax is the universal approach for passive and active systems, can be applied to
any input channel, and can cope with gain controllers, equalizers, limiters and
more.
audioxpress.com/article/speaker-measurements-replacing-iec-60268-5-part-1-
what-comes-next-and

4. A stereo amplifier has an output resistance of 600 Ω. The resistance of the load
(speaker) is 4 Ω.

Task: Select the turns ratio of a transformer to obtain maximum power transfer.
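One way to check the required turns ratio: an ideal transformer reflects the load back to the amplifier as n^2 * R_load, and maximum power transfer occurs when that reflected resistance equals the source resistance. A quick numerical sketch with the problem's values:

```python
import math

R_SOURCE = 600.0  # amplifier output resistance, ohms
R_LOAD = 4.0      # speaker resistance, ohms

# Maximum power transfer: n**2 * R_LOAD == R_SOURCE, where
# n is the primary-to-secondary turns ratio.
n = math.sqrt(R_SOURCE / R_LOAD)
print(f"turns ratio n = {n:.2f} : 1")  # about 12.25 : 1
```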