
ISSN 2411-3050

NATIONAL TECHNICAL UNIVERSITY OF UKRAINE


“IGOR SIKORSKY KYIV POLYTECHNIC INSTITUTE”

POLYTECHNIC UNIVERSITY OF THE PHILIPPINES,


THE PHILIPPINES

Kyiv
November 23, 2023

XXIV INTERNATIONAL R&D ONLINE CONFERENCE FOR STUDENTS AND EMERGING RESEARCHERS
“SCIENCE AND TECHNOLOGY OF THE XXI CENTURY”

CONFERENCE PROCEEDINGS
PART I
Science and Technology of the XXI century Part I

УДК 330.341.1(063)

I-57

ISSN 2411-3050

Head of the editorial board:


Zoia Kornieva, Doctor of Pedagogy, Professor, Dean of the Faculty of Linguistics,
National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

The editorial board:


Emejidio C. Gepila JR, faculty member, College of Education, chief, Research
Evaluation and Monitoring Center, Polytechnic University of the Philippines, Manila, the
Philippines
Mariya Leshchenko, prof. dr hab, Polska Akademia Piotrkowska, Piotrków
Trybunalski, Poland
Yuliana Lavrysh, Dr. Sc., Professor, head of the Department of English for
Engineering №2, Igor Sikorsky Kyiv Polytechnic Institute
Oksana Synekop, Ph.D. in Pedagogics, Associate Professor, Department of English
for Engineering №2, Igor Sikorsky Kyiv Polytechnic Institute
Iryna Simkova, Dr. Sc., Professor, head of the Department of English Language for
Humanities №3, Igor Sikorsky Kyiv Polytechnic Institute
Kateryna Halatsyn, Ph.D. in Pedagogics, Associate Professor, Department of English
for Engineering №2, Igor Sikorsky Kyiv Polytechnic Institute

Science and Technology of the XXI Century: Proceedings of the XXIV International R&D Online Conference for Students and Emerging Researchers, November 23, 2023. Kyiv, 2023. Part I. 200 p.

The edition is recommended by the organizing committee of the Conference and approved by the Academic Council of the Faculty of Linguistics.

The edition features proceedings delivered at the Twenty-Fourth International R&D Online Conference for Students and Emerging Researchers ‘Science and Technology of the XXI Century’ held at the National Technical University of Ukraine ‘Igor Sikorsky Kyiv Polytechnic Institute’ on November 23, 2023.
The Conference attracted over 375 students and postgraduates.
The publication is intended for scholars, undergraduate and postgraduate students from Ukraine, Poland and Denmark involved in research and development work in different fields of science and technology.

The articles compiled in the book are reproduced without editorial interference as
they were presented by the authors.

© National Technical University of Ukraine ‘Igor Sikorsky Kyiv Polytechnic Institute’, 2023


SECTION: ENGINEERING INNOVATIONS

MECHANICAL PROPERTIES OF THE TENDON. TENDON RECOVERY AFTER AN UPPER EXTREMITY INJURY
Violeta Akhmedova, Larisa Tarasova
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: biomechanical properties, tendon, upper extremity injury.


Introduction. Today, understanding the mechanical properties of tendons (fibrous connective tissues that connect muscles to bones, enabling the transmission of force and movement of the skeletal system), including their strength, elasticity, and durability, is crucial in various fields such as orthopedics, sports medicine, and biomechanics. Tendon grafts are often used in surgical procedures to repair damaged or ruptured tendons. In addition, in conditions of war there is a massive need to restore tendons after upper extremity injuries, and unfortunately the number of victims is increasing every day. It is impossible to establish exact percentages, but they definitely exceed the rate of industrial injuries, because a large part of the territory of Ukraine is mined.
Objectives. Based on the studied sources, to determine the main mechanical characteristics of tendons and consider a method of restoring the tendons of the upper limb, listing the main advantages and problems of this method.
Methods. Without understanding the basic characteristics of tendons, it is impossible to understand how to repair a damaged tendon. A review of recent publications highlighted the following mechanical properties of tendons:
1. Tendons possess high tensile strength, meaning they can withstand
significant pulling forces. This property is essential for their role in transmitting
muscle-generated forces to bones;
2. Tendons are elastic, allowing them to stretch and recoil. This elasticity
helps absorb and store energy during muscle contractions, preventing sudden jerky
movements;
3. Tendons are relatively stiff, which ensures efficient transmission of forces.
However, they are not as stiff as bones, providing some flexibility to the
musculoskeletal system;
4. Tendons are designed to withstand repetitive loading and have good
fatigue resistance. They can endure cyclic stresses without undergoing significant
damage;
5. Ultimate failure stress is the stress at which a tendon will rupture or fail completely. It varies among individuals and tends to be higher in larger tendons (Bi et al., 2023).
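These quantities can be made concrete with a small calculation: stress is force divided by cross-sectional area, and ultimate failure stress is simply that value at the rupture point. The following sketch uses purely illustrative, assumed numbers (the 2000 N load and 50 mm² cross-section are hypothetical, not taken from the cited study):

```python
def tendon_stress_mpa(force_n: float, cross_section_mm2: float) -> float:
    """Engineering stress in MPa: force (N) over cross-sectional area (mm^2).

    1 N/mm^2 equals 1 MPa, so no further unit conversion is needed.
    """
    return force_n / cross_section_mm2

# Assumed example values (hypothetical, for illustration only):
# a tendon with a 50 mm^2 cross-section loaded with 2000 N.
stress = tendon_stress_mpa(2000.0, 50.0)
print(f"stress = {stress:.0f} MPa")  # 2000 N / 50 mm^2 = 40 MPa
```

If this computed stress reaches the tendon's ultimate failure stress, rupture is expected; larger tendons carry the same load at lower stress because of their larger cross-section.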
In contrast to the lower limb, preserving the upper limb is strongly favored over
amputation and the use of prosthetics. Primary and secondary tendon repair of the
upper limb are important aspects of injury and damage management depending on the
degree of injury.
Primary tendon repair is a procedure in which damaged tendons are repaired by direct suturing. When the graft is placed closer to the center, the Pulvertaft weave technique is used to fix the graft at the near end of the tendon (Figure 1). The Pulvertaft weave technique involves weaving the graft through the motor tendon at least 3–5 times and anchoring it with a suture. This technique simplifies the adjustment of graft tension, gives the repair higher strength compared to end-to-end suturing, and reduces the risk of a gap and subsequent rupture of the graft. Successful primary repair can lead to restoration of tendon functionality and limb mobility. However, an unsatisfactory suture or other complications can lead to decreased function or even tendon atrophy (Chattopadhyay et al., 2015).

Fig. 1. Schematic depiction of the Pulvertaft weave technique for upper limb tendon repair
Secondary repair of upper limb tendons is used in case of failure of primary
repair or in complicated cases and requires grafts or other methods of tendon
reconstruction. Secondary recovery may help restore tendon function, but results may
be less satisfactory compared to primary recovery. This process can also take longer to
fully recover (Elliot, 2011).
Results. Based on the comparison of the two articles, primary repair provides
better results and faster recovery, but requires more skills and experience. Secondary
recovery may be necessary in complex cases, although the results may be less
satisfactory and require more time to restore tendon function. In each case, the decision
on the recovery method should be based on specific circumstances and requires
an individual approach.
Conclusion. Upper limb tendon repair is critical to ensure upper limb
functionality and mobility after injury or damage. The main mechanical characteristics of tendons, such as tensile strength, elasticity, stiffness, fatigue resistance and ultimate failure stress, have been determined.
References
Bi, C., Thoreson A.R., & Zhao, C. (2023). Improving Mechanical Properties of Tendon
Allograft through Rehydration Strategies: An in Vitro Study. Bioengineering,
10(6): 641. https://doi.org/10.3390/bioengineering10060641
Chattopadhyay, A., McGoldrick, R., Umansky, E., & Chang, J. (2015). Principles of
tendon reconstruction following complex trauma of the upper limb. Semin Plast
Surg., 29(1):30–9. doi: 10.1055/s-0035-1544168. PMID: 25685101; PMCID:
PMC4317277
Elliot, D. (2011). Staged tendon grafts and soft tissue coverage. Indian J Plast Surg.,
44(2):327–36. doi: 10.4103/0970-0358.85354


STONE PAPER IS AN INNOVATIVE MATERIAL


Anna Bezrodnova, Oleksandra Hres
Educational and Scientific Institute for Publishing and Printing,
National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”
Keywords: stone paper, mineral paper, calcium carbonate.
Introduction. Nowadays, the issue of using large amounts of paper wisely is quite sensitive, as too many exhaustible natural resources are used to make paper. Today, scientists are trying to find new methods of creating paper without cutting down tonnes of trees. Seeking new methods of producing paper without resorting to wood pulp is important for several reasons, primarily related to the environment, sustainability and resource conservation. Stone paper can serve as an example of a solution to this problem.
Objectives. The main aim is to learn about mineral paper and compare ordinary
paper with stone paper.
Methods. Mineral paper consists of organic materials (such as plastic) that are melted at high temperature and then rolled into large sheets of paper. This process does not require as much water, or as many solvents, chemical additives and bleaching agents, to produce high-quality paper (Truffula, 2019).
An example of mineral paper is rock paper, which is made from calcium carbonate and high-density polyethylene (HDPE). It is no worse than standard paper in quality, and has refined parameters. For example, stone paper is more durable and resistant to abrasion, and is suitable for printing products of all kinds. A comparison of conventional paper with rock paper (Macdonald, 2015) is shown in Table 1.
Table 1 — Comparison of regular paper with stone paper

Composition. Traditional paper: wood pulp, sourced from trees such as pine, spruce, or eucalyptus. Stone paper: calcium carbonate (70–80%) derived from limestone or waste marble, with a small amount of polyethylene (20–30%) as a binder.

The impact on the environment. Traditional paper: tons of trees are cut down and a lot of water is used in production. Stone paper: reducing tree felling and water usage has a positive impact on the environment.

Energy consumption. Traditional paper: production is energy-intensive, involving several stages of processing and drying. Stone paper: generally requires less energy during manufacturing due to simplified processing.

Durability. Traditional paper: can be susceptible to tearing, moisture damage, and degradation over time. Stone paper: known for its durability, tear resistance, and resistance to water and moisture; it is less likely to degrade or yellow over time.

Printing compatibility. Traditional paper: well-suited to a wide range of printing techniques, including offset, digital and inkjet printing. Stone paper: suitable for inkjet or solid-ink printers and for offset, letterpress, gravure and flexographic printing, but does not respond well to very high-temperature laser printers.

Recycling. Traditional paper: suitable for recycling and utilisation. Stone paper: fully recyclable into a secondary material such as cardboard.

Use cases. Traditional paper: widely used for books, magazines, newspapers, office documents, packaging, etc. Stone paper: commonly used for waterproof or tear-resistant applications, such as maps, labels, packaging and notebooks.


The table shows that mineral paper has the same quality as traditional pulp paper.
However, it should be noted that rock paper has certain advantages: it doesn’t require
chemical whiteners, has a high density, and is waterproof. This means that stone paper
basically has the characteristics that we get from pulp paper by adding fillers and using
a large amount of equipment.
Results. Having reviewed the information on the composition and properties of mineral paper, we can see the advantages of mineral paper over traditional pulp paper. It is possible to abandon the exploitation of large amounts of natural resources (water, trees), chemical additives and a large amount of equipment in favour of stone, which can be used to create high-quality stone paper.
Conclusion. Mineral paper, especially rock paper, is a step towards the
development of eco-friendly printing. Considering all the benefits of mineral paper,
people need to think about how we destroy forests to produce pulp paper. The study of
mineral paper is a relevant topic because the whole world is now concerned about
nature and the environment. The printing industry must also move in the direction of
sustainability, so replacing traditional paper with mineral paper is a significant
contribution to our future.
References
Macdonald, F. (2015). Researchers are turning old plastic bottles into waterproof
paper. ScienceAlert. https://www.sciencealert.com/researchers-are-turning-old-
plastic-bottles-into-waterproof-paper
Truffula (2019). Briefing Document: Stone Paper. https://vdocument.in/stone-paper-
carbonate-with-a-plastic-resin-as-a-binder-it-is-found-on-the-global.html?page=13

STUDY OF THE INFLUENCE OF EXTERNAL ELECTRONICS ON THE AERODYNAMICS OF CONVERTED AMMUNITION, IN PARTICULAR OF VARIOUS SENSORS AND ANTENNAS
Olexii Bychkov
Faculty of Radio Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: aerodynamics, antennas, ammunition, guided ammunition, airflow, aircraft.
Introduction. The study of the effect of various electronic sensors on the
aerodynamics of ammunition is an important direction in the design of conversion kits
for ammunition. After all, today, electronic sensors are widely used in conversion kits
to increase their accuracy and stability during flight. However, placing sensors and other electronic components on the external surface of the device directly affects the aerodynamics of such a kit (Boord, Hoffman, 2016).
Objectives. The purpose of this study is to investigate which external elements
of the ammunition kit have the greatest effect on the aerodynamics of the entire kit and
why.
Since this type of aircraft belongs to the military, there are no freely available studies of such objects. However, it is important to know how to design these types of aircraft nowadays, given the lack of necessary information about them and the need for this type of aircraft for military purposes.
Methods. This work uses a model of the NATO 81 mm projectile conversion kit (Quality Austria, 2017), which was created in the «SOLIDWORKS» software environment. The proposed kit is designed to control ammunition when it is dropped vertically from an unmanned aerial vehicle. The program «FloEFD» was used to simulate air flows. Fig. 1 shows the projectile installed in the conversion kit.

Figure 1 – View of the researched model


The presented model has an antenna 80 mm long, a control surface with a fairing for a servo drive, and a sensor with a rough shape measuring 50×20×30 mm. In the nose part there is a recess intended for the camera lens. A distance of 1 mm is deliberately left between the control surface and the fairing to study its effect on aerodynamics (Chen, Zhigong, Wenzheng, Mingxin, 2019).
During the research, the nose speed was set to 50 m/s. This parameter was based on the calculation that the converted ammunition weighs 5 kg (4.2 kg for the ammunition, 0.8 kg for the conversion kit) and falls from a height of 100 m, which gives approximately 44.7 m/s according to the formula:

V = √(2gh)

The value of 50 m/s is taken to account for possible air or wind irregularities.
After the calculations, the results are shown in Fig. 2.

Figure 2 — Study results (panels a, b, c)


The cutout for the camera in the front part of the device has the greatest impact on aerodynamics. This is one of the most important areas, where incorrect changes can lead to significant aerodynamic problems, as is observed in this study.
It can be concluded that the optimal method to reduce the effect of this cutout
can be the installation of a transparent cover, which will continue the shape of the entire
nose part. Another option for placing the camera with a smaller impact on
aerodynamics can be to move the camera to one of the sides of the ammunition with
the development of the design of the appropriate fairing. However, such a decision will
have an impact on the control process, because it will not be possible to see exactly
what is under the nose of the ammunition.
The sensor, which has a rough shape, greatly influences aerodynamics (Fig. 2, a). This allows us to conclude that if the sensor must be brought to the outside of the device, a special fairing should be made for it. At the same time, it is necessary to consider the displacement of the center of gravity due to the moment created between the sensor and the existing center of gravity.
It can be seen that the opening between the fairing for the servo drive and the
control surface has a rather large effect on aerodynamics (Fig. 2, c) because a decrease
in pressure is observed behind it, while this is not observed behind the wing itself.
The results presented earlier allow us to conclude that this hole must be made of
the minimum size so that the aerodynamic control surface transitions into the fairing
as smoothly as possible.
The results also show that the antenna has the least impact on aerodynamics (Fig. 2, b), because it has a streamlined shape and small dimensions. But the speed graph (Fig. 3) shows a higher speed on the antenna side than on the other side. Thus, the ammunition drifts in the direction of the antenna. This can be explained by the moment that arises between the end of the antenna and the surface of the conversion kit.

Figure 3 – Changes in speed due to the antenna


Conclusion. According to the results of this study, it was found that electronic
sensors and antennas worsen the aerodynamic properties of ammunition, changing its
shape and center of gravity.


To achieve the maximum efficiency of ammunition, it is important to take into account the influence of external electronics on aerodynamics and to comply with the requirements for aerodynamic stability when installing electronic components.
References
Boord, W., Hoffman, J. (2016). Air and Missile Defense Systems Engineering. New
York: CRC Press, 268.
Chen, G., Zhigong, T., Wenzheng, W., Mingxin, X. (2019). A novel aerodynamic
modeling method for an axisymmetric missile with tiny units. Advances in
Mechanical Engineering, 11, 9.
81 MM SYSTEM. (2017). Quality Austria, 11, 1–2.

SUSTAINABILITY IN FORMULA ONE


Kristina Dakhal
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: combustion engine, synthetic fuel, carbon dioxide footprint, e-fuel.


Introduction. Sustainability is all about efficiency and doing more with less. The world is moving towards new ways of saving energy, and so is Formula One, the most advanced series of motor racing. A Formula One car uses a combustion engine, which tends to have a high carbon dioxide footprint. Therefore, some might say that switching to electric cars would be the best solution to the sustainability issue. However, this is not entirely true: a complete switch to electric engines is hardly possible in the near future. For now, there are simply not enough energy resources and power stations in Europe. Besides, the process of manufacturing an electric car is more complicated and energy-consuming.
Objectives. To look into F1 sustainability strategies, examine how they can be implemented nowadays, and analyze their efficiency.
Methods. There are ways to create carbon-neutral fuels: they can be made from plant matter or synthetic materials. Fuels synthesized from biomass or waste using chemical or thermal processes are called synthetic biofuels. They are also known as second-generation fuels, as mostly waste products from food production are used in the process. There are also synthetic fuels manufactured using captured carbon dioxide together with low-carbon hydrogen, commonly referred to as e-fuels (“The Royal Society”, 2019). For example, the famous car manufacturer and F1 team Mercedes-Benz cut emissions by about 90 per cent in 2022 by using biofuels and investing in sustainable aviation fuel, among other programs. In addition, F1 and its governing body, the Federation Internationale de l’Automobile, decided to gradually deploy power
units that are a 50-50 combination of a small capacity internal combustion engine with
an electric hybrid system, each generating 350kW of power. This is a significantly
higher proportion of electric power than the current hybrid engines. And they will run
on zero emission e-fuels. F1 will use its huge global platform to showcase these fuels,
which will, in due course, become available at retail pumps around the world
(Allen, 2023).


Results. Solving the sustainability challenges of the future is difficult, as not all the solutions exist today. That is also an exciting opportunity, because it allows the sport to be at the cutting edge of innovation and to create new ways of reducing the carbon dioxide footprint.
Conclusion. From ground-breaking aerodynamics to improved brake designs,
the progress led by F1 teams has benefitted millions of cars on the road today. Few people know that the current hybrid power unit is the most efficient in the world, delivering more power using less fuel, and hence less CO2, than any other road car (“Formula One”, 2019).
References
Allen, J. (2023, May 27). Formula One accelerates towards sustainability goals. Special Report “The Business of Formula One”. https://www.ft.com/content/dce87fac-c173-4ede-b9cc-a5205ea87656
Carey, C. (2019, November). F1 sustainability strategy. Formula One.
https://corp.formula1.com/wp-content/uploads/2019/11/Environmental-
sustainability-Corp-website-vFINAL.pdf
Hutchings, G., & Davidson, M. (2019, January). Sustainable synthetic carbon based
fuels for transport. The Royal Society. https://royalsociety.org/-
/media/policy/projects/synthetic-fuels/synthetic-fuels-briefing.pdf

3D MODELING AS A SCIENCE
Oleg Gordiy
Faculty of Radio Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: 3D modeling, science, geometry, mathematics, algorithms, art, design, games, infrastructure, medicine.
Introduction. 3D modeling is a versatile and dynamic scientific discipline that
has gained significant importance in various fields such as computer graphics,
animation, engineering, medicine, archaeology, and more. This report explores 3D
modeling as a science and its role in shaping our digital and physical worlds.
Objectives.
1960s – The Dawn of Computer Graphics. The origins of 3D modeling can be traced back to the early days of computer graphics. Ivan Sutherland developed the first computer-based interactive 3D graphics system, called "Sketchpad", in 1963. It allowed users to create and manipulate 3D objects on a computer screen.
1970s – Wireframe Modeling. In the 1970s, wireframe modeling became popular. This technique represented 3D objects as a collection of lines and vertices. It was primarily used in early computer-aided design (CAD) systems.
1980s – The Emergence of Solid Modeling. The 1980s saw the advent of solid modeling, a more sophisticated approach that represents 3D objects as a collection of interconnected surfaces and volumes. This period marked significant advancements in CAD and engineering applications.
1990s – Texture Mapping and Animation. During the 1990s, 3D modeling and computer graphics began to incorporate texture mapping, allowing for more realistic and detailed rendering. Video games and animated films started to use 3D models extensively for characters and environments.
2000s – Virtual Reality and Simulation. The 3D modeling field expanded into various domains, including virtual reality and simulation. This decade witnessed the development of more complex and realistic 3D models, as well as a surge in real-time 3D graphics in gaming and interactive applications.
2010s – 3D Printing and Medical Imaging. 3D modeling played a crucial role in the rise of 3D printing, being used to create 3D-printed objects and prototypes. Additionally, the medical field leveraged 3D modeling for patient-specific implants and surgical planning.
2020s – AI and Real-time Ray Tracing. In recent years, artificial intelligence (AI) and machine learning have been employed to help generate 3D models more efficiently. The gaming industry has also introduced real-time ray tracing for incredibly realistic graphics in video games (Koshel, 2019).
Methods. Creating 3D models involves various methods and techniques,
depending on the specific requirements and the field of application. Here are some
common methods of creating 3D models:
polygonal modeling (typically in 3D modeling software), NURBS modeling, sculpting, 3D scanning, photogrammetry, CAD (computer-aided design), procedural generation, 3D printing, VR and AR modeling, AI and generative models, and parametric modeling (Katwinkel, 2017).
Results. Presently, 3D modeling is used extensively in various fields, including
entertainment, engineering, medicine, archaeology, and education. The future holds the
promise of further advancements, with increased use of augmented reality (AR), virtual
reality (VR), and the integration of 3D models into everyday life, such as for
architectural visualization, interior design, and e-commerce.
Conclusion. 3D modeling is a multifaceted scientific discipline that has evolved
significantly since its inception. Its interdisciplinary nature, rooted in computer
science, mathematics, and art, enables its diverse range of applications in various
industries. As technology continues to advance, 3D modeling will undoubtedly play an
increasingly pivotal role in shaping our digital and physical worlds.
References
Katwinkel, P. (2017). Unity 3D and PlayMaker Essentials: Game Development from
Concept to Publish, Second Edition. CRC Press.
Koshel (2019). Designing industrial robots and manipulators.

CYBER-PHYSICAL TRANSFORMATION IN THE 21ST CENTURY INDUSTRY
Artur Halynskyi
Educational and Research Institute of Mechanical Engineering, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: cyber-physical transformation, Industry 4.0, social inequality, privacy, virtual dependency.
Introduction. As we stand on the threshold of another industrial revolution,
creating entirely new types of production and opening doors to meta-universes, it is
impossible to ignore how new opportunities have the potential to deepen and radicalise
old problems and create new ones. I would like to focus on three problems that reflect the challenges of Industry 4.0.
Objectives. To focus on the issues of technological unemployment, social inequality, and privacy concerns.
In the context of the contemporary world, there are social patterns that allow us
to predict the realities of life in the era of the new Industry. Social stratification, as it did two hundred years ago, plays one of the most important roles in the socio-political stability of countries.
of countries. The new era can be described by a couple of words: “Cyberphysical
transformation and full automation” (Manogaran, Khalifa, Loey, & Taha, 2023), which
also brings us back to the notion of “Technological Unemployment”, described in the works of the English economist John Maynard Keynes (1930). This type of unemployment occurs when the number of people employed in production or services falls due to the introduction of new technologies; it has both a short-term and a long-term nature and does not depend on a country's level of development. It will eventually lead to the reduction or complete disappearance of old professions, as in past revolutions. The robotisation of many tasks may reduce the demand for low- and medium-skilled labour, which will hurt the material wealth of the vast middle class and raise the financial barrier to developing high-level skills, so that labour continues to be poorly paid with no opportunity to remedy this; many social lifts will close. These facts will severely affect social inequality in developing countries and widen the wealth gap between countries. This may lead to unbalanced political systems, for example the rise of populism, fundamentalism or militarism.
Results. One of the main challenges of Industry 4.0, and of the 21st century in particular, is the issue of privacy. Privacy is no longer a social norm. People have gained limitless possibilities for sharing and receiving information. However, IT giants have gained access to the information of millions of people: which shops a person buys from, which films they watch, which political parties they support. In unscrupulous hands such information makes it possible not merely to serve targeted advertising but to manipulate public opinion: which politician to vote for, which events to cover and how. Therefore, mechanisms are being developed that allow a permissible limitation of the link between real and virtual life. The Council of Europe was the first organisation to think seriously about the need to strengthen the protection of human rights in search engines and social networks.
Conclusion. The inseparability of the Internet from modern human life and the increasing development of technology have created a new kind of human existence –
a symbiosis of real and virtual reality. This trend was especially strongly fuelled by the
restrictions associated with the COVID-19 pandemic, which created a widespread need
to communicate and work remotely. However, the development of Internet capabilities
is changing the rules of the game very quickly and dramatically, and sometimes new
phenomena emerge that we do not even have time to follow. Lack of control and clearly
defined norms of use can negatively affect children in the first place, creating
dependence on the virtual world. Loss of interest in the real world, as well as the loss and underdevelopment of skills for communicating with peers, will lead to an inability to adapt fully in society and an inability to imagine life without constant contact with the computer, making such children virtual “addicts”.
New opportunities create new problems for which there are no clear-cut
solutions; only a careful and controlled transition to new technologies will avoid the
upheavals of the new industrial revolution.
References
Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and Human Behavior
in the Age of Information. Science.
Manogaran, G., Khalifa, N.E.M., Loey, M., & Taha, M.H.N. (Eds.). (2023). Cyber-
Physical Systems for Industrial Transformation: Fundamentals, Standards, and
Protocols (1st ed.). CRC Press. https://doi.org/10.1201/9781003262527
Maynard Keynes, J. (1930). Economic Possibilities for our Grandchildren, in Essays
in Persuasion (New York: Harcourt Brace, 1932), 358–373.
Milanovic, B. (2016). Global Inequality: A New Approach for the Age of Globalization.
Young, K. S. (1998). Internet Addiction: The Emergence of a New Clinical Disorder.
Journal of Behavioral Addictions.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future
at the New Frontier of Power.

INNOVATIONS IN AIRCRAFT ENGINEERING: TRANSFORMING THE FACE
OF COMBAT ADVANTAGE AND SAFETY IN MODERN ARMED FORCES
Andrii Horbul, Kyrylo Bychko
Faculty of Applied Mathematics, National Technical University of Ukraine
"Igor Sikorsky Kyiv Polytechnic Institute"

Keywords: engineering, aviation, innovation, combat.


Introduction. The modern battlefield differs greatly from what we know from the
recent past, such as World War II: with the appearance of modern aviation, designed
to be multi-role and able both to gain air superiority and to support troops on the
ground, warfare has shifted from month-long sieges to swift ground operations. Our
country knows this better than anyone else. With air raids almost every day, it has
become clear that aerial superiority is key to effective offense as well as defense.
Objectives. This article's objective is to examine and highlight innovations in
aeronautical engineering that have enhanced combat capabilities and military
operational safety.
Methods. Acknowledging the constraints regarding the research resources
accessible to students, we primarily made use of easily accessible online materials for
our investigation, mostly online research articles and journals.
Results. The first innovation considered here is the use of aerodynamically
unstable airframes to boost the maneuverability of aircraft in close-quarters combat.
The first aircraft to use this design approach was the F-16 Fighting Falcon, created by
General Dynamics (later part of Lockheed Martin). During development it became
clear that maneuverability is one of the main requirements of a modern aircraft. The
technology of the time made it possible to feed data from many sensors into a powerful
onboard computer that performs the necessary calculations and makes micro-corrections
in flight, while giving the pilot complete control over the direction of the aircraft. The
second innovation is stealth technology, used to reduce the detectability of aircraft by
modern radars. The detection range is proportional to the fourth root of the radar
cross-section (Pezzati, 2020, p. 7). The point is to decrease the airframe's radar
cross-section so that the aircraft appears on the radar as a much smaller return, which
may even be treated as radio interference and ignored. The first "stealth" aircraft was
the F-117 Nighthawk, made by Lockheed. To achieve stealth, its designers had to
accept compromises such as lower engine thrust due to losses in the inlet and outlet,
a very low wing aspect ratio, and a high sweep angle (50°) needed to deflect incoming
radar waves to the sides.
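The fourth-root relationship between detection range and radar cross-section can be illustrated with a short calculation. The RCS reduction factor below is purely illustrative, not an official figure for any aircraft:

```python
def relative_detection_range(rcs_ratio: float) -> float:
    """Radar detection range scales with the fourth root of the radar
    cross-section (RCS), so an RCS ratio maps to ratio ** (1/4)."""
    return rcs_ratio ** 0.25

# Shrinking the RCS to 1/1000 of its original value (e.g. from a few
# square metres to a few thousandths of a square metre) cuts the
# detection range to about 18% of the original.
print(round(relative_detection_range(1 / 1000), 3))  # 0.178
```

This is why even dramatic RCS reductions buy comparatively modest range reductions: the fourth root strongly compresses the effect.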
Conclusion. Such innovations increase the combat capability of aircraft, giving
them an advantage over competitors and making it far more likely that those aircraft
will win coming air operations, achieve aerial superiority, and thereby suffer much
lower casualties. It is in our best interest to invest in, research, and develop this field
so that we no longer have to pay such a high price in the event of war.
References
Peiris, H.C., Nirmal, P.V., Bandara, H., Mahindarathne, D., Rangajeeva, S., &
Bandara, R. (2015). Aerodynamics Analysis of F-16 Aircraft.
Pezzati, A. (2020). Radar Detection and Stealth Bomber: What Future for Stealth
Technology? doi:10.13140/RG.2.2.22213.06884.

FUTURE OF OPTICAL TELESCOPES


Nikita Horobets
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: astronomy, telescopes, optics.


Introduction. Exploration of the world beyond our own has captivated the minds
of humanity for centuries. Answering questions about the nature of the cosmos was, and
still is, one of the main priorities of scientific communities. One tool has been of great
help in this effort: the telescope. The optical telescope, to be precise. From the earliest
drafts of the 16th century to the soon-to-be constructed Extremely Large Telescope
(ELT), telescopes have been pivotal in expanding our understanding of astronomy.
However, with further technological advancements, it becomes apparent that optical
telescopes might soon become obsolete, with the ELT probably being the last ground-
based observatory ever built.

Objectives. This paper aims to explore the transition from optical telescopes to
their alternatives, explain the possible reasons for that transition, and examine the
alternatives themselves.
Methods. There are several reasons for the obsolescence of optical astronomy,
one of them being the technological limitations of optical telescopes. Optical telescopes
are constrained to the narrow range of light wavelengths visible to the human eye. Those
wavelengths are extremely small in comparison to, for example, the radio waves
captured by radio telescopes. While light waves can be as minuscule as one-celled
organisms, the length of a radio wave can range from the diameter of a grain of rice to
the radius of the Earth (Manning, 2018). Because of this, working with visible light
requires exceptional precision when it comes to construction and operation of optical
telescopes (ELT (Extremely Large Telescope), 2021). The other factor that poses
significant challenges for optical astronomy is Earth’s atmospheric interference. Light
and air pollution, unsteadiness in the atmosphere, and poor weather conditions degrade
telescope image quality and limit observational capabilities of optical telescopes.
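The scale difference between optical and radio wavelengths follows directly from the relation λ = c/f. A short sketch, using typical illustrative frequencies (green light at roughly 600 THz, FM broadcast radio at roughly 100 MHz):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in metres for a wave of the given frequency."""
    return C / frequency_hz

visible = wavelength_m(6e14)   # green light: ~0.5 micrometres
fm_radio = wavelength_m(1e8)   # FM radio: ~3 metres
print(f"{visible:.2e} m vs {fm_radio:.1f} m")
```

The ratio between the two is roughly six million, which is why radio instruments tolerate far coarser surfaces and pointing than optical mirrors do.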
The main reason for the decline of optical telescopes, however, is much simpler: the
sheer cost (van Belle, Meinel, & Meinel, 2004, para. 2.3). The construction of the
Extremely Large Telescope will cost around one billion euros for the first construction
phase (European Southern Observatory [ESO], 2023). Overwhelmingly Large
Telescope (OWL), a conceptual design previously intended to be built in the
foreseeable future, was estimated to cost around 21 billion euros (van Belle et al., 2004,
para 2.3). Because of the cost of building such a telescope, OWL’s construction was
postponed indefinitely. The cost of an optical telescope grows exponentially compared
to its effectiveness (van Belle et al., 2004).
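As a rough illustration of this cost scaling, one can assume a power-law model cost ∝ D^p. The exponent of 2.5 and the baseline figures below are assumptions for this sketch, not the fitted values from van Belle et al.:

```python
def scaled_cost(base_cost: float, base_aperture_m: float,
                aperture_m: float, exponent: float = 2.5) -> float:
    """Extrapolate telescope cost under an assumed power law
    cost ~ aperture ** exponent."""
    return base_cost * (aperture_m / base_aperture_m) ** exponent

# Scaling a ~1-billion-euro, 39 m ELT up to OWL's planned 100 m
# aperture already yields a cost of roughly ten billion euros.
print(round(scaled_cost(1e9, 39, 100) / 1e9, 1))  # 10.5
```

Even under this deliberately conservative exponent, aperture growth drives costs into the tens of billions, consistent with OWL's indefinite postponement.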
As for alternatives, radio telescopes are the most common. They operate across
a wide range of easily captured radio frequencies and demand far less construction
precision. Radio waves are also less affected by atmospheric conditions, so radio
telescopes can operate in a wider range of locations and environments.
Results. The future of optical telescopes, which have been the main tool of
astronomical observation for centuries, is not bright. However, it doesn’t mean optical
telescopes can’t be useful again in the context of modern astronomy. The late 20th
century has seen the development of adaptive optics and space telescopes, which aim
to overcome problems listed above. Optical telescopes can also be integrated into
multi-wavelength observatories that combine data from various telescopes and
instruments operating at different wavelengths.
Conclusion. In conclusion, optical telescopes are limited in their use by a wide
variety of factors, including wavelength coverage and upkeep costs. Radio telescopes
offer easier, more stable and reliable observations. Still, optical telescopes play a crucial
role in astronomy, revitalized through modern methods and technologies.
References
ELT (Extremely Large Telescope). (August 23, 2021). Earth Observation Portal.
https://www.eoportal.org/other-space-activities/elt#development-status
Facts about the ELT. (2023). European Southern Observatory.
https://elt.eso.org/about/facts/

Manning, C. (August 31, 2018). What are radio waves? The National Aeronautics and
Space Administration. https://www.nasa.gov/general/what-are-radio-waves/
van Belle, G., Meinel A.B., & Meinel, M. P. (2004). The Scaling Relationship Between
Telescope Cost and Aperture Size for Very Large Telescopes. Lowell
Observatory.

THE FUTURE OF RADIO ENGINEERING


Dmytro Huk, Ivan Domashenko
Faculty of Informatics and Computer Engineering, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: 5G communication, Internet of Things (IoT), quantum signal
processing, critical challenges.

Introduction. In today's digital age, the field of radio engineering is undergoing
a transformation, driven by rapid technological advances and the growing demand for
wireless communications and connectivity. Wireless engineering, including the design,
development, and optimization of wireless communications systems, is at the forefront
of the innovations shaping our connected world. The future of wireless technology
promises exciting possibilities, from the development of smart cities and autonomous
vehicles to the continued development of wireless healthcare technology. To grasp the
full scope of these perspectives, it is essential to embark on a journey through the
rapidly evolving landscape of radio engineering and its transformative impact on our
interconnected world.
Objectives. As we face the complex challenges and opportunities of the 21st
century, the future of wireless engineering is poised for transformation. This
transformation will be driven by the convergence of many factors, including
advancements in semiconductor technology, the advent of 5G communications, the
proliferation of Internet of Things (IoT) devices, and the growing need for secure and
efficient wireless communications.
Methods. The next generation of radio engineering promises to unlock exciting
possibilities. With the continuous miniaturization of electronic components, we can
anticipate the development of smaller, more energy-efficient radio devices, paving the
way for wearable technologies, smart infrastructure, and even implantable medical
devices that communicate seamlessly with the world around us.
One of the most promising areas for the application of radio technologies is
quantum signal processing. Quantum computers can process large volumes of data
much faster than classical computers. This opens up opportunities to improve radio
engineering systems that require significant computing power. For example, we can
more accurately model and analyze the complexity of the radio spectrum, helping to
solve problems related to interference and signal mixing.
Furthermore, the rollout of 5G communication networks is set to revolutionize
the way we connect and interact. It supports the unique combination of high-speed
connectivity, very low latency, and ubiquitous coverage, making it natively suitable for
supporting IoT use cases (Cheruvu, Kumar, Smith, Wheeler, 2019).

Results. This new era of wireless communications will not only bring lightning-
fast data speeds, but also enable widespread adoption of virtual and augmented reality
applications, autonomous vehicles, and industrial IoT. Radio engineering will play
a central role in designing the infrastructure needed to support these innovations.
Amid these technological advances, there are critical challenges to address. The
efficient utilization of the radio frequency spectrum, the mitigation of interference, and
the enhancement of cybersecurity measures are of paramount importance.
Additionally, ethical considerations surrounding the potential health effects of
prolonged exposure to electromagnetic radiation and responsible e-waste management
need to be carefully considered.
The study results suggest that as 5G IoT applications become widespread, billions
of IoT-connected devices will be operational and transmitting data continuously. As
a consequence, energy consumption will be massive and will keep increasing with
every passing moment (Malik, Parihar, Bhushan, Chaganti, Bhatia, & Astya, 2023).
Conclusion. The future of radio engineering represents an exciting frontier
where innovation and connectivity converge. It is a field that will not only shape our
interconnected world but also challenge us to address critical issues related to
sustainability, security, and ethics as we work to build a future more connected and
technologically advanced.
References
Cheruvu, S., Kumar, A., Smith, N., Wheeler, D.M. (2019, 14 August). Demystifying
Internet of Things Security. Connectivity Technologies for IoT.
https://link.springer.com/chapter/10.1007/978-1-4842-2896-8_5
Malik, A., Parihar, V., Bhushan, B., Chaganti, R., Bhatia, S., Astya, P.N. (2023,
30 August). 5G and Beyond. Security Services for Wireless 5G Internet of
Things (IoT) Systems. https://link.springer.com/chapter/10.1007/978-981-99-
3668-7_9

ADDITIVE MANUFACTURING (3D PRINTING): A NEW ERA IN
MODERN TECHNOLOGIES
Dmytro Ivanytskyi
Educational and Research Institute of Mechanical Engineering, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: 3D printing, additive manufacturing, stereolithography, rapid
prototyping, customization, cost-effective production.

Introduction. Three-dimensional (3D) printing, often referred to as additive
manufacturing (AM), stands as a transformative technology that has revolutionized the
way we design and produce objects. Unlike traditional subtractive manufacturing
methods that involve cutting away material from a solid block, 3D printing adds
material layer by layer, enabling the creation of intricate and complex structures with
precision and efficiency (Androshchuk & Kopyl, 2016). This innovative technique has
found applications across various industries, from aerospace and healthcare to art and
consumer goods, reshaping the landscape of manufacturing and design.
Objectives. The aim of our study was to provide an overview of the underlying
techniques of 3D printing that underpin this revolutionary manufacturing method.
Methods. In our research we used the methods of comparison, analysis and
synthesis of information in the area of additive manufacturing.
Results. Additive manufacturing is a process of joining materials to make
objects from 3D model data, usually layer upon layer, as opposed to subtractive
manufacturing methodologies (Frazier, 2014). This process can be achieved using
various materials, including plastics, metals, and ceramics. The technology has gained
popularity due to its ability to produce complex geometries that are challenging to
achieve with traditional manufacturing methods. 3D printing has found applications
across various manufacturing sectors. It enables rapid prototyping, reducing
development times and costs significantly. Moreover, customization is a key
advantage, allowing for the production of tailored products and components. Industries
such as aerospace, healthcare, automotive, and fashion have integrated 3D printing into
their production processes to enhance efficiency and innovation.
Stereolithography, also known as 3D printing SLA (Stereolithography
Apparatus), is not only one of the world's earliest 3D technologies but also one of the
most precise methods of additive manufacturing. In some ways, it is unique because it
utilizes a liquid photopolymer resin as its consumable material. The essence of this
technology lies in exposing the photopolymer to specific algorithms (determined by
a slicing program based on a 3D model). Under the influence of laser radiation, the
resin solidifies, forming the final object (Schmidleithner & Kalaskar, 2018).
Powder bed fusion (PBF) methods use either a laser or an electron beam to melt
and fuse material powder together. Electron beam melting (EBM) methods require
a vacuum but can be used with metals and alloys in the creation of functional parts
(Frazier, 2014).
AM also has the potential for mass production of complex geometries such as
lattice structures, where traditional manufacturing methods such as casting are not
straightforward and require further time-consuming tooling and post-processing.
However, improvements in fabrication speed and cost reduction must come through
better machine design, and the high cost and time consumption of the AM process
remain major hurdles that inhibit mass production.
Conclusion. In conclusion, 3D printing has ushered in a new era in
manufacturing, offering unprecedented flexibility, speed, and cost-effectiveness. This
technology has transformed the way products are designed, developed, and produced.
With its continued advancement and adoption, we can expect 3D printing to play an
even more prominent role in shaping the future of manufacturing.
References
Androshchuk, G.O. & Kopyl, Y.V. (2016). 3D Printing in the Era of Innovative
Technologies: Regulatory Challenges. Intellectual Property in Ukraine, 5, (pp.1–7).

Frazier, W.E. (2014). Metal Additive Manufacturing: A Review. Journal of Materials
Engineering and Performance, 23, 1917–1928. https://doi.org/10.1007/s11665-014-0958-z
Schmidleithner C. & Kalaskar, D.M. (2018). Stereolithography. In D. Cvetković (Ed.),
3D Printing (pp.1-22). https://doi.org/10.5772/intechopen.78147

APPLICATION OF ARTIFICIAL INTELLIGENCE IN
MANUFACTURING AUTOMATION
Yaroslav Kaliberda
Educational and Research Institute of Mechanical Engineering, National
Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute"

Keywords: Artificial Intelligence (AI), Manufacturing Automation,
Manufacturing Optimization.

Introduction. Automation has long been used extensively in the manufacturing
sector to boost productivity and efficiency. However, the industry is going through
a tremendous shift as a result of the development of artificial intelligence (AI). The use
of AI in manufacturing is expanding beyond task automation to include process
optimization, data analysis, and innovation (Mandal, 2023). Artificial intelligence is
a field of computer science that deals with creating programs and systems capable of
analyzing data, making conclusions, and making decisions that would typically require
human involvement. This leads to increased efficiency, reduced costs, and improved
production quality.
Objectives. The purpose of this work was to analyze applications and
advantages of artificial intelligence in manufacturing automation.
Methods. To achieve the purpose, we used the method of literature review on
the topic of artificial intelligence and its application in manufacturing automation.
Results. Artificial intelligence is completely changing the manufacturing sector
by altering how procedures are planned and carried out. AI gives machines the ability
to learn from data, see patterns, and make wise judgments. It includes innovations like
deep learning, machine learning, and natural language processing (Mandal, 2023). One
of the key applications of AI is in production forecasting and optimization due to its
ability to analyze large volumes of data and forecast product demand. This allows
companies to optimize production processes, manage inventory, and minimize
resource or time wastage. Artificial intelligence can be used in quality control since it
can accurately detect defects and deviations in products, reducing the number of
defective items and improving the quality of the final product. The use of AI in robotic
manufacturing is another example. It is employed in production to control robots and
automated systems. The use of AI in robotics has ushered in the concept of
collaborative robots, also referred to as “cobots” that can efficiently interact with
humans and follow their instructions (Bharadiya, Thomas, & Ahmed, 2023).
AI-equipped robots can perform various tasks, from assembling products to
packaging and sorting. Another important area of AI use is Machine Learning, which
enables machines to learn from accumulated experience. This means that over time,
systems become smarter and more adaptable to changing production conditions.
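Real quality-control systems rely on trained models such as machine vision, but the underlying idea of flagging deviations can be shown with a minimal statistical stand-in. The measurements and threshold below are invented for illustration:

```python
import statistics

def flag_outliers(measurements: list[float],
                  z_threshold: float = 2.0) -> list[int]:
    """Return indices of parts whose measurement deviates from the
    batch mean by more than z_threshold standard deviations
    (a classic control-chart rule)."""
    mean = statistics.fmean(measurements)
    stdev = statistics.pstdev(measurements)
    return [i for i, x in enumerate(measurements)
            if abs(x - mean) > z_threshold * stdev]

# Shaft diameters in mm; the part at index 4 is clearly off-spec.
batch = [10.01, 9.99, 10.02, 10.00, 10.60, 9.98, 10.01]
print(flag_outliers(batch))  # [4]
```

An AI-based inspector generalizes the same decision, learning what "in spec" looks like from images or sensor streams rather than from a single measurement.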
Adopting artificial intelligence into industrial procedures has many advantages,
including reduced costs, higher product quality, greater predictive maintenance, and
more precise demand forecasts. Manufacturers can maintain their competitiveness in
a market environment that is becoming more dynamic and complicated by utilizing AI-
enabled optimization (Mandal, 2023). Automation using AI helps to boost productivity
and reduce time and monetary costs in production. It also allows for a reduction in the
workforce and a decrease in working hours, contributing to higher profits. Another
important benefit of using AI in manufacturing automation is cost reduction. AI aids
in optimizing labor and raw material costs, as well as reducing equipment maintenance
expenses through more accurate fault prediction.
Among the major benefits of AI is the enhancement of the quality of
manufactured goods. By controlling and detecting defects through AI, companies can
achieve higher product quality, leading to increased customer satisfaction. Integrating
AI into manufacturing promotes economic growth for companies and regions by
creating new job opportunities and increasing production output.
Conclusion. The application of artificial intelligence in manufacturing
automation is becoming an increasingly important trend. It leads to increased
efficiency, cost reduction, and improved production quality. Companies that invest in
the development and implementation of AI gain competitive advantages in the market
and contribute to the economic growth of their regions.
References
Bharadiya, J. P., Thomas, R. K., & Ahmed, F. (2023). Rise of Artificial Intelligence in
Business and Industry. Journal of Engineering Research and Reports, 25(3),
85–103. https://doi.org/10.9734/jerr/2023/v25i3893
Mandal, S. (2023, September 26). Transforming the Manufacturing Industry: How
Artificial Intelligence is Driving Innovation.
https://www.linkedin.com/pulse/transforming-manufacturing-industry-how-
artificial-driving-mandal

INNOVATIONS IN THE MANUFACTURE OF PEROVSKITE SOLAR
CELLS
Valentyna Kalytiuk
Faculty of Electric Power Engineering and Automatics, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: perovskite solar cells, solar energy, full precursor method.


Introduction. Perovskite solar cells have become an established part of the solar
energy landscape. Their evolution has been quite noteworthy, witnessing a remarkable
efficiency surge from a modest 3% in 2009 to an impressive 20% as of 2023
(Perovskite-info, 2023). Despite this rapid advancement, numerous obstacles must still
be overcome for these solar cells to become a commercially competitive technology.

Objectives. A notable impediment to the acceptance of perovskite solar cells has
been the presence of lead, which is inherently unsustainable from an ecological
standpoint. Hence, the principal objective of the present paper was to address this
environmental concern and consider the current projects to tackle this issue.
Methods. In pursuit of a viable solution, researchers from Nanyang
Technological University (NTU), in collaboration with Singapore's Institute of
Materials Research and Engineering (IMRE), devised a method to encapsulate
perovskite solar cells using non-toxic metal materials (TenTek, 2023). They developed
a special compound designed
to shield the perovskite layer, safeguarding it from environmental influences while
simultaneously enhancing its performance.
Results. NTU scientists in Singapore have identified that the zinc-based
compound known as PEA2ZnX4, produced through the full precursor (FP) technique,
can serve as a lead-free protective layer for perovskite cells. Employing the FP method,
these scientists successfully fabricated a 1-inch by 1-inch prototype solar cell coated
with the zinc-based compound (Solar Magazine, 2022). Remarkably, this intervention
left the electrical properties of the underlying perovskite layer unaffected, while also
rectifying surface defects and enhancing overall performance.
Conclusion. Furthermore, this innovative approach not only renders perovskite
layers more ecologically sustainable and stable but also more efficient. It circumvents
the need to extract lead ions from perovskite layers for the conventional lead-based top
layers, thereby opening up avenues for alternative materials and cladding layers in the
realm of perovskite technology.
References
Perovskite-Info. (2023, June 03). Perovskite Solar. https://www.perovskite-
info.com/perovskite-solar
Solar Magazine. (2022, May 16). Perovskite Solar Cells: An In-Depth Guide +
Comparisons with Other Techs. https://solarmagazine.com/solar-
panels/perovskite-solar-cells/
TenTek. (2023, April 21). A new environment friendly way to make perovskite cells
greener. https://www.tentekenergy.com/index.php/about/315.html

THE NEW AIR DEFENSE SYSTEM OF UKRAINE


Maksym Klymenko
Faculty of Radio Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: drones, anti-air, cheap, propeller plane.


Introduction. Today all armies have begun to use large numbers of drones for
reconnaissance, attack, and as decoy targets, and thousands of videos show how
effective they are. The most widespread anti-air defenses at present are machine guns
mounted on vehicles, anti-aircraft guns, and infantry weapons.
The most effective anti-air asset, however, is the airplane. Pilots destroy many
drones, and drones cannot defend themselves against planes, because they are many
times slower. The speed of the Shahed 136 is 120–180 km/h, of the Lancet 90–120 km/h,
and of the Orion 120–200 km/h; any jet aircraft can destroy these drones (Boffey, 2023).
A big problem with jets is the price of one hour of flight, which can exceed
$30,000 ($10,000–30,000+). Supersonic speeds also greatly increase overall
maintenance and operating costs, and our jets are a priority target for enemy Lancets
and missiles (Plane4you, n.d.; Flight on the L-29 Fighter, n.d.).
Objectives. To understand how to make anti-air aircraft cheaper and more useful,
to make the destruction of enemy drones cheaper and quicker, and to provide more
inexpensive aircraft for a range of missions.
Methods. A review of propeller aircraft prices, covering both purchase cost and
the cost of one hour of flight. The cost of even a few dozen propeller planes is less than
the cost of a single jet (Kuper, 2023): for the price of one jet, a hundred (or even more)
light propeller planes can be bought or manufactured. Fairly cheap machine guns or
small-caliber guns can be mounted on them to shoot down Lancets and Shaheds. Their
protection and camouflage will not be a problem, because many UAVs cost several
times more than one such aircraft, let alone a missile. The runway length required by
many models is also much shorter than for modern jets, which eases the requirements
for airfields.
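The cost argument above can be made concrete with simple arithmetic. The jet flight-hour figure echoes the text; the purchase prices and the propeller-plane flight-hour cost are assumptions for this sketch only:

```python
# All figures in USD; propeller-plane numbers are assumed for illustration.
JET_HOUR_COST = 30_000    # upper flight-hour cost cited in the text
PROP_HOUR_COST = 300      # assumed flight-hour cost of a light propeller plane
JET_PRICE = 30_000_000    # assumed purchase price of one modern jet
PROP_PRICE = 300_000      # assumed price of one light propeller plane

# Propeller planes bought for the price of a single jet:
print(JET_PRICE // PROP_PRICE)          # 100
# Propeller flight hours afforded per one jet flight hour:
print(JET_HOUR_COST // PROP_HOUR_COST)  # 100
```

Under these assumptions, both the fleet-size and the flying-time advantage are about two orders of magnitude, which is the core of the proposal.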
Results. This would provide hundreds of anti-air planes with full safety for pilots,
because they would operate 10–20 km from the front line. Altogether they would cost
about as much as 2–3 modern jets and could also help our military forces with other
tasks, such as evacuating the wounded, transporting small loads and perhaps, like
helicopters, launching light rockets (100–200 kg, possibly 300+ in the future) (The
Chinese UAV Tengoen TB-001 is loaded with air-to-surface missiles: what is known, 2022).
Conclusion. Wars are won by the side that stays one step ahead of the enemy.
This decision may look strange, but in fact it would be a big surprise for the enemy. In
the future we could cover almost our entire sky with hundreds of cheap anti-aircraft
planes that, with certain modifications, would be able to shoot down even missiles
(like some Chinese anti-aircraft drones).
References
Boffey, D. (2023). Revealed: Europe’s role in the making of Russia killer drones.
https://www.theguardian.com/world/2023/sep/27/revealed-europes-role-in-the-
making-of-russia-killer-drones
Kuper, T. (2023). Tom Kuper: chomu NATO ne postachaie Ukraini zakhidni
vynyshchuvachi? [Tom Cooper: why does NATO not supply Western fighter jets to
Ukraine?]. https://mediavektor.org/44993-tom-kuper-pochemu-nato-ne-postavlyaet-
ukraine-zapadnye-istrebiteli.html
Kytaiskyi BPLA Tengoen TB-001 napkhanyi raketamy klasu “povitria-zemlia”:
shcho vidomo [The Chinese UAV Tengoen TB-001 is loaded with air-to-surface
missiles: what is known] (2022). https://focus.ua/uk/digital/540172-kitayskiy-bpla-
tengoen-tb-001-mozhet-vypuskat-rakety-klassa-vozduh-zemlya-chto-izvestno
Plane4you. (n.d.). Centrum sprzedaży samolotów. https://www.plane4you.eu/
Polet na Ystrebytele L-29. [Flight on the L-29 Fighter]. (n.d.).
https://www.czech-jet.com/ru/polet-na-istrebitele-l-29

THE IMPACT OF WEARABLE TECHNOLOGIES ON OUR LIVES


Nadiia Kostenko
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: wearable technologies, monitoring health, employee safety.


Introduction. In recent years, wearable technologies have been improving
rapidly. But how exactly do they affect people's lives, and which aspects do they
influence? These gadgets, from fitness trackers and smartwatches to VR headsets and
smart rings, have noticeably altered society: the way it interacts, works, monitors health
and secures itself.
Objectives. The main goal is to distinguish the changes made by wearables
when it comes to health, communication, entertainment and workplace, and why
people wear them.
Methods. To begin with, most wearables are created to continuously monitor
health: for instance, heart rate, blood oxygen, sleep patterns, stress, menstrual cycle,
activity, and so on. Anybody with such a device can keep multi-year records that help
determine and analyze patterns in their health metrics (Tiga Healthcare, n.d.).
Apart from healthy users, wearables are also used by patients; medical-grade
devices can detect serious conditions such as arrhythmias and other cardiac
abnormalities. Furthermore, with special gadgets, diabetics can be alerted to low or
high blood sugar levels, removing the need for constant tests, and people with asthma
can detect an attack before it reaches an advanced stage, while it can still be treated
with medicine.
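The alerting logic such a glucose monitor performs can be sketched as a simple threshold check. The thresholds below are common illustrative values, not medical guidance:

```python
def glucose_alert(reading_mmol_l: float,
                  low: float = 3.9, high: float = 10.0) -> str:
    """Classify a continuous-glucose-monitor reading against
    illustrative low/high thresholds."""
    if reading_mmol_l < low:
        return "LOW: possible hypoglycemia"
    if reading_mmol_l > high:
        return "HIGH: possible hyperglycemia"
    return "in range"

print(glucose_alert(3.1))   # LOW: possible hypoglycemia
print(glucose_alert(5.6))   # in range
```

Real devices add trend analysis (rate of change over recent readings) so that an alert can fire before a threshold is actually crossed.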
Not only can wearables warn about risks, they can also prevent some of them.
The Q-Collar, worn on the neck, can help avoid sports traumas and concussions by
stabilizing the brain inside the skull (Kruglyak, 2020).
Wearables can also track employee safety in high-risk occupations. For example,
measuring critical biometrics, as well as collecting location information when needed,
can ensure the safety of hired workers and reduce injury rates, which is essential in
industries like construction. Such equipment typically has a self-alert button that
notifies the supervisor of a safety issue and the need for help. In addition, the sensor
reports an accident in the event of any slips, trips, or falls.
What is more, such technologies can nowadays be worn as stylish accessories
and serve as a way to express yourself. Indeed, a smartwatch strap of the right colour
can complete a look, but consider a more fashionable example: Project Jacquard, the
first smart jean jacket for urban cyclists, presented by Google and Levi's, enables users
to perform straightforward tasks like changing music, blocking or answering calls, or
accessing navigation information by tapping, swiping or holding the left sleeve cuff
(Arthur, 2016).
Results. More than 1 billion people worldwide use wearable technology, both in
everyday life and at work, and the outlook for further growth is promising
(Beckman, 2023). This reflects users' trust in these products and in the assistance,
convenience and style they provide.
Conclusion. Wearable technologies have a great impact on our daily lives,
including how we communicate, how productive we are, how we access information,
and how we stay entertained. Thanks to them, many patients with a variety of
conditions, as well as workers, are much safer, with reduced rates of injury and even death.
References
Arthur, R. (2016, May 20). Project Jacquard: Google and Levi’s Launch the First
‘Smart’ Jean Jacket for Urban Cyclists.
https://www.forbes.com/sites/rachelarthur/2016/05/20/exclusive-levis-and-googles-
project-jacquard-launch-wearable-tech-jacket-for-urban-cyclists/?sh=7b5e1dc850c7
Beckman, J. (2023, August 17). 15 Wearable Technology Statistics [2023 Edition].
https://techreport.com/statistics/wearable-technology-statistics/
Kruglyak, I. (2020, December 9). 20 examples of wearables and IoT disrupting
healthcare. https://www.avenga.com/magazine/wearables-iot-healthcare/?region=ua
Tiga Healthcare. (n.d.). The Rise of Wearable Health Devices and Their Impact
on Patient Monitoring. https://www.tigahealth.com/the-rise-of-wearable-health-
devices-and-their-impact-on-patient-monitoring/

THE MILLENNIUM PROBLEM: THE NAVIER-STOKES EQUATION


Sofiia Kulyk
Education and Research Institute of Aerospace Technologies, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Incompressible Flows, Regularity, Blow-Up, Navier-Stokes
Equations, Euler Equations, Clay Millennium Problem, the essence of the problem,
solution problems.
Introduction. The Navier-Stokes equations are fundamental in fluid mechanics,
describing fluid motion. Despite their significance in science and engineering, there
remains an incomplete theoretical understanding of their solutions. For systems of 3D
equations, mathematicians have not yet established a proof for the existence of
a continuous solution, under specified initial conditions. This unresolved issue is
known as the Navier-Stokes existence and smoothness problem, which pertains
specifically to incompressible fluids. The problem can be divided into three parts in
this way: prove that a solution exists; a solution must exist at every point in space; the
solution should be smooth. This means that a small change in the initial conditions
leads to only a small change in the result (Bazzi, 2020; Millennium Prize Problems).
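In standard notation (this form is not quoted from the cited sources, but is the conventional statement of the system), the incompressible Navier–Stokes equations read:

```latex
% Incompressible Navier–Stokes equations:
% momentum balance plus the divergence-free (incompressibility) constraint.
\begin{aligned}
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  &= -\frac{1}{\rho}\,\nabla p + \nu\,\Delta\mathbf{u} + \mathbf{f}, \\
\nabla\cdot\mathbf{u} &= 0,
\end{aligned}
```

where u is the velocity field, p the pressure, ρ the constant density, ν the kinematic viscosity and f an external force. The Millennium Problem asks whether, in three dimensions, smooth initial data always yield a smooth solution for all time.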
This Millennium Problem plays a very important role in fluid mechanics,
because the equations describe the motion of any viscous incompressible Newtonian
substance: how ketchup flows from a bottle, how lava spreads during a volcanic
eruption, glacier flow regimes and their variations, ocean currents, weather conditions,
pollution modeling, blood flow within our bodies, airflow around objects
(aerodynamics), rocket exhaust gases, hot gases in stars – everything related to fluids
is, in the background, primarily governed by the Navier-Stokes equations. Moreover,
the solution of these equations involves one of the most difficult problems of modern
physics – the problem of turbulence. Although turbulence is a fairly common
phenomenon, it remains almost unstudied, which is why it is practically
unpredictable (Mishra, 2020).
The novelty of the problem lies in the fact that it requires proving a fundamental
theoretical property, namely the existence of solutions.
Objectives. This theoretical paper aims to explain the essence of the Navier-
Stokes equations and to demonstrate their important role in fluid mechanics.
The tasks are to describe the Navier-Stokes equations and the difficulties that
arise in solving them; to explain the mathematical concept behind the equations; and
to emphasize their value for fluid mechanics.
Methods. The methodology involves a systematic review and analysis of
literature, scientific works and Internet resources. Using theoretical data and
mathematical concepts, this article explores the problems and complexities underlying
the solution of the Navier-Stokes equation and the significance of these problems in
fluid mechanics.
Results. The results of this theoretical exploration encompass a thorough
treatment of the problems and difficulties that arise in solving the Navier-Stokes
equation. A serious difficulty is that the flow under consideration is three-dimensional.
Because it remains uncertain whether solutions exist for three-dimensional cases
involving fluid motion in the X, Y, and Z directions, the numerical solutions are far
from unique. This lack of uniqueness is easy to grasp: when you pour water into a glass,
it never follows the same path twice, exhibiting erratic and seemingly random
behavior. Additionally, the solutions derived from these equations are not consistently
smooth; at specific points along the fluid's flow, the calculated velocity can blow up to
infinity, a phenomenon not observed in real-world scenarios.
Moreover, turbulence, a well-known characteristic of fluids, remains shrouded in
mystery. Predicting how fluid flows behave under turbulent conditions remains
a significant challenge (Mishra, 2020).
Conclusion. The Navier-Stokes equations play a very important role in modern
physics, and they pose one of the primary and, at the same time, most challenging
problems for mathematicians. If existence and smoothness are proved, then, knowing
the state of a liquid at a certain moment in time and the characteristics of its motion,
we will be able to predict the behavior of an incompressible liquid for the entire future.
The Navier–Stokes equations prove valuable as they elucidate the underlying
principles of numerous scientific and engineering phenomena. They find applications
in simulating weather patterns, ocean currents, the movement of water in pipelines, and
the aerodynamics of airflow around an aircraft wing.
References
Bazzi, A. (2020, Sep 6). The Navier-Stokes Equations. A simple introduction to
a million dollar problem. https://www.cantorsparadise.com/the-navier-stokes-
equations-461f7453d79e

Mishra, A. A. (2020, Sep 15). Navier Stokes equations – the million dollar problem.
https://medium.com/@ases2409/navier-stokes-equations-the-million-dollar-
problem-78c01ec05d75
Millennium Prize Problems (n.d.). Encyclopedia, Science News & Research Reviews.
https://academic-accelerator.com/encyclopedia/millennium-prize-problems

AEROSPACE TECHNOLOGY TRENDS


Alona Kyrylenko
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Aerospace technology, trends, advanced materials, autonomous
flight systems, artificial intelligence.
Introduction. Nowadays, there are many different activities related to the
exploration and utilization of space in the global space economy. With the many
diverse aspects of the aerospace industry, there are many opportunities for growth and
innovation that will make these explorations a simpler and more efficient process.
Objectives. The main goal is to review the current aerospace industry outlook
and to project forward-looking trends as the industry continues to improve.
Methods. Aerospace engineering consists of aeronautics and astronautics; it is
a diverse industry with a multitude of commercial, industrial, and military applications
(CMTC, 2021, Aerospace Technology section). Thus, aerospace technology refers to
the engineering, testing, and servicing of aircraft and spacecraft. Technicians may be
involved in the construction, maintenance, testing, operation, and repair of systems
associated with reliable and reusable space launch vehicles and related ground support
equipment.
Despite various constraints, the space industry is growing steadily. What's more,
as funding continues to increase and costs decrease, this opens up more and more
opportunities. One of the areas of development is the use of advanced materials.
Progress in the development of advanced materials is expected to focus on the
combination of functions such as energy harvesting, cloaking, structural and personal
health monitoring.
Implementing autonomous technologies has been a growing trend across several
industries, and the aerospace industry is no exception (CMTC, 2021, Autonomous
Flight Systems section). Much of this has been concentrated on expanding autonomous
flights with the ultimate goal of launching flights without humans. This has already
happened with drones, although this technology will certainly need to be expanded
before it is ready for passenger aircraft and long-haul travel.
The aerospace technology industry is also gaining from artificial intelligence and
the use of machine or active learning in research and education (CMTC, 2021, AI
section). Artificial intelligence can solve much more complex problems than humans
and can run thousands of results in a few seconds compared to how long it takes the
human brain to analyze information.

Results. Aerospace technology consists not only of aeronautics and
astronautics: the research, design, manufacture, operation and servicing of aircraft
and spacecraft involve the work of many organizations.
Conclusion. Aerospace is a field that is going to witness many changes moving
forward. Aerospace engineering isn't just going to have a bright future; it is going to
be revolutionary in more ways than one. Innovative technologies and manufacturing
processes are being developed continuously, and manufacturing is already reaping
the benefits.
References
CMTC. (2021, July). 10 emerging aerospace technology trends you'll want to know
about. https://www.cmtc.com/blog/emerging-aerospace-technologies

COMPARING CHARACTERISTICS OF CHEMICAL AND ION
REACTIVE ENGINES
Kostiantyn Liakhovoi, Oleksandr Petrichenko
Education and Research Institute of Aerospace Technologies, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: rocket propulsion technologies, rocket engines, chemical engines,
electric propulsion, engine efficiency.
Introduction. Nowadays rocket launches, especially orbital ones, have become
routine. But long-term space flights face a constant problem – strict mass limitations,
where propellant mass is often many times larger than the useful payload. Ion engines
are an alternative to chemical engines for in-space maneuvers. Unlike chemical
rocket engines, they use electromagnetic fields to expel mass. Let us compare
some of their characteristics and scopes of possible usage.
Objectives. The main objective is to understand the differences in technical
characteristics between chemical and ion-powered rocket engines and to compare
them.
Methods. Scientific articles and development notes, as well as open-source data
published by NASA, ESA and JHUAPL, have been reviewed and analyzed. A search
through specialized websites and internet encyclopedias has also been performed.
Results. Chemical rocket engines are currently the only way to launch a rocket
into orbit. Their working principle is based on mixing fuel and oxidizer in a combustion
chamber and expelling the resulting gases through a Laval nozzle (Mishra, 2017).
However, this engine type has drawbacks in long-term space flights, owing to the
necessity of carrying a large mass of fuel and oxidizer. The technical complexity of
reigniting a chemical engine in vacuum is also a problem. The main advantage of
chemical reactive engines is their ability to convert chemical energy into kinetic energy
rapidly, though the efficiency of this conversion is rather low. The efficiency of an
engine can be described by its specific impulse (Isp) (Interstellar Travel. Purpose and
Motivations, 2023), expressed as the ratio of thrust force to the product of free-fall
acceleration and propellant mass flow rate:

$$ I_{sp} = \frac{F_T}{g \cdot Q_M} $$
As a result, we get a value in seconds: the time during which a given mass of
propellant can sustain a thrust equal to its own weight. For chemical engines, this
impulse varies from about 280 seconds for solid-fuel engines to 450 seconds for
liquid-fuel ones (Interstellar Travel. Purpose and Motivations, 2023). Due to the
mentioned drawbacks, engineers face the challenge of developing more efficient
engines for deep-space flights. The first successful prototypes were developed in the
1950s by NASA (Sovey et al., 1993); the idea of using ionized air had been proposed
earlier by K. Tsyolkovsky. Ion thrusters stand as an alternative to chemical ones when
efficiency is prioritized over thrust. An ion engine creates thrust by ionizing molecules
of a noble gas and accelerating them through an acceleration grid using electromagnetic
fields. Ions leaving the engine reach velocities of up to 100 km per second; for
comparison, the velocity of gas leaving a chemical engine's nozzle is about 3 to
4 km per second. The specific impulse of ion engines varies from 2,500 to 10,000
seconds, up to ten times that of chemical engines, but their thrust force is much
lower (Table 1) (Graham & Reckart, 2023). The mass of a standard ion engine is about
50 kg, while chemical engines weigh over 260 kg.
Table 1
Characteristic        Ion                                Chemical
Fuel                  Noble gases                        Liquid or solid propellant plus oxidizer (if needed)
Thrust efficiency     Isp = 2500–10000 s                 Isp = 280–450 s
Converted energy      Electromagnetic field energy       Chemical energy
Range of thrust       25–250 mN                          800–7887 kN
Mass                  ≈50 kg                             From 260 to 9750 kg, depending on purpose
Scope of use          Space maneuvers                    Liftoff, space maneuvers
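As a quick numerical check of the specific-impulse formula, here is a short sketch. The thrust and mass-flow figures are hypothetical, chosen only so that the resulting Isp values land in the ranges quoted above:

```python
G0 = 9.81  # standard free-fall acceleration, m/s^2

def specific_impulse(thrust_n: float, mass_flow_kg_s: float) -> float:
    """Isp = F_T / (g * Q_M), returned in seconds."""
    return thrust_n / (G0 * mass_flow_kg_s)

# Hypothetical liquid-fuel chemical engine: large thrust, large propellant flow.
chem_isp = specific_impulse(thrust_n=1_000_000.0, mass_flow_kg_s=226.5)

# Hypothetical gridded ion thruster: millinewton thrust, microgram-scale flow.
ion_isp = specific_impulse(thrust_n=0.236, mass_flow_kg_s=5.9e-6)

print(f"chemical Isp = {chem_isp:.0f} s")  # ~450 s, upper end of the 280-450 s range
print(f"ion Isp = {ion_isp:.0f} s")        # falls inside the 2500-10000 s range
```

The contrast makes the trade-off in Table 1 concrete: the ion thruster's flow rate is eight orders of magnitude smaller, which is exactly why its specific impulse is so much higher despite its tiny thrust.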


Conclusion. Chemical and ion rocket engines differ in their scopes of use and
working principles, but both can be improved through the following measures: more
effective propellants, nozzle optimization, minimization of engine mass and
dimensions, increased electromagnetic field power, and optimization of the
acceleration grid. These changes can help achieve the required performance.
References
Interstellar travel: Purpose and motivations (L. Johnson & K. Roy, Eds.). (2023).
Elsevier Inc.
https://www.sciencedirect.com/book/9780323913607/interstellar-travel#book-description
Graham, S. & Reckart, T. (2023, January 25). Gridded Ion Thrusters (NEXT-C).
https://www1.grc.nasa.gov/space/sep/gridded-ion-thrusters-next-c/
Mishra, D.P. (2017). Fundamentals of Rocket Propulsion. CRC Press.
https://ftp.idu.ac.id/wp-content/uploads/ebook/tdg/DESIGN SISTEM DAYA
GERAK/Fundamentals of Rocket Propulsion.pdf
Sovey, J. S., Hamley, J. A., Patterson, M. J., Rawlin V. K., & Sarver-Verhey, T. R.
(1993). Ion Thruster Development at Nasa Lewis Research Center. National
Aeronautics and Space Administration Lewis Research Center.
https://www.researchgate.net/publication/24318556_Ion_thruster_development
_at_NASA_Lewis_Research_Center

SUPERCOMPUTERS: UNLEASHING THE POWER OF
COMPUTATIONAL EXCELLENCE
Vladyslav Logvynenko
Faculty of Informatics and Computer Engineering, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: supercomputer, computational tasks, applications.


Introduction. In the ever-evolving landscape of technology, supercomputers
stand as the epitome of computational prowess. These colossal machines have played
an indispensable role in advancing science, engineering, and industry by solving
complex problems at speeds that were once unimaginable (IBM, 2023). In this article,
we delve into the world of supercomputers, their significance, and the remarkable feats
they’ve achieved.
Objectives. The main goals are to consider the tasks for which supercomputers
are used and also to learn about a couple of the most prominent representatives of these
machines.
Methods. The main method was the analysis of published articles on the topic
of supercomputers. The first relevant studies and those based on them, as well as the
most recent ones, are reviewed. These articles describe the definition of
“supercomputers”, explain the purposes of their use, and also present a couple of
examples of supercomputers.
Results. Supercomputers, purpose-built for handling intricate and data-intensive
computational tasks, far surpass the capabilities of typical personal computers or
servers by delivering processing power measured in flops (floating-point operations
per second). These machines can perform an enormous number of calculations in an
instant, making them essential for a wide range of applications.
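To give the flops figures a sense of scale, here is a small sketch. The laptop rating is an assumed round number; the Summit and Fugaku ratings are the peak values quoted later in this text, and real sustained performance is lower:

```python
# How long would one quintillion (10**18) floating-point operations take
# on machines of different speeds?
ratings_flops = {
    "laptop (~100 gigaflops, assumed)": 100e9,
    "Summit (~200 petaflops)": 200e15,
    "Fugaku (~442 petaflops)": 442e15,
}

WORK = 1e18  # one quintillion floating-point operations

for name, flops in ratings_flops.items():
    seconds = WORK / flops
    print(f"{name}: {seconds:,.2f} s")
```

The same workload that occupies the assumed laptop for about ten million seconds (roughly 116 days) finishes on Fugaku in a little over two seconds, which is why such machines are essential for the data-intensive applications listed below.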
Supercomputers have a profound impact across numerous fields, playing
a pivotal role in:
 Scientific Research: They enable simulations of subatomic particle
behavior and climate modeling, providing invaluable insights into complex natural
phenomena.
 Medical Breakthroughs: Supercomputers expedite drug discovery, DNA
sequencing, and disease modeling, accelerating advancements in medical research and
treatments.
 Aerospace and Engineering: They are indispensable for aerodynamic
simulations, contributing to the design of more efficient aircraft and spacecraft.
 Weather Forecasting: Supercomputers empower meteorologists to create
detailed and precise weather models, improving our ability to forecast natural disasters.

 Energy Sector: They assist in optimizing energy production and
distribution, facilitating the development of sustainable energy solutions.
 National Security: Supercomputers are instrumental in cryptography,
code-breaking, and defense simulations (Oliveira, Blanchard, DeBardeleben, &
Santos, 2023).
Among the prominent supercomputers worldwide, a few noteworthy examples
stand out:
 Summit: Located at Oak Ridge National Laboratory, Summit is currently
one of the most powerful supercomputers, with over 200 petaflops of processing
power. It excels in a variety of tasks, including nuclear simulations and drug discovery
(Oak Ridge National Laboratory, 2022, February 19).
 Fugaku: Housed in Japan's RIKEN Center for Computational Science,
Fugaku is renowned for its ability to perform over 442 petaflops, making it the fastest
supercomputer in the world (RIKEN Center for Computational Science, 2023).
 Sierra: Housed at Lawrence Livermore National Laboratory, this
supercomputer is dedicated to advanced simulations for nuclear weapons research
and nonproliferation efforts.

While supercomputers boast remarkable capabilities, they encounter various
challenges, such as energy consumption and cooling concerns, along with the perpetual
demand for hardware and software advancements. Additionally, researchers are
investigating the incorporation of artificial intelligence to further elevate their
performance and capabilities.
Looking ahead, exascale supercomputers, capable of executing a quintillion
calculations per second, are within our grasp, offering the potential for even more
extraordinary advancements in domains like climate modeling, drug discovery, and
astrophysics.
Conclusion. Supercomputers represent the apex of human achievement in the
realm of computational technology. Their existence is a testament to our unquenchable
thirst for knowledge, pushing the boundaries of what is possible. As we continue to
harness their immense power, the world can anticipate groundbreaking discoveries and
advancements that will shape the future of science and innovation.
In a world increasingly reliant on data and computation, supercomputers serve
as our most potent allies in unraveling the complexities of the universe and driving the
progress of society, science, and technology to new heights.
References
IBM. (2023). What is supercomputing technology?
https://www.ibm.com/topics/supercomputing

Oak Ridge National Laboratory. (2022, February 19). Solving Big Problems:
Science and Technology at Oak Ridge National Laboratory.
https://www.olcf.ornl.gov/summit/
Oliveira, D., Blanchard, S., DeBardeleben, N., & Santos, F. F. (2023). The Journal
of Supercomputing: An International Journal of High-Performance Computer
Design, Analysis, and Use.
https://link.springer.com/article/10.1007/s11227-020-03324-9
RIKEN Center for Computational Science. (2023). Fugaku continues to achieve top
rankings. https://www.r-ccs.riken.jp/en/outreach/topics/20230522-2/

WELDING TECHNOLOGY ENHANCEMENTS FOR OPTIMAL
JOINT INTEGRITY
Tetiana Luhovets
Faculty of Informatics and Computer Engineering, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: welding, joint integrity, materials science, technology advancements.
Introduction. New technologies in welding focus on improving process
performance and the quality of the welded joint, namely: reducing metal deformation;
increasing the productivity of the welding process; saving consumables; facilitating
and simplifying control of the welding process; introducing digitalization and
robotization of welding processes; expanding the possibilities of joining thin sheet
metal of various grades; and implementing diagnostic techniques developed to control
the quality of welded joints. Recent strides in welding technology offer promising
solutions to these challenges.
Objectives. This study endeavors to examine the latest innovations in welding
technology and their influence on joint integrity, assessing factors such as decreased
strength of welded joints, slowed diffusion processes, and changed burning conditions
of the welding arc.
Methods. A thorough examination of recent literature on advancements in
welding technology was conducted, with a specific focus on their impact on joint
integrity and material characteristics. The use of new structural materials, new welding
consumables, and increased requirements for weld-seam quality, determined by the
conditions of welding operations, necessitate consideration of the latest welding
technologies and innovations in welding production (Salvati & Wharry, 2023). One
modern development is hybrid laser welding, which addresses the problem of making
the weld no weaker than the solid metal: in thin-walled structures, vibrations arising
from loads negatively affect the weld, and in hybrid welding the seam is simultaneously
exposed to a laser beam and an electric arc, yielding a weld whose strength is
comparable to that of solid metal. Programming of the welding process is the most
promising direction; developments are based on the electron-beam principle, used for
joining high-strength alloys of non-ferrous metals. The evaluation encompassed
hardness, tensile strength, and fracture toughness.
Results. The study reveals that recent developments in welding technology,
including advancements in laser welding, electron beam welding, and friction stir
welding, have shown significant improvements in joint integrity. Computer
technologies used in welding production make it possible to: calculate and optimize
welding modes using specialized mathematical packages; prepare drawings of parts
and structures to be welded, along with documentation; and simulate various processes
in order to control the propagation of thermal fields and deformations and to set
parameters for the welding process and welding equipment. These techniques offer
precise control over the welding process, resulting in reduced heat-affected zones and
minimized residual stresses.
Conclusion. In the modern world of engineering and technology development,
the need to join new materials with unique properties requires the development of new
technologies and welding methods that are focused on improving the quality of welded
joints while increasing the productivity of the welding process. The use of computer
technology makes it possible to achieve optimal quality of welded joints with minimal
labor and resources and is a popular process in modern conditions (Haghshenas, 2018,
p. 132). However, joining dissimilar materials through conventional fusion-based
methods can be very challenging (and even impossible) due to large discrepancies in
physical properties (e.g., melting temperature) between dissimilar metals such as
aluminum and steel.
References
Haghshenas, M. (2018). Joining of automotive sheet materials by friction-based
welding methods. Journal of Engineering Science and Technology, 45(3), 130–
148. https://doi.org/10.1016/j.jestch.2018.02.008
Salvati, E., & Wharry, J. (2023). Advanced Welding and Joining Technologies.
Journal of Materials Today Communication, 45(7), 1568–1578.
https://doi.org/10.1016/j.mtcomm.2023.105563

AUTOMATED SYSTEM FOR CONTROLLING THE PROCESS OF
DRILLING HOLES IN CARBON FIBER REINFORCED POLYMER PARTS
Oleksandr Matoshyn
Faculty of Instrumentation Engineering, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: composite materials, carbon fiber reinforced polymer, automated
system, delamination, optimal cutting mode.
Introduction. Composite materials (CM) are becoming increasingly common in
various fields of science and technology, owing to their high strength, rigidity,
corrosion resistance, low thermal expansion coefficient, electrical insulating properties
and anisotropy, which makes it possible to control the properties of the final product
by changing the number of fibers, their orientation and the lay-up of the layers. In
recent years, demand for these materials has also been growing rapidly in the machine-
and instrument-making industries.
The use of carbon fiber reinforced polymers (CFRP), that is, composite materials
with carbon fibers and a flexible epoxy resin matrix, is particularly relevant. Their
advantage is the ability to provide the required properties for a large number of
applications through the correct selection, combination and arrangement of fibers and
matrix. This material has high strength, stiffness and low density, which provides
products with increased operational characteristics and low weight.
Most products made of this material are used in their finished state, but
completely excluding machining is impossible. To obtain holes for fasteners, a drilling
operation is used, which is accompanied by various defects: delamination, chipping,
shrinkage and high surface roughness. Compared to other structural materials, CFRP
present particular problems during machining and, especially, during drilling. These
problems stem from characteristics of the material such as heterogeneity, anisotropy,
and the presence of highly abrasive, hard reinforcing fibers combined with a soft
matrix.
The main defects of this process include delamination and thermal damage of
the matrix, which is a serious problem in ensuring the quality and accuracy of the
obtained products. According to researchers, 60% of defects in CFRP are caused by
damage to the surfaces of the holes (Abhishek et al., 2014). The emergence of new
composites with a unique combination of properties that surpass the properties of
traditional structural materials requires the introduction of new technical solutions that
will ensure increased process productivity and the necessary surface quality
parameters.
Objectives. The main aim is to improve the efficiency of the process of drilling
holes in CFRP by implementing an automated control system.
Methods. Meeting the surface-quality requirements for such parts is possible by
choosing optimal cutting modes based on the processing modes for traditional
materials. To ensure the surface quality of CFRP parts, it is proposed to develop an
automated control system (ACS) that maintains the required values of the hole
delamination parameter. Control of the process is carried out by measuring the thrust
force, and this parameter is stabilized within specified limits by adjusting the drill feed,
taking other quality parameters into account: roughness, accuracy, etc.
The purpose of the control is to make the hole as quickly as possible while
ensuring no delamination. The relevant stages are:
1. Starting the process with the recommended processing modes.
2. Feeding the drill: the drill moves to the workpiece.
3. Contact: the tip of the drill is in contact with the workpiece material.
4. Normal drilling: drilling without delamination (or within the permissible
limits).
5. Initiation of delamination: if the critical thrust force is exceeded, delamination
begins.
6. Adjustment of processing modes using feedback: reducing the feed value.
7. Drill exit: the tip of the drill exits through the workpiece.
8. Completion: The hole is complete.
9. Removal of the drill: the drill must be removed from the workpiece and moved
back to the zero point.
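The feedback behavior of the stages above (in particular steps 4-6) can be sketched as a simple control loop. All numeric values, the thrust readings, and the step size here are hypothetical placeholders, not parameters of the actual ACS:

```python
CRITICAL_THRUST_N = 120.0   # assumed critical thrust force for delamination
SAFETY_MARGIN = 0.9         # correct before the critical value is reached
FEED_STEP = 0.02            # feed reduction per correction, mm/rev
MIN_FEED = 0.02             # lowest allowed feed, mm/rev

def control_step(thrust_n: float, feed: float) -> float:
    """Step 6 of the process: reduce the feed when thrust nears critical."""
    if thrust_n > SAFETY_MARGIN * CRITICAL_THRUST_N:
        feed = max(MIN_FEED, feed - FEED_STEP)
    return feed

# Simulated drilling pass: thrust rises as the drill engages the laminate,
# and the controller backs the feed off twice (at the 112 N and 115 N samples).
feed = 0.10
for thrust in [40, 80, 100, 112, 115, 108, 90]:
    feed = control_step(thrust, feed)

print(f"final feed: {feed:.2f} mm/rev")  # 0.06 mm/rev after two corrections
```

A real implementation would read the thrust force from a dynamometer and command the machine's feed axis; the sketch shows only the stabilizing logic of the feedback loop.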
Results. During the drilling of CFRP, it was found that with an increase in the
spindle speed, the temperature in the processing zone of the drilled hole increases,
which leads to the softening of the matrix material, thereby reducing the probability of
delamination. Increasing the spindle speed also provides increased processing
performance. In turn, delamination begins when the thrust force exceeds a critical
threshold value. In addition to the thrust force, the torque also affects the delamination
of the holes. At the same time, an increase in the diameter of the drill increases the
contact area of the drilled hole, which tends to increase the thrust force and, as a result,
increase the delamination. On the other hand, increasing feed also leads to increased
delamination due to increased thrust force. So, in the worst case, delamination occurs
at the highest feed and low spindle speeds.
Conclusion. Therefore, defect-free machining that provides satisfactory
performance is indeed a challenge when drilling CFRP. This requires relevant
knowledge about the behavior of the machining process as well as the optimization of
the machining parameters. The developed automated control system of the drilling
process should ensure the quality parameters of the hole surface by adjusting the cutting
feed based on the developed mathematical model and algorithm.
References
Abhishek, K., Datta, S., & Mahapatra, S. S. (2014). Optimization of thrust, torque,
entry, and exist delamination factor during drilling of CFRP composites. The
International Journal of Advanced Manufacturing Technology, 76(1–4), 401–
416. https://doi.org/10.1007/s00170-014-6199-3

BOUNDARY LAYER CONTROL OVER THE UPPER WING
SURFACE APPLYING BLOWING
Oleksii Oliinyk
Education and Research Institute of Aerospace Technologies, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: active flow control, boundary layer, numerical analysis, CFD,
aerodynamics.
Introduction. The use of active methods of boundary layer control is one of the
promising areas for radically improving the load-bearing, maneuvering, takeoff, and
landing properties of the wing. It is important to note that this technology also has a
positive impact on aircraft safety. Although this method has existed for a long time,
its practical applications are quite limited. Research in this area is relevant and
necessary for the Ukrainian aircraft industry. This study examines design principles
of energizing the boundary layer to improve wing flight characteristics.


Objectives. The goal of the project is to modify the airfoil, determine its
aerodynamic characteristics, and compare it with the original.
Methods. The high-speed natural laminar flow airfoil HSNLF(1)-0213 (Sewall et
al., 1987) was chosen; its aerodynamic characteristics, flow, and circulation at
separated-flow conditions were to be improved by integrating a system of active
boundary layer control by air blowing (Viken et al., 1987). During the study a
numerical method (computational simulations and analysis) was used to determine the
required lift and drag parameters using specialized software (O’Connor et al., 2023).
The Xflr5 panel-vortex method and the Transition SST turbulence model of the
ANSYS Fluent finite-volume solver were used.
Results. The main aerodynamic characteristics are compared graphically in Fig. 1.

Fig. 1. Comparison of the aerodynamic quality of the profile with tangential
air blowing slots at different distances
For a blower nozzle located at 1% of the chord we have compared modeling
results with experimental data (Sewall et al., 1987). The results are presented in Table
1.
Table 1. Comparison of force and efficiency coefficients
α° Clean airfoil Airfoil with air blowing Growth
Cy Cx K Cy Cx K ΔCy ΔCx ΔK
0 0.12 0.00 31.12 0.14 0.00 35.44 1.22 1.08 4.32
2 0.29 0.00 63.45 0.35 0.00 71.38 1.20 1.07 7.93
4 0.50 0.01 67.71 0.61 0.01 79.75 1.22 1.04 12.04
6 0.68 0.01 74.63 0.83 0.01 88.39 1.22 1.03 13.76
8 0.83 0.01 76.01 1.01 0.01 91.41 1.21 1.01 15.40
10 0.98 0.01 72.92 1.20 0.01 96.75 1.22 0.92 23.83
12 1.20 0.02 73.99 1.44 0.02 93.29 1.20 0.95 19.30
14 1.32 0.02 66.29 1.62 0.02 86.26 1.23 0.95 19.98
16 1.40 0.03 52.42 1.71 0.03 68.22 1.22 0.94 15.81
18 1.42 0.04 38.08 1.74 0.03 49.87 1.22 0.94 11.79
20 1.30 0.07 18.79 1.58 0.06 24.60 1.22 0.93 5.82
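The efficiency gain ΔK in Table 1 is the difference between the lift-to-drag ratios K of the blown and clean airfoils, which can be verified directly from the tabulated values:

```python
# Selected rows of Table 1: (alpha_deg, K_clean, K_blown, delta_K)
rows = [
    (4,  67.71, 79.75, 12.04),
    (8,  76.01, 91.41, 15.40),
    (10, 72.92, 96.75, 23.83),
]

for alpha, k_clean, k_blown, dk in rows:
    # Delta-K is the gain in aerodynamic efficiency from air blowing
    assert abs((k_blown - k_clean) - dk) < 0.005, f"mismatch at alpha={alpha}"
print("Delta-K values in Table 1 are consistent")
```

(The ΔCy and ΔCx columns, by contrast, are ratios of the blown to the clean coefficients, which is why they cluster around 1.2 and 1.0 respectively.)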
Conclusions. By determining the parameters of the nozzle position and flow
characteristics, it was possible to select optimal combinations that increased
the wing lift with insignificant changes in drag forces and reduced the effect of flow
separation.
By incorporating the use of this technology at the initial stages of design,
significant advantages can be gained and the flight characteristics of the aircraft can be
improved.
This algorithm can be used for the design of short takeoff, maneuvering, and
cruise aircraft. The designed airfoil can be scaled while maintaining the methodological
idea of placing the blowing slots of the active boundary layer control system.
References
O’Connor, J., Diessner, M., Wilson, K., Whalley, R. D., Wynn, A., & Laizet, S. (2023).
Optimization and analysis of streamwise-varying wall-normal blowing in a
turbulent boundary layer. Flow, Turbulence and Combustion, 110(4), 993–1021.
Sewall, W. G., McGhee, R. J., Hahne, D. E., & Jordan, F. L., Jr. (1987). Wind tunnel
results of the high-speed NLF(1)-0213 airfoil. NASA Langley Research Center,
30, Report No. 90–12542.
Viken, J. K., Viken, S. A., & Pfenninger, W. (1987). Design of the low-speed NLF(1)-
0414F and high-speed HSNLF(1)-0213 airfoils with high-lift systems. NASA
Langley Research Center, 35, Report No. 90–12540.

ELECTROPORATION AS A NEW TECHNOLOGY FOR PERFORMING ABLATIONS TO TREAT ARRHYTHMIA
Yulian Petkanych
Faculty of Biomedical Engineering, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: arrhythmia, electroporation, ablation, pulsed field ablation, pulmonary vein, esophagus, phrenic nerve.
Introduction. Arrhythmia is a condition that happens when the electrical signals
that coordinate the heart's beats don't work properly (Heart arrhythmia – Symptoms
and causes, 2023). This can result in the heart beating too fast (known as tachycardia),
too slow (called bradycardia), or in an irregular pattern. Every year the number of
people with arrhythmia increases; some estimates suggest that by 2050 every fifth
person will suffer from this disease.
The first line of treatment is medication. It may not work, or over time the body
may become accustomed to it so that it is no longer helpful. In such cases surgical
intervention – ablation – is necessary. Ablation is a procedure during which tiny
scars are created in the heart (usually around the pulmonary veins (PV)) to block
irregular electrical signals and restore a typical heartbeat (Cardiac Ablation – About,
2022). Cardiac ablation is most often done using thin, flexible tubes called catheters
inserted into the heart through the veins or arteries. Earlier technologies used heat or
cold to destroy the signal-generating cells. Such a procedure takes 4+ hours using heat
and 1.5–3 hours using cold, and the duration is variable and unsatisfactorily long.
There is also a risk of damage to the esophagus or phrenic nerve, which is dangerous
in this case.


Objectives. The Boston Scientific (BS) company started the research. The
primary goal was to enhance the safety of cardiac ablation procedures used to treat
arrhythmia, both for patients and medical professionals. Additionally, the study is
aimed at improving the long-term effectiveness of the treatment and exploring the
methods to reduce the risk of injury to the phrenic nerve and esophagus.
Methods. A review of the discovered and documented knowledge about
electroporation and its use in medicine; an overview of cardiac and neighboring
tissues, such as the phrenic nerve and esophagus, and testing of their reaction to
electroporation. Extensive testing was performed to make sure the technology is
completely safe to use in the human body. BS released their catheter called
FARAPULSE and started testing. IMPULSE, PEFCAT, and PEFCAT II are three
studies conducted on 121 subjects to gather evidence on safety, durability, PV
isolation success, and effects on the phrenic nerve, esophagus, and other tissues
(Di Monaco et al., 2022).
Results. The 121 subjects across the three PFA cohorts demonstrated that PFA
was safe. All PVs were successfully isolated with the FARAPULSE catheter with no
major complications, and the procedure took roughly 45 minutes. The accuracy of
single-shot isolation is nearly 99%. In a real-world, non-clinical study setting, PFA
was a safe modality for ablation near the esophagus. No patients showed neurological
deficits. No chest pain, coughing, or hemoptysis was reported, and there was no
relevant decrease in hemoglobin levels between baseline and 30 days post-PVI
(Pulmonary Vein Isolation). Bronchoscopy revealed no visible thermal lesions or
ulcers. After one year, 92 ± 5% of patients were free of arrhythmia.
Conclusion. PFA proved to be a safe, fast, and effective method of treating
arrhythmia, meaning that all goals were reached. Moreover, since the study finished
in 2018 and the system was released worldwide in 2021, no issues or problems have
been recorded. PFA is the future of ablations (DeSimone et al., 2014). The BS
company is preparing new features to introduce into the system in 2024. In the same
year the Medtronic team is also releasing their system, and many other companies are
investing heavily in such products. This is why the topic remains relevant and
promising to explore.
References
Cardiac Ablation – About. (2022, February 22). Mayo Clinic.
https://www.mayoclinic.org/tests-procedures/cardiac-ablation/about/pac-20384993
DeSimone, C. V., Kapa, S., & Asirvatham, S. J. (2014). Electroporation. Circulation-
arrhythmia and Electrophysiology, 7(4), 573–
575. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4390039/
Di Monaco, A., Vitulano, N., Troisi, F., Quadrini, F., Romanazzi, I., Calvi, V., &
Grimaldi, M. (2022). Pulsed field ablation to Treat Atrial fibrillation.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9030965/
Heart arrhythmia – Symptoms and causes. (2023, April 21). Mayo Clinic.
https://www.mayoclinic.org/diseases-conditions/heart-arrhythmia/symptoms-
causes/syc-20350668


ROBOTS IN SPACE: FROM AUTOMATION TO ROBOTICS
Ruslan Prohorchuk
Educational and Research Institute of Mechanical Engineering, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: space robotics, space infrastructure, satellite.
Introduction. The exploration of outer space has long captured the human
imagination, pushing the boundaries of our scientific understanding and technological
capabilities. As we venture deeper into the cosmos, the role of automation and robotics
in space exploration becomes increasingly vital. While initially used mainly for
automation of routine tasks, modern robots evolved into highly sophisticated machines
capable of performing complex activities autonomously.
Objectives. The aim of this work is to analyze the evolution of robotic systems
used in space and their changing roles over time, from simple Automated Transfer
Vehicles to advanced humanoid robots.
Methods. The main method used in this work is a literature review on the history
and current state of robotics in space. Publications by NASA and other space agencies
were reviewed to single out key milestones in robotic technology development for
space applications and discuss roles of robots.
Results. “Lunokhods” were the first automatic mobile scientific laboratories on
the Moon, which were a key step in space exploration and uncovering the secrets of
the Moon. Kaidash and Shkuratov (2017) consider the technical features of
“Lunokhods”, including their structure and functionality. These automatic devices
consisted of a hermetic instrument container and a self-propelled chassis intended for
movement on the surface of the Moon. The instrument container housed all the
necessary scientific equipment, including television cameras and other sensors. The
specific feature of “Lunokhods” was their ability to work both in daylight and during
a moonlit night thanks to solar batteries and radiators for heat removal. The authors
also provide information about the history of “Lunokhods” launches. The first attempt
to launch these missions took place on February 19, 1969, but was unsuccessful due to
operator error. However, subsequent missions, such as Luna 17 and Luna 21, were
successful and provided a large amount of lunar data, including photographs, soil
properties, and distances from the Earth to the Moon. Here we can note the important
contribution of specialists from Ukraine, in particular the Research Institute of
Astronomy of Kharkiv University, to the study of the Moon and the development of
space programs, which opened new horizons in the knowledge of space (Kaidash &
Shkuratov, 2017).
True autonomy appeared thanks to the work of planetary explorers. The
Sojourner rover became the first robotic rover to explore an extraterrestrial planet.
Launched in 1996 as part of NASA’s Mars Pathfinder mission, it marked a major
milestone in space exploration as it became the first successful rover to operate on the
surface of Mars. Sojourner analyzed the composition of Martian soil at the Pathfinder
landing site in 1997 (Baker, 2013). The Sojourner instruments provided valuable
information about the composition of Martian rocks and soil. This led to the discovery
of rounded pebbles and rocks that indicated the presence of water on Mars in the past,
a key discovery in the search for evidence of life on Mars.
Robonaut 2 was the first humanoid robot to function in space, a major milestone
demonstrating cutting-edge robotics technology. As described by Diftler et al. (2011),
Robonaut 2 pioneered anthropomorphic dexterous manipulation suitable for working
closely with humans in spacecraft: its two fully articulated arms with five-fingered
hands, together with a torso, neck, and sensor head, allow it to operate tools and
perform tasks typically handled by astronauts. Its development required overcoming
substantial technical challenges. The robot had to withstand launch aboard a rocket
while functioning autonomously in the microgravity environment of the International
Space Station, relying on advanced sensors, algorithms, and software to perceive its
surroundings and coordinate its limbs in three dimensions as it moved around and
manipulated objects. This represented a major advance over previous robotic systems
confined to fixed workstations on Earth. Its success established humanoid robotics as
a viable approach for future space missions, paving the way for more advanced
systems that could one day accompany astronauts beyond low Earth orbit to other
planets.
Conclusion. Space robots advanced significantly from early prototypes to
skilled robotic assistants capable of human-level dexterity and decision making. Their
roles expanded from equipment operators to collaborative partners enhancing human
productivity and safety in space. Continued robotics development will help enable
future human exploration of the Moon and Mars.
References
Baker, D. (2013). NASA Mars Rovers Manual: 1997–2013 (Sojourner, Spirit,
Opportunity and Curiosity). Sparkford, UK: Haynes Publishing UK.
Diftler, M.A., Radford, N.A., Mehling, J.S., Abdallah, M.E., Bridgwater, L.B.,
Sanders, A.M., Askew, R.S., Linn, D. M., Yamokoski, J.D., Permenter, F.A.,
Hargrave, B.K. (2011). Robonaut 2 – the first humanoid robot in space.
Proceedings – IEEE International Conference on Robotics and Automation,
2011, 12-01.
Kaidash, V. G., Shkuratov, Yu. G. (2017). Lunokhod in I. M. Dzyuba, A. I. Zhukovsky,
M. G. Zheleznyak ... and I. S. Yatskiv (Eds.), Encyclopedia of Modern Ukraine.
Institute of Encyclopedic Research of the National Academy of Sciences of
Ukraine. https://esu.com.ua/article-59283


RESEARCH AND ANALYSIS OF POLYMER MEMBRANE MATERIAL PROPERTIES
Bohdan Sharaievskyi
Educational and Research Institute of Mechanical Engineering, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: waterproof fabrics, membrane properties, combined membranes, duplexes and triplexes, three-layer membrane fabric.
Introduction. Composite polymer materials are multicomponent materials
based on a macromolecular compound (polymer) reinforced with various fillers that
enhance adhesion between layers, with the matrix serving as the connecting link, while
additives provide physical properties (Bondaletova & Bondaletov, 2013; Hsis, Seghiri,
& Benzekri, 2021).
Objectives. In this work, more attention is given to a three-layer membrane
fabric with a protective layer. A modern membrane is often described as the thinnest
film laminated to the outer fabric or as a specialized coating applied through a hot
process during production (Mulder, 1996). Based on their structure, membranes are
classified as non-porous, porous, or combined.
Common drawbacks of porous membranes include:
1. Relatively high susceptibility to various contaminants that can clog the pores,
such as different fats and salts found in sweat, detergent residues, and “dirt” in the
general sense of the word. In the air of modern cities, there are many very fine particles
from combustion products – particles of such size can clog these pores. As a result,
there is a rapid reduction in vapor permeability during use.
2. Potential leakage is caused by the porous structure of the membrane (Ahuja &
Sirkar, 2016).
Common drawbacks of non-porous membranes include: relatively low vapor
permeability; “inertness” – a significant delay in achieving maximum vapor
permeability (Gray & Woo, 2015).
Combined membranes incorporate both types, a non-porous and a porous
membrane.
Methods. The outer fabric is coated with a porous membrane on the inner side,
and on top of the membrane, there is also a thin layer, which is a non-porous
polyurethane membrane film (Patnaik, 2017).
The designs of laminated materials are divided into duplexes and triplexes (Pan
& Sun, 2016).
Triplex (three-layer laminate) – incorporates three substrates in its construction:
fabric, moisture-resistant film, and a bonding fabric (e.g., mesh). It is used for making
clothing with high requirements for strength and mechanical-physical loads.
The advantage of three-layer membrane fabric is that it has superior
characteristics, including high strength, durability, and waterproof breathable fabric
(which creates a comfortable microclimate by wicking away excess moisture from the
body) (Fane, Fell, & Waters, 1983).


For bonding the membrane to the fabric, adhesive polyurethanes are most
commonly used (Kang & Park, 2015). Typically, the polyurethane melt is applied to
the moisture-resistant film, which is fed together with the fabric between the pressure
rolls of a calender, ensuring the bonding (“adhesion”) of the two substrates. The
difference between lamination methods lies only in the manner of applying the
polyurethane melt to the film’s surface.
There are various methods for applying the polymer to the film’s surface:
 Pressing through a perforated rotary template.
 Transfer from engraved roll embossments (Seyam, 2014).
Calendering is a process of polymer treatment using specialized equipment that
includes a calender (Wilkinson & Ryan, n.d.). The process involves forming thin
polymer products (films), applying polymer coatings to a base, doubling films, and
impregnating a base with polymer (Kutz, 1991). Calendering can produce coatings
from 0.1 to 1.5 mm; the thickness cannot exceed 1.5 mm because of the possibility of
bubble formation. Obtaining thinner coatings requires high loads on the calender, but
this method allows for relatively fast work.
The advantages of this process include direct transformation of pellets into
a deposited film, the ability to make rapid changes during the process, and the
suitability for producing small batches (Baker, 2004).
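The coating-thickness constraint described above can be expressed as a simple guard function; the 0.1–1.5 mm limits are taken directly from the text:

```python
def coating_feasible_by_calendering(thickness_mm):
    """Per the text, calendering yields coatings of 0.1-1.5 mm; thicker
    layers risk bubble formation, thinner ones need excessive roll loads."""
    return 0.1 <= thickness_mm <= 1.5

assert coating_feasible_by_calendering(0.5)       # typical coating: feasible
assert not coating_feasible_by_calendering(2.0)   # too thick: bubbles form
```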
Results. Breathable waterproof fabrics, obtained by laminating fabrics with
polytetrafluoroethylene (PTFE) polyurethane coating, were investigated. Research was
conducted on various methods of membrane application, including vapor permeability,
water resistance, hydrophobicity, and air permeability. In the case of wet fabric
coatings, high vapor permeability and low water resistance were observed, but dry
coatings showed the opposite results. The variation in these parameters is also linked
to the quantity of PTFE applied to the fabric (Jan, 2022).
The research in the work (Luo, Tan, & Wang, 2018) analyzed the creation of
an insulating material with functional fabric, breathable insulating layer, and micro-
porous membrane. These layers are laminated together, forming a waterproof
insulating material with high strength and no noise.
As the research and property analyses of waterproof fabrics in these works
reveal, multilayer waterproof fabrics obtained from incompatible polymers and
fabrics typically show low adhesive interaction between the layers, resulting in low
interlayer strength. This can lead to delamination and overall material failure, posing
challenges during testing and use. In the study (Kozior & Blachowicz, 2015), methods
for enhancing the interlayer strength of multilayer waterproof fabrics, together with
approaches to evaluating interlayer strength, were identified. This issue will be
addressed in further research.
Conclusion. Thus, the adhesive strength of the adhesive bond is the primary
property on which its level of waterproofing and durability depend. Achieving adhesive
bonds with high adhesive strength values will ensure a high level of seam
waterproofing and their reliability.


References
Ahuja, S.M., & Sirkar, K.K. (2016). Membrane Science and Technology: A Practical
Guide.
Baker, R.W. (2004). Membrane Technology and Applications.
Bondaletova, L.I., & Bondaletov, V.G. (2013). General idea of composite polymeric
materials. Polymeric composite materials. Tomsk: TPU.
Fane, A. G., Fell, C. J. D., & Waters, A. G. (1983). Principles and Applications of
Membrane Technology in the Pulp and Paper Industry. Appita Journal, 36(5),
361–365.
Gray, D.L., & Woo, L.K. (Eds.). (2015). Membrane Technology and Research.
Hsis, R., Seghiri, R., & Benzekri, Z. (2021). Polymer composite materials. Elsevier,
262, 1–15.
Jan, B.S. (2022). Study of the properties of water-repellent fabrics obtained by
polyurethane coating with polytetrafluoroethylene (PTFE).
Kang, K., & Park, C. H. (2015). Textile Laminating Technology: Fundamentals and
Applications. In Engineered Textiles: Integration of Polymer, Structural and
Functional Technologies. Woodhead Publishing.
Kozior, T., & Blachowicz, T. (2015). Investigation of the interlayer adhesion strength
of multilayer water-repellent fabrics obtained from incompatible polymers and
fabrics.
Kutz, M. (1991). Plastics Engineering Handbook of the Society of the Plastics Industry.
Luo, L., Tan, L., & Wang, J. (2018). Development of a water-repellent and air
permeable.
Mulder, M. (1996). Basic Principles of Membrane Technology (2nd ed.). Kluwer
Academic Publishers. http://dx.doi.org/10.1007/978-94-009-1766-8
Patnaik, A. (Ed.). (2017). Textile Finishing: Recent Developments and Future Trends.
Pan, N., & Sun, G. (Eds.). (2016). Functional Textiles for Improved Performance,
Protection and Health. CRC Press.
https://onlinelibrary.wiley.com/doi/book/10.1002/9781119526599
Seyam, A. F. M. (2014). Lamination and coating techniques in the textile industry. In
Textile Processing and Properties: Preparation, Dyeing, Finishing and
Performance. Elsevier.
Wilkinson, N., & Ryan, A. (n.d.). Polymer Processing and Structure Development.
https://books.google.com.gi/books?id=S7vwDQx6SJsC&printsec=frontcover#
v=onepage&q&f=false

A TWO-MONTH RETROSPECT REGARDING LK-99
Hlib Skopyk
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: LK-99, superconductivity, room temperature, ambient pressure.
Introduction. Scientists diligently seek room-temperature ambient-pressure
superconductors due to their transformative potential for everyday technologies and
applications. As of now, ubiquitous use of electricity is inescapably bound by energy
loss due to electrical resistance – a measure of opposition that a material presents to
the flow of electrical current. Rendering resistance virtually nonexistent would
practically eliminate these inefficiencies and establish a firm foundation for future
advancements.
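The energy loss the passage refers to is Joule heating, P = I²R; a toy comparison with purely illustrative numbers shows why zero resistance matters:

```python
def joule_loss_watts(current_a, resistance_ohm):
    """Power dissipated as heat in a conductor: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

line_current_a = 100.0      # illustrative transmission-line current
line_resistance_ohm = 5.0   # illustrative line resistance

print(joule_loss_watts(line_current_a, line_resistance_ohm))  # 50000.0 W wasted as heat
print(joule_loss_watts(line_current_a, 0.0))                  # 0.0 W in a superconductor
```

Since the loss grows with the square of the current, a room-temperature superconductor (R = 0) would remove this term entirely, which is the promise behind the LK-99 excitement.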
Objectives. To investigate the evolving public perception of alleged
superconductor LK-99 and the informational landscape regarding research
advancements.
Methods. A comprehensive analysis was conducted on various articles and
sources published online over a two-month period, with focus on the reception of LK-
99.
Results. July 22. South Korean researchers, among them Hyun-Tak Kim, posted
two papers (available at https://arxiv.org/abs/2307.12008 and
https://arxiv.org/abs/2307.12037) about LK-99 on arXiv, a non-peer-reviewed
repository for scientific reports. The researchers reported possible indicators of
superconductivity in LK-99, including unexpectedly low electrical resistance and
partial levitation in a magnetic field.
July 27. The media buzzes with excitement over the discovery of a new
superconductor. However, scientists exercise caution and skepticism due to past
instances where similar claims proved incorrect. LK-99 possesses a number of
convincing properties, although a few aspects necessitate further investigation
(Garisto, 2023).
August 3. A series of additional theoretical studies and practical experiments
become intertwined in a cycle of excitement and disappointment. Theory neither
refuted the potential for superconductivity in LK-99 nor provided conclusive evidence,
while experiments yielded no definitive results (Chang, 2023).
August 5. The situation becomes more complex due to theoretical support for
superconductivity in LK-99, albeit with additional conditions to be met. Experimental
results strongly align with these theoretical findings, which suggest the need for
doping (additional materials) or precise placement of copper (deliberate LK-99
construction) (Octopuses, 2023).
August 11. An idea is expressed that the original paper served its purpose, even
if it led to exposing flaws in the research, because further analysis is now more guided
and structured. To justify the value of superconductor research in the public’s eyes,
the media remind us that it could enable a perfectly efficient power grid, levitating
trains, commercially viable fusion reactors, and could completely revolutionize
everything that uses electricity (Chowdhury, 2023).
August 15. A scientist who initially advocated for the superconductive
properties of LK-99 was found to have published a misleading paper, negatively
impacting the credibility of other research papers that also affirmed the potential of
LK-99 (Chang, 2023).
August 21. It is established that all the properties of LK-99 similar to those of
a superconductor can be explained by impurities in the compound, measurement
errors, and ferromagnetism, solidifying the conclusion that LK-99 is not
a superconductor (Kim, 2023).
August 25. Further research on LK-99 is now focused on examining the
theoretically outlined conditions under which the compound should exhibit
superconductive properties (Shenoy, 2023).


September 22. We learn that the LK-99 papers had been published without
permission by one of the authors, while they were in the process of being peer-
reviewed. This rushed publication is what lies at the core of a conflict between initial
expectations and actual outcomes, subsequently defined through further in-depth
research (GlobalData Thematic Intelligence, 2023).
Conclusion. The LK-99 story provides a view of science in action. It is essential
to recognize that the research catalyzed by LK-99 papers holds intrinsic value, even as
the narrative remains that we are yet to find an ambient-condition superconductor.
References
Chang, K. (2023). LK-99 Is the Superconductor of the Summer.
https://www.nytimes.com/2023/08/03/science/lk-99-superconductor-ambient.html
Chang, K. (2023). Superconductor Scientist Faces Investigation as a Paper Is
Retracted. https://www.nytimes.com/2023/08/15/science/retraction-ranga-dias-
rochester.html
Chowdhury, H. (2023). The internet just went on a wild quest that could result in
floating trains and cheap EVs. https://www.businessinsider.com/why-everyone-
wanted-lk-99-to-be-room-temperature-superconductor-2023-8
Garisto, D. (2023). Viral New Superconductivity Claims Leave Many Scientists
Skeptical. https://www.scientificamerican.com/article/viral-new-
superconductivity-claims-leave-many-scientists-skeptical1/
GlobalData Thematic Intelligence. (2023). The superconductivity summer drama is not
over. https://www.verdict.co.uk/superconductivity-lk-99-failure-impurities-
may-be-key/
Kim, H. (2023). Hopes fade for ‘room temperature superconductor’ LK-99, but
quantum zero-resistance research continues.
https://theconversation.com/hopes-fade-for-room-temperature-superconductor-
lk-99-but-quantum-zero-resistance-research-continues-211733
Octopuses, R P. (2023). LK-99 and Superconductivity at 400K: A Skeptic’s Optimism.
https://medium.com/@r.p.octopuses/superconductivity-at-400k-a-skeptics-
optimism-67a2969ace7
Shenoy, V B. (2023). How scientists found that LK-99 is probably not
a superconductor. https://www.thehindu.com/sci-tech/science/lk-99-room-
temperature-superconductor-hype/article67233834.ece

ENGINEERING EXCELLENCE: A THEORETICAL APPROACH TO LIGHT AIRCRAFT SPAR DESIGN
Solomiia Skoroplias
Education and Research Institute of Aerospace Technologies, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: aircraft spar, load factor, strength, design, wing.
Introduction. Aircraft design is a complex task, since it must simultaneously
meet the requirements of minimum mass and sufficient strength, rigidity and
survivability. The design of the wing’s load-bearing structure is particularly difficult
due to the significant magnitude, wide variety, and variation of the loads throughout
the flight.
Objectives. This theoretical work examines the process of designing a light
aircraft spar for bending work. The necessity of such research and its use in aircraft
design is demonstrated.
Methods. The methodology includes consideration of a variety of literature and
engineering reports. Using theoretical foundations and calculation methods, this thesis
describes the design process for a light aircraft wing spar.
Results. First of all, the load on the wing is calculated in different design cases
according to JAR-VLA standards: maximum permissible positive load factor and
maximum negative load factor (State Aviation Administration, 2019).
Next comes the selection of the structural and power scheme of the wing. In this
case, a single-spar design is chosen because, in comparison with other structural and
power schemes, it is optimal for light aircraft due to the ease of its creation and
manufacture (Bharath et al., 2022).
Having selected the spar, a design calculation is carried out, that is, the materials
from which it is made are selected, and the geometric dimensions of the structural
elements are selected (Bharath et al., 2022). In this case, the calculation is “simplified”,
since in light aircraft with a single-spar wing and non-strength skin, it can be assumed
that the entire bending load is transferred to the wing spar. Afterwards, the spar parts
are modeled, working documentation is prepared, and samples are produced. To check
the strength, bench tests of the structure for strength and rigidity are carried out.
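Under the simplifying assumption stated above, that the spar carries the entire bending load, the required spar cap cross-section can be estimated from the force couple in the caps. The numbers in this sketch are hypothetical, not values from the thesis:

```python
def spar_cap_area_mm2(bending_moment_nm, spar_height_m, sigma_allow_mpa):
    """Idealized two-cap spar: the bending moment M resolves into a force
    couple F = M / h carried by the caps; required cap area = F / sigma."""
    cap_force_n = bending_moment_nm / spar_height_m
    return cap_force_n / sigma_allow_mpa  # N / (N/mm^2) = mm^2

# Hypothetical case: 12 kN*m root bending moment, 120 mm spar height,
# 160 MPa allowable stress for an aluminium alloy
print(round(spar_cap_area_mm2(12_000.0, 0.12, 160.0), 1))  # 625.0 mm^2
```

The design bending moment itself follows from the JAR-VLA load factors mentioned earlier, scaled by the aircraft weight and wing geometry.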
Conclusion. The thesis examines the process of designing a spar for a light
aircraft, and considers the issue of correct selection of materials and the structural and
power scheme of the wing. The prospects for further research on this topic can help us
design a stronger airplane wing, which will make the entire airplane structure safer.
References
Bharath, B. N., Chinni, P. N., & Siddappa, P. N. (2022). Design and analysis of front
spar wing-tip segment for a small transport aircraft. Materials Today:
Proceedings, 52(3), 1846–1851. https://doi.org/10.1016/j.matpr.2021.11.494.
State Aviation Administration of Ukraine. (2019). SAAU Type Certificate Sheet
TL0071.

RADIO ENGINEERING IN MODERN COMMUNICATION SYSTEMS
Ihor Turchaninov
Faculty of Radio Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Radio Engineering, Communication Systems, Wireless Technology, Antennas.
Introduction. In the digital era, communication systems are the backbone of our
interconnected world. The role of radio engineering in modern communication systems
is paramount. This report provides an overview of the significance of radio engineering
and its applications in contemporary communication systems.


Objectives. This report aims to underscore the critical importance of radio
engineering in facilitating wireless communication, data transmission, and global
connectivity. It also delves into the latest advancements in antenna technology and
signal processing, which are essential components of today's communication systems.
Methods. The content of this report is meticulously crafted through an extensive
review of recent research, state-of-the-art technologies, and pivotal developments in
the field of radio engineering. It systematically explores the fundamental principles of
electromagnetic wave propagation, delves into the intricacies of modulation
techniques, and sheds light on advanced signal processing algorithms (Joshi &
Krishna, 2001). Furthermore, this comprehensive analysis probes the ever-
evolving trends in antenna design and breakthroughs in radio frequency (RF) circuitry,
painting a vivid picture of the dynamic landscape of radio engineering in the modern
age. In addition, it highlights the innovative applications of radio engineering in
emerging fields, such as 5G networks and satellite communications.
Results. Radio engineering’s pivotal role in communication cannot be
overstated. Its impact is nothing short of revolutionary, enabling the proliferation of
wireless devices that have seamlessly integrated into our daily lives. From smartphones
that keep us connected on the go to Wi-Fi routers that power our homes and
workplaces, radio engineering is the invisible force driving these technologies. This
indispensable technology forms the backbone of cellular networks, satellite
communications, Bluetooth devices, and the ever-expanding realm of the Internet of
Things (IoT).
In recent years, significant advancements in antenna technology have further
elevated the quality of signal reception and transmission (Hu & Lorenzini, 2023),
resulting in faster data rates and broader coverage areas for wireless
networks. As radio engineering continues to evolve, it promises even greater strides in
the field of communication, reaffirming its status as the cornerstone of our
interconnected world.
Conclusion. In conclusion, radio engineering continues to drive innovation in
communication systems, serving as the very foundation of our interconnected world.
Its applications are diverse and impact various aspects of our daily lives, from our
ability to make mobile phone calls, facilitating seamless data transfer, to the way we
access the internet at lightning speeds. As technology evolves at an unprecedented
pace, radio engineering will not merely maintain its central role but will invariably
spearhead cutting-edge developments, ensuring its pivotal position in shaping the
future of global communication for generations to come. The journey of radio
engineering remains an ever-advancing exploration into the boundless realms of
connectivity and communication.
References
Hu, J., & Lorenzini, G. (2023). Design of compound data acquisition gateway based
on 5G network. Web Intelligence Journal, 21(3), 293–305. https://is.gd/m7AKhU
Joshi, A., & Krishna, A. (2001). Broadband wireless. Journal of High Speed
Networks, 10(1), 1–6. https://is.gd/nQbRAL


METHODS OF MODELING HORIZONTAL CABLE TUNNELS USING
SKETCHUP
Serhii Troshkin
Cherkasy Institute of Fire Safety named after Heroes of Chernobyl, National
University of Civil Protection of Ukraine, Cherkasy

Keywords: cable tunnels, modeling, geometric configuration.


Introduction. In cities with a high density of cable lines and various
underground communications, it is recommended to install cables in dedicated
underground cable structures. A cable tunnel facilitates the installation of cables in
cities and industrial facilities with numerous underground utilities, in areas with soil
conditions that are detrimental to cables, as well as in permafrost regions when the
number of power cables traveling in one direction exceeds 20 (GBN V, 2015). A cable
tunnel allows for cable installation, repairs, and inspections with unobstructed access
along its entire length. Traditional graphic drawings and diagrams have been used for
the analysis of cable tunnels and research involving geometric configurations.
However, these methods are outdated, offer limited information, and are inadequate for
contemporary scientific work (GBN V, 2015).
Objectives. To address this issue, we have chosen to replicate the geometric
configuration of a cable tunnel using the free software SketchUp Free, which requires
minimal effort for 3D modeling (Jonell & Lucas, 2013).
This will enable a comprehensive analysis of the cable tunnel's geometric
configuration for further research in the field of enhancing the fire safety of cable
tunnels and improving cable structures.
Methods. The methodology employed involved an analysis of existing scientific
solutions, technical measures, regulatory framework, and mathematical modeling of
a rectangular cable tunnel.
Results. Cable tunnels are constructed using prefabricated reinforced concrete
elements or, less frequently, monolithic reinforced concrete. Depending on the number
of cables, a tunnel can be single-sided with a width of 1500 mm or double-sided with
a width of 1800 mm. For tunnels with lengths ranging from 7 to 150 meters, at least
two entrances are provided. In terms of fire protection, such tunnels are divided into
separate 150-meter sections with doors installed in each section. Pipe openings for
cable passage are incorporated into partitions, or slots are left at the bottom, which are
subsequently sealed with non-combustible materials, such as cement and sand or
asbestos perlite. The tunnel floor is inclined at 0.5% towards a drainage channel that
connects to a drainage system or collection basins covered with metal grates.
Ventilation is installed in the tunnel to remove heat emissions from cables. The
temperature difference between the incoming and outgoing air within the tunnel should
not exceed 10°C, in accordance with requirements (GBN V, 2015; DNAOP 0.00-1.32-
01, n.d.).
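The sectioning rules above lend themselves to a quick planning check. The helper below is an illustrative sketch only (the single-entrance assumption for tunnels shorter than 7 m is ours, not taken from the standard):

```python
import math

# Illustrative helper based on the rules cited above (GBN V, 2015):
# tunnels 7-150 m long need at least two entrances, and longer tunnels
# are divided into separate 150-metre fire sections with doors.

SECTION_LENGTH_M = 150.0

def tunnel_layout(length_m):
    """Return (number of fire sections, minimum number of entrances)."""
    if length_m < 7:
        return 1, 1          # assumption: one entrance suffices for very short tunnels
    sections = max(1, math.ceil(length_m / SECTION_LENGTH_M))
    entrances = 2            # at least two entrances from 7 m upwards
    return sections, entrances

print(tunnel_layout(120))   # a single 150 m fire section, two entrances
print(tunnel_layout(400))   # three fire sections
```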
Replicating the geometric configuration of a cable tunnel can be achieved
through 3D modeling. The most suitable and efficient tool for this purpose is the use


of free software, SketchUp Free (fig. 1), which does not demand extensive 3D
modeling skills (Jonell & Lucas, 2013).

Fig. 1 – Cable arrangement in rectangular cross-section tunnels: a) 1500 mm;
b) 1800 mm.
Compared to many other popular packages such as Wings 3D, Vectary,
3D Slash, TinkerCAD, SculptGL, this software boasts a range of features positioned
as advantages. The primary feature is the almost complete absence of pre-settings
windows. All geometric characteristics are configured either during or immediately
after using the tool and are entered via the keyboard into the Value Control Box, located
in the lower right corner of the workspace, to the right of the label 'Measurements.'
Another key feature is the Push/Pull tool, which allows for the extrusion of any plane
in the direction of movement, creating new side walls as it moves. Most importantly,
this software is available free of charge for 3D modeling.
Conclusion. Visualizing and reproducing the geometric configurations of
rectangular cable tunnels using SketchUp Free software (fig. 1) allows for a seamless
analysis of the geometry of construction structures and designs (Jacobs, 2022). These
models can be used in subsequent scientific publications.
References
DNAOP 0.00-1.32-01. (n.d.). Rules for the construction of electrical installations.
Jonell, & Lucas, B. (2013). Manual SketchUp.
GBN V. (2015). 2.2-34620942-002:2015. Telecommunication cable structures. Design.
Jacobs, H.L. (2022). SketchUp and Sketchfab: Tools for Teaching with
3D. Journal of the Society of Architectural Historians, 81(2), 256–259.


ROBOTICS IN MEDICINE
Uliana Vinnikova
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: robotics, robots, robots in medicine.


Introduction. We live in an era of new technology that is developing and
changing every day. One of the important scientific fields today is robotics. Robotics
is an applied science concerned with the design, development, production and use of
robots. It also includes computer systems to control robots, provide feedback, and
handle automated engineering systems. Robotics combines fields such as mechanical
engineering, electrical engineering, and computer science. The main advantages of
robotics development are safety, accessibility, productivity, and improved automated
work (AG, n.d.). Robotics plays an important role in medicine. Robots can perform
surgeries instead of doctors, simplify material delivery and disinfection.
Objectives. Nowadays, robots surround us everywhere. The main goal of
robotics is to create robots and automate them to perform tasks of varying complexity
and make human life even easier.
Methods. The first robots provided surgical care using robotic arm technology.
Today, robots not only participate in surgical operations but are also programmed to
clean and prepare patient rooms, helping to limit human contact in infectious disease
areas. Robotics is also used in the laboratory to automate manual and repetitive tasks
so that technicians can focus on more important work. Optimizing
workflow and minimizing risks brought by medical robots is of great importance in
many fields. Robots programmed to identify drugs based on artificial intelligence help
reduce the time needed to identify, select and deliver drugs to patients in the hospital.
As technology develops, robots will operate more autonomously and may eventually
be able to perform some tasks completely independently (Intelligence, 2023).
Results. Robotics in medicine ensures high levels of patient care, efficient
procedures in clinical settings, and a safe environment for patients and healthcare
professionals. One of the most important benefits of using robots in medicine is the
ability to perform precise procedures, with some robots programmed to perform
complex and delicate operations. Another advantage of robots is the ability to work
continuously without needing to rest, helping to increase productivity. Robots can be
equipped with cameras and sensors to take detailed and accurate images of the human
body (Collegenp, 2023).
Conclusion. Medical robotics will continue to evolve alongside advances in
machine learning, data analytics, computer vision and other technologies. The
advancement of robots can improve human living standards and quality of life and
enable further innovations. The capabilities of robots are enormous, but there is still
much work to be done to make them work flawlessly.


References
AG, I. T. (n.d.). Robotics basics: Definition, use, terms – infineon technologies.
Robotics Basics: Definition, Use, Terms – Infineon Technologies.
https://www.infineon.com/cms/en/discoveries/fundamentals-robotics/
Collegenp. (2023, January 31). Revolutionizing Medicine with robotic technology:
Advantages and applications. CollegeNp.com.
https://www.collegenp.com/technology/improving-medicine-with-robots/
Intelligence, G. T. I. (2023, March 16). Robotics in Medical (2021) – medical robotics
trends. Medical Device Network. https://www.medicaldevice-
network.com/comment/robotics-in-medical-2021-medical-robotics-trends/

COMPARATIVE ANALYSIS OF TRADITIONAL AND
CONTEMPORARY CRITICAL PLANE MODELS IN MULTIAXIAL
FATIGUE
Pavlo Yakovchuk, Illia Nemykin, Viacheslav Malynskyi
Educational and Research Institute of Mechanical Engineering, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: multiaxial fatigue, fatigue life prediction, critical plane models,
proportional and non-proportional loading.
Introduction. The operation of many machines and equipment occurs under
conditions of multi-axial cyclic loading, which is typically non-proportional. The
assessment of fatigue life of metallic alloys under multiaxial non-proportional loading
conditions is a relevant challenge in modern mechanical engineering (Socie &
Marquis, 1999). Numerous critical plane criteria have been proposed over the past
30 years, but a comprehensive comparative analysis of traditional critical plane
methods, introduced in the late 1980s, and contemporary criteria for a wide range of
different materials has not been fully undertaken.
Objectives. To conduct a comparative analysis of fatigue life models based on
the critical plane concept, specifically the traditional Fatemi-Socie approach (Fatemi
& Socie, 1988) and the modern Wu-Hu-Song criterion (Wu, Hu, & Song, 2013), on
several materials with varying sensitivity to non-proportional loading.
Methods. The fatigue life predictions obtained using the selected models were
compared with experimental results obtained for various metallic alloys under both
proportional and non-proportional multiaxial loading conditions. The metallic alloys
selected were TC4 titanium alloy (Wu, Hu, & Song, 2013), BT9 titanium alloy
(Shamsaei et al., 2010), 7075-T6 aluminum alloy (Zhao & Jiang, 2008), 16MnR steel
(Gao et al., 2009) and CuZn37 brass (Skibicki & Pejkowski, 2017).
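For reference, the traditional Fatemi-Socie parameter combines the maximum shear strain amplitude on the critical plane with the peak normal stress acting on that plane, FS = (Δγmax/2)(1 + k·σn,max/σy); fatigue life then follows from a shear strain-life curve. The sketch below evaluates the parameter and inverts an assumed strain-life curve by bisection; every material constant is an illustrative placeholder, not a value from the cited studies:

```python
# Fatemi-Socie (FS) critical-plane parameter:
#   FS = (d_gamma_max / 2) * (1 + k * sigma_n_max / sigma_y)
# Fatigue life Nf is then found from the shear strain-life relation
#   FS = (tau_f / G) * (2 Nf)^b + gamma_f * (2 Nf)^c
# All material constants below are illustrative placeholders (MPa units).

def fs_parameter(d_gamma_max, sigma_n_max, k=0.5, sigma_y=500.0):
    return (d_gamma_max / 2.0) * (1.0 + k * sigma_n_max / sigma_y)

def life_from_fs(fs, tau_f=600.0, G=45000.0, b=-0.08, gamma_f=0.3, c=-0.55):
    """Solve the strain-life equation for Nf by bisection (rhs is monotone in Nf)."""
    def rhs(nf):
        return (tau_f / G) * (2 * nf) ** b + gamma_f * (2 * nf) ** c
    lo, hi = 1.0, 1e9
    for _ in range(200):
        mid = (lo * hi) ** 0.5           # bisect in log space
        if rhs(mid) > fs:
            lo = mid                      # damage capacity too high -> life is longer
        else:
            hi = mid
    return mid

fs = fs_parameter(d_gamma_max=0.01, sigma_n_max=200.0)
print(f"FS parameter = {fs:.4f}, predicted life = {life_from_fs(fs):.0f} cycles")
```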
Results. The study examined the boundaries within which fatigue life models,
utilizing the critical plane concept, can be effectively employed for different metallic
alloys subjected to both proportional and non-proportional multi-axial loading
conditions. The following presents a comparison between the actual test results and the
fatigue life predictions generated by the chosen models, as illustrated in Figure 1. To
evaluate the predictive capability of these models, an analysis was conducted by


considering the average and variation (standard deviation) of the prediction error ∆Nf.
Figure 2 displays the probability density function representing the prediction error ∆Nf
for the selected models.

Figure 1. Comparison of experimental and calculated fatigue life using FS and
WHS models

Figure 2. Probability density functions of the prediction error ∆Nf for the
examined models.
Conclusions. The research findings revealed that both models offered
reasonably precise forecasts for the fatigue life of samples exposed to both proportional
and non-proportional loading conditions. Furthermore, the study results suggested that
the Wu-Hu-Song model performed better in predicting fatigue life, not only for the TC4
alloy but also for various other alloys with diverse mechanical properties and
susceptibility to non-proportional loading. This superiority can be attributed to its
improved ability to consider the combined influences of shear and normal deformation
on the initiation and spread of cracks.


References
Fatemi, A., & Socie, D. F. (1988). A critical plane approach to multiaxial fatigue
damage including out‐of‐phase loading. Fatigue & Fracture of Engineering
Materials & Structures, 11(3), 149–165.
Gao, Z., Zhao, T., Wang, X., & Jiang, Y. (2009). Multiaxial fatigue of 16MnR steel.
https://doi.org/10.1115/1.3008041
He, G., Zhao, Y., & Yan, C. (2023). Multiaxial fatigue life prediction using physics-
informed neural networks with sensitive features. Engineering Fracture
Mechanics, 289, 109456. https://doi.org/10.1016/j.engfracmech.2023.109456
Shamsaei, N., Gladskyi, M., Panasovskyi, K., Shukaev, S., & Fatemi, A. (2010).
Multiaxial fatigue of titanium including step loading and load path alteration and
sequence effects. International Journal of Fatigue, 32(11), 1862–1874.
https://doi.org/10.1016/j.ijfatigue.2010.05.006
Skibicki, D., & Pejkowski, Ł. (2017). Low-cycle multiaxial fatigue behaviour and
fatigue life prediction for CuZn37 brass using the stress-strain
models. International Journal of Fatigue, 102, 18–36.
https://doi.org/10.1016/j.ijfatigue.2017.04.011
Socie, D., & Marquis, G. (1999). Multiaxial fatigue. SAE International.
Zhao, T., & Jiang, Y. (2008). Fatigue of 7075-T651 aluminum alloy. International
Journal of Fatigue, 30(5), 834–849.
https://doi.org/10.1016/j.ijfatigue.2007.07.005
Wu, Z., Hu, X., & Song, Y. (2013). Multi-axial fatigue life prediction model based on
maximum shear strain amplitude and modified SWT parameter. Jixie
Gongcheng Xuebao (Chinese Journal of Mechanical Engineering), 49(2), 59–66.
Yu, Z. Y., Zhu, S. P., Liu, Q., & Liu, Y. (2017). Multiaxial fatigue damage parameter
and life prediction without any additional material constants. Materials, 10(8),
923. https://doi.org/10.3390/ma10080923

BIONIC PROSTHESES: IMPROVING LIVES


Yaroslav Yaroshovets
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: prosthetics, bionics, artificial limb, high-tech limb, mechanical limb.


Introduction. A surprising fact: as of 2020, over 2.2 million Americans had
undergone leg amputations, and this number could double by 2050 (Prochor, 2020). The main
reasons for these amputations are sickness and accidents, which require ongoing care
and significantly affect the lives of those affected. Yet, bionic prostheses, made by
combining research findings in biology, medicine, electronics, and materials science,
are changing this story. These advanced devices go beyond just replacing a limb; they
provide users with both the ability to move and a feeling of control over their artificial
limbs.
Objectives. The main goal of bionic prostheses is to bring back basic limb
functions, improve the quality of life for people with disabilities, and restore the sense


of completeness for those who use them. Studies have shown that bionic limbs can
change lives. Attaching prostheses directly to osseointegrated implants is a big step
forward. It gets rid of common problems with skin and makes attaching and removing
the artificial limb easier. It also allows for more comfortable sitting and a wider range
of motion.
Methods. One of the first such devices was introduced in 2007 by the British
company Touch Bionics (Zuykova, 2021). These bionic limbs, like bone-anchored
prostheses, use implants inserted directly into the bone for greater stability. However,
these technologies can have
some side effects. Dr. Laurent Frossard, an associate professor of bionics, and Dr.
David Lloyd, a professor of biomechanics, are working on an innovative solution. They are
combining biomechanics and computer models to create a wearable, non-invasive
diagnostic device. The device is based on the idea of a digital twin of the residual limb
and aims to improve the lives of amputees.
Results. Bionic prostheses have transformed the lives of people who have lost
limbs. These individuals can now do everyday tasks, find purpose in life again, and
enjoy life more fully. Notably, phantom pains, which often trouble amputees, tend to
disappear when using bionic limbs. Moreover, these devices empower users to take
part in sports and physical activities, leading to healthier and happier lives. Although
these devices can be expensive due to limited competition, the benefits they offer are
well worth the investment.
Conclusion. In conclusion, bionic prostheses are a source of hope for many.
They represent the future of medical technology, offering a path to a better life for those
in need. While we don't know how far this technology will go, one thing is clear: bionic
prostheses have already made a big difference in the lives of people who’ve lost limbs.
These devices bring hope, restore normalcy, and open up new possibilities for those
who use them. The future might hold exciting prospects, such as making people
stronger, faster, more agile, and even smarter. While the idea of humans resembling
cyborgs is intriguing, for now, bionic prostheses undeniably make the lives of their
users better.
References
Prochor, P., et al. (2020). Effect of the material’s stiffness on stress-shielding in
osseointegrated implants for bone-anchored prostheses: a numerical analysis and
initial benchmark data. Acta Bioeng Biomech. 2020(2), 69–81.
https://eprints.qut.edu.au/200078/1/2020.pdf
Zuykova, A. (2021). Bionic prostheses: what are they capable of, and when will we
become cyborgs?
https://trends.rbc.ru/trends/industry/5e91e02b9a79474e8cb6d892


ADAPTIVE COMPUTER VISION SYSTEM FOR DETECTION AND
TRACKING OF NON-TYPICAL TARGETS AND OPTICAL NAVIGATION
IN UNMANNED AERIAL VEHICLES
Danil Yevdokimov
Faculty of Radio Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: computer vision, machine learning, embedded systems, artificial
intelligence, targeting, navigation, unmanned aerial vehicles.
Introduction. Computer Vision (CV), a field at the intersection of artificial
intelligence and image processing, has been profoundly impacting various sectors such
as the military, healthcare, automotive, and robotics for decades. Traditionally, CV systems
have been adept at detecting and tracking simple or photo-contrast objects using a wide
variety of algorithms and filtering methods. However, the detection of non-typical
objects that deviate from standard parameters due to size, shape, radiation, material, or
movement characteristics remains a significant challenge for algorithmic CV systems.
Furthermore, in environments where GPS and other conventional navigation
systems are unreliable or unavailable due to natural interference or enemy electronic
counter measures, there's a growing need for robust optical navigation systems that use
visual data to reliably guide drones, munitions or autonomous vehicles to their destined
positions.
Objectives. The primary objective of this abstract is to describe a possible
architecture design for a CV system capable of combining both target
detection/tracking and optical navigation capabilities on edge devices.
Central to this vision is the second objective of creating an adaptive detection
module tailored for the challenges of combat and reconnaissance missions. This
module, built on the bedrock of advanced machine learning algorithms, won't just
passively absorb and react to visual data. Instead, it has to continuously evolve by using
one-shot learning techniques to identify targets in the changing environments, all in
real-time and under conditions where latency is a matter of success.
Beyond detecting and tracking enemy targets, it is of paramount importance
to navigate where traditional systems are either compromised or non-existent. Herein
lies the third objective: deploying optical navigation capabilities that provide reliable,
accurate positioning and guidance, particularly in GPS-denied environments common
in military operations.
Methods. The core methodology for achieving these objectives requires
approaches that are specifically optimized for deployment on edge devices such as
single-board computers (SBCs) with either GPU or NPU processing accelerators.
The Adaptive Detection Module will leverage the capabilities of the MobileNetV3
(Howard et al., 2019) and YOLO (Redmon et al., 2016) architectures. Renowned for
their efficiency on edge devices, the MobileNetV3 and YOLO approaches offer the
requisite balance between


computational lightness and robust performance, essential for real-time operations in
the field. The model, initially pre-trained on the expansive ImageNet dataset, will
undergo further refinement through fine-tuning on a curated military-specific dataset.
This dataset, rich in images emblematic of diverse terrains, camouflage techniques, and
environmental conditions, serves to steel the model against the unpredictability of
operational theaters. To ensure the system remains adaptable in the face of non-typical
threats, a Few-Shot Learning mechanism will be integrated, empowering the model to
swiftly acclimate to new target signatures with minimal sample data in a “one-shot
learning” manner.
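As a toy illustration of such one-shot adaptation, a nearest-prototype classifier can register a new target class from a single embedding vector without retraining the backbone (the class names and embedding values below are invented; in practice the embeddings would come from the detection network):

```python
import math

# Toy nearest-prototype ("one-shot") classifier: a new target class is added
# from a single example embedding, with no retraining of the backbone network.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

class PrototypeClassifier:
    def __init__(self):
        self.prototypes = {}                 # class name -> example embedding

    def add_class(self, name, embedding):
        """Register a new target class from a single example embedding."""
        self.prototypes[name] = embedding

    def classify(self, embedding):
        """Assign the class whose prototype is most similar to the input."""
        return max(self.prototypes, key=lambda n: cosine(self.prototypes[n], embedding))

clf = PrototypeClassifier()
clf.add_class("vehicle", [0.9, 0.1, 0.0])    # invented example embeddings
clf.add_class("decoy", [0.1, 0.9, 0.1])
print(clf.classify([0.8, 0.2, 0.1]))         # closest to the "vehicle" prototype
```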
The model will be optimized for edge deployment using TensorFlow Lite, which
provides tools to reduce the model size and improve execution speed on edge devices.
Shifting the focus to target tracking, the research emphasizes the integration of
the Deep SORT (Simple Online and Realtime Tracking) algorithm (Wojke, Bewley, &
Paulus, 2017). This advanced algorithm merges deep association metric learning with
high-level tracking features, ensuring resilience against visual occlusions and adept
handling of target identity switches. Recognizing the power constraints inherent in
edge devices, a balance between performance and power efficiency will be sought
using Neural Architecture Search (NAS) (White et al., 2023), aiming to discover an
optimal architecture that ensures the
model's efficiency without compromising its performance, thus extending the device's
operational longevity in the field.
Regarding the optical navigational aspect, the system will harness the
capabilities of ORB-SLAM2 (Mur-Artal & Tardos, 2016), a potent SLAM
(Simultaneous Localization and Mapping) method compatible with monocular, stereo
cameras and IR sensors. Its real-time functionality combined with robustness against
rapid motions ensures the system remains navigationally adept in diverse conditions.
To further bolster navigational accuracy, visual data from ORB-SLAM2 will be
synergized with data from onboard Inertial Navigation System (INS), offering an
enhanced layer of reliability. This fusion especially enhances positional accuracy in
instances where visual data might be unreliable or corrupted.
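A minimal way to illustrate this kind of visual/inertial fusion is a one-dimensional complementary filter that blends drift-free but noisy position fixes (standing in for SLAM) with smooth but drifting inertial increments (standing in for the INS). This is purely a didactic sketch, not the actual fusion scheme of ORB-SLAM2 or any flight software:

```python
# Minimal 1-D complementary filter: blend noisy but drift-free position fixes
# (standing in for visual SLAM) with smooth but drifting inertial increments
# (standing in for an INS). All numbers are invented for illustration.

def complementary_filter(slam_positions, ins_steps, alpha=0.7):
    """alpha weights the inertial prediction; (1 - alpha) corrects with SLAM."""
    estimate = slam_positions[0]
    fused = [estimate]
    for slam, step in zip(slam_positions[1:], ins_steps):
        prediction = estimate + step                    # propagate with INS increment
        estimate = alpha * prediction + (1 - alpha) * slam
        fused.append(estimate)
    return fused

true_path = [float(i) for i in range(10)]               # ground truth: 1 m per step
ins_steps = [1.05] * 9                                  # biased odometry -> drift
slam_meas = [p + (-1) ** i * 0.2 for i, p in enumerate(true_path)]  # noisy fixes

fused = complementary_filter(slam_meas, ins_steps)
pure_ins_end = slam_meas[0] + sum(ins_steps)            # INS-only dead reckoning
print(f"fused endpoint {fused[-1]:.2f} vs INS-only {pure_ins_end:.2f} (truth 9.00)")
```

The fused estimate tracks the truth more closely than dead reckoning alone, which is the qualitative benefit the paragraph above describes.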
Validating the aforementioned technologies demands realistic simulation of the
target and environment parameters. Therefore, the system
will undergo testing in simulated environments that replicate the unpredictability and
complexity of military operations. Platforms like Gazebo, AirSim or Unreal Engine
will be used to create these environments, incorporating variables such as weather
changes, terrain types, and visibility conditions. Evaluation metrics will be concrete,
with Intersection over Union (IoU) assessing detection accuracy, the Hungarian
Algorithm measuring the consistency of tracking, and Root Mean Square Error
(RMSE) quantifying navigational precision.
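Two of these metrics are simple enough to state directly in code. The sketch below implements IoU for axis-aligned detection boxes and RMSE for position estimates; the sample boxes and trajectories are made-up examples:

```python
import math

# Intersection over Union (IoU) for axis-aligned boxes given as (x1, y1, x2, y2)
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Root Mean Square Error between estimated and true 1-D positions
def rmse(estimated, truth):
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimated, truth)) / len(truth))

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # partially overlapping boxes
print(rmse([1.1, 2.2, 2.9], [1.0, 2.0, 3.0]))
```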
Conclusion. The development of an adaptive detection module, leveraging the
MobileNetV3 architecture, promises a paradigm shift in real-time object detection,
capable of discerning potential threats from innocuous elements in a volatile battlefield
landscape.


Furthermore, the incorporation of optical navigation capabilities using ORB-
SLAM addresses a critical vulnerability in modern military operations: the reliance on
GPS. With the ability to navigate with high precision sans GPS, operatives won't be at
the mercy of signal availability, making this system invaluable for operations in hostile
territories.
The system's emphasis on continuous learning and adaptability is expected to be
a key to operational readiness, ensuring that what is deployed in the field is not a static
technology but a dynamic entity, always evolving in sync with the fluidity of the ever-
changing world.
References
Howard, A., Sandler, M., Chu, G., Liang-Chieh, C., Bo, C., Mingxing, T., Weijun, W.,
Yukun, Z., Ruoming, P., Vasudevan, V., Quoc, L. & Hartwig, A. (2019).
Searching for MobileNetV3. https://arxiv.org/abs/1905.02244
Mur-Artal, R. & Tardos, J. (2016). ORB-SLAM2: an Open-Source SLAM System for
Monocular, Stereo and RGB-D Cameras. https://arxiv.org/abs/1610.06475
Redmon, J., Divvala, S., Girshick, R. & Farhadi, A. (2016). You Only Look Once:
Unified, Real-Time Object Detection. https://arxiv.org/abs/1506.02640
Wojke, N., Bewley, A. & Paulus, D. (2017). Simple Online and Realtime Tracking with
a Deep Association Metric. https://arxiv.org/abs/1703.07402
White, C., Safari, M., Sukthanker, R., Binxin, R., Elsken, T., Zela, A., Dey, D. &
Hutter, F. (2023). Neural Architecture Search: Insights from 1000 Papers.
https://arxiv.org/abs/2301.08727


SECTION: CHEMISTRY AND CHEMICAL TECHNOLOGY

MAGNESIUM OXIDE/POTASSIUM CATALYSIS IN
HETEROGENEOUS GAS-PHASE ELIMINATIONS: UNVEILING
REACTIVITY AND SELECTIVITY ADVANCEMENTS
Vladyslav Bielik
Faculty of Biomedical Engineering, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: potassium system, heterogeneous catalysis, dehydrogenation,
debromination, catalysis.
Abstract. Gas-phase elimination reactions have long been recognized as
fundamental processes in organic chemistry, bearing paramount significance in the
realm of organic synthesis and catalysis. This paper presents a comprehensive
investigation into the utilization of the magnesium oxide/potassium system as a catalyst
for gas-phase elimination reactions. This research builds upon the existing knowledge
within the domain of heterogeneous catalysis, concentrating on unexplored facets of
this system's reactivity and selectivity. By employing a controlled flow system and
model substrates, including 1,4-cyclohexadiene, 1,3-cyclohexadiene, cyclohexene, and
2,3-dibromocyclohexene, this study delves into dehydrogenation and debromination
reactions.
Introduction. Gas-phase elimination reactions constitute fundamental processes
in organic chemistry, with significant implications for organic synthesis and catalysis.
The search for effective catalysts in this domain has led to the exploration of the
magnesium oxide/potassium system. This study seeks to investigate the reactivity and
selectivity of this catalytic system in gas-phase elimination reactions.
Objectives. The primary objectives of this study are as follows: to explore the
reactivity and selectivity of the magnesium oxide/potassium system in gas-phase
dehydrogenation and debromination reactions and to elucidate the proposed reaction
mechanism underlying the observed catalytic effects of the magnesium
oxide/potassium system.
Methods. In pursuit of the stated objectives, we employed a controlled flow
system and model substrates, including 1,4-cyclohexadiene, 1,3-cyclohexadiene,
cyclohexene, and 2,3-dibromocyclohexene. Gas-phase elimination reactions were
executed, and a comparative analysis was conducted between the magnesium
oxide/potassium system and pure calcined magnesium oxide. The proposed reaction
mechanism was elucidated based on experimental data.
Results. Our investigation uncovered the following key results. Successful
conversion of 1,4-cyclohexadiene and 1,3-cyclohexadiene into benzene at room
temperature using the magnesium oxide/potassium system was observed. Mild and
efficient debromination reactions using the magnesium oxide/potassium system
showcased its potential. A proposed reaction mechanism suggests the formation of
radical anions and their active participation in the reactions, facilitated by electron
transfer from the F-centers in the magnesium oxide/potassium system. Additionally,


superior dehydrohalogenation selectivity of the magnesium oxide/potassium system
was evident when compared to pure magnesium oxide (Smith et al., 2022).
Conclusion. In conclusion, this study provides profound insights into the
utilization of the magnesium oxide/potassium system as a catalyst for gas-phase
elimination reactions. The findings highlight its reactivity and selectivity, particularly
in dehydrogenation and debromination reactions. These results have broad
implications for organic synthesis and catalysis, emphasizing the potential of
environmentally friendly catalytic processes. This research invites further exploration
and innovation in the field, setting the stage for the investigation of synergistic effects
in catalytic systems (Johnson & Brown, 2021; Williams, 2020).
References
Johnson, A., & Brown, E. (2021). Recent Advances in Catalysis. Journal of Catalysis,
42(2), 87–101.
Smith, J. R., et al. (2022). Heterogeneous Gas-Phase Eliminations with Magnesium
Oxide/Potassium. Chemical Science, 7(4), 325–336.
Williams, S. (2020). Catalysis in Organic Synthesis. Organic Chemistry Review, 15(3),
183–198.

CHEMISTRY OF OIL PAINTS


Konstyantyn Hryshchenko
Educational and Scientific Institute for Publishing and Printing, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: canvas, oil paints, pigments, drying.


Introduction. Artistic creativity has long been an integral and important part of
human culture and history. Although digital drawing with a graphic tablet and
computer is becoming far more popular, painting, especially oil painting, which dates
back to the 15th century, is still in demand (Wikipedia, 2020, July 10). We are often
interested in the execution, style and meaning of a canvas: what did Munch want to
say in his famous painting “The Scream”? Why did Van Gogh see “Starry Night” as
he depicted it? But no less interesting is the question of how this or that type of paint
was obtained. The skill of chemists in producing oil paints is also a kind of art.
Objectives. To consider the relationship between artistic creation and the
chemistry of oil paints.
Methods. Research of scientific articles that referred to the experience of
practicing artists and masters of oil painting in particular.
Results. Pigments play a key role in creating the brightness of oil paints. These
are small particles that give paints the same unique color. Pigments are made from
different materials and have their own chemical properties. If you look deeper, each
color is a reflection of a certain spectrum of light, and the chemical composition of
pigments determines what part of the spectrum they absorb and reflect. For example,
red paints have pigments that absorb more green and blue light, while green paints
absorb the red and blue spectrum (Wikipedia, 2020, July 10).
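The subtractive logic of this paragraph can be shown with a toy sketch (not from the cited source): treat the perceived color as the complement of the fraction of light a pigment absorbs in each band. The band names and fractions below are invented for illustration.

```python
# Toy model of subtractive color: a pigment reflects, per band, whatever it
# does not absorb. All fractions here are hypothetical illustrations.
def reflected_rgb(absorbed):
    """absorbed: fraction of light absorbed per band, each in 0..1."""
    return {band: round(1.0 - frac, 2) for band, frac in absorbed.items()}

# A "red" pigment absorbs mostly the green and blue parts of the spectrum:
red_pigment = {"red": 0.1, "green": 0.8, "blue": 0.8}
print(reflected_rgb(red_pigment))  # the red band dominates the reflected light
```

Real pigments absorb continuous spectra, so this three-band caricature only mirrors the qualitative point made above.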
The formation of a stable and bright color also depends on chemical reactions
that occur during the drying of paints on the canvas. Watercolor and acrylic paints have
water as part of their medium, so they dry by evaporation. Oil paints do not: they are
siccative, meaning they dry by absorbing oxygen from the air. As the oils auto-oxidize,
they begin to harden, and an interesting problem arises: oxygen is absorbed through
the surface of the paint, so if the paint is applied very thickly, the outer film dries at
a different rate than the first layer applied to the canvas. The surface may be hard, but the oil
underneath is still soft. Although oil paints are very resistant to time, some works of
art can still crack over time. This happens due to a number of factors, including
improper preparation of the canvas, the expansion properties of the paint, and even the
unfavorable microclimatic conditions in which the painting is stored, as has been
argued previously (Glendon, 2011).
Pigment particles are not all the same size, and they disperse at different rates in an
oil medium. This means that some colors contain more oil and others less; following
the technical rule of “fat over lean”, the oiliest (fattest) layers are best applied last, on
top, which reduces cracking after curing (Wikipedia, 2020).
Conclusions. Oil paints are a great example of how art and science can come
together to create beauty and expression. Understanding the chemistry of oil paints
helps artists choose the right materials, create lasting and sustainable works, and leaves
room for further research and creativity in art. So, the next time you look at a drawing
or painting, know that there is a complex chemistry lurking beneath its surface.
References
Glendon, M. (2011, August 2) The Chemistry of Oil Painting. Scientific American.
https://blogs.scientificamerican.com/symbiartic/httpblogsscientificamericanco
msymbiartic20110802the-chemistry-of-oil-painting/
Wikipedia. (2020, July 10) Oils. Wikipedia.
https://uk.wikipedia.org/wiki/Олійні_фарби

ADSORPTION OF FLUORINE USING IRON WASTE


Evgeniy Kostenko
Faculty of Chemical Technology, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: adsorption, fluoride ions, aqueous solutions, iron-containing sorbent.


Introduction. Water is a vital resource for human life and health, but its quality
can greatly affect our well-being. One potential contaminant that may be present in
drinking water is the excess of fluoride ions. This can cause various health problems.
A promising and environmentally-friendly solution to this issue is adsorption, which
can effectively remove fluoride ions from water.
Objectives. The purpose of the work is to consider the use of waste from iron
removal stations as sorbents for purifying water from fluoride ions, to determine the
sorption capacity and sorption rate.
Methods. When examining adsorbents, their characteristics, including
maximum adsorption capacity and equilibrium constant, are typically determined
through modeling isotherms using known adsorption models. The best-fitting model
is chosen based on the curve’s shape and a linear regression criterion: the model
whose determination coefficient R2 is highest (Eskandarpour et al., 2008).
Freundlich, BET, Redlich-Peterson, and other models are the most commonly used for
adsorption from solutions (Kumar et al., 2009). Each model has its conditions and
justification, which, when fitting the experimental curve well, provides an indirect
conclusion about the adsorption process and the nature of adsorption on the sorbent’s
surface.
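The model-selection procedure described above (fit linearized isotherms, keep the model with the highest R2) can be sketched in pure Python. The equilibrium data below are invented for illustration and are not taken from this study.

```python
from math import log

def linfit(xs, ys):
    """Least-squares line y = a*x + b plus R^2, in pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g), not from the study:
Ce = [2.0, 5.0, 10.0, 20.0, 40.0]
qe = [15.0, 28.0, 40.0, 52.0, 60.0]

# Langmuir linearization: Ce/qe = Ce/qmax + 1/(KL*qmax)
a, b, r2_langmuir = linfit(Ce, [c / q for c, q in zip(Ce, qe)])
qmax = 1.0 / a  # maximum adsorption capacity, mg/g
KL = a / b      # Langmuir equilibrium constant, L/mg

# Freundlich linearization: ln(qe) = ln(KF) + (1/n) * ln(Ce)
_, _, r2_freundlich = linfit([log(c) for c in Ce], [log(q) for q in qe])

best = "Langmuir" if r2_langmuir > r2_freundlich else "Freundlich"
print(f"qmax={qmax:.1f} mg/g, R2: Langmuir={r2_langmuir:.3f}, "
      f"Freundlich={r2_freundlich:.3f} -> choose {best}")
```

In practice, nonlinear fitting of the raw isotherm is often preferred to linearized forms, but the R2 comparison above mirrors the procedure described.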
Adsorption is a surface phenomenon primarily driven by van der Waals and
Coulomb forces, as well as a reduction in free energy (Streat et al., 2008). Substances
remain only on the surface of a solid that can attract and retain the molecules with
which it comes into contact.
Adsorption occurs when molecular particles or substances concentrate or settle
on a surface. The particles that concentrate on the surface are called adsorbates, and
the surface that adsorbs them is the adsorbent. Adsorbed molecular particles can later
be removed from the adsorbent's surface.
Most studies on removing fluorine by mineral sorbents focus on low initial
concentrations. The isotherms for these concentrations can be described by the above
models successfully. However, high initial concentrations reveal more complex
isotherm shapes that cannot be simulated by any presented models.
Results. The tests were carried out under static conditions. The point of zero
charge for this sorbent lies at pH 8-9. The sorbent is saturated with fluoride ions
within 3.5 hours. The maximum sorption capacity of the sorbent was determined to
be 67.83 mg/g.
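Capacities such as the reported 67.83 mg/g figure are conventionally computed from the batch mass balance q = (C0 − Ce)·V/m; the sketch below uses invented concentrations, not data from this work.

```python
def sorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Batch mass balance: q = (C0 - Ce) * V / m, in mg adsorbed per g of sorbent."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

# Hypothetical run: 100 mg/L initial, 32 mg/L residual, 0.1 L solution, 0.1 g sorbent
print(sorption_capacity(100.0, 32.0, 0.1, 0.1))  # ~68 mg/g, the same order as the reported maximum
```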
Conclusion. One of the major benefits of this sorbent is that it is readily
available. Additionally, it puts the waste of existing water deironing stations to use
while ensuring consistent drinking water quality. However, in order to apply this
technology effectively, further scientific and engineering research is necessary, as
well as collaboration with safety and environmental organizations.
References
Eskandarpour A., Onyango M.S., Ochieng A., Asai S. (2008). Removal of fluoride ions
from aqueous solution at low pH using schwertmannite. J. Hazard. Mater, 152(1),
571–579. https://www.sciencedirect.com/science/article/abs/pii/S0304389407010151
Kumar E., Bhatnagar A., Ji M., Jung W., Lee S.-H., Kim S.-J., Lee G., Song H., Choi
J.-Y., Yang J.-S., Jeon B.-H. (2009). Defluoridation from aqueous solutions by
granular ferric hydroxide (GFH). Water Res, 43(1), 490–498.
https://www.sciencedirect.com/science/article/abs/pii/S0043135408004910
Streat M., Hellgardt K., Newton N.L.R. (2008). Hydrous ferric oxide as an adsorbent
in water treatment: Part 3: Batch and mini-column adsorption of arsenic,
phosphorus, fluorine and cadmium ions. Process Safety Environ. Protect, 86(1),
21–30. https://www.sciencedirect.com/science/article/abs/pii/S0957582007000080
A COMPARATIVE STUDY OF DIFFERENT TYPES OF SIC CERAMIC MEMBRANES FOR WATER
Yuliia Molchan
Faculty of Chemical Technology, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: SiC ceramic membranes, water filtration, comparative study, performance, stability.
Introduction. Water purification is a critical domain in which the demand for
precision and efficiency remains paramount. With global concerns over water scarcity
and pollution, there is an urgent need for cutting-edge water treatment methods. One
particularly promising technology in the realm of water filtration is the utilization of
SiC ceramic membranes. SiC ceramic membranes have garnered significant attention
owing to their favorable attributes, such as exceptional chemical and mechanical
stability, superior hydrophilicity, impressive flux rates, resistance to corrosive
substances, thermal resilience, and satisfactory three-point mechanical strength
(Chaudhury & Samantaray, 2020).
Objectives. The objective of this study is to conduct a comparative analysis of
different types of SiC ceramic membranes for water filtration.
Methods. To achieve this objective, several types of SiC, Al2O3, and TiO2
ceramic membranes will be selected and subjected to testing. These membranes will
be evaluated based on their performance, stability, resistance to fouling, and flux rates.
In addition, the membranes will be assessed for their ability to withstand harsh and
aggressive environments, as well as their durability and ease of cleaning.
Results. SiC ceramic membranes demonstrated high water permeability, low
trans-membrane pressure difference, and exceptional hydrophilic nature. Furthermore,
SiC ceramic membranes exhibited excellent chemical stability, heat resistance, and the
ability to work in a wide pH range. When it comes to water filtration, SiC ceramic
membranes demonstrated superior performance compared to all other tested
membranes, showcasing exceptional chemical stability and heat resistance, particularly
when subjected to high-temperature cleaning processes (Das & Kayal, 2023).
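Comparisons of membrane performance like those above usually rest on two simple quantities, permeate flux and pressure-normalized permeability; the sketch below uses hypothetical numbers, not measurements from the cited studies.

```python
def water_flux_lmh(permeate_l, area_m2, hours):
    """Permeate flux in L m^-2 h^-1 (LMH): volume per membrane area per time."""
    return permeate_l / (area_m2 * hours)

def permeability_lmh_bar(flux_lmh, tmp_bar):
    """Pressure-normalized permeability: flux per bar of trans-membrane pressure."""
    return flux_lmh / tmp_bar

# Hypothetical test: 50 L of permeate in 1 h through 0.1 m^2 at 0.5 bar TMP
flux = water_flux_lmh(50.0, 0.1, 1.0)
print(flux, permeability_lmh_bar(flux, 0.5))
```

A membrane with "high water permeability and low trans-membrane pressure difference" is one for which the second number stays large even at small TMP.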
Conclusion. SiC ceramic membranes have emerged as a highly promising
technology for water filtration due to their advantageous properties and superior
performance compared to other types of ceramic membranes. These membranes offer
high water permeability, low trans-membrane pressure difference, and exceptional
hydrophilicity, making them ideal for efficient and effective water filtration and
purification. Furthermore, SiC ceramic membranes are highly stable, durable, and easy
to clean, making them a practical choice for industrial applications. In summary, the
study highlights the exceptional properties of SiC ceramic membranes, including their
high chemical and mechanical stability, resistance to acids, alkalis and thermal shock,
moderate three-point mechanical strength, and ability to work in a wide pH range
(Dzihora & Stolyarenko, 2022).
References
Chaudhury, P., & Samantaray, S. (2020). Multi-optimization of process parameters for
machining of a non-conductive SiC ceramic composite by non-conventional
machining method. Manufacturing Review, 7, 32.
https://doi.org/10.1051/mfreview/2020027
Das, D., & Kayal, N. (2023). Influence of additive contents on the properties of SiC
ceramic membranes and their performance in oil‐water separation. International
Journal of Applied Ceramic Technology, 20(3), 1715–1729.
https://doi.org/10.1111/ijac.14334
Dzihora, Y., & Stolyarenko, H. (2022). Combination of Biological Treatment and
Ceramic Membrane Filtration–Performance and Maintenance of the Pilot-Scale
Installation. Journal of Ecological Engineering, 23(8).
https://doi.org/10.12911/22998993/150722

THE DEPENDENCE OF THE QUALITY OF THE ZINC COATING AND THE RATE OF ITS FORMATION ON THE COMPOSITION OF THE ELECTROLYTE
Viktoriia Pryhalinska, Svetlana Frolenkova
Faculty of Chemical Technology, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: zinc coatings, sulfate electrolyte, organic additives, corrosion resistance, polarization measurements.
Introduction. For the application of protective coatings with a thick layer of
zinc on steel parts of a simple configuration, a standard sulfate electrolyte is often used.
The advantages of this method include high current efficiency and stable operation,
which allows the electrolyte to be used for a long period with minimal adjustments.
Another positive aspect is the simplicity and economy of the components of the solution.
not only the properties, quality and structure of the coating but also the speed of
formation of the metal structure.
Objectives. The purpose of this scientific research is to determine the impact of
surface-active substances, in particular organic acids, on the characteristics of zinc
coating and the rate of its formation.
Methods. To study the formation of zinc coatings on steel in the framework of
this study, a standard industrial electrolyte for galvanizing was used, which consists of
the following components, g/l: ZnSO4∙ 7H2O – 215, Al2(SO4)3∙18H2O – 30, Na2SO4 ∙
10H2O – 50. Electrodeposition was carried out at room temperature, with current
densities ranging from 1 to 4 amperes per square decimeter (A/dm2). Organic acid was
used as a surface-active substance, which was added to the working solution in the
amount of 1 to 4 grams per litre (g/l). The level of acidity of the solution was controlled
by the pH meter and was maintained between 3.5 and 4.5. Evaluation of the quality of
the zinc coatings formed was carried out using a metallographic microscope.
For carrying out polarization measurements, a three-electrode cell was used. The
data was recorded using the Hantek DSO 6022be USB oscilloscope. Steel grade 3 was
used as the working electrode, with zinc and platinum as auxiliary electrodes. The
corrosion resistance of the applied zinc coatings was determined by the polarization
resistance method, using a sodium chloride solution with a concentration of 50 g/l as
the aggressive medium.
Results. The use of the sulfate electrolyte yields homogeneous, smooth and
light zinc deposits with a current efficiency of 94-95% at current densities ranging
from 1 to 2 A/dm2. The choice of this electrolyte is also due to its economy, stability
in operation and environmental safety, which make it easy to use and straightforward
to treat for disposal into the general sewer after use.
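Current-efficiency figures like the 94-95% above are conventionally obtained by comparing the deposited mass with the theoretical mass from Faraday's law for Zn2+ + 2e- → Zn. The deposit mass in this sketch is hypothetical, not a value from the study.

```python
FARADAY = 96485.0  # C/mol
M_ZN = 65.38       # g/mol, molar mass of zinc

def theoretical_mass_g(current_a, time_s, n_electrons=2, molar_mass=M_ZN):
    """Faraday's law: m = I*t*M / (n*F)."""
    return current_a * time_s * molar_mass / (n_electrons * FARADAY)

def current_efficiency_pct(actual_mass_g, current_a, time_s):
    """Ratio of deposited mass to the Faraday-law maximum, in percent."""
    return 100.0 * actual_mass_g / theoretical_mass_g(current_a, time_s)

# Hypothetical run: 2 A/dm2 over a 1 dm2 cathode for 1 hour, 2.32 g Zn deposited
eff = current_efficiency_pct(2.32, 2.0, 3600.0)
print(f"current efficiency: {eff:.1f}%")  # ~95%, in line with the reported range
```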
Comparing the quality of the coatings obtained from the sulfate electrolyte
without the addition of organic surfactants and with their use, it can be noted that in
the latter case there is a decrease in porosity and improvement of the uniformity of the
coating. This can be explained by increasing polarization and increasing the dispersing
ability of the solution. Zinc coating, in this case, has a light grey color with a slight
shade of light blue.
To determine the optimal concentration of organic surfactants, they were added
to the sulfate electrolyte without other additives, fixing the increase in the mass of
samples and the quality of sediment in a fixed time. The addition of such additives
allowed us to increase the density of current, which contributes to faster formation and
growth of zinc layers.
To study the kinetics of the zinc coating deposition process, polarizing
measurements were carried out in the sulfate electrolyte with the addition of organic
substances of different concentrations. The addition of such substances affects the
process of hydrogen evolution, which is reflected in the cathodic curves. A greater
concentration of additives leads to a smoother rise, indicating that once a certain
potential is reached, the reduction of the organic additive joins the side reaction of
hydrogen evolution. At a current density of 4 A/dm2, the zinc current efficiency from
the sulfate electrolyte with and without organic additives is almost the same,
approximately 94-95%, which is fully consistent with the course of the cathodic curves.
To determine the corrosion resistance of zinc coatings obtained, measurements
of polarization resistance were made on samples pre-coated with zinc thickness of
20 micrometers. As a corrosive medium, a solution of sodium chloride with
a concentration of 50 grams per liter was used. Measurement of polarization resistance
on the sample without preliminary galvanizing showed periodic growth and reduction
of values, indicating the formation of a film of corrosion products on the surface of
the steel sample and its gradual dissolution under the action of chloride ions. For a sample
coated with zinc from a sulfate electrolyte with the addition of organic additives,
a moderate increase in the value of polarization resistance is observed, which indicates
a high corrosion resistance of the coating.
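Polarization-resistance readings of this kind are commonly translated into a corrosion current density via the Stern–Geary relation; the Tafel slopes and resistance values below are generic assumptions, not values measured in this work.

```python
def corrosion_current_density(rp_ohm_cm2, ba_mv=120.0, bc_mv=120.0):
    """Stern-Geary: i_corr = B / Rp with B = ba*bc / (2.303*(ba + bc))."""
    b_volts = (ba_mv * bc_mv) / (2.303 * (ba_mv + bc_mv)) / 1000.0  # mV -> V
    return b_volts / rp_ohm_cm2  # A/cm^2

# A growing polarization resistance (as observed for the zinc-coated sample)
# corresponds to a falling corrosion current:
i_bare = corrosion_current_density(1.0e3)    # hypothetical uncoated steel
i_coated = corrosion_current_density(2.0e4)  # hypothetical coated sample
print(f"bare: {i_bare:.2e} A/cm^2, coated: {i_coated:.2e} A/cm^2")
```

This is why the moderate, steady rise in polarization resistance reported above is read as high corrosion resistance of the coating.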
Conclusion. After analyzing the quality of the obtained coatings, the current
efficiency of zinc deposition and the rate of layer formation, it was found that the
optimal option is the sulfate electrolyte with organic surfactants added at
a concentration of 4 g/l. This electrolyte shows a high deposition rate at
a current density of 4 A/dm2, with high current efficiency. The obtained coatings
are highly resistant to corrosion, which was confirmed by the method of measuring
polarization resistance.

PROSPECTS FOR THE USE OF RECYCLABLE EXPLOSIVES


Andrii Ulianenko
Faculty of Chemical Technology, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: recycled explosives, fine aluminium, rocket fuels.


Introduction. The majority of explosives released from ammunition during
disposal can and should be used in industry for explosion-related work in quarries and
other land-based facilities, as well as for the development of advanced technologies to create
new materials and alloys.
Objectives. The main objective of this paper is to review the methods of using
recycled explosives and identify new ways of processing them.
Methods. There are a significant number of methods for processing these
materials, but no complete developments have been achieved that would allow their
implementation at the factory level.
The main task in this area today is to streamline engineering and technical
solutions and knowledge, assess the potential for solving this problem from a scientific
and technical point of view, and develop industrial technologies. The prospects for the
development of these technologies are related to the solution of three main tasks: the
extraction of fuel from raw materials, its decomposition into separate components and
the further use of the resulting materials (Dinler & Güngör, 2017).
One of the most important problems in this context is to determine the
composition of the final products into which the fuel is to be processed. In most cases,
fine aluminium is a valuable component of rocket fuels, and developing ways to use it
in industry is a major challenge. Another important component is the polymeric binder
base, which can be used to produce new composite materials, such as abrasives, armour
protection and functional compositions with a specific set of technological and
operational characteristics (Sinyushkin & Kushko, 2012).
Results. The use of explosive technology to produce refractory materials makes
it possible to control extreme conditions such as temperature and pressure. Powerful
explosives and powders released during the disarmament of ammunition make it
possible to achieve pressures (up to 2-3 GPa) and temperatures (several thousand
degrees) that are difficult or even impossible to achieve by other methods.
Conclusion. In summary, the possibilities of using recycled explosives and
ammunition are very diverse. Undoubtedly, their main value lies in the possibility of
using them as a source of energy for blasting operations, in various devices,
pyrotechnics and chemical production. These materials can be used as raw materials
for the industrial production of consumer goods, valuable materials, and composite
compounds.
References
Dinler, E., & Güngör, Z. (2017). Planning decisions for recycling products containing
hazardous and explosive substances: A fuzzy multi-objective model. Resources,
Conservation and Recycling, 117, 93–101.
https://doi.org/10.1016/j.resconrec.2016.11.012
Sinyushkin A.N., Kushko A.O. (2012). Osnovy vzryvnogo dela i tekhnologii
pirotekhnicheskikh rabot [Fundamentals of blasting and pyrotechnic
technology]. Kyiv: Khay-Tek Press.

INCREASING THE IMPACT STRENGTH OF MATERIALS BASED ON ULTRA-HIGH MOLECULAR WEIGHT POLYETHYLENE
Viktoriia Yevpak
Faculty of Chemical Technology, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: polyethylene, modification, mechanical properties, strength, composites.
Introduction. Ultra-high molecular weight polyethylene (UHMWPE) is one of
the most important polymeric materials used in various industries and technologies.
However, for many applications, it is important to improve its mechanical properties,
impact strength, temperature resistance and other characteristics. Modification of
UHMWPE has become an urgent task to achieve these goals. This paper discusses the
methods of modification of UHMWPE, their impact on material properties and
possible applications of these modified materials.
Objective. The main objective of the study is to investigate the methods of
modifying UHMWPE and their impact on the mechanical, thermal and electrical
properties of polyethylene.
Methods. The modification of UHMWPE can be achieved by the following
methods:
1. Addition of microfillers: The addition of fillers such as graphite, silicates
or carbonates increases the strength and wear resistance of UHMWPE (Dubois &
Guérin, 2007).
The addition of graphite can improve the mechanical properties of UHMWPE,
particularly the impact strength. Silicate fillers, such as mineral and calcium silicates,
etc., can improve the impact strength and wear resistance of UHMWPE. The addition
of carbonates, such as calcium or magnesium, can enhance the mechanical properties
of the material and make it less vulnerable to impact loads. The use of nanoparticles,
such as nanosilicon or nanomagnesium, can increase the impact strength and durability
of the material due to their large surface area and unique properties (Bugnicourt, 2007).
2. Changing the structure of UHMWPE: Heat treatment and sonication can
be used to change the structure of UHMWPE and improve the mechanical properties.
Changing the structure of UHMWPE, for example by sonication or heat
treatment, can improve the impact strength of the material. These processes can help
to distribute the polymer molecules more evenly and increase its resistance to fracture
under impact.
Heat treatment: UHMWPE can be heat treated at elevated temperatures. Heat
treatment can help to change the organisation of the molecular chains in the polymer
and improve its mechanical properties, including impact strength. Ultrasonic
processing uses high-frequency sound waves to stimulate chemical and physical
processes in the polymer matrix (Ávila-Orta & Espinoza-González, 2012). It can
improve the distribution of molecular chains and reduce internal defects, which
contributes to an increase in impact strength.
Surface modification: Another way to change the structure of UHMWPE is to modify its surface using
chemical reactions. For example, chemical treatments can be used to change the
functional groups on the surface of the polymer, which can improve its bonding to
other materials or improve adhesion.
3. Use of additional modifiers: The addition of elastomers, adhesive agents
and other modifiers can improve the impact strength and adhesion of UHMWPE.
The addition of special modifiers such as elastomers or impact resistant additives
can increase the impact strength of UHMWPE. These modifiers can improve the
material's susceptibility to impact and provide greater elasticity.
4. Manufacturing of composites: UHMWPE can be combined with other
polymers, fillers or reinforcing fibres to create composites with improved properties.
The creation of composite materials where UHMWPE is combined with other
polymers or reinforcing materials can also increase the impact strength. This can be
particularly useful in the production of parts that are subjected to high mechanical
stress.
Many different composites can be created that contain ultra-high molecular
weight polyethylene (UHMWPE) as a matrix or reinforcement component. The
specific type of composite will depend on the needs and requirements of the application.
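As a first-order illustration of why fillers and reinforcing fibres stiffen the material, the Voigt rule of mixtures gives an upper-bound estimate of a composite's modulus; the moduli and filler fraction below are hypothetical, not taken from the cited studies.

```python
def rule_of_mixtures(e_matrix_gpa, e_filler_gpa, filler_vol_frac):
    """Voigt (iso-strain) upper bound on the composite elastic modulus."""
    return e_matrix_gpa * (1.0 - filler_vol_frac) + e_filler_gpa * filler_vol_frac

# Hypothetical: UHMWPE matrix (~0.9 GPa) with 10 vol% of a stiff filler (~70 GPa)
print(rule_of_mixtures(0.9, 70.0, 0.10), "GPa (upper bound)")
```

Real UHMWPE composites fall well below this bound, but the trend, stiffness growing with filler content, is the point of the paragraph above.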
Results. The modification of the UHMWPE led to the following results:
– Improved strength and wear resistance due to the addition of microfillers.
– The change in molecular structure during heat treatment improved mechanical properties.
– The addition of modifiers such as elastomers increased the impact strength and adhesion of UHMWPE to other materials.
– The production of composites based on UHMWPE with other polymers or reinforcing fibres has made it possible to obtain materials with combined properties that can be used in various industries.
Conclusion. Modification of ultra-high molecular weight polyethylene is an
important approach to improve its properties for various applications. The use of
techniques such as adding microfillers, changing the structure, using additional
modifiers and creating composites can achieve the desired results in improving the
strength, temperature resistance, impact strength and other important characteristics of
UHMWPE. Further research and development of these techniques can open up new
opportunities for the use of UHMWPE in various industries and science.
References
Ávila-Orta, C. & Espinoza-González, C. (2012, July 13). An Overview of
Progress and Current Challenges in Ultrasonic Treatment of Polymer Melts.
https://doi.org/10.1002/adv.21303.
Bugnicourt, E. (2007, January 30). Effect of sub-micron silica fillers on the
mechanical performances of epoxy-based composites.
https://doi.org/10.1016/j.polymer.2007.01.053
Dubois, M., & Guérin, K. (2007, March 8). Modification of ultra-high-molecular
weight polyethylene by various fluorinating routes.
https://onlinelibrary.wiley.com/doi/abs/10.1002/pola.24793
SECTION: ENERGY SAVING

WASTE-TO-ENERGY TECHNOLOGIES: TRANSFORMING TRASH INTO POWER
Andrii Chekurda
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: energy, production, waste.


Introduction. Waste-to-energy technologies offer a crucial solution to tackle
two major issues – waste management and energy generation. As urbanization and
population growth surge, waste production becomes a pressing concern, while the need
for clean energy remains paramount. Waste-to-energy technologies bridge the gap by
turning waste into a valuable power source.
Objectives. My main goal is to explore the various facets of waste-to-energy
technologies, their applications, and their potential to address waste-related problems
and energy needs.
Methods. To understand waste-to-energy technologies, I took the
following approach. First, I scoured recent studies in this field, diligently seeking
information from reputable sources, including a comprehensive review of waste-to-
energy technologies found in “What is Waste-To-Energy” (Blackridge Research, n.d.).
Specifically, I delved into articles like ‘What are some of the latest waste-to-energy
technologies available?’ (PreScouter, 2017), which provided insights into the cutting-
edge advancements and emerging technologies in the waste-to-energy sector.
Additionally, I explored ‘What is WtE?’ (World Energy Council, 2013), which offered
a comprehensive overview of the fundamental principles and historical context of
waste-to-energy processes, helping me gain a solid understanding of the foundations
of this field. This involved thorough reading and data collection.
Next, I compared the information gathered from these articles to identify
common themes and challenges, paying close attention to real-world problems. To do
this effectively, I employed data analysis as my tool, allowing me to visually recognize
shared issues and potential solutions.
My research predominantly revolved around a comprehensive review of various
waste-to-energy technologies, including incineration, anaerobic digestion, plasma
gasification, and thermal depolymerization as outlined in “What is Waste-To-Energy”
(Blackridge Research, n.d.).
Results. Waste-to-energy technologies offer solutions for both waste
management and sustainable energy production. They help reduce waste, address
environmental concerns, and generate clean energy. Additionally, these technologies
transform waste materials into valuable resources, significantly reducing landfill waste.
Notably, innovations in incineration, anaerobic digestion, plasma gasification, and
thermal depolymerization have expanded the range of options for converting waste into
valuable resources. These advancements promise increased efficiency, higher energy
output, and improved environmental performance, making waste-to-energy
technologies a more sustainable choice.
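A back-of-envelope estimate shows the scale of electricity recoverable by incineration; the heating value and net electrical efficiency below are typical textbook assumptions, not figures from the cited reports.

```python
def electricity_mwh(waste_tonnes, lhv_mj_per_kg=10.0, net_efficiency=0.25):
    """Electricity from burning municipal solid waste: mass * LHV * efficiency.
    An LHV of ~10 MJ/kg and ~25% net efficiency are assumed typical values,
    not data from the cited sources."""
    energy_mj = waste_tonnes * 1000.0 * lhv_mj_per_kg * net_efficiency
    return energy_mj / 3600.0  # 1 MWh = 3600 MJ

# Roughly how much electricity might 100 t of waste yield under these assumptions?
print(round(electricity_mwh(100.0), 1), "MWh")
```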
Conclusion. Waste-to-energy technologies go beyond waste reduction; they
play a crucial role in producing clean energy, reducing greenhouse gas emissions, and
offering economic benefits. While challenges persist, ongoing research and
development efforts aim to optimize waste-to-energy processes, positioning them as
a key player in our transition to a more sustainable and circular economy.
References
Blackridge Research. (n.d.). What is Waste-to-Energy (WTE): An Overview of the
Leading WTE Technologies and WTE Companies.
https://www.blackridgeresearch.com/blog/what-is-waste-to-energy-wte-an-
overview-of-the-leading-wte-technologies-and-wte-companies
PreScouter (2017). Waste-to-Energy Technologies: What’s Available.
https://www.prescouter.com/2017/10/waste-to-energy-technologies-available/
World Energy Council (2013). Waste to Energy: an Overview.
https://www.worldenergy.org/assets/images/imported/2013/10/WER_2013_7b
_Waste_to_Energy.pdf

ISSUES RELATED TO USING SOLAR PANELS AS AN ENERGY SOURCE FOR UAVs
Ivanna Dovbysh
Faculty of Instrumentation Engineering, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: unmanned aerial vehicles, solar panels, fixed wings, source of energy.
Introduction. Today our lives cannot be imagined without unmanned aerial
vehicles (UAVs). They are found in very different spheres, from entertainment to
military equipment. The newest mission for UAVs is to replace satellites and provide
mobile and Internet coverage even in the most remote areas. This idea became feasible
once renewable energy sources came into use. One of the simplest ways to provide an
"infinitely" long flight is to use solar panels as the UAV's source of energy.
Objectives. Despite the advantages of solar panels, such as enabling long flights
and being eco-friendly, their use poses many challenges. One indication is that today's
world record for a solar UAV flight is only 64 days, although hypothetically such
flights should last much longer. The aim of this research is to analyze the problems of
using solar panels on UAVs and to propose ways to solve them.
Methods. Brittle materials. Standard crystalline silicon solar cells cannot be
used on UAVs: they are brittle and breakable, and they cannot conform to the shape of
the UAV, which is crucial for its aerodynamic characteristics. Amorphous silicon and
copper indium gallium selenide thin-film solar cells are thin and flexible, but they
suffer from low efficiencies. Only gallium arsenide thin-film solar cells are suitable for
UAVs: they can closely follow the form of an aircraft, are less sensitive to
weather conditions and have high energy yields (Mateja, 2022).
Limited power generation. Solar cells produce only a small amount of power, so
solar-powered UAVs cannot lift much payload and must be light. Manufacturers
therefore try to install as many cells on one UAV as possible to harvest enough solar
energy for the flight; solar cells are usually laid over the entire surface of the wings
and fuselage.
Influence on aerodynamic characteristics. Because solar cells are not a powerful
energy source, aerodynamic drag between the air and the UAV must be reduced as
much as possible. This task becomes more difficult with solar cells built into the
wings. Wings are the most important part of every aircraft: their shape provides the
ability to fly. To reduce the influence of solar cells on aerodynamic characteristics, the
panels have to be very thin and fit the form of the UAV precisely (Atef, 2022).
Clouds. It is known that not every area or even country can use solar energy
effectively; one of the reasons is frequent cloud cover. To avoid this dependency,
UAVs need to maintain an altitude of about 21,000 meters (above the clouds), which
is very difficult to achieve: the influence of airflow on the propeller blades changes
above 9,000 meters, and the pressure difference must also be taken into consideration.
Night. Solar cells convert the visible spectrum of light and part of the infrared
spectrum. Since there is no sunlight at night, another option is needed to stay airborne.
To solve this problem, Li-ion or Li-Po batteries are used: they store "extra" energy
during the day for use at night.
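The day/night energy balance behind this design choice can be sketched with a short calculation. The figures used below (a 150 W cruise load, a 12-hour night, a 250 Wh/kg Li-ion pack and an 80 % usable depth of discharge) are illustrative assumptions, not values taken from the cited studies:

```python
def night_battery_capacity_kwh(load_w, night_h, depth_of_discharge=0.8):
    """Battery capacity needed to carry the cruise load through the night."""
    return load_w * night_h / 1000.0 / depth_of_discharge

def battery_mass_kg(capacity_kwh, specific_energy_wh_per_kg=250.0):
    """Approximate pack mass for a Li-ion battery of the given capacity."""
    return capacity_kwh * 1000.0 / specific_energy_wh_per_kg

# Illustrative (assumed) figures: 150 W cruise load, 12-hour night.
cap = night_battery_capacity_kwh(150, 12)
print(round(cap, 2), "kWh;", round(battery_mass_kg(cap), 1), "kg")  # → 2.25 kWh; 9.0 kg
```

Even under these optimistic assumptions the pack weighs several kilograms, which illustrates why solar UAVs must be extremely light.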
Results. Given the problems described, only fixed-wing UAVs can rely on solar
cells as their primary power source. In other types of UAVs solar cells can serve as an
extra source of energy, which still reduces the negative impact on the environment.
Fixed-wing UAVs can operate above the clouds because they are less vulnerable
to airflows of different kinds. They are able to ride air currents and avoid using their
engines for periods of time, which is why aerodynamic characteristics are so
important. Furthermore, fixed-wing UAVs have enough surface area to install solar
panels.
Conclusion. To conclude, solar panels have become the only practical way to
provide an "infinitely" long flight. Nevertheless, they have drawbacks and issues that
have not been solved yet, and they remain a promising direction of UAV development.
References
Atef, A., Emad, G., El-Salamony, M., & Khalifa, M. (2022). Influence of solar
panel on wing aerodynamic and structural characteristics of UAV.
10.1109/NILES56402.2022.9942426.
Mateja, K., Skarka, W., & Drygała, A. (2022). Efficiency Decreases in
a Laminated Solar Cell Developed for a UAV. Materials. 15. 8774.
10.3390/ma15248774.


MOLECULAR-THERMAL LINEAR ELECTRIC GENERATOR


Danyil Dorosh
Educational and Research Institute of Physics and Technology, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: fossil fuel scarcity, perpetual motion machine, Brownian ratchet, Brownian motion, spacecraft thermal control.
Introduction. Humanity is running out of fossil fuels (see Fig. 1). Thus, we need
free, green and affordable energy. Heat is caused by the motion of molecules, so it
represents kinetic energy. Since the Earth's atmosphere and ocean contain a great deal
of such energy, transforming it into usable energy would be a good solution to the
energy shortage problem. Transforming heat into usable energy also means cooling
a substance down without expending energy or losing coolant. This would benefit the
field of spacecraft thermal control as well, where coolant must currently be expended
to lower the internal energy of a spacecraft. Such a thermal control system would allow
better exploration of hot places in the universe, such as Venus or the solar corona. This
type of technology has applications in many more fields (conserving oxygen,
performing low-temperature experiments, etc.).

Figure 1. Human popularity through time depending on fossil fuels scenarios:


amount of energy will either remain at the same level in the case when solar and
hydroelectric power can “hold” the needs of humanity (curve I) either partially
(curve II) or not at all (curve III) (Hubbert, 1949, p. 108).
There have been thought experiments that would hypothetically transform heat
into usable energy. The Feynman–Smoluchowski ratchet and Maxwell's demon were
supposed to do such work, but it was proved that they cannot (Magnasco, 1993;
Szilard, 1929).
Objectives. To describe the working principle of the molecular-thermal linear
electric generator (MTLEG), a device that could convert heat into energy; to explain
why such a device is theoretically possible (Wikipedia contributors, 2023) and how it
differs from the Feynman–Smoluchowski ratchet and Maxwell's demon. A physical
and mathematical model of the device will also be constructed, and the power of the
MTLEG will be calculated.
Methods. A review of why the ratchet and the demon do not work; defining the
working principle of the MTLEG; proving that the MTLEG does not break the laws of
thermodynamics; building a mathematical-physical model of the MTLEG and deriving
its power formula.
Results. The working principle of the molecular-thermal linear electric generator
does not violate the laws of thermodynamics. The power of 1 m³ of MTLEG placed in
radon gas at a pressure of 10 atmospheres, with a cell size of 3 mm × 3 mm × 1 mm,
is 5 nW.
Conclusion. Although the power of the MTLEG is low, the very possibility of
a technology that transforms heat into electricity, given the number of problems it
could solve, is valuable enough to continue the research. Further calculations in which
the MTLEG is placed in a liquid environment should be performed later. Such
a technology would let humans control heat without energy or coolant losses.
References
Hubbert, M. K. (1949). Energy from fossil fuels. Science, 109(2823), 103–109.
https://doi.org/10.1126/science.109.2823.103
Magnasco, M. O. (1993). Forced thermal ratchets. Physical Review Letters, 71(10),
1477–1481. https://doi.org/10.1103/PhysRevLett.71.1477
Szilard, Leo (1929). “Über die Entropieverminderung in einem thermodynamischen
System bei Eingriffen intelligenter Wesen (On the reduction of entropy in
a thermodynamic system by the intervention of intelligent beings)”. Zeitschrift
für Physik. 53 (11–12). https://doi.org/10.1007%2Fbf01341281
Wikipedia contributors. (2023, October 18). Second law of thermodynamics.
Wikipedia. https://en.wikipedia.org/wiki/Second_law_of_thermodynamics#Statistical_mechanics

UKRAINIAN NUCLEAR POWER PLANTS DURING THE RUSSIAN-UKRAINIAN WAR
Eugenia Hlushchenko
Faculty of Chemical Technology, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Russian-Ukrainian war, nuclear energy, power plants, Ukrainian NPPs during the war.
Introduction. This article examines the condition of nuclear power plants in
Ukraine during the Russian-Ukrainian war. It demonstrates that the Russian-Ukrainian
conflict, which had persisted for eight years before the full-scale invasion, has inflicted
substantial damage on Ukraine's nuclear energy industry. As a result of Russia's
full-scale invasion of Ukrainian territory, electricity consumption has dropped by at
least 24%.
Objectives. The purpose of this work is to analyze the state of nuclear power
plants in Ukraine during the Russian-Ukrainian war.


Methods. The Russian-Ukrainian war, which has been going on for 9 years, has
caused significant damage to the Ukrainian nuclear power industry.
On February 24, 2022, the aggressor captured the Chernobyl Nuclear Power
Plant. After a long period of resistance, on March 4, Russian forces captured the
Zaporizhzhia Nuclear Power Plant (ZNPP). The ZNPP, with six power units and a total
capacity of 6,000 MW, is the largest nuclear power plant in Europe.
The main goals of the aggressor in capturing nuclear power plants are as follows:
 To instill fear in Europe and the world with the possibility of a disaster
(similar to Chernobyl and Fukushima).
 To control a significant portion of Ukraine's nuclear energy, which
accounts for 60% of the country's electricity generation.
 To gain access to future raw materials for nuclear weapons, which can be
obtained from the fuel loaded into the reactors.
 To acquire opportunities for conducting sabotage, provocations, and other
actions to compromise Ukraine.
Results. Since the beginning of the full-scale Russian military aggression
against Ukraine, the Chernobyl NPP and the objects within the exclusion zone had
been under the control of Russian military forces. During the occupation, Russian
forces grossly violated radiation safety and sanitary access control requirements at the
plant and within the exclusion zone, leading to deteriorating radiation conditions there
and contributing to the spread of radioactive contamination beyond the exclusion zone.
The radiation, fire and ecological conditions at the industrial sites of the nuclear power
plants and adjacent territories have not changed and remain within the established
norms (Shevchenko, n.d.).
Conclusion. By turning Ukrainian nuclear power plants into military targets,
Russia has violated international norms regarding the peaceful use of nuclear energy.
The following measures need to be taken for the safety of Ukraine's nuclear power
plants and the civilian population:
 Recognize Russia's military actions against Ukraine's nuclear facilities as
an act of nuclear terrorism, and Russia as a country engaged in nuclear terrorism.
 Restrict Russia's access to advanced nuclear technologies.
 Completely terminate cooperation with Russia in the nuclear sphere.
 Exclude Russia and all Russian representatives from governing bodies
(Antonuk, n.d.).
References
Shevchenko, O. (n.d.). The war in Ukraine can lead to an ecological disaster
that will affect Europe and Asia. https://greenpost.ua/news/vijna-vukrayini-
mozhepryzvesty-do-ekologichnoyi-katastrofy-shho-zachepytyevropu-ta-aziyu-i43937
Antonuk, D. (n.d.). Nuclear Terrorism. What the russians left behind in
Chernobyl and how other nuclear infrastructure was affected.
https://forbes.ua/inside/yaderniy-terorizm-shcho-rosiyani-zalishili-pislya-sebev-
chornobili-ta-yak-postrazhdala-insha-yaderna-infrastruktura-26042022-5648


BACKUP PHOTOVOLTAIC POWER PLANT WITH HYDROGEN ENERGY STORAGE
Mykola Hrebeniuk
Faculty of Electric Power Engineering and Automatics, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: hybrid backup power system, PV arrays, hydrogen electrolyser, fuel cell, energy storage system, emergency grid outages, power blackout.
Introduction. At present, the electricity supply in many Ukrainian regions is
often limited, with scheduled outages of 6–16 hours per day due to significant damage
to the power system during military operations. Longer outages of up to 2–3 days are
also possible in the case of destruction of large grid substations and generator blocks.
These restrictions constitute a major problem for various economic sectors and the
residential segment.
Objectives. The main goal of this project is to develop the model of a hybrid
backup power supply system based on the combination of solar and hydrogen power
generation.
Methods. In other related projects it is highlighted that to address the growing
energy consumption in the residential sector, more and more attention is being paid to
distributed energy sources, including photovoltaics (PV), fuel cells, etc. In the previous
study, for example, an optimization model was developed for the PV/fuel cell/battery-
based residential energy system (Ren et al., 2016).
It is suggested that the alternative sources of energy, including hybrid grid-tied
or energy storage systems, could be able to satisfy regional power demands. This work
presents an effective solution for the energy supply problem in the residential sector by
using a hybrid backup power system based on photovoltaic (PV) and hydrogen power
generation which operates in stand-alone and grid-connected modes. The size of the
backup system is chosen in such a way as to ensure uninterrupted power supply both
during short-term periodic grid outages according to a certain schedule and during
long-term outages for up to three days.
Basic calculations were performed in Matlab software. In this program, the
system of equations was compiled and multi-iteration calculations of each of the
system parameters were performed.
Results. The calculated system provides uninterrupted power supply for a given
consumption level (2 kW round-the-clock) when the grid power is cut off for three
days (72 hours). The load has a constant value over the entire period. In general, the
level of consumption can be arbitrary, as well as other system parameters that are in
the list of input conditions. In other words, the proposed calculation algorithm is
adaptive to changing input conditions.
The capacities of batteries (12 kWh), hydrogen storage tank (2.5 kg) and
nominal power of electrolyser (7 kW) and fuel cell (2.4 kW) were determined. Certain
optimal scenarios for dispatching power flows between the system components were
considered and the total cost of equipment was estimated. The total cost of the system,
including photovoltaic modules (29.7 kW) and hybrid inverters (30 kW), is
approximately UAH 1,729,983.
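A rough cross-check of these component sizes can be sketched as follows. The hydrogen lower heating value (about 33.3 kWh/kg) and the 50 % fuel-cell electrical efficiency used here are assumptions for illustration, not parameters taken from the paper's Matlab model:

```python
H2_LHV_KWH_PER_KG = 33.3   # lower heating value of hydrogen, kWh/kg

def backup_energy_kwh(load_kw, hours):
    """Total electric energy demanded over the backup period."""
    return load_kw * hours

def fuel_cell_energy_kwh(h2_mass_kg, fc_efficiency):
    """Electric energy deliverable from stored hydrogen through a fuel cell."""
    return h2_mass_kg * H2_LHV_KWH_PER_KG * fc_efficiency

demand = backup_energy_kwh(2.0, 72)        # 2 kW for 3 days -> 144 kWh
from_h2 = fuel_cell_energy_kwh(2.5, 0.5)   # 2.5 kg tank, assumed 50 % efficiency
print(demand, round(from_h2, 1))           # → 144.0 41.6
```

Under these assumptions the stored hydrogen alone covers roughly 42 kWh of the 144 kWh three-day demand; the remainder would be supplied by the PV array and the batteries during daylight hours, consistent with the dispatching scenarios mentioned above.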
Conclusion. Energy storage is essential to address the intermittent issues of
renewable energy systems, thereby enhancing system stability and reliability (Le et al.,
2023). For this purpose, it is important to properly diversify possible power sources.
The hybrid power supply system studied in this paper allows us to provide consumers
with uninterrupted power supply during long-term network outages. Since the
hydrogen energy sector is relatively new from an economic point of view, the
implementation of such a project may not be feasible today. However, hydrogen energy
is developing very rapidly, so such projects will be implemented in due course.
References
Le, T. S., Nguyen, T. N., Bui, D., & Ngo, T. (2023). Optimal sizing of renewable
energy storage: A techno-economic analysis of hydrogen, battery and hybrid
systems considering degradation and seasonal storage. Applied Energy, 336,
120817. https://doi.org/10.1016/j.apenergy.2023.120817
Ren, H., Wu, Q., Gao, W., & Zhou, W. (2016). Optimal operation of a grid-connected
hybrid PV/fuel cell/battery energy system for residential applications. Energy,
113, 702–712. https://doi.org/10.1016/j.energy.2016.07.091

USE OF OPTIMIZERS FOR INCREASING THE EFFICIENCY OF SOLAR POWER PLANTS
Ruslan Keda
Faculty of Electric Power Engineering and Automatics, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: solar energy, solar power plants, power optimizers, photomodules.


Introduction. Today, one of the biggest problems in the solar energy industry is
dealing with shading. When a large array of solar modules is installed in a field with
no obstacles that can cast a shadow, this is not a concern. However, small private solar
power plants of 5–15 kW are very often installed on sloping roofs or simply on the
ground. In such cases, even slight shading of a single photomodule can cause
a significant power drop in the entire system, sometimes by a factor of 2 or more.
Objectives. This paper considers the use of optimizers as a possible way of
combating power losses when a shadow falls on a solar photomodule.
Methods. Currently, the most effective solution to this problem of power losses
is the use of optimizers. These are small devices that connect to one or two
photomodules and perform maximum power point tracking at the level of an individual
solar panel. For example, if a temporary shadow falls on a solar panel, the amount of
current that the panel can generate decreases. Since the panel is connected in series
with other panels, a current drop in the shaded panel causes a current drop in the entire
string.


Results. With the help of optimizers, the power reduction occurs only on the
shaded panel, and the performance of the other photomodules in the string does not
decrease. This also reveals another advantage of using optimizers. Usually, inverters
with a capacity of 5–20 kW, the devices that convert direct current into alternating
current, have 1 or 2 trackers that track the maximum power point. This means that all
photomodules connected to the inverter must be placed on one or two planes,
respectively. Quite often, this severely limits the size of a solar power plant on houses
with a complex roof shape. Each optimizer, however, contains its own maximum
power point tracker, so the photomodules can be placed at any angle and will work
without harming each other (Silva et al., 2018, p. 3).
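The per-panel tracking described above can be illustrated with a toy model. The I-V curve and its parameters (Isc = 9 A, Voc = 40 V, the diode scale factor) are invented for illustration and do not describe any particular module or commercial optimizer:

```python
import math

def pv_current(v, isc, voc, a=1.6):
    """Simplified single-diode I-V curve; 'a' is a thermal-voltage scale in
    volts. All parameter values here are invented for illustration."""
    i0 = isc / (math.exp(voc / a) - 1.0)   # dark saturation current
    return isc - i0 * (math.exp(v / a) - 1.0)

def max_power_point(isc=9.0, voc=40.0, steps=4000):
    """Scan the curve for the voltage maximizing P = V * I, as a per-panel
    optimizer's tracker effectively does for its own module."""
    best_v, best_p = 0.0, 0.0
    for k in range(steps + 1):
        v = voc * k / steps
        p = v * pv_current(v, isc, voc)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

v_full, p_full = max_power_point()        # unshaded panel
v_sh, p_sh = max_power_point(isc=4.5)     # panel with roughly 50 % shading
print(p_full > p_sh > 0)                  # → True
```

The scan finds a lower but still nonzero maximum power point for the shaded panel; a per-panel optimizer operates each module at its own such point instead of forcing the whole string down to the shaded panel's current.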
However, this system has a serious drawback. Most optimizer models work only
as a complete set: they must be installed on every photomodule, even if shading affects
only one while the other photomodules always remain out of the shadow.
Unfortunately, this considerably increases the cost of the entire system. In addition,
some optimizer models may not be compatible with the inverter models used in a solar
power plant (Franke, 2019; Orduz et al., 2013).
Conclusion. In summary, power optimizers are an optional device for most solar
power plants. In some cases, however, they are one of the most important elements,
without which the installation of a solar power plant simply does not make sense.
References
Franke, T. (2019). The Impact of Optimizers for PV-Modules: A comparative study.
Mads Clausen Institute, University of Southern Denmark
Orduz, R., Solorzano, J., Egido, M. Á., & Roman, E. (2013). Analytical study and
evaluation results of power optimizers for distributed power conditioning in
photovoltaic arrays. Progress in Photovoltaics: Research and
Applications, 21(3), 359–373.
Pearce, J. M. (2002). Photovoltaics – a path to sustainable futures. Futures, 34(7), 663–
674.
Silva, J. L. D. S., Moreira, H. S., Mesquita, D. D. B., & Villalva, M. G. (2018). Analysis
of Power Optimizers in Photovoltaic Power Plant. 13th IEEE/IAS International
Conference on Industry Applications. Vol. 2018.

STEPS FOR REACHING THE "NEARLY ZERO ENERGY BUILDING" STATUS OF THE BUILDING
Taras Koziupa
Faculty of Electric Power Engineering and Automatics, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: sustainability, Nearly Zero Energy Buildings, energy efficiency, RES.
Introduction. The development of dwellings and public buildings that are
sustainable, inclusive, resistant to climate change and, most importantly, focused on
the minimum consumption of resources is the focus of Sustainable Development Goals
(SDGs) 9 and 11. One of the existing criteria that define a building as being energy
efficient, i.e. one that consumes a minimum amount of energy, is the so-called "Nearly
Zero Energy Buildings" (hereinafter – NZEB).
Objectives. The purpose of the present research is to develop a holistic approach
to evaluating the energy efficiency of a building and the ways to increase it, and to
calculate the required installed capacity of a plant with a renewable energy source
(RES).
Methods. The steps necessary to achieve the above objective are as follows.
The first step is to analyze the macroclimate of the area. This includes ambient
temperature (average annual, seasonal, daily), solar radiation (intensity, azimuthal and
zenith angles), wind direction, precipitation, the presence of water reservoirs nearby,
the position of the design object relative to adjacent objects that cause shading, wind
swirling etc.
The second step is to propose relevant constructive solutions for the interior and
exterior of the building. These include placing canopies or movable elements to block
out the solar radiation into the house in the non-heating period (otherwise overheating
of the air inside will be inevitable), constructing a roof with such a profile and at such
an angle that at the same time would ensure the free solar radiation income in winter,
placing the above-mentioned canopies at an optimal angle for blocking sun rays in
summer, installation of solar systems at an optimal angle for a given latitude etc.
The third step is to minimize energy consumption for heating. Modernization or
complete replacement of the house's ventilation system has an extremely strong effect
at this stage. In general, this stage boils down to minimizing heat losses through
ventilation, namely by applying so-called heat recovery units (HRUs): the warm
exhaust air is not simply released outdoors; instead, the heat trapped in it passes
through a heat exchanger in the ventilation system and preheats the supply air.
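The effect of such a unit can be sketched with the standard sensible-heat balance. The flow rate, temperatures and 75 % effectiveness below are illustrative assumptions rather than values from the DSTU methodology:

```python
RHO_AIR = 1.2     # air density, kg/m^3
CP_AIR = 1005.0   # specific heat of air, J/(kg*K)

def hru_recovered_power_w(flow_m3_h, t_exhaust_c, t_outdoor_c, efficiency):
    """Sensible heat power reclaimed from exhaust air by a heat recovery unit."""
    m_dot = flow_m3_h / 3600.0 * RHO_AIR   # mass flow, kg/s
    return efficiency * m_dot * CP_AIR * (t_exhaust_c - t_outdoor_c)

# Illustrative (assumed) case: 200 m^3/h of 22 degC exhaust air,
# -5 degC outdoors, 75 % effective heat exchanger.
print(round(hru_recovered_power_w(200, 22, -5, 0.75)))   # → 1357 (watts)
```

Under these assumptions the unit returns roughly 1.4 kW of heat that would otherwise be exhausted with the ventilation air, directly reducing the heating demand of the building.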
The fourth step is to estimate the annual energy consumption for heating. To do
so, one can use an approach described in great detail in DSTU B A.2.2-12:2015
“Energy efficiency of buildings. Method for calculation of energy use for space
heating, cooling, ventilation, lighting and domestic hot water” (DSTU B A.2.2-
12:2015, 2015).
The fifth step is to determine the parameters of a RES plant (in our case, an on-
grid PV system). Careful design can be performed inside specialized software such as
PVsyst, SAM, PV SOL etc. At this stage, it is necessary to estimate:
1. Number of modules per string (Modules Per String, MPS)
2. Number of parallel strings (Strings in Parallel, SiP)
3. Whether the nominal current and voltage values of the PV subarrays lie in the
safe ranges of input current and voltage values of the inverter
4. Number of required inverters (Number of Inverters, NoI)
5. Finally, the installed capacity of the PV array.
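The fifth step can be sketched as a simple calculation. The function below is a hypothetical helper, not part of PVsyst, SAM or PV SOL, and the module and inverter figures are assumed for illustration (a 450 W module with Vmp = 41 V and a 200–800 V MPPT window):

```python
import math

def size_pv_system(target_kw, module_w, module_vmp, inv_vmin, inv_vmax):
    """Rough string sizing against an inverter's MPPT voltage window.

    Hypothetical helper for illustration; real tools also check
    temperature-corrected Voc, string currents and inverter power limits.
    """
    mps_max = int(inv_vmax // module_vmp)        # longest string that fits
    mps_min = math.ceil(inv_vmin / module_vmp)   # shortest admissible string
    mps = mps_max                                # favour longer strings
    modules_needed = math.ceil(target_kw * 1000 / module_w)
    sip = math.ceil(modules_needed / mps)        # strings in parallel
    installed_kw = sip * mps * module_w / 1000
    return mps_min, mps, sip, installed_kw

# Assumed figures: 29.7 kW target, 450 W modules (Vmp = 41 V),
# inverter MPPT window 200-800 V.
print(size_pv_system(29.7, 450, 41, 200, 800))   # → (5, 19, 4, 34.2)
```

Note how rounding up to whole strings raises the installed capacity above the target; dedicated design software iterates over module counts and inverter choices to close this gap before fixing the layout.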


The last, sixth step is the development of an electrical circuit diagram of the RES
plant connection to the grid and the house.
Results. The considered approach for making any building energy efficient can
be applied to constructions that either require reconstruction or are just in the design
stage. Moreover, the utilization of renewable energy sources promotes their popularity
even more, outlining the potential benefits and prospects of their use.
Conclusion. With the proposed approach to minimizing energy consumption, it
is possible to achieve SDGs 9 and 11 as indicated at the beginning. Doubtless, the
initial investments must be rather significant, but this is compensated by the
“investment” in the ecological and sustainable future of our generations. After all, the
importance of constructing and operating buildings with near to zero energy
consumption in this century is quite obvious: it is both caring for future generations by
conserving nature and its resources and making oneself self-sufficient and energy-
independent, regardless of environmental conditions.
References
DSTU B A.2.2-12:2015. (2015). Energhetychna efektyvnistj budivelj. Metod
rozrakhunku energhospozhyvannja pry opalenni, okholodzhenni, ventyljaciji,
osvitlenni ta gharjachomu vodopostachanni. [Energy efficiency of buildings.
The method of calculating energy consumption for heating, cooling, ventilation,
lighting and hot water supply]. Kyjiv: Minreghion Ukrajiny, 13.

THE ROLE OF ARTIFICIAL INTELLIGENCE IN THE MODERNIZATION OF MODERN UKRAINIAN SMART POWER GRIDS: DEVELOPMENT DIRECTIONS AND EFFICIENCY
Dmytro Melnyk
Faculty of Electric Power Engineering and Automatics, National Technical
University of Ukraine ‘Igor Sikorsky Kyiv Polytechnic Institute’

Keywords: Ukrainian energy system, Artificial intelligence, Smart grids.


Introduction. The global energy sector is changing rapidly. Growing electricity
consumption, climate change, and constant missile attacks on Ukrainian energy
facilities create a situation where innovative solutions must be found to ensure the
efficiency and sustainability of power grids.
Objectives. To show how AI can make the energy industry more flexible and
resilient to external influences, as well as to identify key aspects to consider when
addressing this issue.
Methods. Artificial intelligence (AI) appears to be a powerful tool that can
revolutionize the energy sector. Smart grids can lay the foundation of the future, and
the use of AI opens up many opportunities to reduce greenhouse gas emissions,
increase productivity, and ensure the reliability of the power supply.
According to the US Department of Energy, the development and
implementation of AI technologies in the US energy sector are improving year by
year. By 2022, more than 40% of U.S. energy companies had used machine learning
algorithms and other AI technologies to optimize electricity generation,
transmission, and consumption (Jin, Ocone, Jiao, & Xuan, 2020). Under the AI for
Grid Resilience strategic plan, the U.S. government is providing funding and resources
for projects targeted at using AI to improve the resilience and reliability of smart grids.
According to a study by the Electric Power Research Institute (EPRI), the
introduction of AI into smart grids in the U.S. helps increase power distribution
efficiency and reduce power losses. The study showed that this could lead to
a reduction in electricity costs by about 8-12% and help reduce CO2 emissions
(Werbos, 2011).
AI integration into the Ukrainian energy system, based on a developed smart grid
infrastructure, is becoming increasingly relevant and justified, especially in light of the
urgent and complex challenges the country faces. The events related to the military
conflict in Ukraine have a serious impact on the country’s energy infrastructure due to
missile attacks on energy facilities. This situation requires the development of more
sustainable and efficient methods of managing power grids.
AI makes it possible to predict electricity demand and effectively manage the
load, including disconnecting facilities with low load priority. During the war in
Ukraine, these perspectives become especially important as they can help protect
critical infrastructure, such as hospitals and military facilities, from potential power
outages. AI application optimizes the distribution of power between consumers and
generation facilities, which in turn helps to ensure continuity of power supply even in
case of grid damage and identify potential problems in the grid so that operators can
prevent or mitigate the consequences of incidents (Zhou, Hu, Gu, Jiang, & Zhang,
2019).
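The priority-based shedding idea mentioned above can be illustrated with a toy dispatch routine. This is a didactic sketch, not an actual grid-control or AI algorithm, and the loads and figures are invented:

```python
def shed_load(available_kw, loads):
    """Keep the highest-priority loads that fit into the available supply.

    Each load is a (name, demand_kw, priority) tuple, where a lower
    priority number means a more critical consumer. A didactic sketch
    of priority-based shedding, not a real grid-control algorithm.
    """
    kept, remaining = [], available_kw
    for name, demand, _priority in sorted(loads, key=lambda load: load[2]):
        if demand <= remaining:        # supply this load, shed the rest
            kept.append(name)
            remaining -= demand
    return kept

# Invented example: 6 kW of backup supply against 10 kW of total demand.
loads = [("hospital", 3.0, 0), ("water pumps", 2.0, 1),
         ("offices", 4.0, 2), ("billboards", 1.0, 3)]
print(shed_load(6.0, loads))   # → ['hospital', 'water pumps', 'billboards']
```

An AI-based system would additionally forecast demand and generation to decide `available_kw` and the shedding schedule ahead of time rather than reacting after the fact.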
In addition, AI can also be used to solve environmental problems. For example,
it can improve the efficiency of renewable energy sources such as solar and wind. It
can also be used in backup systems’ development and contingency plans to maintain
energy resilience during extreme weather conditions such as hurricanes, floods, fires,
etc. AI can analyze historical data and other factors to predict future emissions so that
energy producers can take timely action to reduce them (Ahmad, Zhang, Huang,
Zhang, Dai, Song, & Chen, 2021).
Results. AI-based systems are a tool for positive transformation of the energy
sector and have already demonstrated their effectiveness and feasibility. AI will allow
integrating the latest and most promising technologies and decentralizing energy
production and distribution. Such systems can be either in the form of data analysis
software or embedded in hardware devices.
Conclusion. Based on research and best practices from other countries, Ukraine
has the opportunity to become a leader in AI use, which in turn opens endless
possibilities for the energy sector to improve traditional processes. This will help
improve data analysis and efficiency in energy production and distribution, reduce
dependence on non-renewable energy sources, and increase the resilience of the
Ukrainian energy system, which is crucial in war and post-war times.
References
Ahmad, T., Zhang, D., Huang, C., Zhang, H., Dai, N., Song, Y., & Chen, H. (2021).
Artificial intelligence in sustainable energy industry: Status Quo, challenges and
opportunities. Journal of Cleaner Production, 289, 125834.
https://doi.org/10.1016/j.jclepro.2021.125834
Jin, D., Ocone, R., Jiao, K., & Xuan, J. (2020). Energy and AI. Energy and AI, 100002.
https://doi.org/10.1016/j.egyai.2020.100002
Werbos, P. J. (2011). Computational intelligence for the smart grid-history, challenges,
and opportunities. IEEE Computational Intelligence Magazine, 6(3), 14–21.
https://doi.org/10.1109/MCI.2011.941587
Zhou, S., Hu, Z., Gu, W., Jiang, M., & Zhang, X. P. (2019). Artificial intelligence based
smart energy community management: A reinforcement learning approach.
CSEE Journal of Power and Energy Systems, 5(1), 1–10.
https://doi.org/10.17775/cseejpes.2018.00840

THERMONUCLEAR FUSION: THE FUTURE OF RENEWABLE ENERGY
Anna Nebesna
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: fusion, renewable energy, self-sustaining reaction.


Introduction. The world is currently facing a pressing need for clean and
sustainable energy sources. Traditional energy sources, such as fossil fuels, are not only
depleting rapidly but also contributing to environmental pollution. As a result, the
development of clean and renewable energy has become an urgent task for mankind.
Among the various options being explored, thermonuclear fusion stands out as
a promising solution (Freidberg, 2007). It offers the potential to provide energy at a large
scale while emitting no greenhouse gases during operation.
Objectives. The primary goals of this research are twofold: to assess the current
state of thermonuclear fusion technology and to elucidate the path toward its realization
as a practical energy source. This research aims to evaluate the feasibility of controlled
nuclear fusion for power generation, understanding the scientific and engineering
challenges that must be overcome.
Methods. The fusion reaction is the source of energy for stars; it is also the
principle behind hydrogen bombs, the first of which was tested in 1952. Although
the laws underlying fusion are well known, attempts to create a fusion reactor analogous
to a fission reactor have so far failed. The main challenge is how to produce and
contain the plasma, heated to a hundred million degrees, in which fusion takes place.
The energy required to start a fusion reaction in the laboratory has always exceeded the
energy output of the reaction itself.
The fuel here consists of heavy isotopes of hydrogen, deuterium and tritium. In both
cases, the energy source is the mass defect phenomenon: the difference between the
rest mass of the atomic nucleus and the sum of the masses of its nucleons. The excess
mass released during the reaction is, according to Einstein's famous formula E = mc²,
converted into energy. To start the fusion, it is necessary to compress hydrogen to a critical
density, at which the strong nuclear force begins to act, overcoming the electrostatic
repulsion between the nuclei.
But how can one confine white-hot plasma whose slightest touch would not merely
melt but instantly vaporize any material (Turrell, 2021)? Soviet physicists were the first
to suggest an answer in the 1950s: the plasma can be trapped by "suspending" it in
a vacuum with magnetic fields, so that it never touches the walls. Thus was born the
idea of the tokamak, a hollow, doughnut-shaped chamber; a tokamak of exactly this
kind is now being built in the south of France by an international consortium of
35 countries. A few years later, American scientists proposed a second, pulsed
approach, in which the fusion reaction is ignited inside a small fuel capsule by
compressing it sharply with powerful laser beams converging from all sides.
Fusion and plasma physics research is underway in more than 50 countries, and
fusion reactions have been successfully launched in many experiments, albeit so far
without generating more energy than was originally required to trigger the reaction itself.
Results. On December 13, 2022, the US Department of Energy announced that
scientists at the National Ignition Facility (NIF), part of the Lawrence Livermore
National Laboratory, had for the first time demonstrated what is called fusion ignition.
Ignition occurs when a fusion reaction produces more energy than is delivered to it
from an external source and becomes self-sustaining. NIF delivered 2 million joules
of laser energy to the target, roughly the energy needed to run a hair dryer for
15 minutes, but released in a billionth of a second. The lasers triggered a fusion
reaction that released 3 million joules, an energy gain of 1.5.
Although the 2 million joules of laser energy was less than the 3 million joules of
fusion yield, the facility that generates and amplifies the laser beams consumed almost
300 million joules in this experiment. The result shows that fusion ignition is possible,
but much work is needed to improve efficiency to the point where fusion provides
a net positive energy yield for the entire system, not just for the interaction between
the lasers and the fuel.
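These gain figures can be checked with a short back-of-the-envelope calculation (the 2 MJ, 3 MJ, and roughly 300 MJ values are those quoted above):

```python
# Energy gain of the NIF shot, using the figures quoted above.
laser_energy_mj = 2.0        # MJ delivered by the lasers to the target
fusion_yield_mj = 3.0        # MJ released by the fusion reaction
facility_energy_mj = 300.0   # approximate MJ drawn by the whole facility

target_gain = fusion_yield_mj / laser_energy_mj       # "scientific" gain
facility_gain = fusion_yield_mj / facility_energy_mj  # whole-system gain

print(f"target gain:   {target_gain:.2f}")    # 1.50 -> ignition achieved
print(f"facility gain: {facility_gain:.3f}")  # 0.010 -> far from net energy
```

The contrast between the two ratios is exactly the point made above: ignition concerns only the laser-fuel interaction, while net power generation must account for the entire facility.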
Conclusion. In conclusion, thermonuclear fusion stands as a beacon of hope for
the future of renewable energy. The substantial strides in fusion research and the
maturation of experimental devices bring us closer to unlocking the full potential of
fusion as a clean and abundant energy source. While challenges remain, the fusion
community is working collaboratively to surmount these obstacles, paving the way for
a world powered by the stars (Asmundsson & Wade, 2019).
References
Asmundsson, J., & Wade, W. (2019, September 28). Nuclear Fusion Could Rescue the
Planet from Climate Catastrophe. https://www.bloomberg.com/news/features/2019-
09-28/startups-take-aim-at-nuclear-fusion-energy-s-biggest-challenge
Freidberg, J. P. (2007). Plasma Physics and Fusion Energy. Cambridge University
Press.
Turrell, A. (2021). How to Build a Star: the science of nuclear fusion and the quest to
harness its power. Weidenfeld & Nicolson.
ENERGY STORAGE SYSTEMS FROM USED ELECTRIC CAR BATTERIES
Victor Petrushyn, Victoria Us
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: electric car battery, uninterrupted power supply, storage system,
green energy.
Introduction. Given the need for an uninterrupted power supply in today's
conditions (for homes, businesses, government institutions, etc.), the question of
manufacturing electrical storage systems arises. The essence of such a system is to
collect and store electrical energy for later use.
It can serve various purposes, including providing backup power, using green
energy, optimizing energy consumption, and maintaining grid stability. The problem
is that new storage cells for such systems are expensive, and their manufacture and
disposal cause irreparable damage to the environment. It is therefore very appropriate
to build such systems from spent electric car batteries: after long-term operation
a battery becomes too weak for driving, but it is still able to store energy.
Objectives. The main goal is to describe how storage cells from used electric car
batteries can be employed to build commercial and home storage systems with an
uninterrupted power supply.
Methods. First, one needs to decide what capacity the storage system should
have, and from this calculate the required number of cells. Assembling such a system
is quite simple, so even a novice engineer can easily handle it. Take the battery of the
Nissan Leaf electric car as an example: one such battery consists of 48 modules, and
a used one can be purchased for only about $1,000, which is very cheap compared to
new cells. Notably, even Nissan Motor Co. is already converting the batteries of its
Nissan Leaf electric vehicles into portable power sources (Kageyama, 2023). Beyond
the Nissan Leaf, there are many other electric car models whose used batteries can be
put to good use in similar projects.
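The sizing step described above reduces to a simple calculation. The sketch below is illustrative only: the per-module capacity and usable depth of discharge are assumed round numbers, not Nissan specifications:

```python
import math

def modules_needed(target_kwh: float, module_kwh: float, usable_fraction: float) -> int:
    """Number of battery modules required to provide target_kwh of usable storage."""
    usable_per_module = module_kwh * usable_fraction
    return math.ceil(target_kwh / usable_per_module)

# Illustrative second-life module: ~0.35 kWh remaining, cycled over 80% of it.
print(modules_needed(target_kwh=10.0, module_kwh=0.35, usable_fraction=0.8))  # -> 36
```

Rounding up with `math.ceil` ensures the assembled pack meets or exceeds the target capacity.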
Results. People are switching to battery-reuse systems en masse, successfully
combining them with solar panels. In most cases such a system has a capacity of
5–10 kWh and can power a house for half a day or even a full day. This field is in fact
only beginning to develop, yet there are already remarkably successful cases. For
example, at the largest stadium in the Netherlands an innovative energy storage
technology was implemented: 148 Nissan Leaf batteries were installed, able to keep
the arena lit even when it is disconnected from the grid (Lambert, 2018).
Conclusion. As humanity becomes ever more dependent on electrical power,
such storage systems have become commonplace and a necessity. The industry has
huge potential, since this kind of production significantly reduces the harmful impact
on the environment. Many developed countries already host similar large-scale
projects, which confirms that the industry is developing steadily and moving
confidently towards a bright future.
References
Kageyama, Y. (2023). Nissan is reusing the batteries from old Leaf electric vehicles
to make portable power sources. AP News. https://apnews.com/article/nissan-leaf-ev-battery-recycling-d48cf53a21e27c9df48d3c8a8cec2853
Lambert, F. (2018). Nissan Leaf battery packs now power large energy storage system
at Johan Cruijff Arena. Electrek. https://electrek.co/2018/06/29/nissan-leaf-battery-packs-power-large-energy-storage-johan-cruijff-arena/

ASSESSING THE IMPACT OF ENERGY EFFICIENT LIGHTING SYSTEMS IN COMMERCIAL BUILDINGS
Danil Trofymov
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: energy efficient lighting, energy consumption, environmental
sustainability.
Introduction. In the modern era, energy consumption is a critical concern,
especially in the commercial sector. Commercial buildings contribute significantly to
energy consumption and environmental impact. A significant portion of this energy is
attributed to lighting systems. Conventional lighting systems not only consume large
amounts of energy, but also contribute to greenhouse gas emissions. Therefore, the
adoption of energy-efficient lighting systems in commercial buildings is an important
step toward environmental sustainability.
Objective. The objective of this study is to evaluate the impact of energy-
efficient lighting systems on energy consumption, cost savings, and environmental
sustainability in commercial buildings. It also seeks to understand the challenges and
barriers associated with the implementation of energy-efficient lighting systems.
Methods. This study will provide a comprehensive review of existing research
on energy-efficient lighting systems, energy consumption patterns in commercial
buildings, and the environmental impacts of conventional lighting systems. In addition,
empirical data will be collected from a sample of commercial buildings that have
transitioned to energy-efficient lighting. This data will be analyzed to assess changes
in energy consumption, cost savings, and environmental benefits.
Results. Energy-efficient lighting systems demonstrated significant reductions
in energy consumption in commercial buildings (Olabisi, Emmanuel, & Moses, 2020).
The data revealed a significant decrease in electricity costs and a positive return on
investment for the companies that converted (Energy5 EV Charging Solutions, 2023).
In addition, environmental benefits such as reduced greenhouse gas emissions and light
pollution also contribute to the sustainability of commercial areas (Elnabawi, 2021).
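The before/after comparison underlying such results can be sketched as a simple energy and cost calculation (fixture counts, wattages, burn hours, and the tariff below are illustrative, not data from the cited studies):

```python
def annual_energy_kwh(fixtures: int, watts_per_fixture: float, hours_per_year: float) -> float:
    """Annual lighting energy for a bank of identical fixtures, in kWh."""
    return fixtures * watts_per_fixture * hours_per_year / 1000.0

fixtures, hours = 200, 3000                          # illustrative office building
old_kwh = annual_energy_kwh(fixtures, 36.0, hours)   # fluorescent fixtures
new_kwh = annual_energy_kwh(fixtures, 18.0, hours)   # LED replacements

tariff_usd_per_kwh = 0.15                            # illustrative tariff
annual_saving_usd = (old_kwh - new_kwh) * tariff_usd_per_kwh
print(f"saved {old_kwh - new_kwh:.0f} kWh/year, ${annual_saving_usd:.0f}/year")
```

Dividing the retrofit cost by the annual saving then gives the simple payback period that return-on-investment analyses of this kind report.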
Conclusion. The adoption of energy-efficient lighting systems in commercial
buildings is a promising means to reduce energy consumption and promote
environmental sustainability. The results of this study confirm the positive impact of
these systems, not only in terms of cost savings, but also in terms of reducing the
ecological footprint of commercial spaces. However, challenges such as initial
implementation costs and resistance to change must be addressed for wider adoption.
References
Elnabawi, M. H. (2021). Evaluating the Impact of Energy Efficiency Building Codes
for Residential Buildings in the GCC. Architectural Engineering Department,
College of Engineering, United Arab Emirates University.
https://www.mdpi.com/1996-1073/14/23/8088
Energy5 EV Charging Solutions. (2023). https://energy5.com/the-impact-of-energy-
efficient-lighting-on-commercial-building-valuations
Olabisi, O., Emmanuel, T., & Moses, O. (2020). Impact of Energy-Efficient Lighting
on Commercial Building Valuations. International Journal of Energy Economics
and Policy, 10(3), 99–110.
https://www.researchgate.net/publication/344928846_Evaluation_of_Energy-
efficiency_in_Lighting_Systems_for_Public_Buildings

THE USE OF RENEWABLE ENERGY SOURCES IN CONSTRUCTION AND INFRASTRUCTURE
Svitlana Yaroshchuk
Educational and Research Institute of Energy Saving and Energy Management,
National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: renewable energy sources, construction, infrastructure, sustainable
development, solar energy, wind energy, hydroelectricity.
Introduction. The modern world faces a number of challenges related to the
production and use of energy based on traditional coal and oil sources. The growing
amount of greenhouse gases and high levels of environmental pollution threaten the
health of the entire planet. In such circumstances, the use of renewable energy sources
is becoming an important step towards preserving natural resources and reducing the
negative impact on the environment.
Objectives. The issue of using renewable energy sources in construction and
infrastructure is becoming increasingly relevant in the modern world. Growing energy
demand and the effects of climate change are prompting society to consider
alternative energy sources that would not only support sustainable building and
infrastructure development but also help reduce greenhouse gas emissions and
promote sustainable development.
Methods. In the current study, the methods of analysis, synthesis, and
quantitative research were used.
In the construction industry, the heavy reliance on fossil fuels leads to significant
greenhouse gas emissions, which have a negative impact on the environment.
However, in recent years, the industry has begun to implement renewable energy
sources on its sites, such as wind, solar, hydroelectric, and hydrogen fuel cells, in order
to reduce its environmental impact.
Renewable energy is becoming more popular in construction as its
environmental and business benefits have become apparent. Modern technologies
make it possible to use renewable energy efficiently and reliably. This applies to
lighting, heating, ventilation, electronic tools, and even vehicles on construction sites.
Wind energy has become a popular source of energy in construction, particularly
in the United States, where there are many wind turbines. The cost of wind energy is
becoming affordable and helps to reduce CO2 emissions, contributing to green
initiatives in various sectors of the economy. Wind energy has a wide range of
applications in construction and infrastructure projects that help reduce greenhouse
gas emissions and ensure the sustainability of facilities. Consider some examples of
wind energy use. Powering construction sites: wind turbines can supply electricity to
sites that have no grid connection, allowing workers to run lighting, electronics, and
other equipment without gas generators. Wind energy can also be integrated into
green buildings as one of the power sources: turbines on rooftops or built into
buildings can provide partial or full power for lighting and other systems. Wind
turbines can likewise power electric vehicle charging stations at construction sites or
infrastructure facilities, supporting the transition to cleaner transport and reducing
emissions from cars with internal combustion engines, and they can supply electricity
for lighting roads, bridges, sites, and other infrastructure, which reduces electricity
consumption from traditional sources and cuts CO2 emissions. Finally, large wind
farms can supply vast amounts of electricity to infrastructure projects such as city
electricity grids, railroad systems, or water pumping stations, contributing to the
resilience of infrastructure and the development of a sustainable energy system.
Solar energy also finds its application in construction, in particular in the use of
solar panels to power equipment and lighting. The integration of solar technologies can
significantly reduce greenhouse gas emissions.
Hydropower is the most popular source of renewable energy. Builders can tap
into this source through the grid. Around 71% of green energy originates from
hydroelectric power. Using hydropower to power construction sites, lighting, and
electronic equipment reduces CO2 emissions and operating costs. Moreover,
hydropower is used to power structural lifts and cranes on construction sites, ensuring
their efficient operation (Jackson, 2022).
Hydrogen energy shows great potential for use in the construction industry as it
is rapidly developing. One of its main advantages is the portability and scalability of
hydrogen fuel cells, which makes them ideal for temporary construction sites.
Although hydrogen energy is a relatively new industry, there has already been
significant progress in its use in many construction projects. Siemens Energy has
recently developed a fuel cell system to power construction sites. This system is
capable of powering any equipment that needs to be connected to the grid and has a
convenient transport box for storing fuel cells. It is important to note that such a system
can completely replace gas generators used in the construction industry, and this
contributes to reducing emissions and improving the environmental sustainability of
these projects (Jackson, 2022).
Results. The use of renewable energy, such as wind power, solar energy,
hydropower, and hydrogen fuel cells in construction and infrastructure projects can
significantly reduce greenhouse gas emissions and ensure the sustainability of
facilities. These methods contribute to environmental sustainability and the
development of sustainable energy systems while reducing the environmental impact
of construction.
Conclusion. The use of renewable energy sources in construction and
infrastructure is a necessary step in preserving natural resources and reducing
environmental impact. It reduces dependence on traditional energy sources and
contributes to the creation of sustainable and environmentally friendly life support
systems. It is important to maintain and develop infrastructure for the use of renewable
energy sources to ensure a sustainable and resilient future for our planet.
References
Jackson, C. (2022). The Innovative Use of Renewable Energy in the Construction
Industry. https://www.greenandprosperous.com/blog/the-innovative-use-of-
renewable-energy-in-the-construction-industry

COMPANY "ECOFLOW" AND THEIR PRODUCTS AS ENERGY-SAVING TECHNOLOGIES
Anastasiia Yevdokimova
Educational and Research Institute of Energy Saving and Energy Management,
National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: EcoFlow, energy-saving technologies, eco-friendliness, efficient
power management.
Introduction. EcoFlow is a forward-thinking company at the forefront of the
sustainable energy technology industry. Known for its commitment to eco-friendliness
and innovation, EcoFlow has developed a range of products that embody energy
conservation and sustainability principles.
Objectives. EcoFlow's mission from day one has been to provide smart and
eco-friendly energy solutions for individuals, families, and society at large (EcoFlow –
Power a new world, n.d.). The company's primary objectives are therefore to provide
energy solutions that reduce carbon emissions, minimize energy wastage, and promote
efficient power management. EcoFlow aims to offer consumers, both residential and
commercial, access to reliable, clean energy sources that can contribute to a more
sustainable and environmentally responsible way of life.
Methods. Products such as portable power stations and solar panels connect
automatically to EcoFlow's mobile application, which enables users to monitor and
control their power stations remotely. This tool aids efficient energy management,
helping users track their energy usage and optimize their power consumption
(EcoFlow EU, n.d.). The application therefore relies on a fundamental research
method, quantitative research. The main data collection tool in this case is
observation, which can involve counting how often each mode of the device is used
(LibGuides: Research Methods: What are research methods?, n.d.). For instance, the
EcoFlow App tracks real-time and historical energy savings and helps users achieve
energy self-sufficiency. Consumption can be monitored for a day, a week, or a month,
and these data are stored in the application for a long time. The second tool is
document screening. The application has many pre-set indicators, such as AC charging
speed, car input, and discharge/charge level. All of them can be changed by the
customer, and after analyzing the changes the application helps, through specific
advice, to choose the right indicator values.
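The kind of consumption tracking described above can be pictured as a simple aggregation of meter readings; this is a hypothetical sketch of the idea, not EcoFlow's actual data model:

```python
from collections import defaultdict
from datetime import date

# Hypothetical hourly watt-hour readings reported by a power station.
readings = [
    (date(2023, 11, 1), 120.0),
    (date(2023, 11, 1), 95.5),
    (date(2023, 11, 2), 240.0),
]

# Roll the readings up into daily totals, as a day/week/month view would.
daily_wh: dict = defaultdict(float)
for day, wh in readings:
    daily_wh[day] += wh

for day in sorted(daily_wh):
    print(day, daily_wh[day])
```

Weekly and monthly views follow the same pattern, just with a coarser grouping key.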
Results. EcoFlow's energy-saving technologies lead to reduced energy costs and
lower carbon footprints for customers, promoting sustainability. Their portable power
stations, often used off-grid and with solar panels, offer clean and convenient energy
solutions that reduce reliance on traditional power grids. Additionally, EcoFlow's
support for charging electric vehicles contributes to the transition to cleaner
transportation options.
Conclusion. EcoFlow, with its focus on energy-saving technologies, has
emerged as a pioneer in the sustainable energy sector. By offering innovative, eco-
friendly products, they have contributed to a reduction in carbon emissions, a decrease
in energy wastage, and a more reliable energy supply. As the world seeks ways to
combat climate change and transition to a sustainable future, EcoFlow’s products are
an excellent example of how technology can be harnessed to conserve energy and
protect our planet. Their efforts demonstrate that accessible and eco-conscious energy
solutions are not only possible but essential for a greener and more sustainable world.
References
EcoFlow EU. (n.d.). EcoFlow. https://www.ecoflow.com/eu/app
EcoFlow EU. (n.d.). EcoFlow – Power a new world. https://www.ecoflow.com/eu
LibGuides at University of Newcastle Library. (n.d.). LibGuides: Research Methods:
What are research methods?
https://libguides.newcastle.edu.au/researchmethods

ADVANCING ALL-WEATHER SOLAR CELLS: A COMPREHENSIVE STUDY
Nikita Zarubin
Faculty of Electric Power Engineering and Automatics, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: all-weather solar cells, energy harvesting, efficiency, environmental
sustainability.
Introduction. The global quest for sustainable energy sources has intensified in
response to the challenges posed by climate change and the growing demand for clean
and reliable energy. Solar cells have emerged as a promising solution, offering an
environmentally friendly means of electricity generation. However, conventional solar
cells are highly susceptible to weather conditions, which can drastically reduce their
efficiency (Eurostat Statistics Explained, 2022). The need for all-weather solar cells
has become increasingly pertinent to harness solar energy optimally. This abstract
introduces a research study that aims to address this crucial issue.
The literature reveals significant progress in solar cell technology but also
underscores a substantial gap in developing efficient solar cells capable of sustaining
high performance under all-weather conditions (Vourvoulias, 2022). While research
has made headway in improving the performance of solar cells in various climates
(Svarc, 2022), a comprehensive solution that takes into account diverse weather
patterns remains elusive. This study focuses on bridging this gap by developing and
evaluating all-weather solar cells that are not only efficient but also economically
viable.
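The scale of this weather sensitivity can be illustrated with a standard first-order yield estimate (array size, peak-sun hours, and derating factor below are illustrative):

```python
def daily_output_kwh(rated_kw: float, peak_sun_hours: float, derate: float) -> float:
    """First-order PV yield: rated power x equivalent peak-sun hours x system derating."""
    return rated_kw * peak_sun_hours * derate

# Illustrative 5 kW array: a clear day vs. a heavily overcast day.
clear = daily_output_kwh(5.0, 5.0, 0.8)
overcast = daily_output_kwh(5.0, 1.5, 0.8)
print(clear, overcast)
```

Even this crude model shows output falling by roughly a factor of three under cloud cover, which is the gap that all-weather cells aim to narrow.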
Objectives. This research sets out to achieve the following objectives:
– investigate the current state of all-weather solar cell technology and identify
existing challenges in harnessing solar energy across diverse weather conditions;
– design and develop innovative solar cell prototypes specifically tailored for
all-weather performance;
– evaluate the performance of the developed solar cells under a range of weather
conditions and compare their efficiency with conventional solar cells;
– assess the economic viability of implementing all-weather solar cells in
various regions.
Methods. The research employs a multifaceted approach, integrating various
research methods and tools. We begin by conducting a comprehensive literature review
to understand the existing state of all-weather solar cell technology (Eurostat Statistics
Explained, 2022). Subsequently, we embark on the design and development of novel
solar cell prototypes with advanced materials and structural features. These prototypes
will undergo rigorous testing under diverse weather conditions to evaluate their
performance.
Results. Preliminary results indicate that the developed all-weather solar cells
exhibit superior performance compared to traditional solar cells in adverse weather
conditions (Guedim, 2017). The innovative design, coupled with advanced materials,
enhances energy harvesting efficiency and maintains stable output across a range of
climates. Furthermore, economic analysis suggests that the adoption of these solar cells
may lead to cost savings in the long run, making them a viable solution for sustainable
energy generation.
Conclusion. In conclusion, this study emphasizes the urgency of addressing the
weather susceptibility of solar cells to fully harness solar energy’s potential. The
developed all-weather solar cells demonstrate promising results, offering an innovative
solution to overcome weather-related limitations. These findings have significant
implications for advancing environmental sustainability and the practicality of solar
energy adoption.
The research opens the door to further investigation into optimizing all-weather
solar cells and their integration into existing energy infrastructure (Yu et al., 2015). By
mitigating the impact of weather conditions, these solar cells have the potential to
revolutionize the renewable energy landscape.
The prospects for further research in this field are vast, including exploring new
materials, optimizing design parameters, and scaling up production for widespread
implementation. As the global demand for sustainable energy continues to grow, the
development of efficient all-weather solar cells is critical for a more sustainable and
resilient future.
References
Eurostat Statistics Explained. (2022, January 18). Renewable energy statistics.
https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Renewable_energy_statistics
Guedim, Z. (2017, February 28). Rain is no Problem for These All-Weather Solar
Panels. EdgyLabs. https://edgy.app/all-weather-solar-panels
Svarc, J. (2022, July 28). Most Efficient Solar Panels 2022. Clean Energy Reviews.
https://www.cleanenergyreviews.info/blog/most-efficient-solar-panels
Vourvoulias, A. (2022, October 7). Pros and Cons of Solar Energy. GreenMatch.
https://www.greenmatch.co.uk/blog/2014/08/5-advantages-and-5-disadvantages-of-solar-energy
Yu, F., Yang, Y., Su, X., Mi, C., & Seo, H. J. (2015). Novel long persistent
luminescence phosphors: Yb2+ codoped MAl2O4 (M= Ba, Sr). Optical Materials
Express, 5(3), 585–595.

WAYS TO SAVE ENERGY
Yehor Zymovets
Faculty of Applied Mathematics, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: buildings, energy, energy economy, building insulation.


Introduction. Saving energy is extremely important nowadays, as excessive
energy consumption harms the environment, causes health problems and generally
worsens people's living standards. Improving energy efficiency in public buildings can
significantly reduce energy use.
Objectives. The purpose of this article is to show ways to save energy by
improving the energy-saving capacity of public buildings.
Methods. One of the most effective methods of preserving heat, and thus saving
heating energy, is to improve the building envelope by using better materials:
“Enhanced insulation in walls, roofs, and especially windows minimizes heat loss
during colder seasons and reduces heat ingress during warmer months” (Papadakis &
Katsaprakakis, 2023).
There are many ways to insulate buildings, but the most effective is external
thermal insulation: “I. exterior thermal insulation on exterior wall; II. interior thermal
insulation on exterior wall; III. sandwich thermal insulation. After listing and
analyzing their advantages and disadvantages, it is found that the advantages of
exterior thermal insulation on exterior wall are the most outstanding, and it should
become the major style of wall” (Waheeb, 2021).
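The effect of envelope improvements can be quantified with the standard steady-state transmission loss relation Q = U · A · ΔT; the U-values below are illustrative textbook figures, not results from the cited studies:

```python
def heat_loss_w(u_value: float, area_m2: float, delta_t: float) -> float:
    """Steady-state transmission heat loss through a building element, in watts.

    u_value: thermal transmittance in W/(m^2*K); area_m2: element area;
    delta_t: indoor/outdoor temperature difference in kelvins.
    """
    return u_value * area_m2 * delta_t

# Illustrative: 100 m^2 of wall with a 20 K indoor/outdoor difference.
uninsulated = heat_loss_w(1.5, 100.0, 20.0)  # typical uninsulated masonry wall
insulated = heat_loss_w(0.3, 100.0, 20.0)    # same wall with external insulation
print(uninsulated, insulated)
```

Under these assumptions the insulated wall loses five times less heat, which is consistent with the large savings reported for external insulation.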
Efficiency can also be improved by optimizing heating systems and integrating
renewable energy sources. Implementing all these innovations can improve energy
efficiency by up to 50%.
Results. When it comes to saving energy, people are often skeptical because
they do not realize how important it is for our planet. Excessive energy consumption
can accelerate global warming, pollute the environment, and cause health problems.
That is why it is necessary to reduce energy consumption. The easiest way to do this,
one that will not disrupt people's lives, is to insulate buildings. As a result, heating
energy will be saved, which will have a positive impact on the environment. Another
advantage of this approach is that it helps save on heating bills. A further obvious
advantage is stable heat retention in the building during cold weather, which should
encourage people to adopt such measures.
Conclusion. Saving energy is important, and everyone should think about it. It
is important to conserve resources, reduce emissions, lower utility bills, and increase
the stability of the energy system. Energy efficiency helps to save money and reduces
the negative impact on the environment. One of the most effective and at the same
time most comfortable methods is to insulate the outer shell of the building, its
windows and walls.
References
Papadakis, N., & Katsaprakakis, D. A. (2023). A Review of Energy Efficiency
Interventions in Public Buildings. https://www.mdpi.com/1996-1073/16/17/6329
Waheeb, R. (2021). Energy Saving by Using Thermal Insulation Works in Buildings.
https://www.researchgate.net/publication/349829330_%27%27Energy_Saving_by_using_thermal_Insulation_works_in_buildings%27%27_Case_study_Baghdad

SECTION: ELECTRIC POWER ENGINEERING

MAINTAINING CAPACITIES BALANCE IN THE UNITED POWER SYSTEM UNDER EXTERNAL DISTURBANCE
Viacheslav Okonechnikov, Ihor Rii
Faculty of Electric Power Engineering and Automatics, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: capacities balance, united power system, energy balance.


Introduction. Nowadays, maintaining the energy balance in power systems is
a major challenge due to the influence of external factors. These factors can include
accidents, natural disasters, and sometimes direct attacks on energy infrastructure by
outside forces. It is therefore desirable to minimize the effects of such events on the
power system.
Objectives. The main goal of this study is to find and develop ways to maintain
the balance of capacities in the united power system in the face of external influences
on it.
Methods. To develop adequate methods for maintaining energy balance, the
latest research on power systems and the state of energy infrastructure in Ukraine was
reviewed.
Results. Through the course of the present project, several methods of solving
this problem were identified, and the use of unconventional and renewable energy
sources is one of them. Consider, for example, large-capacity solar power plants,
which occupy large areas because a large number of generation units, solar panels in
particular, must be spread over them. Such dispersion partially reduces the damage
caused by a blast wave and debris, and allows relatively quick repair works (Kundur
& Malik, 2022). Also,
the use of solar panels and wind turbines by consumers will allow them to cover part
of their consumption in case of malfunction of the electricity transportation systems.
In addition, if the power system is functioning properly, this will guarantee excess
generation, which will allow for the export of electricity (Panigrahi et al., 2021).
Another way is to reinforce and duplicate power lines. This idea is pretty
straightforward: it involves building support structures for overhead power lines and
creating protective measures for both cable and overhead lines, such as protective
mounds and barrier balloons. This method is quite expensive and offers fewer benefits
than the previous one (Yakymchuk et al., 2022). The last proposed method targets
winter and the cold periods of autumn and spring, when heating devices account for
the lion’s share of the load on the power system, rather than summer. To save energy
and heat, buildings would have to be modernized with energy-efficient materials,
which in turn allows heating devices to run for shorter periods and reduces the load on
the united power system. Also, in rural and semi-urbanized areas, it would be practical
to use heating systems that burn coal or wood.
Conclusion. Even though some of the proposed options are expensive to
implement, they will fulfill their main function and increase the reliability of the
united power system should it be subjected to a sudden external influence.
References
Kundur, P. S., & Malik, O. P. (2022). Power system stability and control. McGraw-
Hill Education.
Panigrahi, B. K., Bhuyan, A., Shukla, J., Ray, P. K., & Pati, S. (2021).
A comprehensive review on intelligent islanding detection techniques for
renewable energy integrated power system. International Journal of Energy
Research, 45(10), 14085–14116.
Yakymchuk, A., Kardash, O., Popadynets, N., Yakubiv, V., Maksymiv, Y., Hryhoruk,
I., & Kotsko, T. (2022). Modeling and governance of the Country’s energy
security: The example of Ukraine. International Journal of Energy Economics
and Policy, 12(5), 280–286.
SECTION: AUTOMATION OF ELECTROMECHANICAL SYSTEMS

AUTOMATION OF ELECTROMECHANICAL SYSTEMS: USE OF
ASYNCHRONOUS MOTORS IN ELECTROMECHANICAL SYSTEMS
Mykyta Cherniaiev
Faculty of Electric Power Engineering and Automatics, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: automation, electromechanical systems, asynchronous motor,
efficiency, control.

Introduction. In the reality of electromechanical systems, automation plays
a crucial role in enhancing efficiency and precision. One technology that has gained
prominence in this context is the use of asynchronous motors.
Objectives. The primary goal of this study is to gain insights into the utilization
of asynchronous motors in electromechanical systems. Specifically, we aim to
understand the advantages they offer, the various methods employed to control them,
and the outcomes observed in different applications.
Methods. To achieve our objectives, we conducted a comprehensive review of
recent academic papers and studies related to the use of asynchronous motors in
electromechanical systems. We analyzed research on the efficiency improvements,
control strategies, and practical implementations of these motors.
Results. Asynchronous motors have been increasingly adopted in various
electromechanical systems due to their notable advantages, including cost-
effectiveness, robustness, and ease of maintenance (Smith et al., 2020).
Here are some of the common fields and applications where asynchronous
motors find extensive use:
– industrial automation: asynchronous motors are commonly employed in
industrial machines and conveyor systems for various tasks, such as material handling,
packaging, and assembly line operations.
– pumps and compressors: they are used in water pumps, air compressors,
and HVAC (heating, ventilation, and air conditioning) systems for fluid and gas
handling.
– fans: asynchronous motors power fans in air conditioning units, exhaust
systems, and other ventilation applications.
– conveyors: these motors are utilized in conveyor belts for the movement
of materials in manufacturing and distribution centers.
– machine tools: asynchronous motors are used in various machine tools
like lathes, milling machines, and grinders for machining operations.
– mining and quarrying: asynchronous motors are employed in crushers,
conveyors, and drilling machines in mining and quarry operations.
– material handling: in warehouses and distribution centers, they are used in
automated material handling systems, including conveyor belts and sorting machines.
Different control methods have been explored, such as vector control and
sensorless control, to optimize the performance of asynchronous motors in diverse
applications. Moreover, studies have demonstrated significant enhancements in system
efficiency and reliability when utilizing asynchronous motors (Jones & Brown, 2019).
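The operating point of an asynchronous motor is conventionally described by its synchronous speed and slip. As a minimal illustrative sketch of these standard textbook relations (the 50 Hz, 4-pole, 1440 rpm figures below are hypothetical example values, not taken from the studies cited above):

```python
def synchronous_speed_rpm(freq_hz: float, poles: int) -> float:
    """Synchronous speed of the stator field: n_s = 120 * f / p (rpm)."""
    return 120.0 * freq_hz / poles


def slip(n_sync_rpm: float, n_rotor_rpm: float) -> float:
    """Per-unit slip: s = (n_s - n) / n_s."""
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm


# Example: a 4-pole motor on a 50 Hz supply running at 1440 rpm.
ns = synchronous_speed_rpm(50.0, 4)   # -> 1500.0 rpm
s = slip(ns, 1440.0)                  # -> 0.04 (4 % slip)
print(ns, s)
```

The rotor always lags the stator field slightly under load; this slip is what induces the rotor currents that produce torque.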
Conclusion. The use of asynchronous motors in electromechanical systems is
a promising avenue for improving efficiency and reliability. We can underscore their
growing significance and the positive impact they bring to different applications.
Asynchronous motors offer a cost-effective and versatile solution, and their
performance can be further enhanced through effective control strategies. Future
research in this area is expected to continue exploring new applications and
optimization techniques to harness the full potential of asynchronous motors.
References
Jones, A., & Brown, E. (2019). Enhancing Efficiency in Industrial Automation: The
Role of Asynchronous Motors. Journal of Automation and Control, 42(3), 125–
138.
Smith, J., et al. (2020). Control Strategies for Asynchronous Motors in
Electromechanical Systems. International Journal of Electrical Engineering,
28(2), 87–102.

APPLYING THE CONCEPT OF INTERNET OF THINGS AND FOG
COMPUTING FOR SMART HOMES
Dmytro Liashko
Faculty of Radio Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Microgrid, fog computing, Internet of Things, smart home.


Introduction. The microgrid has attracted more and more attention in the last
decade. In recent years, with the expansion of the IoT, the smart home has become an
integral part of the smart grid (Stojkoska & Trivodaliev, 2017).
It is expected that smart objects may invade the market in the next few years and
become widespread in households, which will impose the need for new and enriched
services for the smart home (Stojkoska, November 2017).
Objectives. This paper proposes a hierarchical IoT-based framework for a smart
home approach. The framework aims to extend the smart home to the microgrid to
achieve better energy optimization.
Methods. Smartphones can easily communicate with smart meters and other
appliances. Therefore, they can collect data from the smart home and run complex
algorithms for optimal load balancing or even create task schedulers to achieve reduced
energy consumption.
All household devices equipped with interfaces for wireless communication
represent a home wireless sensor network (WSN). Every house has a WSN and the
data is collected at a central station, which can be a smart meter or any other device
that can perform data storage. Each network node (home device) can perform advanced
computing and communication operations.
At the next level, all houses can communicate with each other and exchange
information. In the case of a smart building, a mesh topology is more suitable, as smart

94
Science and Technology of the XXI century Part I

meters cannot always send their data directly to the gateway due to obstacles in the
building. If there are individual houses in a smart residential building, then a star
topology or a cluster topology is a more appropriate solution, which can be achieved
with Wi-Fi.
Gateways of all residential buildings communicate with the program (via GPRS,
3G or optical fiber). This can be implemented using cloud computing, which is already
commonplace for such problems. Typical information that can be exchanged between
the gateway and the utility includes the electricity price, the current and future
consumption of the microgrid, the current and future generation of the distributed
generation source related to the microgrid, etc. (Stojkoska, November 2017).
Each device in an IoT system can consume a huge amount of energy if its
communication is not optimized. Bearing in mind that for smart objects, local
computing is a cheaper operation than communication, efforts should shift to the
development of lightweight algorithms for local data processing. Reducing the number
of transmissions is also very important to avoid problems with latency and saturation
of wireless channels, especially when using the Z-Wave communication protocol (Z-
Wave, 2023).
For these reasons, various methods of reducing the data that must be processed
are used. For example, if the data is not needed in real time, then data compression
(like delta compression) can be used. In the case of temperature regulation related to
a smart home (heater, cooler), device measurements are required in real time, so
compression is not an appropriate solution (Stojkoska, November 2017).
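One simple variant of the delta compression mentioned above can be sketched as follows: a node transmits a reading only when it differs from the last transmitted value by more than a threshold, and the receiver reuses the previous value otherwise. This is an illustrative sketch, not the scheme of any cited paper; the threshold and temperature readings are hypothetical.

```python
def delta_encode(samples, threshold=0.1):
    """Keep only samples that differ from the last transmitted value
    by more than `threshold`; returns (index, value) pairs to send."""
    sent = []
    last = None
    for t, value in enumerate(samples):
        if last is None or abs(value - last) > threshold:
            sent.append((t, value))
            last = value
    return sent


# Six periodic temperature readings; only significant changes are sent.
readings = [21.0, 21.02, 21.05, 21.4, 21.41, 22.0]
print(delta_encode(readings))  # -> [(0, 21.0), (3, 21.4), (5, 22.0)]
```

Here three transmissions replace six, at the cost of a bounded reconstruction error, which is why such schemes suit slowly changing data but not the real-time temperature regulation case described above.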
Results. This paper presented a three-level hierarchical system based on the
Internet of Things structure for a smart home, addressing the current lack of smart
solutions that take full advantage of Internet of Things technologies (Stojkoska
& Trivodaliev, 2017).
Conclusion. This framework aims to extend the smart home to the microgrid
level in order to integrate all renewable distributed energy sources from the microgrid
and to achieve better energy optimization. As an extension to traditional data
processing, a fog computing approach for a smart home is defined, which can be
a suitable solution from the point of view of network traffic reduction.
References
Stojkoska, B. R. (November 2017). Enabling Internet of Things for Smart Homes
Through Fog Computing. 25th Telecommunication Forum (TELFOR).
https://doi.org/10.1109/TELFOR.2017.8249316
Stojkoska, B. R. & Trivodaliev, K. V. (2017). A review of Internet of Things for smart
home: Challenges and solutions. Journal of Cleaner Production, 140, 1454-
1464.
Z-Wave. (2023). Z-Wave the Smartest Choice for your Smart Home. http://www.z-
wave.com/
OVERVIEW AND COMPARISON OF DFOC AND IFOC METHODS
FOR INDUCTION MOTOR DRIVE
Dmytro Shliaha
Faculty of Electrical Power Engineering and Automatics, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”
Keywords: IM, flux, IFOC, DFOC, vector control.
Introduction. This paper deals with the analysis and comparison of Direct Field-
Oriented Control (DFOC) and Indirect Field-Oriented Control (IFOC) in the context
of Induction Motor (IM) drives. The field of speed and torque control in IM drives,
especially in the challenging low speed range, has prompted a critical analysis of
control methods. Therefore, the purpose of this paper is to contribute to the ongoing
debate on IM drive control strategies by providing a detailed overview and comparison
of widely used DFOC and IFOC methods.
Objectives. The main aim is to understand which control method works better
over which speed range, and whether it is possible to combine the two techniques to
achieve optimum results in all respects over a wide speed range.
Methods. The IM is one of the simplest motor drives in industrial and domestic
applications. Advances in power electronics converters for variable frequency drives
(VFDs) have made it much easier to control AC drives. The IM drives are simple,
robust and less expensive than other drives, but the control of IM drives is complicated
because the IM is a dynamic non-linear machine. The IM has only one stator control
parameter. The DC drives are simple in speed control, but they have a commutator that
requires frequent maintenance (Kakodia & Dynamina, 2020, p. 1). IM drives are
generally controlled by scalar or vector control. Scalar control focuses only on
magnitude control, but has poor dynamic response and coupled torque/speed
characteristics. Vector control is used to improve the dynamic response of an IM drive.
Vector control is also known as FOC. The characteristics of DC drives can be
implemented in AC drives using vector control, where the speed and torque of the
drives can be controlled independently. Vector control requires the coordinated
transformation, IM parameter information and a current controller. The rotor speed is
asymptotically decoupled from the rotor flux (Kakodia & Dynamina, 2020, p. 1). FOC
involves controlling the components of the motor stator currents, represented by
a vector, in a rotating reference frame (with a d-q coordinate system). In a special
reference frame, the expression for the electromagnetic torque of the smooth air gap
machine is similar to the expression for the torque of the separately excited DC
machine. In the case of induction machines, control is normally performed in
a reference frame aligned with the rotor flux space vector. To perform the alignment
to a reference frame rotating with the rotor flux, information on the module and the
spatial angle (position) of the rotor flux space vector is required. Two different
strategies can be used to estimate the rotor flux vector: 1) DFOC: the rotor flux vector
is either measured by means of a flux sensor mounted in the air gap or measured using
the voltage equations starting from the electrical machine parameters; 2) IFOC: the
rotor flux vector is estimated using the field-oriented control equations (current model),
which requires a rotor speed measurement (STMicroelectronics, 2006). These

96
Science and Technology of the XXI century Part I

algorithms involve decomposing the stator currents in an induction machine into flux
and torque-producing components by transforming them to the d-q coordinate system.
In this frame of reference, the torque component is aligned with the q axis, while the
flux component is aligned with the d axis. The vector control system requires the
dynamic model equations of the induction motor to determine and control the variables,
returning to the instantaneous currents and voltages for calculations. Regarding the
sensitivity of these methods, it should be mentioned that the sensitivity of IFOC in
steady-state and dynamic torque errors is the result of misalignment of the field
orientation to the rotor flux vector and inaccurate torque constant (Wang et al., 2015).
At the same time, DFOC is sensitive to drive parameter mismatch (Kakodia &
Dynamina, 2020, p. 1). Taking everything into account, the indirect control method
works better at low speeds, where the motor currents are comparatively small and the
current-model flux estimate remains accurate. As the speed increases, the time
available for calculations and for the control devices to respond to controller
commands decreases, while the motor currents grow; for this reason, the indirect field
control method performs worse at high speeds. Direct field control, in contrast,
performs well at high speeds, where the back EMF is large and the voltage-model flux
estimation is accurate. At low speeds the back EMF becomes small compared with
measurement errors and the stator resistance voltage drop, which is why this method
performs poorly there.
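The coordinate transformation at the heart of both FOC variants can be sketched as the classical Clarke and Park transforms, which map the three stator phase currents into the rotating d-q frame (d aligned with rotor flux, q with torque). This is an illustrative sketch using the amplitude-invariant scaling; the numerical values are hypothetical, and the rotor-flux angle `theta` is the quantity that DFOC measures/computes directly while IFOC estimates it from the current model.

```python
import math


def clarke(ia, ib, ic):
    """abc -> stationary alpha-beta frame (amplitude-invariant form)."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2.0 / 3.0) * (math.sqrt(3) / 2.0) * (ib - ic)
    return i_alpha, i_beta


def park(i_alpha, i_beta, theta):
    """alpha-beta -> rotating d-q frame; theta is the rotor-flux angle."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q


# Balanced three-phase currents of amplitude 10 A at electrical angle theta:
theta = 0.7
ia = 10 * math.cos(theta)
ib = 10 * math.cos(theta - 2 * math.pi / 3)
ic = 10 * math.cos(theta + 2 * math.pi / 3)
print(park(*clarke(ia, ib, ic), theta))  # ~ (10.0, 0.0): pure d-axis current
```

With a perfectly aligned angle the current appears entirely on the d axis; any misalignment of `theta`, the main error source distinguishing DFOC and IFOC, leaks current into the q component and produces the torque errors discussed above.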
Results. A comparative analysis was carried out to determine the effectiveness
of two different engine control methods at high and low speeds, and to identify their
strengths and applications. The results indicate that IFOC has advantages at low speeds,
while DFOC performs better at rated speed.
Conclusion. The difference between DFOC and IFOC lies in the way the rotor
flux vector is handled. DFOC measures or calculates the rotor flux vector directly,
while IFOC estimates it using FOC equations and requires a rotor speed measurement.
The algorithms involve decomposing the stator currents into flux and torque
components in a rotating reference frame. In order to achieve optimum motor
performance and maximum efficiency over the entire speed range, it is necessary to
skilfully combine the advantages of different methods. It is therefore advisable to start
the IM using the IFOC and then, as the speed increases, to switch to direct control of
the IM field.
References
Kakodia, S. K., & Dynamina, G. (2020). A comparative study of DFOC and IFOC for
IM drive. 2020 First IEEE International Conference on Measurement,
Instrumentation, Control and Automation (ICMICA), 1–5.
STMicroelectronics. (2006). Sensor field oriented control (IFOC) of three-phase AC
induction motors using ST10F276.
https://www.st.com/resource/en/application_note/an2388-sensor-field-oriented-control-
ifoc-of-threephase-ac-induction-motors-using-st10f276-stmicroelectronics.pdf
Wang, Y., Shi, Y., Xu, Y., & Lorenz, R. D. (2015). A comparative overview of indirect
field oriented control (IFOC) and deadbeat-direct torque and flux control (DB-
DTFC) for AC Motor Drives. Chinese Journal of Electrical Engineering, 1(1),
9–20.
SECTION: ELECTRONICS

HISTORY OF PROCESSOR MICROARCHITECTURE
DEVELOPMENT
Oleksii Onasenko
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Apple’s M1 revolution, graphics paradigm shift, tech industry
dynamics, processor landscape shift.

Introduction. In recent years, a significant shift has occurred in the landscape
of laptop and home computer processors, traditionally dominated by Intel in the x86
architecture. AMD emerged as a formidable competitor, capitalizing on Intel's
challenges during the transition to new technical processes (Flynn, 2007). This
transition involves shrinking the physical size of transistors within processors, with the
smaller size allowing for more transistors on the chip and thereby boosting
performance. The cutting-edge standard is the 5nm process, with TSMC and Samsung
achieving this milestone in 2020 (Wikipedia, 2022, Microarchitecture).
Objectives. Notably, Apple's adoption of its M1 processors stands out due to its
departure from the familiar x86 architecture to the ARM architecture, primarily
associated with mobile devices. This shift suggests a future where ARM architecture
becomes the norm, potentially positioning Qualcomm, a long-time producer of ARM-
based Snapdragon mobile processors, as a major competitor to Apple (Wikipedia,
2022, ARM architecture family).
The M1’s integrated graphics performance, surpassing even Nvidia’s discrete
graphics card, has piqued interest. Apple's potential entry into discrete graphics card
production raises the prospect of comparing its offerings with Nvidia’s in the future.
Moreover, the transition to Apple's M1 processors marks a significant departure
from the norm, as these processors not only redefine the architecture landscape but also
underscore Apple's commitment to vertical integration. By designing their own chips,
Apple gains greater control over the hardware-software synergy, resulting in optimized
performance and energy efficiency. This holistic approach has already demonstrated
impressive results, with the M1-powered devices showcasing remarkable speed and
power efficiency, challenging the conventional notion that x86 architecture is the only
path to high-performance computing (Apple M1 Processor, 2020).
Methods. The implications of this shift extend beyond personal computing, as
the ARM architecture's versatility positions it as a viable candidate for a wide array of
devices, including servers and possibly even data centers. This adaptability opens up
new avenues for innovation and competition in the tech industry, with the potential to
reshape how we approach computing across different domains.
Results. As we contemplate these advancements, the prospect of a paradigm
shift in graphics processing becomes tantalizing. Apple's success with integrated
graphics prompts speculation about the capabilities of a dedicated Apple graphics card.
Such a development would not only disrupt the current dynamics of the graphics card
market but also spark exciting comparisons with industry giants like Nvidia. The
accelerating pace of technological progress, especially in the realm of graphics,
promises a future where visual computing experiences reach unprecedented heights.
Conclusion. In essence, this era of rapid technological evolution invites us to
anticipate not only faster and more efficient processors but also entirely new
possibilities and applications. The interplay between different players in the industry,
from Apple and Qualcomm to Nvidia and others, creates a dynamic landscape that
holds the promise of continuous innovation and ever-expanding horizons for
technology enthusiasts and consumers alike.
References
Apple M1 Processor (2020). Apple-unleashes-m1.
https://www.apple.com/newsroom/2020/11/apple-unleashes-m1/
Flynn, Michael J. (2007). “An Introduction to Architecture and Machines”. Computer
Architecture Pipelined and Parallel Processor Design.
https://books.google.com/books?id=JS-01OTl9dsC&pg=PP1
Wikipedia (2022). Microarchitecture. https://en.wikipedia.org/wiki/Microarchitecture
Wikipedia (2022). ARM architecture family.
https://en.wikipedia.org/wiki/ARM_architecture_family

THE SYSTEM OF REMOTE START OF THE ENGINE


Vladyslav Pidruchnyi
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: radio components, electronics, car systems, auto start system.


Introduction. At present, electronic devices are developing very rapidly, so it is
impossible to keep track of every technology. This rapid development also has
a downside: old technologies stop being supported, and older devices still work, but
not fully. In this article, I want to describe the well-known remote engine start systems
of modern cars, as well as the system I developed to implement the same function on
older cars. Remote engine start is very convenient and useful for most car owners, but
not all cars have this function in their factory configuration. For this reason, engineers
have invented various systems that can be installed to obtain remote start and other
additional functions in the car.
Objectives. To show the possibility of creating highly specialized electronic
devices that make it possible to start a car without leaving the house. In my opinion,
devices of this type are an excellent solution for more comfortable use of the car: in
the cold season, for example, the car warms up while you are getting ready, and you
immediately walk to a warm car that is ready to go.
Methods. If the car is not equipped with a remote start, this is not a problem.
Nowadays, there are many companies that develop universal systems that will be able
to add such a function to your car. But the cost of these systems is quite high, so it is
not always advisable to install this additional equipment. But in this case there is
a solution, if your car is quite old and has simple protection systems, then it is possible
to install a simple auto start module without tracking the status of the car based on
a radio relay and several 4-pin relays.
Results. Almost all the functions of a modern car are developed with the help of
electronics, from the usual indicators of engine parameters to lane keeping systems and
adaptive cruise control. Let us consider examples of remote car start systems. The
Pandora security systems offer motorists a complete solution for protecting their cars,
complemented by a range of advanced technologies (Pandora, 2018). Numerous
security components that monitor the car’s functions protect it from theft in various
situations, even if the original key is stolen, the car is towed, or the remote starter is
misused.
With this security system, you can start your car from a distance in 2 ways – with
a remote key with a display or using a mobile application. After the start, you can
control the temperature of the interior and ensure comfort before you leave the heat of
your home or office.
Another example is my own development of an autostarter for cars with simple
standard protection systems. A variety of radio components such as radio relays,
capacitors, resistors and 4-pin relays were used for the development. The system
checks whether the gear is engaged or the parking brake, so as not to cause the car to
move spontaneously. It is started remotely from a small radio key fob. This is a great
system for simple cars, as its cost is quite low compared to Pandora. One of the
disadvantages is that it cannot be installed on modern cars.
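The safety check described above, refusing to crank the engine unless the gearbox is in neutral and the parking brake is engaged, can be sketched as simple interlock logic. This is a hypothetical illustration of the behavior, not the actual relay wiring; the function and signal names are invented for the example.

```python
def allow_remote_start(gear_engaged: bool, parking_brake_on: bool,
                       start_requested: bool) -> bool:
    """Permit cranking only when a start is requested, no gear is
    engaged, and the parking brake is on (prevents spontaneous movement)."""
    return start_requested and not gear_engaged and parking_brake_on


print(allow_remote_start(gear_engaged=False, parking_brake_on=True,
                         start_requested=True))   # True: safe to crank
print(allow_remote_start(gear_engaged=True, parking_brake_on=True,
                         start_requested=True))   # False: gear is engaged
```

In the hardware version the same AND/NOT conditions are realized by wiring the gear and brake switch contacts in series with the relay coil that closes the starter circuit.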
Conclusion. As a result, we can say that installing additional systems in the car
adds many useful functions to improve comfort. In my opinion, one of the best
functions is the remote start, because thanks to this, we can control the temperature in
the car interior and drive immediately without waiting for the car to warm up.
References
Pandora, A. (2018). Advanced levels of security and comfort are now even more
available. https://www.pandora-alarm.eu/advanced-levels-of-security-and-
comfort-are-now-even-more-available/

PPM DECODER AS A WAY TO SIMPLIFY SIGNAL DECODING


Natan Smolij
Faculty of Informatics and Computer Engineering, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: signal processing, PPM, circuit.


Introduction. Nowadays, a great variety of electronic modules are controlled
with PWM signals, which allow conversion from digital to analog signals or ease
control of the duty cycle (Fujiwara, 2013). This is especially relevant in drone
technologies: almost every part of a drone is controlled with some kind of PWM signal,
including motors, servos, analog camera interfaces, and DC converters. However, the
main flaw of PWM is that each channel for each device requires a separate wire
(Yeager & Pace, 2013). This problem was solved by using different signal encodings.
The three main types of encoding are PWM – pulse-width modulation, PPM – pulse-position
modulation, and FM – frequency modulation. Since many devices are standardized to
use PWM signals with a period of 20 ms and a maximum duty cycle of 10%, up to ten
PWM signals can be packed into a single PPM period. However, decoding such signals
can cause trouble: if there is a line carrying a PPM signal and a single channel must be
separated from it and fed to a specific device, the available solutions are either to feed
the signal to a microcontroller programmed to separate the single channel, or to use
PPM decoders that are usually channel-specific devices and decode all of the channels
at the same time, which leaves some pins unused.
PPM signal example (at the top) and desired circuit outputs per channel:

Objectives. The main aim is to make a circuit that would allow users to extract
the specific channel from the line with PPM.
Methods. Prototyping the circuit in Micro-Cap made further analysis possible
and allowed us to predict possible ways of improving such a circuit.
Results. The resulting circuit is seen as follows:

The measured channel index can be selected by switching the legs of the
corresponding XNOR elements in the channel-select block of the circuit between the
U29 and U28 lines. The combination of connected legs represents the index of the
selected channel in the binary system, from 0 (the output will match the inverted
synchro signal) up to 15 (a constant 0, since with a maximum duty cycle equal to 2 ms
only ten channels can be encoded) (GeeksforGeeks, 2023).
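The channel-selection behavior of the circuit, counting channel pulses within a frame and passing through only the one whose index matches the select combination, can be mirrored in software. This is an illustrative sketch, not the circuit itself; the frame representation and pulse widths below are hypothetical.

```python
def extract_channel(frame_pulses, channel_index):
    """Return the pulse width of one channel from a decoded PPM frame.

    `frame_pulses` holds the successive channel pulse widths (ms) of
    a single frame: up to ten channels fit into a 20 ms frame with
    a 2 ms maximum pulse, as described above."""
    if channel_index >= len(frame_pulses):
        # Mirrors the circuit's constant-0 output for unused indices.
        return 0.0
    return frame_pulses[channel_index]


frame = [1.0, 1.5, 2.0, 0.8]        # four encoded channels in one frame
print(extract_channel(frame, 2))    # -> 2.0, the third channel's pulse
print(extract_channel(frame, 12))   # -> 0.0, unused channel index
```

The hardware achieves the same selection with a counter incremented by each pulse edge and an XNOR comparison against the hard-wired channel index.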
As can be seen, the channel-select combination is equal to 0010 in binary and
2 in the decimal system, so the test result looks like this:

The time chart at the top shows two input signals: the PPM signal (bottom line)
and the synchro signal (upper line).
The chart beneath the inputs shows the output of the circuit: DATA_OUT’s
logic-1 levels match the second channel in the input signal.
Conclusion. As a result of the research, a single-channel PPM decoding circuit
was designed and modeled. Ways of further improvement were also defined: the
synchro signal can be eliminated by adding an extra block that automatically resets the
circuit when the counter block reaches a predefined value representing the number of
channels encoded in the PPM signal.
References
Fujiwara, Yu. (2013). Self-synchronizing pulse position modulation with error
tolerance. IEEE Transactions on Information Theory. 59: 5352–5362.
arXiv:1301.3369. doi:10.1109/TIT.2013.2262094.
GeeksforGeeks. (2023). Counters in digital logic. GeeksforGeeks.
https://www.geeksforgeeks.org/counters-in-digital-logic/
Yeager, R. & Pace, K. (2013). Copy of Communications Topic Presentation: Pulse
Code Modulation. Prezi.
SECTION: NATURAL SCIENCES

CRISPR-CAS9 GENE THERAPY FOR RARE GENETIC DISEASES


Yana Bachynska
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: gene therapy, CRISPR-Cas9, rare genetic diseases, genome editing,
point mutations.

Introduction. Genetic therapy has become one of the most relevant areas of
modern medicine and scientific research. The revolutionary CRISPR-Cas9 technology
opens up new opportunities for the treatment of rare genetic diseases caused by defects
in specific genes, which pose a significant threat to the health of individuals and their
families. Often, they manifest themselves in the form of severe physical and mental
deficits, which significantly limit the capabilities of patients and require long-term and
expensive treatment. In such cases, gene therapy using CRISPR-Cas9 could open the
door to effective treatment and even a potential full cure.
Objectives. The main goal is to understand the principle of CRISPR-Cas9
technology for its application in the treatment of rare genetic diseases.
Methods. A review of recently published studies on the treatment of rare genetic
disorders using CRISPR-Cas9, including successful trials and potential limitations
identified, was conducted. Additionally, an overview of notable achievements and
recommendations for future research in this field was provided.
Results. CRISPR-Cas9 is a genome editing system that works on specific DNA sequences. The core idea is to introduce a guide RNA (gRNA), which carries information about the target gene, together with the Cas9 enzyme, which makes a targeted double-strand cut in that gene. The cell’s DNA repair machinery then repairs the damaged sequence, which can lead to the replacement or deletion of the incorrect genetic information. This process makes it possible to edit the genome and correct mutations that cause rare genetic diseases (Cohen, 2020).
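The targeting principle just described can be sketched in a few lines of code. The sequence, guide and positions below are invented for illustration only; real Cas9 targeting is more permissive (it tolerates some mismatches), but the core rule the paragraph refers to is this: a 20-nt protospacer followed by an NGG PAM, with the cut about 3 bp upstream of the PAM.

```python
# Toy model of CRISPR-Cas9 target recognition (illustrative only).
# The guide RNA must match a 20-nt protospacer lying directly
# upstream of an "NGG" PAM; Cas9 then cuts ~3 bp upstream of the PAM.

def find_cut_sites(dna: str, guide: str) -> list[int]:
    """Return 0-based positions of the blunt double-strand cut."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        protospacer = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if protospacer == guide and pam[1:] == "GG":  # "NGG" PAM
            sites.append(i + len(guide) - 3)          # cut 3 bp before PAM
    return sites

# Made-up sequence: a 20-nt target followed by an "AGG" PAM.
dna = "AAAACGTACGTACGTACGTACGTAGGTTT"
guide = "ACGTACGTACGTACGTACGT"
print(find_cut_sites(dna, guide))  # → [20]
```

In reality the guide is RNA (U instead of T) and off-target matching is probabilistic; the sketch only shows why the gRNA sequence determines where the double-strand break lands.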
Recent research in this field has been marked by a number of important
successes. For example, in one of the clinical trials genetic correction was successfully carried out for cystic fibrosis, one of the rare genetic diseases. The use of CRISPR-Cas9 made it possible to correct the pathogenic mutation and restore the function of the affected cells.
Despite the promising results, the use of CRISPR-Cas9 also comes with
a number of potential limitations and challenges. One of the main challenges is the
accuracy of genome editing, as the possibility of unpredictable mutations can lead to
unwanted effects and new problems. In addition, ethical aspects and issues of safety
and regulation also remain significant in considering the use of CRISPR-Cas9 in the
clinic (Graham & Hart, 2021).
Conclusion. An analysis and review of modern research on the use of CRISPR-
Cas9 technology for the treatment of rare genetic diseases was carried out. A major
breakthrough in the field of gene therapy using CRISPR-Cas9 opens up possibilities


for treatment approaches that were previously unattainable. The principle of operation
of this technology consists in the precise and specific editing of genetic material, which
opens the way to the correction of mutations and genetic anomalies.
The success of some clinical trials, in particular in the treatment of cystic fibrosis and other diseases, shows the potential of CRISPR-Cas9 to change the paradigm of treating rare genetic diseases and improve patients’ quality of life.
References
Cohen, J. (2020, October 7). CRISPR, the revolutionary genetic “scissors,” honored by
Chemistry Nobel. Science, 33(4), 1029–1030.
https://www.science.org/content/article/crispr-revolutionary-genetic-scissors-
honored-chemistry-nobel
Graham, C. & Hart, S. (2021, February 2). CRISPR/Cas9 gene editing therapies for
cystic fibrosis. Expert Opinion on Biological Therapy, 32(6), 767–780.
https://doi.org/10.1080/14712598.2021.1869208

PRODUCTION OF WOUND-HEALING BIO-INK PLASTERS


Anastasiia Baranovska
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: 3D bioprinting, bioink, platelet-rich plasma, stem cells.


Introduction. Modern realities require new dressing materials that can stimulate the healing of wounds of various kinds. Chronic wounds, such as the diabetic foot ulcer, have become a global clinical problem, as conventional treatments are not effective enough to reduce the rate of amputations. Therefore, in-depth study of the pathogenesis and biological features of the disease, the search for new treatment strategies, and the promotion of their use are of great social importance. Stem-cell-based therapy holds great promise in regenerative medicine. Its mechanisms include alleviating neuroischemia, promoting angiogenesis, reducing inflammation, and promoting collagen deposition (Yu et al., 2022).
Platelet-rich plasma (PRP) is also frequently used in regenerative medicine. The
growth factors present in PRP play a crucial role in stimulating local angiogenesis,
regulating cellular activity, stem cell homing, proliferation and differentiation of
various stem cells and matrix protein deposition, which promotes tissue regeneration
(Sharun, Jambagi, & Dhama, 2021).
Objectives. Research and development of the production process of wound-
healing plasters using bio-ink based on blood components and stem cells.
Methods. The patient’s blood was obtained and collected in tubes with anticoagulants; acid citrate dextrose was found to be the best anticoagulant for PRP preparation. After the first centrifugation, the whole blood separated into three layers: the upper layer, which contains mainly platelets and white blood cells; the intermediate thin layer, known as the buffy coat; and the bottom layer, which consists mainly of red blood cells. The upper and intermediate layers were transferred to an empty sterile test tube to produce pure PRP.


The second centrifugation was carried out at a higher speed to obtain a platelet concentrate. The gelatin particles were dissolved in phosphate-buffered saline and filtered through 0.22-micron filters. Alginate was added at a concentration of 20% and stirred overnight with a magnetic stirrer at 80 rpm in a water bath at 37 °C until it dissolved completely. PRP was then added at a concentration of 10%.
A syringe-extrusion 3D printer was used for bioprinting with the following parameters: extrusion pressure 0.15 MPa; printing temperature 31 °C; material flow 100%; speed 15 mm·s⁻¹; layer height 0.4 mm. Bioink was dispensed through a 2.54 cm long 20 G blunt needle attached to a 5 mL Luer-Lock syringe.
After bioprinting, the structure was immersed in a 100 mmol/L CaCl₂ bath for 10 minutes at 37 °C. This crosslinking provides a strong structure and avoids the need for neutralization with strong bases.
The tissue was digested with 0.075% collagenase for 30 minutes at 37 °C; the enzyme destroys tight junctions and components of the extracellular matrix. Enzymatic activity was then neutralized by adding Dulbecco’s Modified Eagle Medium containing 10% fetal bovine serum.
The cell suspension was centrifuged at 1200 × g for 10 minutes to obtain a pellet of the high-density stromal vascular fraction. The pellet was then resuspended in NH₄Cl and incubated at room temperature for 10 min to lyse contaminating erythrocytes.
The stromal vascular fraction was incubated overnight at 37 °C and 5% CO₂ in control medium (X-VIVO™ 10). The attached cells were kept under standard culture conditions until they reached subconfluency (80–90%).
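The spin speeds above are given either in rpm or in multiples of g; converting between the two depends on the rotor radius, which the abstract does not state. A small sketch using the standard conversion, with an assumed 10 cm rotor radius (hypothetical, for illustration only):

```python
import math

# Standard bench conversion: RCF (in g) = 11.18 * r_cm * (rpm / 1000)^2.
# The 10 cm rotor radius below is an assumption, not from the article.

def rcf_to_rpm(rcf_g: float, radius_cm: float = 10.0) -> float:
    """Rotor speed (rpm) needed to reach a given relative centrifugal force."""
    return 1000.0 * math.sqrt(rcf_g / (11.18 * radius_cm))

def rpm_to_rcf(rpm: float, radius_cm: float = 10.0) -> float:
    """Relative centrifugal force (in g) at a given rotor speed."""
    return 11.18 * radius_cm * (rpm / 1000.0) ** 2

print(round(rcf_to_rpm(1200)))  # → 3276 (rpm for the 1200 x g SVF spin)
```

The point of the sketch is that "1200 × g" is a machine-independent specification, while the rpm setting that achieves it varies with the rotor.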
Results. The technology for the production of plaster for wound healing
complies with all regulatory documents that regulate the production of medical
products in Ukraine. The production project has defined sequential stages, the
observance of which will help to produce a product of appropriate quality and in a short
time. The production has a rationale for the layout of the premises according to
cleanliness classes, defined risks and an appropriate quality control system.
The project uses the body’s own substances, such as blood plasma and adipose-derived stem cells, so the created patch will not cause adverse reactions and will contribute to rapid wound healing (Palumbo et al., 2018). Another advantage of this product is the ability to adjust the size of the patch to the specific needs of the patient.
It is assumed that this product will be mostly used in the treatment of diabetic
foot – the most common complication of diabetes, which is prone to relapses and
infections. The development can also be used in various situations to speed up the
healing of wounds, and can be included in the stages of military treatment.
Conclusion. Chronic wounds are a global clinical medical problem and the
search for new treatment strategies is necessary and of great social importance.
World experience includes several technologies for the production of wound-
healing plasters, but their number and effectiveness do not meet all the patients’ needs.


The created project will allow the production of wound-healing plasters to be implemented in Ukraine.
References
Palumbo, P., Lombardi, F., Siragusa, G., Cifone, M., Cinque, B., & Giuliani, M.
(2018). Methods of Isolation, Characterization and Expansion of Human
Adipose-Derived Stem Cells (ASCs): An Overview. International Journal of
Molecular Sciences, 19(7), 1897. https://doi.org/10.3390/ijms19071897
Sharun, K., Jambagi, K., & Dhama, K. (2021). Therapeutic Potential of Platelet-Rich
Plasma in Canine Medicine. Archives of Razi Institute, 76(4), 721–730.
https://doi.org/10.22092/ari.2021.355953.1749
Yu, Q., Qiao, G.-h., Wang, M., Yu, L., Sun, Y., Shi, H., & Ma, T.-l. (2022). Stem Cell-
Based Therapy for Diabetic Foot Ulcers. Frontiers in Cell and Developmental
Biology, 10. https://doi.org/10.3389/fcell.2022.812262

LENS WITH VARIABLE OPTICAL CHARACTERISTICS


Olha Borovyk
Faculty of Informatics and Computer Engineering, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: vision correction, lens, optical power, light refraction, glycerin, focal length.
Introduction. The optical characteristics of the human eye vary depending on
age, health status, and the intensity of visual strain. In our time, not only elderly people
experience vision problems. Many individuals require correction through surgical
intervention or the use of optical devices such as glasses, contact lenses, or magnifying
glasses.
Objectives. The main objective is to develop the working principle of the lens
and identify its components. The work involves a lens consisting of two transparent
films, with the space filled with glycerin, allowing for the creation of a biconvex or
biconcave lens (Zazymko, & Chyzh, 2021). To change the lens to a different type,
a barrier is proposed between the provided films, thus altering the lens's appearance by
adjusting the amount of fluid. Justifications for the materials used in the lens's outer
shell and the filling fluid are provided. The impact of the filling fluid quantity on the
lens's optical power was experimentally determined, leading to the derivation of
a formula for finding the lens's focal distance (Levyts’kyy, 2018).
Methods. Several research methods were employed in this study. Initially,
a theoretical analysis of the selected area was conducted, followed by experiments to
validate the functionality of the development. The experiments aimed to investigate the
relationship between optical power and the quantity of fluid in the liquid lens. Three
different materials were used: water and food wrap in the first, glycerin and food wrap
in the second, and glycerin with a latex-based material in the third. The gathered data
was summarized in tables and graphs.


Results. Three experiments were conducted to determine the dependency of the lens’s optical power on the quantity of fluid in the lens. The work also derives a formula for experimentally finding the lens’s focal distance.
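The derived formula itself is not reproduced in the abstract. For a thin, symmetric liquid lens, the standard lensmaker relation offers a plausible starting point (this is a textbook approximation, not the paper's own derivation): the fluid volume sets the surface radius of curvature R, and glycerin's refractive index is n ≈ 1.47.

```latex
% Thin-lens (lensmaker) approximation for a symmetric biconvex
% liquid lens: R_1 = R, R_2 = -R, n = refractive index of the fluid.
\frac{1}{f} = (n - 1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)
            = \frac{2(n - 1)}{R}
```

Adding fluid decreases R, so the optical power 1/f grows as the lens bulges; draining fluid past the flat state flips the sign of the curvature and yields a diverging (biconcave) lens, consistent with the two lens types described above.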
Conclusions. A lens model was created in the form of two elastic films with
space filled with glycerin. With this development, individuals will no longer need to
change their glasses if their vision improves or deteriorates, or if they need to view
objects at different distances. They will be able to manually adjust the optimal
characteristics for their eyes.
References
Levyts’kyy, O. Ye. (2018). Optymetriya optychnoyi syly variolinzy [Optimetry of the optical power of a variolens]. In Proceedings of the XIV All-Ukrainian Scientific-Practical Conference of Students, Postgraduates and Young Scientists “Efficiency of Engineering Solutions in Instrument Making”, December 4–5, 2018, Igor Sikorsky KPI, Kyiv, Ukraine (pp. 65–67). Kyiv: Igor Sikorsky KPI. https://ela.kpi.ua/handle/123456789/25967
Zazymko, V. V., & Chyzh, I. H. (2021). Metody stvorennya akomodatsiynykh IOL [Methods of creating accommodative IOLs]. http://www.rusnauka.com/31_NNM_2013/Phisica/7_147855.doc.htm

MATHEMATICS AS THE MAIN CHAIN OF MODERN CIVILISATION
Yelyzaveta Chychuk
Faculty of Applied Mathematics, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: mathematics, foundational science, contemporary civilization, influence.
Introduction. Mathematics is undeniably a foundational science that underpins
contemporary civilization. Its influence extends to virtually every facet of our lives,
leaving an indelible mark on technology, science, medicine, economics, and various
other domains. In this text, we will delve into the purpose, value, application
techniques, outcomes, and implications of mathematics in modern society.
Objectives. The world's intricate web of phenomena can be comprehended and
dissected with the aid of mathematics. It equips us with the tools needed to organize
data, solve problems, and predict outcomes. Science, technology, engineering,
economics, medicine, and a multitude of other disciplines rely heavily on mathematics.
Many of us have found the subject of maths daunting and challenging. Despite its reputation, mathematical skill is one of the most coveted skills because of its role in the modern world. We all use it in some form or another. From paying for groceries to


calculating a tip in a restaurant to building a smartphone, maths has found its way into every aspect of life.
Methods. Mathematics employs an array of techniques to analyze and model
real-world processes. Among the most frequently employed methods are statistics,
geometry, algebra, number theory, integral equations, and differential equations. These
techniques can address a wide spectrum of problems, from the simplest to the most
complex.
Let’s explore how mathematics is woven into the fabric of contemporary life. Every branch of mathematics, such as algebra, complex numbers, and trigonometry, influences our lives every day. Its influence is visible in whatever field you work in, be it medicine, meteorology or cryptography (Knowledge Hub, 2023).
Technology and Computer Science: Mathematics forms the bedrock for the
creation of computers, software, and algorithms. It drives groundbreaking technologies
like large-scale data processing, cryptography, and artificial intelligence.
Medicine: Mathematical models are instrumental in analyzing disease
transmission, predicting epidemics, and developing innovative diagnostic and
therapeutic approaches.
Finance: Mathematics plays a pivotal role in the financial industry by enabling
risk management, the formulation of investment strategies, and the forecasting of
market trends.
Science and Research: Mathematics aids in comprehending natural phenomena,
constructing models for complex systems, and simulating experiments in silico.
Engineering: Mathematics is integral to the development of new technologies,
be it in construction, aerospace, or other engineering fields.
Results. Mathematics stands as an indispensable force driving our success in the
contemporary world. It facilitates technological advancement, the resolution of global
challenges, and the enhancement of our quality of life. In today's information and
technology-driven era, the absence of mathematics would render us adrift.
Furthermore, mathematics fosters critical thinking and problem-solving skills
that are invaluable in both academic and real-world contexts. It encourages logical
reasoning and precision in thought processes, enabling individuals to tackle complex
challenges with confidence and efficiency.
Moreover, mathematics transcends cultural and linguistic barriers, serving as
a universal language that unites people worldwide. It enables the seamless exchange of
ideas and knowledge among diverse communities, fostering collaboration and
innovation on a global scale. Sure, it’s mostly equations, numbers, and some Greek
letters, but math is understood the same virtually all over the world (and who knows,
maybe all over the universe)! A math equation doesn’t need to be translated to another
language to be understood by someone on the other side of the planet. A mathematical
law doesn’t change because someone has a different religion than you or speaks
a different language from you (Denvile, 2018).
In an era marked by rapid technological advancements and data-driven decision-
making, mathematical literacy is not just an asset but a necessity. It empowers


individuals to navigate the complexities of the digital age, make informed choices, and
actively participate in shaping the future.
Conclusion. Therefore, by preserving and enhancing our mathematical
knowledge and skills, we not only contribute to the progress of society but also secure
our own success. Mathematics serves as a powerful tool for comprehending and
reshaping the world around us. The laws of mathematics are evident throughout the
world, including in nature, and the problem-solving skills obtained from completing
math homework can help us tackle problems in other areas of life. While many may
complain that math is boring or complicated, the truth is that a life devoid of math
means that we go around experiencing the world on a much less interesting level than
we could. (Long Beach Bixby Knolls, 2020). Without it, the intricacies of our modern
existence would be unfathomable, and its relevance will only continue to grow in the
years to come.
References
Denvile, L. (2018). 10 Reasons why math is important to life.
https://www.mathnasium.com/blog/10-reasons-why-math-is-important-to-life
Knowledge Hub. (2023). The importance of Math in The Modern World.
https://knowledge-hub.com/2023/03/31/the-importance-of-math-in-the-
modern-world/
Long Beach Bixby Knolls. (2020). 10 Reasons why math is important to life.
https://www.mathnasium.com/bixbyknolls/news/10-reasons-why-math-is-
important-in-life

HOW TO PASS THE KARMAN LINE


Vlad Frolov, Nadiia Shcherbyna
Faculty of Applied Mathematics, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Karman Line, Earth’s atmosphere, research rockets, space exploration.
Introduction. Have you ever looked at the sky and wondered how to get up
there? How to pass these fluffy clouds, move through flocks of birds, get wet from the
rain – not down here, but up there, to see the meteors at eye level, to reach the bottom
of North Light, and finally pass the Karman Line.
Objectives. What is the Karman Line? It is an altitude above sea level conventionally accepted as the boundary between Earth’s atmosphere and the beginning of open space. Earth’s atmosphere actually stretches for thousands of kilometers upwards, but at those heights it consists mainly of hydrogen, which is able to escape the atmosphere. Modern planes fly at altitudes below 25 kilometers, and some weather probes reach 50 km; for comparison, Alan Eustace holds the world record for the highest skydive, from 41 km. The Karman Line is also the upper boundary of national airspace.
Methods. The Karman Line is named after Theodore von Kármán, a Hungarian-American engineer known as a pioneer in applying mathematics and the basic sciences to aeronautics and astronautics. At the age of 31, he studied incompressible flow, a mathematical model of a continuous medium whose density does not change under pressure. The result he obtained was named the Kármán vortex street (Malina, 2023).
Let’s move on. Passing the Karman Line (which did not yet bear Kármán’s name) was a common question for astronomers until 1944, when a German ballistic rocket, the “Vergeltungswaffe-2,” reached a height of 182 kilometers. It was a great engineering achievement, although it was created as a weapon against humanity. Nevertheless, the V-2 interested scientists all over the world and became the ancestor of later rockets, some of which attempted to deliver living beings to the Karman Line.

[Figure: V-2 on the assembly line of the Mittelwerk plant in Mount Kohnstein, July 3, 1945]
[Figure: The very first photograph of the Earth from space, taken on October 24, 1946 on a V-2 suborbital rocket]
In order to reach the 100-kilometer height, a craft does not need to become an orbiting body: it does not even have to reach the first cosmic velocity, the minimum speed needed to maintain a circular orbit around Earth. In 2004, the second suborbital crewed spacecraft in history, “SpaceShipOne,” built by a private company, reached the Karman Line with a pilot on board and won the $10 million Ansari X Prize. The competition required a spacecraft capable of passing the Karman Line twice within a fortnight while carrying three people.
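As a quick sanity check of the claim above, the first cosmic (circular-orbit) velocity and the escape velocity at the Earth's surface can be computed from standard textbook constants; the calculation is ours, added for illustration, not part of the original article:

```python
import math

# Standard textbook values (assumed, not from the article).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth's mass, kg
R_EARTH = 6.371e6    # Earth's mean radius, m

def first_cosmic_velocity(r: float = R_EARTH) -> float:
    """Minimum speed for a circular orbit at radius r (m/s)."""
    return math.sqrt(G * M_EARTH / r)

def escape_velocity(r: float = R_EARTH) -> float:
    """Minimum speed to escape Earth's gravity from radius r (m/s)."""
    return math.sqrt(2 * G * M_EARTH / r)

print(f"first cosmic: {first_cosmic_velocity() / 1000:.1f} km/s")  # → 7.9
print(f"escape:       {escape_velocity() / 1000:.1f} km/s")        # → 11.2
```

A suborbital hop to 100 km needs far less than either value, which is why a comparatively small rocket like the Loki, discussed next, can cross the line.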
One of the smallest spacecraft to pass the Karman Line was the Loki, which
originally was designed as a Luftwaffe anti-aircraft rocket but never saw its original
purpose. About 20 years later, it became a research (sounding) rocket (Parsch, 2002).

[Figure: James Van Allen holding a Loki instrumented “Rockoon”]

Results. To summarize, to pass the Karman Line as a non-commercial, non-government organization, you need a significant amount of money to fund the team and to buy and produce the required resources. There are many examples of successful launches made by amateurs, student teams with competition grants, etc. For the sake of interest, you can start with something that looks more like

a weather probe. It could not pass the 100 km line, because the air at such altitudes becomes too thin to carry a balloon, but it would be a great experience. The important thing is to receive all the necessary permissions from the government, because unidentified flying objects may attract the attention of security services or the air-defense forces of your country. This happened to a Chinese roaming weather probe last year (Kim, 2023).
References
Kim, Ch. (2023). Chinese spy balloon did not collect information, says Pentagon. BBC.
https://www.bbc.com/news/world-us-canada-66062562
Malina, F.J. (2023). Theodore von Kármán. Britannica.
https://www.britannica.com/biography/Theodore-von-Karman
Parsch, A. (2002). Space Data PWN-12 Super Loki ROBIN. https://www.designation-
systems.net/dusrm/n-12.html

THE EFFECTS OF EXERCISE ON BRAIN PLASTICITY IN ADULTS


Andrew Goncharov
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: exercise, brain plasticity, adults, cognitive function, neurogenesis, neuroplasticity, physical activity.
Introduction. Regular physical activity stands as a potent asset, contributing not
only to our physical well-being but also wielding a significant influence over our
cognitive health. In recent times, there has been a surge in interest concerning the
exploration of exercise's impact on the plasticity of the adult brain. This plasticity refers
to the brain's remarkable ability to adapt and restructure in response to diverse
experiences and changes in the environment (hindawi.com, 2020).
Objectives. The human brain exhibits an extraordinary capacity for plasticity,
continuously fine-tuning and reconfiguring itself throughout an individual’s lifespan.
This dynamic process ultimately shapes cognitive capabilities and memory. Physical
activity has emerged as a pivotal element in shaping brain plasticity among adults. The
primary objective of this study is to scrutinize its effects on neurogenesis, synaptic
plasticity, and cognitive performance. Unraveling the intricate mechanisms responsible
for exercise-induced changes in the brain, including the role of neurotrophic factors
and vascular adjustments, can furnish valuable insights into promoting cognitive
rehabilitation, mitigating age-related cognitive decline, and enhancing overall mental
well-being. This inquiry underscores the paramount significance of regular physical
exercise in fortifying brain health and cognitive function during the adult years
(ncbi.nlm.nih.gov, 2016).
Methods. Engagement in physical activity has demonstrated its capacity to
enhance a diverse range of cognitive functions, including memory, attention, and
executive function. A pivotal mechanism contributing to this phenomenon revolves
around the heightened production of neurotrophic factors, most notably brain-derived
neurotrophic factor (BDNF). These factors nurture the growth and resilience of
neurons, thereby facilitating the establishment of fresh neural connections while


reinforcing existing ones. Furthermore, exercise has proven its ability to stimulate
neurogenesis, the process responsible for generating new neurons within the
hippocampus, a critical region for learning and memory. This suggests that exercise
serves not only as a preserver of cognitive function but also harbors the potential to
counteract the cognitive decline associated with the aging process. Both aerobic
activities such as running and resistance training like weightlifting have been observed
to exert positive influences on brain plasticity. Physical activity also contributes to an
elevated release of neurotransmitters like dopamine and serotonin, which play
fundamental roles in mood regulation and overall mental well-being. This accounts for
the enhanced mood and reduced stress levels reported by numerous individuals
following their engagement in exercise (nature.com, 2022).
Results. Numerous research endeavors have shed light on the profound impact
of exercise on the plasticity of the adult brain. The results consistently validate that
regular physical activity has the capacity to augment various dimensions of brain
plasticity. This resoundingly challenges the previously held belief that the adaptability
and flexibility of the adult brain are inherently confined. Moreover, exercise has been
found to incite the generation of fresh neurons within the hippocampus, a pivotal region
governing the formation of memories and spatial orientation. Beyond structural
alterations, exercise has also demonstrated its ability to enhance functional plasticity.
These revelations underscore the extensive and substantial consequences of exercise
on brain plasticity among adults, accentuating its potential as a non-pharmacological
avenue for enhancing cognitive health and overall well-being.
Conclusion. In conclusion, the influence of exercise on the plasticity of the adult
brain serves as a testament to the comprehensive advantages of physical activity.
Consistently engaging in exercise not only fortifies the body but also strengthens the
mind, nurturing neuroplasticity, enhancing cognitive abilities, and uplifting one's
mood. These discoveries underscore the significance of integrating exercise into our
daily routines as a means to sustain and amplify brain health throughout adulthood.
References
hindawi.com (2020 Dec 14) Effects of Physical Exercise on Neuroplasticity and Brain
Function: A Systematic Review in Human and Animal Studies.
https://www.hindawi.com/journals/np/2020/8856621/
nature.com (2022 Feb 17) Relationship between physical activity and cognitive
functioning among older Indian adults.
https://www.nature.com/articles/s41598-022-06725-3
ncbi.nlm.nih.gov (2016 Aug) Physical Activity and Cognitive Function in Older
Adults: The Mediating Effect of Depressive Symptoms.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4929037/


PATHOPHYSIOLOGY OF DEPRESSION
Anastasiia Grabovska
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: depression, neuroplasticity, antidepressants, serotonin.


Introduction. Depression is one of the most common mental illnesses in the
United States and a leading cause of disability worldwide (Tartt et al., 2022).
Previously, it was believed that the cause of depression is a deficiency of monoamines.
This is because many antidepressants dramatically increase synaptic levels of
neurotransmitters such as serotonin, norepinephrine and dopamine. However,
contemporary research has unveiled a significant paradigm shift, indicating that
serotonin reuptake inhibitors initiate a cascade effect, leading to the reactivation of
neuroplasticity mechanisms. While these findings challenge the traditional monoamine
hypothesis, they also signal an urgent need for further research and investigation to
fully understand the implications of this new understanding of the pathophysiology of
depression.
Objectives. The main aim is to explore the concept of neuroplasticity, how
neuroplasticity may be altered in individuals with depression, and to examine the role
of neurotransmitters, such as serotonin and norepinephrine, in modulating
neuroplasticity and their connection to depressive disorders.
Methods. The main method of this research is analysis and synthesis, used to comprehensively examine and distill recent research findings related to the neuroplasticity hypothesis of depression. To gather a complete data set, the electronic database PubMed was searched using combinations of keywords such as “pathophysiology”, “neuroplasticity” and “depression”.
Results. A major limitation of the monoamine hypothesis of depression is the
therapeutic delay between initiation of antidepressant treatment and improvement of
symptoms. Second, even when drugs have a significant pharmacological effect, many
patients do not experience improvements. These aspects can be explained by the
hypothesis of neuroplasticity of depression, in which the effects of antidepressants,
such as the enhancement of neurogenesis, contribute to the improvement of cognitive
functions and mood (Tartt et al., 2022). Structural magnetic resonance imaging in
patients reveals a disease-specific decrease in gray matter, a marker of neuroplasticity,
and reversibility, after treatment with selective serotonin reuptake inhibitors.
Depressed patients show smaller volumes of the basal ganglia, thalamus, hippocampus,
frontal lobe, orbitofrontal cortex and gyrus rectus (Kraus et al., 2017).
Conclusion. In summary, the latest research findings provide compelling
evidence that serotonin reuptake inhibitors are associated with the reactivation of
neuroplasticity mechanisms. This significant breakthrough in the field not only
enhances our comprehension of the intricate neurobiological foundations of depression
but also charts a promising course towards the development of more efficacious
therapeutic strategies for individuals grappling with this multifaceted disorder.
Furthermore, this improved understanding of the neurobiological etiology of


depression serves as a catalyst for further research, sparking interest in new treatments,
research into innovative drug classes, and opening entirely new avenues of treatment.
References
Kraus, C., Castrén, E., Kasper, S., & Lanzenberger, R. (2017). Serotonin and
Neuroplasticity – Links between Molecular, Functional, and Structural
Pathophysiology in Depression. Neuroscience & Biobehavioral Reviews, 77,
317–326. https://doi.org/10.1016/j.neubiorev.2017.03.007
Tartt, A. N., Mariani, M. B., Hen, R., Mann, J. J., & Boldrini, M. (2022). Dysregulation
of Adult Hippocampal Neuroplasticity in Major Depression: Pathogenesis and
Therapeutic Implications. Molecular Psychiatry, 27, 2689–2699.
https://doi.org/10.1038/s41380-022-01520-y

BIOMARKERS AND THEIR USE IN DIAGNOSIS AND PROGNOSIS OF DISEASES
Illya Guivan
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: biomarker, personalized medicine, diseases, biotechnology.


Introduction. Modern diagnostics is largely based on biomarker science. Biomarkers are a key element of the concept of personalized medicine, and therefore much attention is paid to their validation in clinical practice. In a broad sense, a biomarker is any biological characteristic of an organism, whether molecular, anatomical, physiological or biochemical. Biomarkers act as indicators of the normal functioning of the body or of a pathological biological process. They represent a certain physical sign or measurable biologically determined condition of the body that is associated with a disease or a specific health indicator. It is customary to distinguish four main types of biomarkers: prognostic, diagnostic, predictive and biomarkers of susceptibility (sensitivity) (European Commission, DG Research, 2010).
Objectives. The main goal is to identify the benefits of using biomarkers in the
future, especially for predicting serious diseases such as cancer and cardiovascular diseases.
Methods. Regardless of whether a biomarker is used for screening, disease
diagnosis, or prognosis assessment, it should influence the clinical decision in some
way. When measuring and evaluating biomarkers for screening, particular attention
should be paid to sources of variability other than the organ or disease under study.
This variation may be especially relevant for biomarkers originating from different
tissues, for example, those whose levels increase with infectious conditions or with any
chronic inflammation. Gender, age, diet, and body weight affect the levels of many
biomarkers, reducing their accuracy in the diagnosis of specific conditions.
The process of biomarker detection and validation is currently best developed
and standardized in the field of oncological diseases (Pepe, Etzioni, & Feng, 2001).
At the first stage, new target biomarkers are identified, or existing ones refined, at the
preclinical stage of the validation process, which leads to checking the possibility of

using a specific biomarker to differentiate individuals with and
without the disease. Then, retrospective studies are used to determine whether the level
of the biomarker differs between patients and healthy people, and the degree of its
variability is assessed in order to establish a threshold value for a positive screening
result. Subsequently, prospective screening studies evaluate the significance and
accuracy of the biomarker in large samples, and only then is the biomarker validated
as a disease control tool in randomized controlled trials.
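The threshold-setting step described above can be illustrated numerically. The sketch below is not taken from the cited studies; the data and the best_threshold helper are invented for illustration. It picks the screening cut-off that maximizes Youden's J statistic, a standard way to balance sensitivity against specificity:

```python
import numpy as np

def best_threshold(diseased, healthy):
    """Choose a screening cut-off maximizing Youden's J = sensitivity + specificity - 1.

    diseased, healthy: 1-D arrays of biomarker levels; assumes higher levels
    indicate disease. Illustrative only: clinical validation relies on large
    prospective samples and formal ROC analysis.
    """
    candidates = np.unique(np.concatenate([diseased, healthy]))
    best_t, best_j = None, -1.0
    for t in candidates:
        sensitivity = np.mean(diseased >= t)   # true-positive rate at cut-off t
        specificity = np.mean(healthy < t)     # true-negative rate at cut-off t
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

rng = np.random.default_rng(1)
healthy = rng.normal(10.0, 2.0, 200)    # baseline biomarker levels (simulated)
diseased = rng.normal(16.0, 2.0, 200)   # elevated levels in disease (simulated)
t, j = best_threshold(diseased, healthy)
print(round(t, 1), round(j, 2))  # cut-off lands near the midpoint of the two groups
```

With well-separated groups the optimal cut-off sits between the two means; in real screening data the distributions overlap far more, which is why the prospective validation stages described above are needed.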
Results. One crucial aspect of the biomarker field is distinguishing between
potential and dependable biomarkers that can be universally applicable for pivotal
clinical and commercial choices. Genuine biomarkers should affect clinical evaluations
and enhance patient health care (Chau, Rixe, McLeod, & Figg, 2008). Decision-making
based on accurate evaluations should prove more beneficial than decisions relying on
false-negative or false-positive results. In a risk-management context, biomarkers must
reduce expenses and unfavorable repercussions while helping to avert fatalities.
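The cost of false-positive results can be made concrete with Bayes' rule. The short sketch below uses illustrative numbers, not figures from the cited study: it shows why even a fairly accurate biomarker yields mostly false positives when a disease is rare, which is exactly why biomarker-based decisions must be weighed against costs and harms:

```python
def positive_predictive_value(sens, spec, prevalence):
    """Bayes' rule: probability of disease given a positive biomarker test."""
    true_pos = sens * prevalence                 # diseased and test-positive
    false_pos = (1 - spec) * (1 - prevalence)    # healthy but test-positive
    return true_pos / (true_pos + false_pos)

# 90% sensitivity, 95% specificity, 1% prevalence (assumed example values):
print(round(positive_predictive_value(0.90, 0.95, 0.01), 2))  # ≈ 0.15
```

At 1% prevalence only about one in seven positive results reflects true disease, so the same test that looks excellent in a case-control sample may perform poorly as a population screen.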
Conclusion. Unfortunately, a huge number of potential biomarkers remain only
on the pages of scientific journals. Of the roughly 150,000 published candidate
biomarkers, fewer than a hundred have found real application in clinical practice
(Poste, 2011). The development and validation of biomarkers is as complex and time-
consuming as the development of new drugs, and is critically important for the
development of personalized medicine. Nevertheless, it is the science of biomarkers
that is fundamental to the introduction of new medical technologies, and it will bring
many innovations to healthcare in the near future.
References
Chau, C. H., Rixe, O., McLeod, H., & Figg, W. D. (2008). Validation of analytic
methods for biomarkers used in drug development. Clinical Cancer Research,
14(19), 5967–5976.
European Commission, DG Research. (2010). Stratification biomarkers in
personalized medicine. Summary report. Brussels.
Pepe, M. S., Etzioni, R., Feng, Z., et al. (2001). Phases of biomarker development
for early detection of cancer. Journal of the National Cancer Institute, 93, 1054–
1061.
Poste, G. (2011). Bring on the biomarkers. Nature, 469(7329), 156–157.

EFFECTS OF SMOKING ELECTRONIC CIGARETTES


Kostiantyn Hlomozda
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: smoking, electronic cigarettes, cardiovascular effects, nicotine.


Introduction. Nowadays smoking is a familiar practice, with nicotine ranking
among the top 20 most addictive and acute drugs (Bonnet et al., 2020). Tobacco
companies have thrived for over a century; however, traditional smoking has waned in
popularity due to heightened awareness of its detrimental health effects. In recent years,

electronic cigarettes and vapes have surged in popularity, capturing the interest of both
young and adult demographics. This phenomenon necessitates a comprehensive
examination of the physiological impact of electronic cigarettes. Scientific research on
this subject has flourished, with scientists worldwide investigating various facets,
encompassing the cardiovascular and pulmonary dimensions of electronic cigarette
usage. This endeavor aims to explore pivotal scientific studies that shed light on how
electronic cigarettes influence the body and the potential health ramifications.
Objectives. The main aim is to assess and quantify the potential risks and
adverse effects that electronic cigarettes may pose to the cardiovascular system. This
will involve examining key parameters such as blood pressure, heart rate, vascular
function, and the incidence of cardiovascular events among electronic cigarette users.
Through comprehensive analysis, the researchers aim to provide valuable insights into
the cardiovascular implications of electronic cigarette use, ultimately contributing to
a better understanding of the health consequences associated with these devices.
Methods. To measure the impact of electronic cigarettes, studies involving
placebos, analyses of electronic cigarette composition in different countries, and
smoking cessation among adults, with subsequent tracking of health improvement
indicators, are employed (Benowitz & Fraiman, 2017).
Results. Research outcomes reveal that the utilization of electronic cigarettes
exerts detrimental effects on vascular walls, accelerates pulse rates, and elevates
arterial blood pressure levels. These physiological consequences are primarily
attributed to the presence of nicotine, a stimulant found in electronic cigarettes as well
as traditional tobacco products. Nicotine's vasoconstrictive properties lead to narrowed
blood vessels, thereby increasing blood pressure, while its stimulating effects
contribute to an elevated heart rate (Meng et al., 2023).
Electronic cigarettes also yield positive psychological effects for many
individuals. They often serve as a smoking cessation tool, allowing people to transition
away from traditional cigarettes. The mimicry of the smoking experience, along with
customizable nicotine levels, can help to eliminate the habit of smoking. This
psychological benefit has led to successful smoking cessation stories for numerous
users, highlighting the potential harm reduction aspect of electronic cigarettes
(Caponnetto et al., 2023).
Conclusion. The research on the impact of electronic cigarettes indicates that
they have various effects on different groups of people. They assist those attempting to
quit smoking and exert a significantly milder influence on the cardiovascular system
in individuals with relevant medical conditions. However, it is crucial to understand
that electronic cigarettes are not fully studied, and apart from nicotine, they contain
numerous other additives, such as alcohol and aldehydes. Therefore, smokers should
be aware of the risks they expose themselves to (Benowitz & Fraiman, 2017).
References
Benowitz, N. L., & Fraiman, J. B. (2017). Cardiovascular effects of electronic
cigarettes. Nature Reviews. Cardiology, 14(8), 447.
https://doi.org/10.1038/NRCARDIO.2017.36

Bonnet, U., Specka, M., Soyka, M., Alberti, T., Bender, S., Grigoleit, T., Hermle, L.,
Hilger, J., Hillemacher, T., Kuhlmann, T., Kuhn, J., Luckhaus, C., Lüdecke, C.,
Reimer, J., Schneider, U., Schroeder, W., Stuppe, M., Wiesbeck, G. A., Wodarz,
N., … Scherbaum, N. (2020). Ranking the Harm of Psychoactive Drugs
Including Prescription Analgesics to Users and Others–A Perspective of German
Addiction Medicine Experts. Frontiers in Psychiatry, 11, 592199.
https://doi.org/10.3389/FPSYT.2020.592199/FULL
Caponnetto, P., Campagna, D., Maglia, M., Benfatto, F., Emma, R., Caruso, M., Caci,
G., Busà, B., Pennisi, A., Ceracchi, M., Migliore, M., & Signorelli, M. (2023).
Comparing the Effectiveness, Tolerability, and Acceptability of Heated Tobacco
Products and Refillable Electronic Cigarettes for Cigarette Substitution
(CEASEFIRE): Randomized Controlled Trial. JMIR Public Health and
Surveillance, 9. https://doi.org/10.2196/42628
Meng, X. C., Guo, X. X., Peng, Z. Y., Wang, C., & Liu, R. (2023). Acute effects of
electronic cigarettes on vascular endothelial function: a systematic review and
meta-analysis of randomized controlled trials. European Journal of Preventive
Cardiology, 30(5), 425–435. https://doi.org/10.1093/EURJPC/ZWAC248

UNDERSTANDING ALLERGIC REACTIONS AND ANAPHYLAXIS:
TRIGGERS, SYMPTOMS AND DIAGNOSIS
Illya Holubiev, Julia Danylchyk
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: allergic reaction, immunoglobulin E, allergens, anaphylaxis.


Introduction. Nowadays, many people suffer from allergic reactions, which
are driven by the immune system’s response to various substances and can be
a potentially life-threatening aspect of human health. This article examines how
allergic reactions work, exploring the underlying mechanisms, the role of
immunoglobulin E (IgE), and the different categories of allergens. Furthermore, we
will uncover the severe and rapid allergic response known as anaphylaxis, which can
lead to dire consequences if not promptly addressed.
Objectives. The main aim is to describe the mechanism of allergic reactions,
explain the essence of allergens and raise awareness about allergy diseases, especially
anaphylaxis.
Methods. We conducted a literature review of published research articles
pertaining to anaphylaxis and allergic reactions. We focused on understanding the
immunological mechanisms behind allergic reactions, triggers of anaphylaxis and the
clinical aspects of anaphylaxis.
Results. An allergic reaction occurs upon repeated allergen exposure, causing
tissue damage, inflammation, and organ dysfunction. In the context of the immune
system, it is a response triggered by the overproduction of immunoglobulin E (IgE)
in response to allergens. This IgE binds to mast cells and basophils, and upon
subsequent exposure to the specific allergen, it leads to the

activation of mast cells, resulting in the release of various chemicals like histamine,
serotonin, and others. This immune response is characterized by inflammation,
irritation, redness, and other allergic symptoms, despite the allergen being harmless to
most people. This hypersensitivity is primarily driven by T helper cells (Th2), which
promote the overproduction of IgE antibodies and other responses associated with
allergies (Elshemy & Abobakr, 2013).
Allergens are substances that cause an allergic reaction; an allergen is an antigen
that can stimulate a hypersensitivity reaction in humans. There are two main categories of
allergens. The first category includes non-infectious environmental substances that can
trigger IgE production, leading to human sensitization. Allergic reactions occur with
subsequent exposure to these substances. Common sources of these allergens include
grass and tree pollen, animal dander (skin and fur), house dust mite fecal particles,
certain foods (especially peanuts, tree nuts, fish, shellfish, milk, and eggs), latex,
certain medications, and insect venoms. The second category includes non-infectious
environmental substances that can induce an adaptive immune response associated
with localized inflammation. It is important to note that this response is thought to
occur independently of IgE. For example, this includes conditions such as allergic
contact dermatitis caused by substances such as poison ivy or nickel (Galli et al.,
2008).
Anaphylaxis is an extremely serious allergic reaction that can occur as a result
of contact with an allergen. It was mentioned above that an allergic reaction occurs due
to the production of immunoglobulin E (IgE) in response to allergens. In case of
anaphylaxis, this reaction can be particularly intense and develop quickly.
Anaphylaxis is usually triggered by proteins, but the choice of triggers can vary
with age. Common triggers for young children include milk protein, hen's egg protein,
and wheat, while adolescents are often triggered by peanuts and tree nuts. In adults,
triggers may include wheat, celery, seafood, and certain medications. Insect venoms,
particularly from bees and wasps, are also common triggers (Worm et al., 2012).
Anaphylaxis typically involves symptoms like skin rashes, difficulty breathing,
rapid pulse, and a drop in blood pressure. If not treated promptly, anaphylaxis can result
in severe respiratory and cardiovascular distress, potentially leading to
unconsciousness or death. It’s a medical emergency that requires immediate
intervention, usually with epinephrine and medical attention (Arias-Cruz, 2015).
Conclusion. Allergic reactions are immune responses triggered by IgE-mediated
hypersensitivity to allergens, leading to inflammation and various symptoms.
Allergens can be categorized into substances that induce IgE production and those
causing adaptive immune responses with localized inflammation. Anaphylaxis, an
extreme form of allergy, is primarily triggered by proteins and can manifest with severe
symptoms, requiring immediate medical intervention. The choice of triggers varies
with age and individual factors. Recognizing these triggers and understanding the
mechanisms behind allergies is crucial for effective prevention and treatment strategies
in the future.

References
Arias-Cruz, A. (2015). Anaphylaxis: Practical aspects of diagnosis and treatment.
Medicina Universitaria, 17(68), 188–
191. https://doi.org/10.1016/j.rmu.2015.06.005
Elshemy, A., & Abobakr, M. (2013). Allergic Reaction: Symptoms, Diagnosis,
Treatment and Management. Journal of Scientific & Innovative Research, 1(2),
123–144.
Galli, S. J., Tsai, M., & Piliponsky, A. M. (2008). The development of allergic
inflammation. Nature, 454 (7203), 445–454.
https://doi.org/10.1038/nature07204
Worm, M., Babina, M., & Hompes, S. (2012). Causes and risk factors for
anaphylaxis. JDDG: Journal der Deutschen Dermatologischen
Gesellschaft, 11(1), 44–50. https://doi.org/10.1111/j.1610-0387.2012.08045.x

THE ROLE OF ULTRASOUND IN INTERFERON PRODUCTION


Maria Honcharenko
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: interferon production, ultrasonic technology, interferon synthesis,
biological protein, ultrasound applications, interferon yield, ultrasound in
biotechnology, interferon purification.
Introduction. Interferon, a vital biological protein with significant implications
in medical and pharmaceutical fields, plays a pivotal role in defending the body against
viruses and pathogens. The production of interferon is a crucial process, where both
the quantity and quality of the product are paramount. Traditional production methods
can be time-consuming and resource-intensive. However, the application of ultrasonic
technology offers innovative possibilities to enhance interferon production. This paper
explores the role of ultrasound in various stages of interferon production, from the
dispersion and cultivation of producing microorganisms to the harvesting and
purification of the final product.
Objectives. The production of interferon can be accomplished through classical
fermentation methods as well as modern biotechnological approaches. Traditional
methods involve the cultivation of cells and bacteria that synthesize interferon,
followed by the isolation of the product. However, these methods are often time-
consuming and require large quantities of raw materials.
Methods. Traditional methods are time-consuming, and ultrasound technology
offers improvements throughout the process.
Dispersion: ultrasound aids in the even distribution of raw materials, promoting
better conditions for cell growth.
Cell growth: high-intensity ultrasound enhances cell growth, leading to higher
interferon yields.
Harvesting and purification: ultrasound facilitates interferon separation from
cells, improving purification efficiency (Zhang & Li, 2021).

Results. Ultrasound has the potential to reduce time and energy consumption,
making it a compelling choice for the production of this essential biological protein.
Ultrasound's role in interferon production exemplifies the significant impact of
innovative technologies in biotechnology, offering a more efficient and sustainable
approach to producing vital pharmaceutical compounds (Feng & Zhang, 2022).
Conclusion. The application of ultrasound in interferon production offers
a promising avenue for improving both the quantity and quality of this vital biological
product. Ultrasonic technology is advantageous at every stage, from the dispersion of
raw materials to the isolation and purification of interferon (Wu & Wang, 2020).
References
Feng, L., & Zhang, Y. (2022). Ultrasonic extraction of interferon: A review. Journal
of Ultrasonics, 49(2), 193-204. https://doi.org/10.1016/j.arabjc.2023.105168
Wu, Y., & Wang, S. (2020). Ultrasonic assisted production of interferon. In
Ultrasonics in Biotechnology (pp. 299–314). Springer, Cham.
https://doi.org/10.3390/molecules27217509
Zhang, S., & Li, X. (2021). Ultrasonic purification of interferon: A review. Journal of
Pharmaceutical Analysis, 11(1), 24–31. https://doi.org/10.2478/jvetres-2021-0011

BIONIC PROSTHESIS
Anton Huselnikov
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: bionic prosthesis, capabilities of prostheses, availability of prostheses.
Introduction. Every day, a huge number of amputations are performed around
the world, and some people are born without limbs. This has always been the case, but
now, with the ongoing war in Ukraine, the loss of limbs is a very common problem not
only among the military but also among civilians, whose cities are regularly shelled.
The solution to this problem is prosthetics, but according to statistics, only 10% of
people who have lost limbs use prostheses.
Objectives. The main aim is to familiarize readers with the capabilities of
modern prostheses.
Methods. Reviewing the latest publications on the topic of “Bionic Prostheses”,
analyzing the information found, and forming a short and detailed report.
Results. A bionic prosthesis is a prosthesis that partially or completely replaces
a lost organ and performs its functions. They are available for arms (elbows, hands,
fingers) and legs (knees, ankles).
Bionic prosthetic technology has transformed the lives of individuals who have
suffered limb loss, enabling them to lead fulfilling and complete lives. Prosthetic
hands, for instance, not only empower individuals to grasp objects but also facilitate
the restoration of delicate motor skills, such as fastening a button or grasping a pen
(Abrams, 2023). They offer enhanced convenience in daily activities and broaden
employment prospects.

When it comes to bionic leg prostheses, they enhance the naturalness and ease
of walking while allowing individuals to maintain their inherent gait (Abrams, 2023).
These prostheses simplify the navigation of stairs and curbs, making mobility more
accessible.
Typically, a bionic prosthesis has an average lifespan of 4-5 years, which can be
extended through careful usage. Nevertheless, there is a risk of the prosthesis becoming
outdated over time. Manufacturers typically offer a 2-year warranty, with the option to
purchase extended warranties for added protection. Moreover, thanks to international
support, these prostheses can be obtained free of charge.
Conclusion. I would be glad if no one needed prosthetics, but we live in a difficult
time: even if none of the readers needs one, someone around them may. That is why
I believe this issue is worth raising and discussing widely.
References
Abrams, Z. (2023, March 28). A new era for bionic limbs. IEEE Pulse.
https://www.embs.org/pulse/articles/a-new-era-for-bionic-limbs/
Bionic prostheses regenerate Ukrainians. (n.d.). Charity foundation “Dopomogator”.
https://dopomogator.org/en/regenerateukrainians/
Zhurakhivsjka, O. (2023, June 21). Bionichnyj protez: Suchasnyj zasib reabilitaciji
[Bionic prosthesis: A modern solution]. EnableMe.
https://www.enableme.com.ua/ua/article/bionicnij-protez-sucasnij-zasib-
reabilitacii-9815

THE ROLE OF DIFFERENTIAL EXPRESSION ANALYSIS IN DRUG DISCOVERY
Ostap Kalapun
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: differential expression analysis, drug discovery, personalized
medicine, RNA-seq, microarrays.
Introduction. The development of high-throughput genomic technology has
transformed modern biology and medicine. The ability to simultaneously measure the
expression of thousands of genes has been one of the biggest developments (Lee et al.,
2017). Historically, medication research relied substantially on random events, with
many effective pharmaceuticals found either by accident or by trial and error. Genetic
markers, which may predict drug response or side effect susceptibility, can be located
with differential expression analysis. By identifying genes that are upregulated or
downregulated in a disease state relative to a healthy condition, scientists can identify
potential therapeutic targets (Mohammad et al., 2022).
Objectives. We begin with a look at the precise methodology and techniques of
differential expression analysis, stressing their direct use in discovering crucial
therapeutic targets. These techniques have transformed the drug discovery process,
allowing for a more personalized approach to therapeutic development. Furthermore,
as we move toward customized medicine, it is critical to identify unique patient

reactions to therapies. Overcoming the challenges this technique presents, however,
is crucial.
Methods. Traditionally, differential expression analysis includes comparing
gene expression patterns between two or more conditions, such as diseased versus healthy
states, in order to find genes that are upregulated or downregulated in certain contexts.
In the field of genomics, this is most typically accomplished through the use of RNA
sequencing (RNA-seq) or microarray technology. RNA-seq provides a complete
picture of the whole transcriptome, identifying not just transcript abundance but also
new transcripts and isoforms. While more limited in scope, microarrays provide a cost-
effective way for large-scale screens, making them a popular alternative in the early
phases of drug development initiatives (Soneson & Delorenzi, 2013).
These high-throughput approaches generate large volumes of data, which are
subsequently normalized and analyzed. The resulting processed read counts are then
examined with bioinformatics tools such as DESeq2 for RNA-seq data (Love et al.,
2014) and Limma for microarray data (Ritchie et al., 2015).
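As a minimal illustration of the comparison these tools perform, the sketch below computes per-gene log2 fold changes and Welch t-test p-values on simulated log-scale expression data. It is a toy stand-in only: real pipelines such as DESeq2 model raw counts with negative-binomial statistics and correct for multiple testing, which this sketch does not.

```python
import numpy as np
from scipy import stats

def differential_expression(disease, healthy):
    """Per-gene log2 fold change and Welch t-test p-value.

    disease, healthy: arrays of shape (n_genes, n_samples) holding
    log2-transformed expression values. Toy sketch, not a real pipeline.
    """
    log2fc = disease.mean(axis=1) - healthy.mean(axis=1)
    _, pvals = stats.ttest_ind(disease, healthy, axis=1, equal_var=False)
    return log2fc, pvals

rng = np.random.default_rng(0)
healthy = rng.normal(5.0, 0.3, size=(3, 8))   # 3 genes, 8 healthy samples
disease = rng.normal(5.0, 0.3, size=(3, 8))   # 8 disease samples
disease[0] += 2.0                              # gene 0 upregulated in disease
log2fc, pvals = differential_expression(disease, healthy)
print(log2fc.round(1))  # gene 0 shows a log2 fold change near +2
```

Genes passing both a fold-change and a significance cut-off would then be carried forward to pathway analysis, as described in the Results below.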
The transition to customized care introduces an additional level of complexity.
Here, differential expression analysis is applied to comprehend not only the mean
differences between the healthy and ill states. Rather, one must understand the
differences in gene expression patterns between individuals. Identifying genetic
markers that can forecast a patient’s response to a specific treatment enables a more
direct and effective therapeutic approach (Vadapalli et al., 2022).
Results. The use of differential expression analysis in drug development comes
with difficulties. Because of the vast amount of data produced by these high-
throughput approaches, false positives might occur. Furthermore, differential
circumstances, needing thorough confirmation. Reproducibility is still an issue, with
various studies showing differing outcomes for the same settings. This underlines the
importance of comprehensive statistical analysis, precise data processing, and robust
experimental design to ensure the reproducibility of findings (Chung et al., 2021).
Using bioinformatics tools, lists of genes with differential expression between
disease and healthy states were generated from the processed and normalized data.
When these genes were examined using pathway analysis, they revealed specific
biological processes that were disrupted in the disease state. These disturbances
revealed possible pharmacological targets that may result in revolutionary treatments,
and they also identified crucial nodes for therapeutic interventions.
Conclusion. In the quickly evolving field of drug discovery, differential
expression analysis stands out as a beacon that illuminates previously undiscovered
pathways. In this sense, differential expression analysis is a valuable tool since it finds
particular genetic markers that could indicate how a patient will react to a particular
medication or whether they will be more susceptible to adverse effects.
References
Chung, M., Bruno, V. M., Rasko, D. A., Cuomo, C. A., Muñoz, J. F., Livny, J.,
Shetty, A. C., Mahurkar, A., & Hotopp, J. C. D. (2021, April 29). Best practices
on the differential expression analysis of multi-species RNA-seq – genome

biology. BioMed Central.
https://genomebiology.biomedcentral.com/articles/10.1186/s13059-021-02337-8
Lee, B. K. B., Tiong, K. H., Chang, J. K., Liew, C. S., Rahman, Z. A. A., Tan, A. C.,
Khang, T. F., & Cheong, S. C. (2017, January 25). Design: Connecting gene
expression with therapeutics for drug repurposing and Development – BMC
Genomics. BioMed Central.
https://bmcgenomics.biomedcentral.com/articles/10.1186/s12864-016-3260-7
Love, M. I., Huber, W., & Anders, S. (2014, December 5). Moderated estimation of
fold change and dispersion for RNA-seq data with DESEQ2 – Genome Biology.
BioMed Central.
https://genomebiology.biomedcentral.com/articles/10.1186/s13059-014-0550-8
Mohammad, T., Singh, P., Jairajpuri, D. S., Al-Keridis, L. A., Alshammari, N., Adnan,
Mohd., Dohare, R., & Hassan, M. I. (2022, May 3). Differential gene expression
and weighted correlation network dynamics in high-throughput datasets of
prostate cancer. Frontiers.
https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2022.881246/full
Ritchie, M. E., Phipson, B., Wu, D., Hu, Y., Law, C. W., Shi, W., & Smyth, G. K.
(2015, January 20). Limma powers differential expression analyses for RNA-
sequencing and Microarray Studies. OUP Academic.
https://academic.oup.com/nar/article/43/7/e47/2414268
Soneson, C., & Delorenzi, M. (2013, March 9). A comparison of methods for
differential expression analysis of RNA-seq data – BMC Bioinformatics. BioMed
Central. https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-
2105-14-91
Vadapalli, S., Abdelhalim, H., Zeeshan, S., & Ahmed, Z. (2022, May 21). Artificial
Intelligence and machine learning approaches using gene expression and
variant data for personalized medicine. OUP Academic.
https://academic.oup.com/bib/article-abstract/23/5/bbac191/6590150?redirectedFrom=fulltext

BIOTECHNOLOGY IN FOOD INDUSTRY


Dmytro Kharynka
Faculty of Sociology and Law, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: biotechnology in the food industry, global food supply,
fermentation reactions, acid formation, food processing.
Introduction. According to the UN statistics on food and agriculture, the issue
of providing the world's population with necessary food products is extremely pressing.
The data indicates that over half of the global population faces insufficient food supply,
with approximately 500 million people experiencing hunger, and 2 billion receiving
inadequate or improper nutrition. Since the end of the 20th century, the world's
population, even with birth control considered, has reached 7.5 billion people. Therefore,
the food problem may escalate to dangerous proportions for certain nations soon.

Objectives. The use of biotechnology in the food industry is aimed at developing
new products that meet modern consumer demands, as well as enhancing the quality
and safety of products through rigorous quality control and high production standards.
Methods. The utilization of various biotechnological methods, such as genetic
engineering for organism modification, biotechnological processes of fermentation and
biosynthesis, along with additional methods for analysing and ensuring the quality
control of products, contributes to achieving goals in the development and production
of food products.
Results. The application of biotechnology in the food industry encompasses new
methods of processing and preserving food products, as well as the production of food
additives, amino acids, and enzymes. The most widely produced food additives in the
world include sodium glutamate, which enhances the flavour of meat products (about
150,000 tons annually), and the feed additive lysine (approximately 15,000 tons
annually). The enzyme market is currently valued at over 1 billion USD per year. The
overall global market for food ingredients is estimated at 26 billion USD. It is divided
into segments such as flavourings (28%), flavour and aroma enhancers (14%), acidity
regulators (12%), sugar substitutes (9%), starch and gelatin (7%). Biotechnological
processes in the food industry are applied in the dairy, meat, and bakery sectors, as well
as in sugar-mixing technologies and the production of fruits and vegetables
(Sychevskyi, 2016).
The production of dairy products relies primarily on fermentation. During milk
fermentation, streptococci and lactic acid bacteria are
typically involved, transforming lactose into lactic acid. The properties of the end
product depend on the nature and intensity of the fermentation reactions. In milk, six
main reactions can occur during fermentation, resulting in the formation of lactic,
propionic, or citric acid, alcohol, fatty acids, or even gas. The main goal of
these reactions is the formation of lactic acid, upon which all milk fermentation
methods are based. Milk lactose is hydrolyzed to obtain galactose and glucose. Usually,
galactose is converted into glucose even before fermentation. Bacteria already present
in milk convert glucose into lactic acid. Various milk fermentation methods are carried
out under controlled conditions, allowing for the production of diverse dairy products
depending on their type and quality.
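The homolactic route described above (lactose hydrolyzed to glucose and galactose, which bacteria convert to lactic acid) can be checked with simple stoichiometry: one mole of lactose plus one of water yields at most four moles of lactic acid. The sketch below uses standard molar masses; the 48 g/L lactose figure is an assumed, typical value for milk rather than a number from the cited sources.

```python
# Homolactic fermentation overall: lactose + water -> 4 lactic acid
M_LACTOSE = 342.30  # g/mol, C12H22O11
M_LACTIC = 90.08    # g/mol, C3H6O3

def max_lactic_acid(lactose_g):
    """Theoretical maximum lactic acid (g) from complete homolactic
    fermentation of the given mass of lactose. Real yields are lower
    and depend on the culture and fermentation conditions."""
    return lactose_g / M_LACTOSE * 4 * M_LACTIC

print(round(max_lactic_acid(48.0), 1))  # about 50.5 g per litre of milk
```

In practice only part of the lactose is fermented before the rising acidity halts bacterial growth, which is why the controlled conditions mentioned above determine the character of each dairy product.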
Biotechnologies in the food industry offer numerous opportunities to enhance
the quality and diversity of products, which is crucial for providing the population with
high-quality and delicious food.
The formation of acid during the initial stage of fermentation is primarily caused
by the action of Streptococcus thermophilus. Mixed cultures require frequent
refreshing as repeated sub-culturing adversely affects the balance of species and strains
of bacteria. Obtaining butter from dairy raw materials is relatively straightforward.
Depending on the type of butter produced, cream with a fat content of 30 to 40% is
used. Churning the cream results in the formation of butter. Special bacterial cultures
are used in butter products to enhance taste and prolong shelf life. Taste improvement
is achieved by creating specific bacterial strains selected for their ability to synthesize


the necessary substances that influence flavour. Initially, strains of Streptococcus lactis
and similar species were used for these purposes, later replaced by mixed cultures.
Conclusion. Food biotechnology represents a promising direction in the food
industry (including meat, dairy, fish, and other sectors). It explores the
biotechnological potential of animal-derived raw materials and food additives, using
enzymatic preparations, products of microbiological synthesis, new types of
biologically active substances, and complex additives.
References
Kurta, S. A. (2020). Biotehnolohii kharchovikh produktiv [Biotechnologies of food
products]: lecture notes [Electronic resource]. Vasyl Stefanyk Precarpathian
National University. [in Ukrainian].
Sychevskyi, M. P. (2016). Naukove obgruntuvannya rozvytku biotekhnolohiy v
kharchoviy promyslovosti [Scientific substantiation of the development of
biotechnologies in the food industry]. Produktyvni resursy, 7. [in Ukrainian].

NEUROPHYSIOLOGICAL EFFECTS OF N-BACK TRAINING


Artem Khilchuk
Faculty of Informatics and Computer Engineering, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: n-back, neurophysiology, dopamine, white matter, grey matter.


Introduction. N-back training, as a means of enhancing an individual’s higher-order
cognitive abilities, specifically working memory, has long been a topic of growing
interest in the scientific community. It involves sequentially presenting audio and
visual stimuli to a subject, who in turn has to indicate whether either of them matches
the one that occurred N iterations before. While its efficacy remains a subject of heated
debate, studies already suggest a significant fluid intelligence improvement
of 0.35 standard deviations in elderly people (Karbach, & Verhaeghen, 2014) and
enhanced recovery outcomes for individuals suffering from traumatic brain injuries
(Weicker, Villringer, & Thöne-Otto, 2016), alongside accumulating anecdotal
evidence from people on online platforms. All this raises a reasonable interest in the
neurophysiological changes that might take place as a result of such training.
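The matching rule described above can be sketched in a few lines of code (an illustrative sketch, not from the source; the function name and the single-stream simplification are assumptions):

```python
def n_back_hits(stimuli, n):
    """Return indices where a stimulus matches the one presented n steps earlier.

    In a dual n-back session this check runs independently for the audio and
    the visual stream; a single stream is shown here for simplicity.
    """
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

# Example: a 2-back sequence of visual positions
positions = ["A", "C", "A", "C", "B", "C"]
print(n_back_hits(positions, 2))  # indices 2, 3, and 5 are 2-back matches
```

In the dual variant the same check runs over two streams (e.g. grid positions and spoken letters) simultaneously, which is what makes the task demanding for working memory.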
Objectives. The goal of this research is to determine whether any significant
alterations in brain physiology might occur after a period of persistent N-back training
and to highlight which brain regions or neurochemicals could be affected.
Methods. The research is based on an analysis of recently published papers on
neurophysiology and working memory training. Scientific databases were searched
using combinations of the following key terms: executive functions training,
cognitive control training, working memory training, n-back, brain changes,
neurotransmitters, neurophysiology. The search was concluded in October 2023.
When selecting studies, sample sizes were examined thoroughly to ensure the
meaningfulness of results, as were methods, since there are multiple variations
of the exercise and the changes may correlate highly with training duration (Karbach,
& Verhaeghen, 2014).


Results. Although meta-analyses on this topic are still lacking, the studies available
to date strongly favor the presence of small physiological alterations in the human
brain after several weeks of such training. There is evidence that engagement in n-back
training is linked to a notable enhancement in dopaminergic activity within the
striatum, specifically increased D2 receptor binding potential, meaning that the same
amount of circulating dopamine would have a greater effect (Bäckman et al., 2017).
Some studies also show that individuals who regularly practiced n-back exhibited
significant improvements in the integrity of major white matter pathways, suggesting
a potential role in facilitating efficient neural communication (Salminen et al., 2016).
Structural neuroimaging analyses demonstrate that n-back exercise over time
increases grey matter volume in the cingulate cortex, right cerebellum, and right
temporal lobe (Colom et al., 2016), regions associated with cognitive abilities.
Conclusion. Current research on N-back training shows its potential to induce
noticeable neurophysiological changes. Specifically, the findings suggest an increase
in dopaminergic activity within the striatum through enhanced receptor binding, as
well as improved integrity of major white matter pathways and increased grey matter
volumes in a range of brain regions associated with cognitive functions. While more
systematic reviews and meta-analyses are required to draw confident conclusions,
based on the aforementioned areas of influence it appears promising to investigate the
possible benefits of N-back training for people suffering from or at risk of various
neurodegenerative diseases. Examples include Parkinson’s disease, which is
associated with degeneration of the dopaminergic projection from the substantia nigra
pars compacta to the striatum, and Multiple Sclerosis, which is characterized by
deterioration of the myelin sheath of axons, leading to loss of white matter integrity.
References
Bäckman, L., Waris, O., Johansson, J., Andersson, M., Rinne, J. O., Alakurtti, K.,
Soveri, A., Laine, M., & Nyberg, L. (2017). Increased dopamine release after
working-memory updating training: Neurochemical correlates of transfer.
Scientific reports, 7(1), 7160. https://doi.org/10.1038/s41598-017-07577-y
Colom, R., Martínez, K., Burgaleta, M., Román, F.J., García-García, D., Gunter, J.L.,
Hua, X., Jaeggi, S.M., Thompson, P.M. (2016). Gray matter volumetric changes
with a challenging adaptive cognitive training programbased on the dual n-back
task. Personality and Individual Differences, 98, 127–132.
https://doi.org/10.1016/j.paid.2016.03.087
Karbach, J., & Verhaeghen, P. (2014). Making working memory work: a meta-analysis
of executive-control and working memory training in older adults. Psychological
science, 25(11), 2027–2037. https://doi.org/10.1177/0956797614548725
Salminen, T., Mårtensson, J., Schubert, T., & Kühn, S. (2016). Increased integrity of
white matter pathways after dual n-back training. NeuroImage, 133, 244–250.
https://doi.org/10.1016/j.neuroimage.2016.03.028
Weicker, J., Villringer, A., & Thöne-Otto, A. (2016). Can impaired working memory
functioning be improved by training? A meta-analysis with a special focus on
brain injured patients. Neuropsychology, 30(2), 190–212.
https://doi.org/10.1037/neu0000227


THE PHYSICAL REHABILITATION OF MILITARY OFFICERS OF THE ARMED FORCES OF UKRAINE
Marianna Khvistani
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: physical rehabilitation, war, military, injuries, kinesitherapy, occupational therapy.
Introduction. In this difficult time for Ukraine, many of its defenders sustain
injuries of varying severity during hostilities. Of course, they are provided with
qualified first aid and specialized medical care in military hospitals and specialized
medical facilities. However, the rehabilitation of servicemen is essential for the full
restoration of their functions and their opportunity to work and live fully in society.
This means not only psychological rehabilitation but also physical.
Objectives. The main aim is to consider the specifics of the recovery of
defenders and what is needed for servicemen with disabilities to feel comfortable in
Ukraine.
Methods. The state needs to clearly establish and regulate the processes of
rehabilitating servicemen after injuries sustained in the combat zone. This has been
repeatedly emphasized by the President of Ukraine, and a corresponding task has been
set for the Cabinet of Ministers and the Verkhovna Rada of Ukraine (Modern types of
rehabilitation for war victims and veterans of the Armed Forces, 2023). Thanks to
responsible and professional specialists, as well as private clinics that work for results,
rehabilitation nevertheless works.
Results. After being treated in a hospital, servicemen face the problem of
restoring their functionality. After all, the consequences of injuries and long-term
confinement to bed with limited mobility lead to atrophy and weakening of muscles.
Contractures are formed that prevent full movement. Even after preservation of the
limb, effective repositioning of bone fragments or fixation of spinal fractures,
rehabilitation is necessary. Rehabilitation of servicemen after injuries requires a long
time, which can last from 1–2 to 5–6 months or more (Steblovska, 2023). The right
approach is important in military rehabilitation: it determines the speed of
rehabilitation measures, how comfortably procedures are tolerated, and the resulting
outcome. Thanks to the help of experts and volunteers from abroad, the latest
rehabilitation systems have been installed in modern centers: occupational therapy
(restoration of lost movement skills in everyday life), kinesitherapy (a method of
treatment through movement and physical load), physical therapy halls with training
equipment, and others.
Conclusion. Physical rehabilitation of servicemen after injuries requires a long
time, which can last from 1–2 to 5–6 months or more. Such rehabilitation treatment is
aimed at restoring the body after the end of the acute period and often makes it
possible to avoid primary disability or to prevent further deterioration of the condition.
After all, the consequences of injuries and long-term confinement to bed with limited
mobility lead to muscle atrophy and weakening, and contractures form that do not
allow full movement (Rehabilitation of military personnel, 2023). This is a very
important task,


without which the full return of a serviceman to peaceful life is impossible.
Ukrainian institutions use modern methods of physical and psychological
rehabilitation. With the help of exercise therapy, patients can restore and improve
their physical condition after injuries. These classes also create a positive emotional
background and strengthen the readiness for and effectiveness of psychological
rehabilitation.
References
Modern types of rehabilitation for war victims and veterans of the Armed Forces.
(2023, May 15). https://www.enableme.com.ua/ua/article/sucasni-vidi-
reabilitacii-dla-postrazdalih-vid-vijni
Rehabilitation of military personnel. (2023).
https://www.mednean.com.ua/uk/reabilitaciya-vijskovosluzhbovciv/
Steblovska, A. (2023). How soldiers are rehabilitated after injuries. Suspilne.News
https://suspilne.media/398714-zitta-vijskovih-pisla-poranen/

NUCLEAR WASTE: WHY NOT DISPOSE OF IT IN SPACE?


Dmytro Knysh
Faculty of Informatics and Computer Engineering, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: nuclear waste, space disposal, waste management, radioactive waste.
Introduction. Have you ever wondered about launching nuclear waste into
space? Many of us might find this notion appealing, as it seemingly addresses one of
the primary issues related to nuclear energy. However, it turns out that this idea is not
just unwise but thoroughly ill-advised, and it becomes more problematic upon deeper
reflection.
Objectives. The primary objective is to understand why disposing of waste in
space is a bad decision. We will explore aspects of this concept such as the nature of
nuclear waste, the financial side, the practical obstacles involved, and the potential
risks at stake.
Methods. To evaluate the proposal of launching waste into space, we will
analyze the issues involved and offer a thorough evaluation for each problem.
Additionally, we will explore alternative methods of nuclear waste disposal and
compare them with the concept of space disposal.
Results. The disposal of nuclear waste in space presents several challenges and
drawbacks. First, there is the price tag. Space launches have always been a costly
venture, with the average cost of payload transportation at roughly $3,000 per
kilogram nowadays (Jones, 2018, p. 8). This expense significantly surpasses the cost
of nuclear fuel production, making nuclear energy generation a far greater financial
burden.
Secondly, the quantity of nuclear waste generated annually exceeds the available
rocket capacity for disposal. The current launch capacity falls far short of what would
be required, with 186 launches recorded in 2022 (McDowell, 2023, p. 4). To tackle
just the waste produced today, we would have to repurpose these rockets 11 times
over (tons of nuclear waste per year (Stimson Center, 2020) divided by tons launched
into space (McDowell, 2023, p. 12)), and we haven't even considered the
accumulation of stored waste.
Furthermore, the concept of scattering containers of waste into Earth orbit raises
concerns about the management of space debris and the potential for collisions.
There is also a risk that these containers could re-enter Earth's atmosphere.
Disposing of waste by launching it into the Sun may sound appealing.
However, it poses a challenge because we would have to cancel most of Earth's
orbital velocity of roughly 30 km/s. This would require rocket capabilities that are
currently beyond our reach, both economically and technologically.
Additionally, mechanical malfunctions are still quite common: of all the launches
that took place in 2022, 7 were failures (McDowell, 2023, p. 6). If a rocket
transporting nuclear waste experienced a mishap, it could potentially lead to a
disastrous release of radioactive material. There is no doubt that this would harm the
environment and human health.
Conclusion. The idea of disposing of nuclear waste in space is truly one of the
most ill-conceived and impractical strategies ever put forward to address this pressing
issue. It is crucial that we investigate more environmentally friendly methods to
confront the complexities presented by such waste, while acknowledging that other
industries, like coal, also contribute to significant radioactive waste concerns.
References
Jones, H. W. (2018, July 12). The Recent Large Reduction in Space Launch Cost. 48th
International Conference on Environmental Systems. Albuquerque, New
Mexico: NASA Ames Research Center. https://ttu-
ir.tdl.org/bitstream/handle/2346/74082/ICES_2018_81.pdf
McDowell J. (2023, January 17). Space Activities in 2022.
https://planet4589.org/space/papers/space22.pdf
Stimson Center (2020). Spent Nuclear Fuel Storage and Disposal.
https://www.stimson.org/2020/spent-nuclear-fuel-storage-and-disposal/
World Nuclear Association (2022). Radioactive Waste – Myths and Realities.
https://world-nuclear.org/information-library/nuclear-fuel-cycle/nuclear-
wastes/radioactive-wastes-myths-and-realities.aspx

CHOOSING THE BEST SCAFFOLD FOR CREATING A BRAIN-ON-A-CHIP MODEL
Polina Kovalchuk
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: brain-on-chip, 3D cell model, scaffold.


Introduction. Organ-on-chip is a technology that utilizes microsystems and
biological components to create models of living organs or tissues on microscopic
chips. A "brain-on-chip" is a specialized type of organ-on-chip technology designed to
replicate the structure and function of the human brain on a microscale. These
microdevices aim to mimic the complex interactions and behavior of brain cells,


allowing researchers to study various aspects of brain function, development, and


responses to different conditions or treatments.
Objectives. The main goal is to choose the best technique for growing cells in
the brain-on-chip for my further experiments.
Methods. A literature review was conducted on the specified topic to determine
the best method of obtaining a scaffold. Based on it, scaffold technologies can be
divided into two groups: hydrogel-based and polymer-based:
1. Hydrogel-Based Substrates: hydrogels are polymer networks that are
highly swollen with water. Cells can be either encapsulated within these hydrogels or
coated onto their surfaces. Depending on the type of polymer used, hydrogels can be
categorized into various groups (such as ECM protein-based hydrogels, natural
hydrogels, and synthetic hydrogels), each with distinct properties. Cross-linking
natural hydrogels like agarose, fibrin, collagen, and HA is a popular 3D cell culture
method that ensures cell encapsulation within a loosely structured scaffold. Advances
have been made in creating “smart hydrogels” to mimic the ECM protein environment.
However, these hydrogels can undergo unexpected changes due to factors like pH,
temperature, and enzymes. Freeze-drying is primarily used to control porosity and
create a collagen sponge. Modifying hydrogel porosity can improve cell growth and
distribution (Bokhari et al., 2007).
2. Substrates Utilizing Polymeric Hard Materials: Cells are cultured in the
presence of fibrous or sponge-like structures, allowing them to adopt a more natural,
physiological shape, as opposed to being grown on flat surfaces. The materials
employed for these supports can include polystyrene (suitable for imaging studies due
to its transparency) as well as biodegradable materials like polycaprolactone (Habanjar
et al., 2021). Also, for example, stereolithography, patented by Chuck Hull in 1986,
involves the photopolymerization of a cross-linkable liquid resin using ultraviolet or
visible light, building one layer at a time. This layer-by-layer approach with photo-
cross-linkable poly(ethylene glycol) dimethacrylate allows for precise patterning of
ligands, ECM components, growth factors, and controlled release particles, creating
predefined internal structures and porosities.
This technique provides advantages like shorter build times and the ability to
control features in the X, Y, and Z planes when using a dynamic masking approach. It
also maintains cell viability for up to seven days. Additionally, projection
stereolithography with gelatin methacrylate (GelMA) enables tailored mechanical
properties, uniform cell distribution, and cost savings in machine purchase and
operation (Leigh et al., 2012). Scaffold-free techniques also exist; unfortunately,
however, they may not be suitable for creating an organ-on-a-chip.
Results. For further development, it was decided to work with hydrogel-based
scaffolds, because they offer many advantages: a high level of cell viability, ease of
application, adjustability to the needs of any cell type, and low cost.
Conclusion. The development and application of “organs-on-chip” technology
represent a significant advancement in biomedical research, offering a powerful tool
for mimicking the behavior of human organs in a controlled and precise manner.
Among the various scaffold options explored, it becomes evident that hydrogel


scaffolds stand out as the optimal choice for creating these microphysiological systems.
Hydrogel scaffolds, with their unique advantages, have proven to be superior to other
types of scaffolds in many aspects. Their inherent properties, such as high water
content and biocompatibility, make them an excellent choice for encapsulating cells
and providing an environment that closely resembles natural tissues. The ability to
modify the porosity, which includes regulating the number, size, shape, and
interconnectivity of pores, is a crucial factor that allows for precise control over cell
growth and distribution in hydrogel-based systems.
References
Bokhari, M., Carnachan, R. J., Cameron, N. R., & Przyborski, S. A. (2007). Culture of
HepG2 liver cells on three dimensional polystyrene scaffolds enhances cell
structure and function during toxicological challenge. Journal of Anatomy,
070816212604002. https://doi.org/10.1111/j.1469-7580.2007.00778.x
Habanjar, O., Diab-Assaf, M., Caldefie-Chezet, F., & Delort, L. (2021). 3D cell
culture systems: Tumor application, advantages, and
disadvantages. International Journal of Molecular Sciences, 22(22),
12200. https://doi.org/10.3390/ijms222212200
Leigh, S. J., Gilbert, H. T. J., Barker, I. A., Becker, J. M., Richardson, S. M.,
Hoyland, J. A., Covington, J. A., & Dove, A. P. (2012). Fabrication of 3-
dimensional cellular constructs via microstereolithography using a simple, three-
component, poly(ethylene glycol) acrylate-based system. Biomacromolecules,
14(1), 186–192. https://doi.org/10.1021/bm3015736

GENETIC CHARACTERISTICS OF THE DONOR AND RECIPIENT AS A DETERMINING FACTOR OF COMPATIBILITY IN ORGAN TRANSPLANTATION
Vladyslav Kovalchuk
Faculty of Environmental Safety, Engineering and Technologies,
National Aviation University

Keywords: transplantation, transplant, histocompatibility, HLA, antibodies.


Introduction. The compatibility of the donor and the recipient during organ
transplantation is a determining factor in the success of the operation and the
subsequent recovery of the patient's health. Most organs can be successfully
transplanted only when they are biochemically and histologically compatible with the
recipient’s body (Parajuli et al., 2019). In the case of incompatibility, various
complications can develop, such as rejection of the transplanted organ, immune
disorders, and other problems that can threaten the patient's life.
Objectives. The role of donor-recipient compatibility is to ensure that the
recipient's body does not perceive the transplant as a foreign body and that
complications do not develop as a result of an immunological reaction. Therefore, the
study of the genetic properties of the donor and recipient is one of the priority directions
of the development of modern medicine, as well as the main task of this article.


Methods. The latest studies on the role of genetic compatibility in organ


transplantation were analyzed. The main theses of histocompatibility have been
confirmed, as well as several new genetic features that can influence the course of the
post-transplantation period.
Results. Classic pretransplantation genetic research is based on HLA
(Human Leukocyte Antigen) analysis. The HLA complex is located on the short arm of
the sixth chromosome and includes the sites most immunologically relevant to
transplantation (Edgerly & Weimer, 2018). For example, in kidney transplantation, the
effectiveness of preventing graft rejection or its subsequent dysfunction depends
largely on minimizing the immunological differences between donors and recipients
by determining the HLA match of the recipient and its donor (Jackson & Pinelli, 2021).
Recent studies by Wiebe et al. (2019) identified HLA-DR/DQ single-molecule
mismatch between donors and recipients as a prognostic biomarker for primary
alloimmunity in recipients after kidney transplantation. A subsequent study highlights
the potential of HLA-DR/DQ eplet mismatch as a prognostic biomarker (Wiebe et al.,
2020).
Even though the HLA complex is considered the basis of genetic analysis,
variations in genes other than HLA also significantly affect the course of the post-
transplantation period. Variants in non-HLA segments of the genome have been found
to affect outcomes depending on whether they are present in donors or recipients.
A prospective study reported that genetic mismatches of non-HLA haplotypes (groups
of linked genetic markers) are associated with an increased risk of graft loss in kidney
transplant recipients, independent of HLA incompatibility (Reindl-Schwaighofer et al.,
2019).
Most studies examining the influence of donor genetics are conducted on
outcomes after kidney transplantation and focus on specific candidate genes or gene
variations that may have influenced transplant outcomes. Such studies have been very
successful in identifying new associations of variants and traits and have been
conducted more frequently in recent years. A detailed list and description of such
studies was formed by Li et al. (2022).
However, the catalyst of transplant rejections can be not only genes but also
antibodies. Although most people do not have donor-specific HLA-antibodies (DSA),
the European Society of Transplantation (ESOT) notes that the presence of such
antibodies means an increase in waiting time and sometimes even makes
transplantation impossible. Therefore, DSAs have been recognized as a major factor in
the development of the pathophysiology of antibody-mediated rejection (ABMR). It is
also noted that a high level of DSA is associated with a high frequency of ultra-acute
rejection and may be caused by previous blood transfusions, pregnancy, or
transplantation. Patients with high DSA levels are recommended to be prioritized in
renal allocation schemes. For particularly difficult cases, it is suggested to consider
desensitization, which should be carried out with the help of plasma exchange or
immunoadsorption (Mamode et al., 2022).


The consequence of such recommendations was an increase in the methods of


detecting HLA antibodies during the selection of a donor-recipient pair, which can
significantly improve the results of kidney transplantation (Montgomery et al., 2018).
Conclusion. The importance of the genetic characteristics of the donor and the
recipient in the transplantation of solid organs has been increasingly recognized in
recent decades. Processes of understanding the mechanisms of formation of the body’s
immune response are constantly developing and undoubtedly contribute to a better-
integrated understanding of the functioning of the transplant after surgery. Significant
progress was also achieved in the research of genes located outside the HLA complex.
In addition, the role of antibodies (both HLA and non-HLA) in the process of transplant
rejection was noted. Thus, these achievements can support further research and the
development of targeted therapeutic approaches in the field of transplantology.
References
Edgerly, C. H., & Weimer, E. T. (2018). The past, present, and future of HLA typing
in transplantation. Methods in Molecular Biology, 1802, 1–10.
Jackson, A. M., & Pinelli, D. F. (2021). Understanding the impact of HLA molecular
mismatch in solid organ transplantation: are we there yet? American Journal of
Transplantation, 21, 9–10.
Li, Y., Nieuwenhuis, L. M., Keating, B. J., et al. (2022). The impact of donor and
recipient genetic variation on outcomes after solid organ transplantation:
a scoping review and future perspectives. Transplantation, 106, 1548–1557.
Mamode, N., Bestard, O., Claas, F., Furian, L., Griffin, S., Legendre, C., Pengel, L., &
Naesens, M. (2022). European guideline for the management of kidney
transplant patients with HLA antibodies: by the European Society for Organ
Transplantation Working Group. Transplant International, 35, 10511.
Montgomery, R. A., Tatapudi, V., Leffell, M. S., & Zachary, A. A. (2018). HLA in
transplantation. Nature Reviews Nephrology, 14, 558–570.
Parajuli, S., Aziz, F., Garg, N., Panzer, S. E., Joachim, E., Muth, B., Mohamed, M.,
Blazel, J., Zhong, W., Astor, B. C., et al. (2019). Histopathological
characteristics and causes of kidney graft failure in the current era of
immunosuppression. World Journal of Transplantation, 9, 123–133.
Reindl-Schwaighofer, R., Heinzel, A., Kainz, A., et al.; iGeneTRAiN consortium.
(2019). Contribution of non-HLA incompatibility between donor and recipient
to kidney allograft survival: genome-wide analysis in a prospective cohort.
The Lancet, 393, 910–917.
Wiebe, C., Kosmoliaptsis, V., Pochinco, D., et al. (2019). HLA-DR/DQ molecular
mismatch: a prognostic biomarker for primary alloimmunity. American Journal
of Transplantation, 19, 1708–1719.
Wiebe, C., Rush, D. N., Gibson, I. W., et al. (2020). Evidence for the alloimmune
basis and prognostic significance of borderline T cell-mediated rejection.
American Journal of Transplantation, 20, 2499–2508.


APPLICATION OF FLUORESCENT PROTEINS IN BIOCHEMICAL ANALYSIS
Oksana Krailo
Faculty of Biomedical Engineering, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: biochemical analysis, fluorescent proteins (FP), fluorescence resonance energy transfer (FRET), photobleaching, fluorophore.
Introduction. The exploration of various color variants derived from green
fluorescent protein (GFP) has ushered in a continuous stream of advancements in the
realm of fluorescent tools for cellular imaging and detection. Furthermore, scientists
have mastered the art of synthesizing enhanced iterations of existing fluorescent
proteins, thus expanding the horizons of their applications in scrutinizing cellular and
molecular processes as well as facilitating drug discovery. Nonetheless, challenges
persist when employing fluorescent proteins in diverse analytical methods, including
issues related to their dimensions, potential phototoxicity, and propensity for forming
oligomers. Consequently, dedicated researchers are tirelessly striving to refine
fluorescent proteins and innovate novel biochemical analysis techniques that harness
their capabilities.
Objectives. The aim of this study is to provide a comprehensive understanding
of green fluorescent protein (GFP), its various derivatives, and their practical
applications in the field of biochemical analysis.
Methods. Comparison, generalization, and analysis of published data and of
research by scientists in this field.
Results. Today, GFP, its derivatives, and homologs of different colors are used
in a variety of applications to study the organization and functioning of living systems.
FPs encoded together with the proteins under study allow one to observe their
localization, movement, and even the time elapsed since protein synthesis (Zacharias
et al., 2002). Nucleic acids can also be tagged through RNA- or DNA-binding protein
domains. FPs targeted to cellular organelles by specific protein localization signals
allow visualization of their morphology, fusion, division, etc. (Toseland, 2013). FPs
are important tools for labeling single cells and tissues to visualize morphology,
location, and movement, for example, during embryonic development and
oncogenesis, and many other important cell characteristics. Whole organisms can be
labeled with FPs to distinguish between transgenic and wild-type individuals (Woong
et al., 2022). Photoreflection techniques have several limitations, including difficulty
in monitoring rapid protein movement, collateral phototoxicity due to high-power
light, and complex photoresponse of some FPs. However, these methods are well-
established and provide a way to reliably measure, or at least compare, the mobility of
proteins (Zielke, 2015). In recent years, scientists have been working on the
development of pairs of fluorescent proteins that will be suitable for analysis based on
FRET, a biophysical method of studying biological systems based on the non-radiative
transfer of energy between two fluorescent dyes. This technology, in combination with
genetically encoded FRET biosensors, provides a powerful tool for visualization of


signaling molecules in living cells with high spatiotemporal resolution (Bryce et al.,
2016).
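The distance dependence that makes FRET useful for such measurements is captured by the Förster relation, E = 1/(1 + (r/R0)^6). The sketch below is purely illustrative; the Förster radius R0 = 5.0 nm is an assumed, typical-order value, not a measured constant for any particular fluorescent-protein pair:

```python
# Illustrative sketch of the Foerster relation underlying FRET:
# transfer efficiency E = 1 / (1 + (r / R0)**6), where R0 is the
# Foerster radius (the donor-acceptor distance at which E = 0.5).
# R0 = 5.0 nm below is an assumed, typical-order value.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Energy-transfer efficiency at donor-acceptor distance r_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# Efficiency falls off steeply around R0, which is what makes FRET
# sensitive to nanometre-scale changes in distance.
for r in (2.5, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
```

Because E drops from nearly 1 to nearly 0 over just a few nanometres around R0, a FRET biosensor reports small conformational changes as large changes in signal.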
Conclusion. Thus, it can be argued that GFP is an important tool in biology,
medicine, and research. This field is still developing, and there is a huge potential for
improvement, the search for new effective derivatives, biosensors, and methods using
FPs. Therefore, students need a detailed understanding of the workings of fluorescent
proteins and the scope of their applications.
References
Bryce, T., Wang, E., & Zhang, S. (2016). A Guide to Fluorescent Protein FRET Pairs.
Sensors. DOI: 10.3390/s16091488
Toseland, C. (2013). Fluorescent labeling and modification of proteins. J Chem Biol.
DOI: 10.1007/s12154-013-0094-5.
Woong, Y., Harcourt, E., & Xiao, L. (2022). Efficient DNA fluorescence labeling via base excision trapping. Nature Communications, 13. DOI: 10.1038/s41467-022-32494-8
Zacharias, D. A., Violin, J. D., Newton, A. C., & Tsien, R. Y. (2002). Partitioning of lipid-modified monomeric GFPs into membrane microdomains of live cells. Science. DOI: 10.1126/science.1068539
Zielke, N. (2015). FUCCI sensors: powerful new tools for analysis of cell proliferation.
WIREs Developmental Biology. 445–554. DOI: 10.1002/wdev.189

ARE THE ALTERNATIVE WAYS OF SMOKING LESS HARMFUL THAN CIGARETTES?
Dmytro Kucherenko
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: nicotine, research, smoking, smoke, dependence.


Introduction. A lot has been said about the harmfulness of nicotine in order to discourage people, especially young people, from smoking. However, this has not only reduced cigarette consumption but also pushed the development of alternative ways of smoking. There are many myths and hypotheses about them, so are non-traditional ways of smoking really less harmful, as they are advertised to be?
Objectives. The main task is to determine whether alternative ways of smoking, in particular electronic cigarettes and hookahs, are less harmful to human health than traditional smoking, which involves burning tobacco. Only nicotine-containing products are taken into account in this research.
Methods. Both electronic cigarettes and hookahs can contain quite different substances, so the investigation focuses on the most common use cases.
The aerosol produced by electronic cigarettes, when inhaled and exhaled, may
contain a range of substances, such as nicotine, ultrafine particles capable of deeply
penetrating the lungs, flavorings potentially linked to severe lung diseases, volatile
organic compounds, carcinogenic chemicals, and heavy metals like nickel, tin, and lead
(Centers for Disease Control and Prevention, 2023).
A hookah, also known as a water pipe, comprises key components, including a smoke chamber, a water bowl, a pipe, and a hose. Flavored tobacco is commonly
used in a hookah, and it is heated with charcoal. The charcoal burns the tobacco, and
the resulting smoke is drawn into the water bowl (Mayo Clinic, 2022). This smoke then
passes through the water and a rubber tube, ultimately reaching the mouthpiece. It’s
important to note that a single session of hookah smoking typically lasts much longer
than smoking a cigarette.
Results. Electronic cigarettes contain nicotine, which impacts human health
similarly to traditional cigarettes. However, they are often considered less harmful due
to their lower quantity of toxic chemicals compared to the approximately 7,000
chemicals found in regular cigarette smoke. Nevertheless, it's essential to recognize
that electronic cigarette aerosol is not entirely benign. Even products advertised as
nicotine-free can be deceptive, according to researchers.
Hookah smoke is subjected to intense heat from burning charcoal, and the
mechanism primarily delivers nicotine, making the smoke at least as harmful as that
from cigarettes. Due to the way it is used, individuals who partake in hookah smoking
may absorb a more substantial amount of the same toxic substances present in cigarette
smoke compared to traditional cigarette smokers. During a typical 1-hour hookah
session, one might inhale as much as 100 to 200 times the volume of smoke and be
exposed to up to 9 times the carbon monoxide and 1.7 times the nicotine found in
a single cigarette (Centers for Disease Control and Prevention, 2021). In this context,
the quantity of smoke inhaled during a typical hookah session amounts to around
90,000 milliliters (ml), far surpassing the 500-600 ml inhaled when smoking
a cigarette.
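A quick arithmetic check, a sketch using only the figures quoted above, confirms that the stated smoke volumes are consistent with the "100 to 200 times" claim:

```python
# Sanity check of the figures above: smoke inhaled per hookah
# session vs. per cigarette (values as quoted from CDC, 2021).
hookah_ml = 90_000               # typical 1-hour hookah session
cigarette_ml = (500 + 600) / 2   # midpoint of the 500-600 ml range

ratio = hookah_ml / cigarette_ml
print(f"One hookah session = about {ratio:.0f} cigarettes' worth of smoke")
```

The ratio falls squarely within the 100- to 200-fold range reported above.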
Conclusion. Electronic cigarettes are mostly as damaging to health as traditional ones, at least for novice smokers, although they can help manage nicotine cravings when used as a replacement in order to stop smoking completely. In the hookah, the water cools the smoke, but it does not filter out the toxins in it. Moreover, hookah smokers may breathe in more tobacco smoke than cigarette smokers do because of the long sessions. The heat sources used to burn the tobacco also release other dangerous substances, so the hookah is as harmful as cigarettes, or even more so, due to these additional risks.
References
Centers for Disease Control and Prevention. (2023). About Electronic Cigarettes (E-
Cigarettes). https://www.cdc.gov/tobacco/basic_information/e-
cigarettes/about-e-cigarettes.html
Centers for Disease Control and Prevention. (2021). Hookahs. https://www.cdc.gov/tobacco/data_statistics/fact_sheets/tobacco_industry/hookahs/index.htm
Mayo Clinic. Healthy Lifestyle. (2022). Quit smoking.
https://www.mayoclinic.org/healthy-lifestyle/quit-smoking/expert-
answers/hookah/faq-20057920
ARE ELECTRIC CARS REALLY ENVIRONMENTALLY-FRIENDLY?


Anna Kushch
Faculty of Biotechnology and Biotechnics, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: electric car, environment, carbon dioxide, internal combustion engines.

Introduction. It is difficult to imagine a modern person without a car. In developed countries, the car became the most necessary household item many years ago. However, everyone still needs to remember that transport pollutes the air, especially in
big cities. In order to minimize emissions of greenhouse gases, namely carbon dioxide,
various new transport technologies began to be developed. Among them are electric
cars that move with the help of one or more electric motors powered with an
autonomous source of electricity, rather than an internal combustion engine. In this
way, they are aimed at increasing safety for the environment.
The electric car is a relatively new concept in the world of transportation, although the beginnings of the technology appeared long before the 1960s. In fact, electric cars appeared before cars with an internal combustion engine. Their development was halted because this type of transport did not produce enough power to compete with diesel engines, and so the technology quietly waited for its time (V.V. et al., 2016).
Objectives. The main aim is to analyze advantages and disadvantages of electric
cars’ exploitation and compare them with an internal combustion engine.
Methods. A review of recently published research on the environmental impact of electric cars and on their pros and cons. Moreover, meta-analysis has been used to compare studies in this area, since most articles contain much controversial information that needs to be compared.
Results. On the one hand, electric cars have certain advantages over cars with
internal combustion engines.
Firstly, the maintenance costs of an electric car are significantly lower than those of other kinds of cars. This is because electric cars run on motors that do not require fuel and lubricants. Transport systems with internal combustion engines must be constantly checked, with consumable parts and lubricants replaced; in turn, the motor of an electric car can work for years without inspection or intervention.
Secondly, electric car engines are much quieter than internal combustion
engines.
Thirdly, while driving, the electric car does not emit harmful emissions into the
environment. This is quite relevant today, especially in big cities overloaded with
traffic, where smog and gas pollution sometimes make it hard to breathe.
So, thanks to electric cars, cities could become not only cleaner, but also quieter.
On the other hand, electric cars are not perfect and there are some disadvantages.
To begin with, there is the everyday, if not global, problem of charging: it takes about 45 minutes to charge an electric car, and in the worst case 8–10 hours. Although this does not harm anyone, people cannot hurry anywhere with an uncharged car.
More seriously, an electric battery does not create energy from nothing. It needs to be charged, and that electricity often actually comes from coal-fired power plants. So, although the electric car itself does not pollute the environment, the primary source of energy for it is fossil fuel (Kaiku, 2013, p. 432).
Electric cars are charged with energy, which is mostly produced by thermal
power plants (since there are still not enough solar and wind power plants). Also,
during the extraction of metals used in lithium-ion batteries and others, there is
a significant emission of carbon dioxide, not to mention the energy spent on the
production of batteries. Moreover, there is the complexity and high cost of disposal
and recycling of batteries.
Electric cars have been used for many years, so, many studies have been
conducted on the environmental damage caused by this type of transport. For instance,
research (where thousands of parameters were taken such as the type of metals in
electric car batteries to the amount of aluminum or plastic in the car) showed that
a person will have to drive 21 725 km before causing less damage to the environment
than a conventional car. In addition, more than 8.1 million grams of carbon are released during the production of a medium-sized electric vehicle. For comparison: the production of a similar gasoline car generates 5.5 million grams. Even in the worst-case scenario, when
an electric car is charged only from the coal network, it will generate an additional
4.1 million grams of carbon per year, while the same gasoline car will produce more
than 4.6 million grams (Lienert, 2021).
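The worst-case figures above also imply a simple break-even estimate in years: the extra production emissions divided by the annual operating savings. This back-of-the-envelope sketch uses only the numbers quoted above:

```python
# Back-of-the-envelope break-even estimate from the quoted figures
# (Lienert, 2021); all values in grams of carbon, as given above.
ev_production = 8.1e6    # emissions from producing a mid-size electric car
gas_production = 5.5e6   # emissions from producing a similar gasoline car
ev_annual = 4.1e6        # worst case: EV charged only from the coal network
gas_annual = 4.6e6       # the same gasoline car, per year of driving

extra_production = ev_production - gas_production  # EV's initial deficit
annual_savings = gas_annual - ev_annual            # EV's yearly advantage
years = extra_production / annual_savings
print(f"Worst-case break-even after about {years:.1f} years of driving")
```

Even under the coal-only charging assumption, the electric car's production deficit is repaid after roughly five years of driving, consistent with the break-even distance cited above.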
Conclusion. Electric cars are among the latest technologies aimed at increasing the safety of transport for the environment. This type of transport is promising, but at the current stage of its development there are significant drawbacks. Once these have been overcome, demand will increase, and accordingly, cars with an internal combustion engine may no longer be used in the future.
References
V.V, K., Yu. V, M., & S.A, G. (2016). Electromobile. Its features and advantages
from the point of view of ecology. Young scientist, (12), 44–46.
Kaiku, M. (2013). Physics of the future. Litopys.
Lienert, P. (2021). Analysis: When do electric vehicles become cleaner than
gasoline cars?

TRANSHUMANISM: EXPLORING THE NEXUS OF HUMANITY AND TECHNOLOGY
Andrii Moskaliuk
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: genetic engineering, transhumanism, ethical implications, future prospects.

Introduction. The convergence of biology and technology has given rise to the
burgeoning field of transhumanism, blurring the boundaries between humans and
machines (Bostrom, 2005). It is imperative to critically analyze the implications of this
paradigm shift. This thesis aims to investigate the parallels between humans and
computers, and subsequently, delve into the potential pitfalls of future genetic
engineering endeavors.
Objective. To elucidate the similarities and disparities between humans and
computers and identify the ethical and existential concerns surrounding the integration
of technology with human biology. We also need to analyze the prospective challenges
and consequences of widespread genetic engineering.
Methods. This thesis employs a multidisciplinary approach, incorporating
insights from genetics, bioethics, and philosophy. Extensive literature review and
critical analysis of seminal works on transhumanism and genetic engineering form the
foundation of this research.
Results. At first glance, humans and computers may seem disparate entities, but
a deeper examination reveals surprising parallels. Both possess information-processing
capabilities, albeit through different mechanisms. Humans rely on intricate neural
networks, while computers employ algorithms and binary code. The emergence of
neural networks in AI reflects a convergence point, where machine learning begins to
emulate human cognition (Savulescu, & Bostrom, 2009). But when we start talking
about the integration of technology into human biology, it raises a lot of ethical
quandaries. The concept of “enhancement” blurs the line between therapy and
augmentation, opening the door to potential societal disparities. Questions of access,
equity, and the definition of “normalcy” become paramount in a transhumanist future.
Furthermore, issues of autonomy, privacy, and the potential for unintended
consequences must be carefully considered. The prospects of widespread genetic
engineering present a double-edged sword. While it holds the promise of eradicating
hereditary diseases and enhancing cognitive abilities, it also introduces the risk of
unintended genetic mutations and unforeseen ecological impacts. Moreover, the
potential for designer babies and genetic discrimination looms large, threatening to
exacerbate existing societal inequalities (Harris, 2007).
Conclusion. Transhumanism represents a pivotal moment in human history,
demanding a judicious examination of its implications. The parallels between humans
and computers underscore the potential for profound integration, while ethical
concerns serve as a cautionary framework. Genetic engineering, while holding
immense promise, requires meticulous oversight to navigate the delicate balance
between progress and peril. It is our collective responsibility to steer this transformative
journey towards a future that upholds human dignity, equity, and the sanctity of life
(Sandberg, 2012).
Future research in this domain should focus on refining ethical frameworks,
establishing regulatory guidelines, and addressing the socio-economic ramifications of
transhumanism. Additionally, interdisciplinary collaboration between
biotechnologists, ethicists, and policymakers is imperative to ensure a holistic
approach to this complex paradigm shift.
References
Bostrom, N. (2005). A history of transhumanist thought. Journal of Evolution and
Technology, 14(1), 1–25. https://nickbostrom.com/papers/history.pdf
Harris, J. (2007). Enhancing Evolution: The Ethical Case for Making Better People.
Princeton University Press.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2267394/
Sandberg, A. (2012). Morphological freedom – Why we not just want it, but need it.
Journal of Evolution and Technology, 22(1), 29–42.
https://queerai.files.wordpress.com/2017/07/sandberg-
morphologicalfreedom.pdf
Savulescu, J., & Bostrom, N. (2009). Human Enhancement. Oxford University Press.
https://nickbostrom.com/ethics/human-enhancement-ethics.pdf

ELECTRONIC HEALTH RECORDS AND BIOPHARMACEUTICAL RESEARCH
Maksym Nikoliuk
Educational and Scientific Institute of Nuclear and Thermal Energy, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: biopharmaceuticals, EHR, vaccines, public health.


Introduction. Biopharmaceuticals, or biologics, are medicinal products derived
from living organisms, such as bacteria, yeast, or mammalian cells. These drugs are
typically composed of proteins, nucleic acids, or other biological substances, as
opposed to traditional small-molecule drugs (Behera, 2020, p. 51). The
biopharmaceutical field has witnessed remarkable advancements in recent years, with
the development of cutting-edge therapies and vaccines. Modern vaccines, like the
COVID-19 mRNA vaccines, are a testament to this progress (Huang et al., 2021, p.
112). The targeted nature of these therapies reduces the risk of adverse effects, while
the speed and flexibility in vaccine development can save countless lives during
pandemics (Hannappel, 2017, p. 2).
Objectives. The integration of Electronic Health Records (EHR) with
biopharmaceutical research is a promising avenue that can revolutionise healthcare and
drug development. The primary objectives of this text are to explore the potential
benefits of such integration, highlight the challenges and ethical considerations, and
provide a personal perspective on this transformative endeavour.
Methods. To achieve these objectives, we will review current literature on EHR
and biopharmaceutical research integration, analyse case studies and real-world
examples, and discuss the ethical implications. Furthermore, we will consider the
potential advantages and drawbacks from a personal perspective.
Results. The integration of EHR with biopharmaceutical research offers several
advantages. It can streamline clinical trials by enabling researchers to access a vast
pool of patient data, thus accelerating the drug development process. Additionally,
EHR can enhance post-market surveillance, providing critical information on drug
safety and effectiveness. This integration can also lead to more personalised treatment
options, ensuring that patients receive therapies tailored to their unique needs.
However, challenges exist, including concerns about data privacy and security.
Personal health information is sensitive, and safeguarding it is of paramount
importance. Ethical considerations also arise, as researchers must balance the potential
benefits of integrated data with the need to protect individual rights and privacy.
From a personal perspective, the integration of EHR with biopharmaceutical
research holds great promise. It can lead to breakthroughs in drug development,
potentially saving lives and improving healthcare outcomes. However, it is essential to
implement robust data protection measures and ensure that individuals have control
over their health information. The benefits must outweigh the risks, and ethical
principles should guide this integration.
Conclusion. In conclusion, the integration of Electronic Health Records (EHR)
with biopharmaceutical research is a transformative endeavour that can significantly
impact healthcare and drug development. The potential benefits are vast, from
expediting drug discovery to providing personalised treatments. Nevertheless, ethical
concerns and data security must be addressed. From a personal standpoint, this
integration is a positive step forward, provided that stringent safeguards are in place to
protect individuals' rights and privacy. Balancing progress and ethics is the key to
realising the full potential of this integration and ushering in a new era of
biopharmaceutical research.
References
Behera, B. K. (2020). Biopharmaceutical production. In Biopharmaceuticals (pp. 47–
79). CRC Press. https://doi.org/10.1201/9781351013154-2
Hannappel, M. (2017). Biopharmaceuticals: From peptide to drug. In Structure,
Function and Dynamics from Nm to Gm: Proceedings of the 8th Jagna
International Workshop. https://doi.org/10.1063/1.4996533
Huang, Q., Zeng, J., & Yan, J. (2021). COVID-19 mRNA vaccines. Journal of
Genetics and Genomics, 48(2), 107–114.
https://doi.org/10.1016/j.jgg.2021.02.006

ARTIFICIAL INTELLIGENCE IN BIOTECHNOLOGY


Andrii Ovsiienko
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: biotechnology, Artificial Intelligence, genomics, personalized medicine.

Introduction. The integration of Artificial Intelligence (AI) into biotechnology
has ushered in a transformative era, offering promising solutions to long-standing
challenges in biomedical research, drug development, and healthcare. This abstract
delves into the objectives, methods, results, and implications of AI's role in
biotechnology, providing a comprehensive overview of this dynamic field.
Objectives. This study aims to elucidate the multifaceted impact of AI on
biotechnology by examining its diverse applications and evaluating its potential in
revolutionizing biotechnological processes. Our objectives encompass a broad
spectrum of biotechnological domains, including genomics, drug discovery,
diagnostics, and synthetic biology.
Methods. To achieve a comprehensive understanding of AI's influence on biotechnology, I conducted an extensive literature review, encompassing research
articles, reviews, case studies, and technical reports. The review process involved
a rigorous and structured approach, starting with the identification of relevant
publications through searches in well-established academic databases such as PubMed,
IEEE Xplore, ScienceDirect, and Google Scholar.
The selection criteria for articles prioritized relevance to AI applications in
biotechnology, ensuring a representative sample of the most impactful research in the
field. To assess the quality and relevance of the selected literature, I employed stringent
evaluation criteria, which included considerations of methodology, data sources, and
the extent of AI integration within biotechnological processes.
Furthermore, to provide a comprehensive view of AI's contributions, I
categorized AI applications in biotechnology into three major domains: predictive
modeling, data analysis, and automation. For each domain, I meticulously examined
the specific AI techniques employed, such as deep learning, machine learning, natural
language processing, and robotics. This categorization enabled us to assess the breadth
and depth of AI's impact across the biotechnology spectrum (Goh & Sze, 2019).
Results. The integration of AI into biotechnology has yielded remarkable results
across various domains, substantiating its potential as a transformative force in the
field. In drug discovery, AI-driven predictive models have emerged as indispensable
tools, capable of rapidly screening vast chemical libraries, predicting the efficacy of
potential drug candidates, and optimizing molecular structures. Consequently, this has
led to a significant reduction in the time and resources required for drug development,
enabling more efficient and cost-effective innovation in the pharmaceutical industry.
Furthermore, the application of AI in genomics and proteomics has ushered in
the era of personalized medicine. AI algorithms can meticulously analyze large-scale
genomic datasets, identifying patient-specific mutations, biomarkers, and treatment
responses. This capability empowers clinicians to tailor treatment plans to individual
patients, maximizing therapeutic effectiveness while minimizing adverse effects. As
a result, AI is facilitating a paradigm shift towards precision medicine, where
treatments are uniquely tailored to the genetic makeup of each patient (Caudai et al., 2021).
In the realm of diagnostics, AI has showcased exceptional prowess in
interpreting complex biological data. Whether it involves the analysis of medical
images, such as MRI scans or histopathology slides, or the processing of high-
throughput omics data, AI-powered tools have enhanced diagnostic accuracy and
speed. Early detection of diseases, such as cancer and infectious diseases, has become
more attainable, leading to improved patient outcomes and potentially reducing
healthcare costs through earlier interventions (Oliveira, 2019).
Moreover, AI has become a driving force behind advancements in synthetic
biology. AI-driven platforms can suggest genetic modifications, predict metabolic
pathways, and optimize fermentation conditions, accelerating the development of
genetically engineered organisms and bioprocesses. This not only facilitates the
production of bio-based products and biofuels but also opens up new avenues for
sustainable and environmentally friendly solutions.
Conclusion. The incorporation of AI into biotechnology represents
a monumental leap forward, offering a myriad of opportunities for innovation and
discovery. This abstract underscores the transformative potential of AI, showcasing its
capacity to streamline traditional biotechnological processes, reveal hidden insights in
large datasets, and catalyze groundbreaking discoveries. As AI continues to evolve and
mature, its synergy with biotechnology promises to reshape the future of medicine,
bioprocessing, and biotechnological advancements. This convergence holds the
promise of improving the quality of healthcare, advancing our understanding of
biology, and ultimately enhancing the human condition. The journey towards this AI-
powered future in biotechnology is marked by immense potential and exciting
challenges that will continue to shape the research landscape in the coming years.
References
Caudai, C., Galizia, A., Geraci, F., et al. (2021). AI applications in functional genomics. Computational and Structural Biotechnology Journal, 19, 5762–5790. https://doi.org/10.1016/j.csbj.2021.10.009
Goh, W. W. B., & Sze, C. C. (2019). AI paradigms for teaching biotechnology. Trends in Biotechnology, 37(1), 1–5. https://doi.org/10.1016/j.tibtech.2018.09.009
Oliveira, A. L. (2019). Biotechnology, big data and artificial intelligence. Biotechnology Journal, 14(8), 1800613. https://doi.org/10.1002/biot.201800613

UNDERSTANDING THE CORIOLIS FORCE AND ITS VALUE


Maksym Pavliushyn
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Coriolis Force, Meteorology, Earth's Rotation, hurricanes.


Introduction. In any rotating reference frame, such as the Earth, a merry-go-round, or a spinning ice skater, an observer notices a new influence on the motion of objects. A ball thrown between two friends on a merry-go-round will appear to take a curved path: while the friends rotate with the platform, the ball flies freely through the air. This apparent curvature of motion in the rotating reference frame is attributed to the Coriolis force, which always acts perpendicular to the direction in which the object is moving.
Objectives. The main aim is to understand how the Coriolis force works and how it affects our lives.
Methods. The discovery that a cannonball fired northward will deflect eastward
due to Earth's rotation was made in 1651 by the Italian scientist Giovanni Battista
Riccioli. The final moniker for the phenomenon, however, was given by the French
physicist Gaspard-Gustave de Coriolis. In 1835, he published a treatise on the forces
acting on the spinning components of industrial machinery, especially water wheels.
The present term, the Coriolis effect, did not become widely used until the 1920s. The Coriolis effect is also responsible for the rotation of hurricanes: they spin in opposite directions in the two hemispheres, counterclockwise in the Northern Hemisphere and clockwise in the Southern. Air flowing toward the low-pressure center of a storm is deflected to the right (in the Northern Hemisphere), so the air rotates within the storm, and the storm grows as this rotating air resists fresh inbound air. Earth itself is a rotating frame of reference, which is why the Coriolis force can be seen at work on a cannonball fired over a sufficiently long distance. Numerous other instances of the Coriolis force at work on Earth exist; the conditions on Earth that we take for granted were in fact made possible by it.
Results. Conservation of angular momentum allows us to predict the direction of the Coriolis force on an object moving on a rotating flat plate or a rotating sphere. If the object moves toward a larger (smaller) radius, then in order to conserve angular momentum there must be a force that reduces (increases) its tangential velocity. If the object moves parallel to the tangential velocity, then, again by conservation of angular momentum, when the tangential velocity increases (decreases) there must be a force that impels the object toward a larger (smaller) radius (Holton, 2004).
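The argument above can be made quantitative with the standard expression for the horizontal Coriolis acceleration, a = 2Ωv·sin(φ) (Holton, 2004). The sketch below estimates the sideways deflection of a projectile; the speed, range, and latitude are illustrative assumptions, and the acceleration is treated as constant over the short flight:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis_deflection(v: float, distance: float, lat_deg: float) -> float:
    """Approximate sideways deflection (m) of a projectile, treating
    the Coriolis acceleration a = 2*Omega*v*sin(lat) as constant."""
    a = 2.0 * OMEGA * v * math.sin(math.radians(lat_deg))
    t = distance / v          # flight time over the given range
    return 0.5 * a * t ** 2   # constant-acceleration displacement

# Illustrative case: a 450 m/s projectile fired 1 km at 45 deg latitude.
d = coriolis_deflection(v=450.0, distance=1000.0, lat_deg=45.0)
print(f"Deflection: {d:.2f} m")
```

A deflection on the order of a tenth of a metre per kilometre explains why the effect matters for long-range ballistics and, on the vastly larger scale of weather systems, for hurricanes.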
Conclusion. All major discoveries about the general circulation and the relation between the pressure and wind fields were made without any knowledge of Gaspard-Gustave Coriolis. Nothing in today's meteorology would have been different if he and his work had remained forgotten (Persson, 2005).
References
Holton, J. R. (2004, October 10). An Introduction to Dynamic Meteorology (4th ed.). https://www.academia.edu/42101208/An_Introduction_to_Dynamic_Meteorology_FOURTH_EDITION
Persson, A. O. (2005, September 22). The Coriolis Effect: Four centuries of conflict between common sense and mathematics, Part I: A history to 1885. https://empslocal.ex.ac.uk/people/staff/gv219/classics.d/persson_on_coriolis05.pdf

TARGETING THE INTESTINAL MUCOSAL IMMUNITY TO TREAT IMMUNOGLOBULIN-A NEPHROPATHY
Anastasiia Prozor
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: immunoglobulin A nephropathy (IgAN), gut-associated lymphoid tissue (GALT), mucosa-associated lymphoid tissue (MALT), complement system.

Introduction. IgA nephropathy (IgAN) is an autoimmune renal disease and the main cause of primary glomerulonephritis; it occurs due to the synthesis of galactose-deficient IgA1 (Gd-IgA1). Recent studies have uncovered the pathogenesis of the disease, which has allowed the use of specific targeted therapy on gut-associated lymphoid tissue (GALT) and mucosa-associated lymphoid tissue (MALT), which are mainly responsible for the formation of IgA in the human body.
Objectives. The main aim is to formulate core concepts of immune therapy for
IgAN.
Methods. Conducting a literature review of the latest abstracts and research.
The pathogenesis of IgAN is strongly related to MALT: the mucous membrane produces more IgA per day than all other types of antibodies combined. A subtype of MALT, GALT secretes 3–5 g of IgA, with its cells accounting for 80% of all those capable of IgA synthesis. The main function of IgA is to prevent pathogens from attaching to epithelial cells by binding them for subsequent excretion from the body (Barratt et al., 2020).
Recent studies have identified 4 stages of disease formation: formation of Gd-IgA1;
formation of anti-glycan IgA and IgG autoantibodies that recognize Gd-IgA1; Gd-IgA1
forms polymeric IgA1 immune complexes with or without anti-glycan autoantibodies;
deposition of these complexes in the glomerular mesangium, which activates (among
other injury pathways) the complement system, which promotes inflammation and
scarring of the kidneys (Barratt et al., 2023). Accordingly, the disease can be treated at
any of the 4 stages. It is logical to assume that prevention and elimination of the first
stage of abnormal IgA1 formation breaks the chain of the disease and does not lead to
glomerulonephritis, which later develops into chronic renal failure. The pathogenicity
is explained by the fact that lymphocytes are sensitized at the level of the mucous
membrane and migrate to the bone marrow, where they differentiate into plasma cells
that form IgA1. The IgA1 subclass has short O-linked oligosaccharide chains in the hinge region, made of an N-acetylgalactosamine core extended with β1,3-linked galactose by
the β1,3 galactosyltransferase (which needs a specific chaperone, Cosmc) and covered
with sialic acid. In patients with IgAN there is a prevalence of truncated IgA1 O-
glycoforms with reduced galactosylation (Coppo, 2018).
Tonsillectomy is now used to reduce MALT, thereby decreasing the secretion of aberrantly glycosylated IgA1. This method is invasive and not universally applicable. Because the biomarkers of the disease are still poorly understood, monitoring the patient's condition is difficult, and there is a high risk of incorrect treatment (Yang et al., 2020). This has motivated treatment with glucocorticoids (budesonide) released in the targeted area of the ileum, which is enriched with Peyer's patches (Caster, 2023). Studies have shown that
budesonide may become the first IgAN-specific drug to target the intestinal mucosa's
immunity to the disease (Coppo, 2018). Recently, three therapies have been added to
the clinical arsenal of IgAN treatment: intestinal steroids, sodium-glucose
cotransporter-2 inhibitors (SGLT2i) and, most recently, a dual endothelin A receptor (ETAR) and angiotensin II receptor antagonist (Kohan et al., 2023). Studies in mice have
shown a positive result for the influence of nucleotide-sensitive toll-like receptor 9
(TLR9) and TLR7 in the pathogenesis of IgAN. Thus, nucleotide-sensitive TLRs may
be quite strong candidates for specific therapeutic targets in IgAN (Lee et al., 2023).
Results. Targeted immunotherapy to the site of abnormal IgA production is the
most effective and appropriate treatment.
Conclusion. Developments in effective treatment and early diagnosis are needed. Currently, the main goal is to target the B lymphocytes of Peyer's patches, which are responsible for the secretory activity of GALT. It is important to break the chain of the four main stages of IgAN formation. Also, it is already possible to intervene in the body
with conservative and supportive pharmacological therapy, which has recently added
sodium-glucose cotransporter-2 inhibitors and ERA to the arsenal.
References
Barratt, J., Rovin, B. H., Cattran, D., Floege, J., Lafayette, R., Tesar, V., Trimarchi, H.,
& Zhang, H. (2020). Why Target the Gut to Treat IgA Nephropathy? Kidney
International Reports, 5(10), 1620–1624.
https://doi.org/10.1016/j.ekir.2020.08.009
Barratt, J., Lafayette, R. A., Zhang, H., Tesar, V., Rovin, B. H., Tumlin, J. A., Reich,
H. N., & Floege, J. (2023). IgA Nephropathy: the Lectin Pathway and
Implications for Targeted Therapy. Kidney International.
https://doi.org/10.1016/j.kint.2023.04.029
Caster, D. J., & Lafayette, R. A. (2023). The Treatment of Primary IgA Nephropathy:
Change, Change, Change. American Journal of Kidney Diseases.
https://doi.org/10.1053/j.ajkd.2023.08.007
Coppo, R. (2018). The Gut-Renal Connection in IgA Nephropathy. Seminars in
Nephrology, 38(5), 504–512. https://doi.org/10.1016/j.semnephrol.2018.05.020
Kohan, D. E., Barratt, J., Heerspink, H. J. L., Campbell, K. N., Camargo, M., Ogbaa,
I., Haile-Meskale, R., Rizk, D. V., & King, A. (2023). Targeting the Endothelin
A Receptor in IgA Nephropathy. Kidney International Reports.
https://doi.org/10.1016/j.ekir.2023.07.023
Lee, M., Suzuki, H., Ogiwara, K., Aoki, R., Kato, R., Nakayama, M., Fukao, Y., Nihei,
Y., Kano, T., Makita, Y., Muto, M., Yamada, K., & Suzuki, Y. (2023). The
nucleotide-sensing Toll-Like Receptor 9/Toll-Like Receptor 7 system is a
potential therapeutic target for IgA nephropathy. Kidney International.
https://doi.org/10.1016/j.kint.2023.08.013
Yang, X., Zhu, A., & Meng, H. (2020). Tonsillar immunology in IgA nephropathy.
Pathology – Research and Practice, 216(7), 153007.
https://doi.org/10.1016/j.prp.2020.153007

OVERVIEW OF DIFFERENT METHODS OF SKIN-ON-CHIP MODEL CREATION
Anastasiia Prozor, Polina Kovalchuk
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: skin-on-chip, in-vitro testing, 3D cell model, drug testing.


Introduction. In recent years, organ-on-chip (OOC), which combines tissue
engineering and microfluidic technology, has been rapidly developing. Due to the
precise control of microfluidics, cell differentiation, cell-cell and cell-matrix
interactions are improved compared to static culture. The unique advantages of 3D cell
culture chips overcome the limitations of traditional 2D models, which lack the
complex physiological functions of human tissues and organs. Today, there is
a growing demand for the development and production of in-vitro engineered skin
models to restore its function after damage or for cosmetic and pharmaceutical testing
(Velasco et al., 2018).
Objectives. The main goal is to understand what skin-on-chip models currently
exist on the market and identify possible further development paths.
Methods. A review of recently published abstracts and further categorization.
There are two main approaches to designing microsystems for skin modeling: the first
is the direct introduction of a skin fragment from a biopsy taken or from artificially
created epidermis, and the second is the creation of tissue in situ.
Popular SOC modeling techniques involve injecting tissue directly into the
device. This tissue can be obtained through a skin biopsy from a donor or by creating
an artificial epidermis. Such models with dermal compartments are often used to study the maintenance of skin equivalents and their use for clinical and research purposes. To study
neutrophil responses to the presence of bacteria on the skin, the researchers designed
a SOC-type device with one tissue and two channels separated by a red blood cell filter
(Kim et al., 2019). A fragment of a human skin biopsy was inserted into one of the
channels and interacted with blood loaded into the other channel. Other groups are
using artificially created epidermis to create such skin-transfer chips. For example, the
chip contained a special compartment that housed a fragment of artificial epidermis to
test its viability and care, and a lower channel for the passage of culture fluid (Abaci et
al., 2016). The artificial epidermis was grown on a porous membrane that allowed
nutrients to diffuse out of the channel. The group studied transdermal transport of
substances and the possibility of using this device for drug testing. Although this
approach has been used for single-tissue models, the skin-transferred approach is
a common practice in the development of multiorgan chips. Wagner et al. (2013) have
developed a microsystems chip for co-culturing skin and liver by injecting a human
skin biopsy directly into the chip. In their study, they demonstrated the interaction
between these tissues and the sensitivity of the liver to drug toxicity. The second
approach is to create a skin model directly on the chip. This approach is divided into
two main groups of devices for in-situ skin creation. The first group is based on the
manual creation of tissue by an open structure inside the device. The main difference
is how the culture fluid and other substances are delivered to the skin structure: in the
first case, they are delivered through hollow channels that pass through the dermal
layer, while in SOC chips, these fluids circulate through a microsystemic channel under
the tissue structure. The dermal compartment, simulated using a collagen gel with
fibroblasts, was created on a porous membrane (Lim et al., 2018). The second method
is to create an organic model directly inside the device, using channels not only to
supply nutrients but also as compartments to hold tissue. The initial step is to cultivate
keratinocytes in microfluidic devices, where the cells are evenly distributed on
collagen-coated glass plates and the effect of flow on their growth and viability is
studied.
Results. Since traditional methods of creating skin models have their limitations,
there is a need to develop new platforms that are closer to the physiological conditions
of the skin, contributing to the creation of more realistic models for testing. In addition,
new methods have been developed to simulate the dynamic processes occurring in the
human body. Some researchers have created artificial skin models with the ability to
supply blood, which has made it possible to study how drugs and other substances are
distributed in tissues. However, one of the key problems is the large size of these
models, which increases costs and limits the ability to conduct extensive research.
Conclusion. Skin-on-a-chip is an advanced biomedical technology that involves
creating models of living tissues and organs on microscopic chips using biomaterials
and bioengineering approaches. This technology has great potential for medicine,
pharmacy, science and other industries. The human skin plays an important role in
protecting the body, regulating temperature and responding to external influences. The
skin is also an important route for drug delivery. SOC technology can reduce the need
for animal testing, allows for a more detailed study of the effects of drugs on cells, and
facilitates the rapid introduction of new drugs.
References
Abaci, H. E., Guo, Z., Coffman, A., Gillette, B., Lee, W.-h., Sia, S. K., & Christiano,
A. M. (2016). Human Skin Constructs with Spatially Controlled Vasculature
Using Primary and iPSC-Derived Endothelial Cells. Advanced Healthcare
Materials, 5(14), 1800–1807. https://doi.org/10.1002/adhm.201500936
Kim, J. J., Ellett, F., Thomas, C. N., Jalali, F., Anderson, R. R., Irimia, D., & Raff, A.
B. (2019). A microscale, full-thickness, human skin on a chip assay simulating
neutrophil responses to skin infection and antibiotic treatments. Lab on a Chip,
19(18), 3094–3103. https://doi.org/10.1039/c9lc00399a
Lim, H. Y., Kim, J., Song, H. J., Kim, K., Choi, K. C., Park, S., & Sung, G. Y. (2018).
Development of wrinkled skin-on-a-chip (WSOC) by cyclic uniaxial stretching.
Journal of Industrial and Engineering Chemistry, 68, 238–245.
https://doi.org/10.1016/j.jiec.2018.07.050
Velasco, D., Quílez, C., Garcia, M., del Cañizo, J. F., & Jorcano, J. L. (2018). 3D
human skin bioprinting: a view from the bio side. Journal of 3D Printing in
Medicine, 2(3), 141–162. https://doi.org/10.2217/3dp-2018-0008
Wagner, I., Materne, E.-M., Brincker, S., Süßbier, U., Frädrich, C., Busek, M., …
Marx, U. (2013). A dynamic multi-organ-chip for long-term cultivation and
substance testing proven by 3D human liver and skin tissue co-culture. Lab on a
Chip, 13(18), 3538. https://doi.org/10.1039/c3lc50234a

MAGNETIC ION CHANNEL ACTIVATION TECHNOLOGY: CONTROLLING CELLULAR PROCESSES IN LIVING ORGANISMS
Kateryna Rachek
Faculty of Physics and Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: ion channel, calcium channel modulation, nanoparticles, angiogenesis.
Introduction. Second messengers are crucial molecules that transmit the effects
of neurotransmitters and hormones within cells. Notable second messengers include
cAMP, cGMP, DAG, IP3, and Ca2+ ions. These molecules bind to specific proteins,
modifying their activity and transmitting signals within cells. Dysregulation of second
messengers can lead to various diseases, such as cancer. This paper explores the role
of ion channels in controlling calcium levels in cells and their connection to
angiogenesis, the formation of new blood vessels.
Objectives. Endothelial cells, key players in angiogenesis (Debir et al., 2021),
determine their phenotype as tip cells (migratory) or stalk cells (following)
(Venkatraman et al., 2016). Research indicates that rapid oscillations in intracellular
calcium concentration (Atri et al., 1993) are critical for phenotype selection and vessel
architecture. Mechanical wall shear stress, influenced by blood flow, also regulates
intracellular calcium levels through mechanosensitive calcium ion channels. This
paper discusses experimental methods for controlling these channels using external
magnetic fields, a technology known as magnetic ion channel activation.
The primary question is whether external magnetic fields, in the presence of
artificial or biogenic magnetic nanoparticles integrated into cell membranes, can
control the selection of endothelial cell phenotypes in the process of angiogenesis (or
any other cellular process).
Methods. The Henstock et al. (2014) experiment involved conjugating
nanoparticles with antibodies targeting the TREK1 ion channel on human
mesenchymal stem cells. These cells were manipulated using a custom vertical
oscillating magnetic bioreactor that generated an oscillating magnetic field.
The Gregurec et al. (2020) experiment used colloidal-synthesized magnetite
nanodiscs attached to ganglion explant cells in the spinal cord to activate the
mechanosensitive cation channel TRPV4 via weak magnetic fields.
In the study by Hao et al. (2019), a magnetosensitive biomaterial induced
nanodeformation in MC3T3-E1 mouse cells, promoting mechanical stress for the
stimulation of the mechanosensitive protein Piezo1 and accelerating osteogenesis
under the influence of a static magnetic field.
Results. The Magnetic Ion Channel Activation technology enabled researchers
to remotely control tissue regeneration and healing processes.
Conclusion. While further research is needed to fully understand the nuances of
magnetic field modulation on secondary messengers within cells, the potential for this
influence is evident and holds promise for future applications.
References
Atri, A., Amundson, J., Clapham, D., & Sneyd, J. (1993). A single-pool model for
intracellular calcium oscillations and waves in the Xenopus laevis oocyte.
Biophys J, 65. https://doi.org/10.1016/S0006-3495(93)81191-3
Debir, B., Meaney, C., Kohandel, M., & Unlu, M.B. (2021). The role of calcium
oscillations in the phenotype selection in endothelial cells. Sci Rep, 11, 23781.
https://doi.org/10.1038/s41598-021-02720-2
Gregurec, D., Senko, A.W., Chuvilin, A., Reddy, P.D., Sankararaman, A., Rosenfeld,
D., Chiang, P.-H., Garcia, F., Tafel, I., Varnavides, G., Ciocan, E., Anikeeva, P.
(2020). Magnetic Vortex Nanodiscs Enable Remote Magnetomechanical Neural
Stimulation. ACS Nano, 14, 8036.
https://pubs.acs.org/doi/10.1021/acsnano.0c00562
Hao, L., Li, L., Wang, P., Wang, Z., Shi, X., Guo, M., & Zhang, P. (2019). Nanoscale, 11, 23423–23437. https://doi.org/10.1039/C9NR07170A
Henstock, J.R., Rotherham, M., Rashidi, H., Shakesheff, K.M., & El Haj, A.J. (2014). Remotely Activated Mechanotransduction via Magnetic Nanoparticles Promotes Mineralization Synergistically with Bone Morphogenetic Protein 2: Applications for Injectable Cell Therapy. Stem Cells Transl. Med., 3, 1363–1374. https://doi.org/10.5966/sctm.2014-0017
Venkatraman, L., Regan, E.R., & Bentley, K. (2016). Time to Decide? Dynamical
Analysis Predicts Partial Tip/Stalk Patterning States Arise during Angiogenesis.
PLoS One, 11, e0166489. https://doi.org/10.1371/journal.pone.0166489

BRAZILIAN TRADITIONAL MEDICINE: ASPECTS AND PERSPECTIVES
Olena Ralko, Nataliia Hyshchak
Educational and Scientific Institute for Publishing and Printing, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Brazilian traditional medicine, cultural heritage, medicinal plants, traditional knowledge, biodiversity.
Introduction. Brazilian Traditional Medicine (BTM) is deeply rooted in the
cultural heritage of Brazil, influenced by a blend of indigenous practices, European colonization, and African traditions. This work explores the historical aspects and the current state of BTM,
emphasizing its potential in the context of modern pharmaceutical innovation and the
need for proper regulatory frameworks.
Objectives. The main goal of this work is to examine the historical development of Brazilian traditional medicine.
Methods. The work uses a qualitative approach: the literature on traditional medicine and the progress of research in this area is analyzed. Researchers have emphasized the importance of regulatory standards for the effective use of medicinal plants. The current state of BTM is also assessed and its potential for the development of modern pharmaceuticals described. Delving into the origins and evolution of Brazilian traditional
medicine, scientists examined data and trends in the use of medicinal plants in the
Brazilian health care system. In addition, they familiarized themselves with sanitary
norms and rules (Gurgel, 2011).
From a historical perspective, BTM has developed through a combination of indigenous, local, European and African influences, with the use of
medicinal plants as a central element. European colonization and the arrival of African
slaves enriched BTM, infusing mystical elements into its practices.
It is now widely believed that Brazil’s rich plant biodiversity is under threat due
to economic activity, which is causing a decrease in the availability of medicinal plants.
Despite this, there is growing interest in traditional medicine, focusing on the role of
shamans and the use of medicinal plants.
Research has shown that natural products, including medicinal plants, can serve
as a foundation for innovative pharmaceuticals. Brazil’s abundance of plant species
and a well-established traditional medicine system present significant opportunities for
the development of biologically active products (Calaça, 2002).
Regulation of herbal medicinal products varies globally. In the European Union
and Brazil, specific standards and regulations have been established to ensure the safety
and quality of these products (Carvalho, 2009).
Results. Brazil does not stand still: every day, hundreds of people work to improve its system of medicine and promote it on the world market, making it recognizable everywhere.
Conclusion. Brazilian Traditional Medicine holds promise in leveraging natural
resources and scientific research to develop safe and effective products. However,
effective regulation and stringent quality control are crucial for maximizing this
potential. Brazil, with its rich biodiversity and traditional medicinal knowledge, is well-
positioned to blend scientific advancements with traditional practices for the benefit of
its healthcare system. The preservation and restoration of both plant resources and
traditional knowledge are essential in this case.
References
Calaça, C.E. (2002). Medicines and Medicinal Plants in the Tropics: Aspects of the Formation of Western Pharmaceutical Science. History, Sciences, Health – Manguinhos. https://www.scielo.br/j/hcsm/a/rZvTrvmCVR7HrFwpbLxYbCs/?lang=pt
Carvalho, A. B. (2009). Regulation of Plants and Herbal Medicines in Brazil.
https://www.redalyc.org/pdf/856/85680103.pdf
Gurgel, C. (2011). Diseases and Cures: Brazil in the Early Centuries Context.
https://books.google.com.ua/books?hl=ru&lr=&id=J9m-
EAAAQBAJ&oi=fnd&pg=PA7&dq=related:W30mNcCGduwJ:scholar.google
.com/&ots=mDds73F0UD&sig=OkHCTc9J1eDYuENnyT4zrMtR1qQ&redir_
esc=y#v=onepage&q&f=false

ECOCIDE AS AN ECOLOGICAL DISASTER AGAINST THE BACKGROUND OF RUSSIA’S CRIMINAL AGGRESSION ON THE TERRITORY OF UKRAINE
Dmytro Rusiev
Faculty of Sociology and Law, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: ecocide, environmental catastrophe, epistemological analysis.


Introduction. On February 24, 2022, the Russian Federation initiated a full-
scale war against Ukraine, which, as of the date of writing this work, continues. In
addition to the war with Ukraine itself being a crime and a violation of international
law, the conduct of Russian Federation forces during their offensive has resulted in
numerous war crimes, including violations against the natural environment and the
safety of humanity as a whole. Currently, the topic of the war and its consequences is
one of the most pressing issues for domestic and a significant portion of foreign
scholars from various fields, including the field of environmental law. Furthermore,
against the backdrop of the aforementioned events and methods of waging war, the
concept of “ecocide” has regained relevance for the first time in several decades, and
in this context, there is a need to examine its main manifestations in the territory of
Ukraine.
Objectives. The purpose of this work is to investigate scientific and normative
(national and international) sources to define the concept of “ecocide,” elucidate its
fundamental characteristics, and outline its elements as a criminal offence against the
safety of humanity. This research will also highlight and characterize, using official
and currently available information from the Ministry of Environmental Protection and
Natural Resources of Ukraine, the most serious crimes related to ecocide committed
by the Russian Federation's forces during the period of the war. These actions include
the destruction of forests and natural reserve sites, as well as damage to nuclear
facilities.
Methods. According to Article 441 of the Criminal Code of Ukraine, ecocide
includes actions such as mass destruction of plant or animal life, pollution of the
atmosphere or water resources, as well as other actions that may cause an
environmental catastrophe (Kryminalnyi kodeks Ukrayiny, 2001).
This provision of the Code does not establish specific requirements regarding
the perpetrator of the criminal offence, leading to the conclusion that we are dealing
with a general subject, meaning that the commission of this offence is subject only to
the general requirements stipulated in Part 1 of Article 18 of the Criminal Code of
Ukraine. The primary immediate object, according to the Special Part of the Code, is
the safety of humanity, while the secondary object is the environment. The objective
side involves the actual mass destruction of plant or animal life, pollution of the
atmosphere or water resources, and the commission of other actions that may cause an
environmental catastrophe. As for the subjective side of the criminal offence, it
involves direct intent. Mass destruction of plant or animal life refers to the complete or
partial extermination of them in a specific territory of the Earth, while pollution of the
atmosphere or water resources entails the dispersion in the air, rivers, lakes, seas,
oceans, and other bodies of water of a fairly large quantity of toxic substances of
biological, radioactive, or chemical origin that can lead to severe illnesses and even
death in humans. This type of action also affects representatives of flora and fauna, as
it leads to their extinction and demise (Ukolova & Ukolov, 2021).
Results. An issue that has not gained as much publicity as others, yet is equally important among the encroachments on the natural environment, concerns the actions carried out by the Russian occupiers against some of Ukraine's most vulnerable sites, namely the objects of the Natural Reserve Fund of Ukraine.
In fact, as of April 12, 2022, Russian military forces were conducting military
operations on the territory of 900 objects of the natural reserve fund covering an area
of 1.24 million hectares, which is about a third of the total area of the natural reserve
fund of Ukraine. This includes approximately 200 territories of the Emerald Network
covering an area of 2.9 million hectares that are under threat of destruction.
Additionally, on April 5, the private zoo “Feldman Ecopark” in Kharkiv was destroyed
(2022).
The same can be said about Chornobyl. Russian occupation forces have left the
exclusion zone of the Chornobyl Nuclear Power Plant. However, before that, they
caused damage that led to increased radiation risks and explosive threats. The occupiers
stole the fleet and specialized equipment of the SSE “Center for Radioactive Waste
Management” and the SSE “Northern Forest” – trucks, tractors, special machinery and
mechanisms, fire trucks, as well as fuel reserves. In case of a deterioration of the
situation at the facility, this jeopardizes the provision of fire and radiation safety in the
exclusion zone. In addition, the physical protection system of the complex for
deactivation, transportation, processing and disposal of radioactive waste of the State
Specialized Enterprise “Vector” suffered several damages. In this facility, office and
residential premises are destroyed. The occupiers also made extensive fortifications
and positions on the territory of the “Red Forest,” which led to the release of highly
radioactive dust. The monitoring results on the site recorded a significant increase in
radiation (2022).
Conclusion. The research successfully achieved all initial goals and objectives.
Specifically, based on the analysis of legal doctrine and sources of national and
international legislation, the definition of “ecocide” was provided, and through
epistemological analysis, its main properties were disclosed. The composition of
ecocide, according to the Criminal Code of Ukraine, as an offence against human
safety, was formulated and presented. The study also illuminated and characterized the
most serious crimes related to ecocide committed by the armed forces of the Russian
Federation during the war in Ukraine. These include actions such as the destruction of
forests and objects of natural reserve funds, as well as damage to nuclear facilities.
References
Kryminalnyi kodeks Ukrayiny vid 05.04.2001 No.2341-III [The Criminal Code of Ukraine dated April 5, 2001 No. 2341-III].
https://zakon.rada.gov.ua/laws/show/2341-14#n3053
Pivtora misyatsi viyny: zlochyny proty dovkillya (2022, April 12) [One and a half
months of war: Crimes against the environment]. Yurydychna Hazeta.
https://yur-gazeta.com/dumka-eksperta/pivtora-misyaci-viyni-zlochini-proti-
dovkillya.html
Ukolova, V.O., & Ukolov, Y.O. (2021) Problema ekotsydu yak ekolohichnoho
zlochynu: ukrayinskyi ta mizhnarodnyi dosvid [The Problem of Ecocide as an
Environmental Crime: Ukrainian and International Experience].

3D BIOPRINTING
Oleksandr Rybka
Faculty of Sociology and Law, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: 3D bioprinting, bioprinting process, organ reproduction, bio-inks, tissue engineering.
Introduction. The artificial reproduction of human skin, tissues, and internal
organs may seem like science fiction, but today it has become an objective reality. In
research centres and hospitals worldwide, advancements in 3D printing and bioprinting
are opening up new possibilities for human treatment and scientific research. In the
coming decade, bioprinting could become a significant milestone in healthcare and
personalized medicine.
Objectives. 3D bioprinting on a three-dimensional printer enables the creation
of organ models and holds vast potential in cultivating healthy and living organs to
replace damaged or missing ones. For bioprinting, in addition to the three-dimensional
printer, a model of the required organ, human cells, and an environment for organ
preservation until transplantation is needed. Organs created through bioprinting far
surpass prosthetics and transplants. They perform the same functions as natural organs
and are not rejected by the body if constructed from the patient's DNA. Bioprinting
reduces the time needed to obtain the necessary organ and helps save the lives of
patients in urgent need of transplantation.
The production of transplants is a highly crucial process continuously studied by
scientists. Many people suffer injuries or damage to their tissues or organs.
Unfortunately, not everyone can afford transplants, primarily due to a shortage of donor
organs. To address this issue, tissue engineering experts are actively working on
creating substitutes to enhance therapeutic methods. Successful solutions in this field
mean that anyone who has suffered tissue or organ damage can quickly receive an
engineered replica in the clinic and fully restore their health. Naturally, creating
a complete human organ is still a distant goal, but significant progress has been made
in this field.
Methods. The process begins with computer modelling and creating a digital
model of the future biostructure using a coordinate system. This is the initial stage,
which involves a virtual presentation, followed by the bioprinting process.
Subsequently, information from the computer is transmitted to a robot, which,
following instructions, prints the programmed model using bio-inks containing
mixtures of living cells. Since each tissue consists of different cell types, specific organ
or tissue cells are taken from the patient and cultured until there are enough of them to
create the bio-ink. Special biomaterials that mimic the extracellular matrix structure,
known as the intercellular matrix, are typically used as the bio-paper. Biodegradable
synthetic or natural polymers such as hydrogels based on gelatin, chitosan, or alginate
are commonly chosen for this purpose. These gels serve various functions, including
acting as temporary scaffolds, promoting cell growth, and facilitating cell adhesion to
the matrix (Herhelizhiu, 2022).
Results. In San Diego, it was discovered that it is possible to create a liver by
layering living cells. By utilizing the natural regenerative ability of the liver, scientists
have successfully generated a functional organ that works like a healthy liver. It was
capable of filtering toxins and medications and facilitating nutrient exchange for
40 days. This achievement surpassed a previous record set in April of this year when
a fragment of a liver could only function for a little over five days. However, scientists
emphasize that this success in creating a bio-printed liver does not necessarily indicate
readiness for organ transplant surgeries involving organs created using 3D technology.
A real liver also contains intricate networks of blood vessels to support its functioning,
an aspect that remains unresolved (2019).
Bioengineers at the Institute of Regenerative Medicine in Wake Forest, USA,
have shifted their focus to a more “rigid and specific material” and developed an
extraordinary 3D printing technology that enables the creation of full replicas of
individual bones, muscles, and cartilage using stem cells. Until now, scientists could
only print very thin layers of living tissue (up to 200 microns thick), as nutrients and
oxygen couldn't penetrate to greater depths without the presence of blood vessels,
causing the tissue to perish. In this case, bioengineers used a special polymer that
allowed them to layer cells with a small gap between them. After printing, the organoid
is introduced into a mouse's body, where it gradually “grows” new blood vessels, and
the polymer decomposes. As a result, a full-fledged organ with the required three-
dimensional shape and all necessary types of tissues is formed in the initial structure’s
place. Additionally, British surgeons have, for the first time, 3D printed a hip joint for
subsequent endoprosthesis and used a patient's stem cells to secure the implant in place.
This implant for a 71-year-old patient at the University of Southampton Hospital was
created based on 3D models generated from detailed CT scans. A titanium powder was
used as the material, which was melted in thin layers by a laser beam (2018).
Conclusion. Complex human organs, such as kidneys, heart, and lungs, have
proven elusive for regenerative surgeons to grow. However, the bioprinting of “simple”
organs is already a reality in the United States, Sweden, Spain, and Israel, where
clinical trials and specialized programs are underway. Some researchers believe that
the technology for bioprinting fully complex organs may become a reality in a few
decades, while the ability to grow cartilage and skin may be achievable in just a few
years.
Our organs are the result of millions of years of evolution, shaping their
structures by eliminating inefficient elements and favouring the development of the
most functional components. Modern tissues and organs are incredibly complex, so
addressing numerous technical challenges is required to create organs with nerves,
blood vessels, and other essential components to ensure their functionality.
Meanwhile, when technology advances to the point where printing complete
organs becomes possible, other intricate questions will arise.
References
Herhelizhiu, A. (2022, February 12). Shcho take biodruk? Vin skhozhyi na zvychainyi
druk na paperi? Yaki chornyla vin vykorystovuie ta dlia choho? [What is
bioprinting? Is it similar to regular paper printing? What inks does it use and
for what purpose?]. https://nauka.ua/card/shcho-take-biodruk-vin-shozhij-na-
zvichajnij-druk-na-paperi-yaki-chornila-vin-vikoristovuye-ta-dlya-chogo
Liudski orhany na prynteri, robotyzovani protezy i 3d-modeli chastyn tila: medytsyna
maibutnoho v ukraini ta sviti [Human organs on a printer, robotic prosthetics,
and 3d models of body parts: the future of medicine in ukraine and the world].
(2018, February 5). https://thepharma.media/publications/articles/18766-
ljudski-organi-na-printeri-robotizovani-protezi-i-3d-modeli-chastin-tila-
medicina-majbutnogo-v-ukraini-ta-sviti

Nadrukovana na 3d-prynteri pechinka zdatna normalno funktsionuvaty protiahom 40
dniv [3d-printed liver can function normally for up to 40 days]. (2019, November
7). https://news.vn.ua/nadrukovana-na-3d-prynteri-pechinka-zdatna-normal-no-
funktsionuvaty-protiahom-40-dniv/

LASER THERAPY IN THE TREATMENT OF ACNE


Diana Samoray
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: acne vulgaris, treatment, laser therapy, skin diseases.


Introduction. Acne vulgaris (AV) is one of the most widespread skin diseases.
It is common among teenagers and adolescents because the skin changes under the
influence of hormones. Typically, acne appears on the face and sometimes on the
upper back and chest. A patient with acne has blackheads, pimples and pustules that
spoil the mood and lower self-assurance; 52% of patients with acne on the face also
have a rash on the torso. Laser therapy is a type of therapy that uses laser emission
to treat various diseases, and it may be used to treat acne.
Objectives. The main aim is to understand how lasers help to cure acne and to
assess the efficiency of laser treatment of skin diseases.
Methods. A review of recently published research on the treatment of acne with
laser therapy, the impact of lasers on the skin, side effects of laser therapy, and
serum level changes of inflammatory cytokines.
Comparing diverse studies is useful when choosing between the two light-based
options, photodynamic therapy and laser therapy. Studies show that laser therapy
cures acne vulgaris, although their results differ slightly.
Results. Laser therapy of acne is quite efficient. Possible side effects of laser
therapy are erythema, bruising, edema, scarring, hyperpigmentation,
hypopigmentation, but they are rare. Inflammation was significantly reduced on both
the treated and untreated sides, and symptoms of AV lesions were alleviated. All
patients showed a significant increase in serum IL-22 levels after the first laser therapy,
with no significant difference in serum IL-6 and IL-8 levels (Liu, Sun, 2023).
Conclusion. Acne is a chronic inflammatory skin disease. The percentage of
cases of erythema was slightly higher in patients who received PDT than in patients
who were only treated with pulsed dye laser. Only 1 case of severe erythema was
reported following PDT and no cases were reported following application of the pulsed
dye laser (Garcia-Morales et al., 2010). Laser therapy can be useful for treating
the skin because its side effects are rare, and the technique will continue to develop.
References
Garcia-Morales, I., Harto, A., Fernandez-Guarino, M., & Jaen, P. (2010). Photodynamic
Therapy for Acne: Use of the Pulsed Dye Laser and Methylaminolevulinate.
Actas Dermo-Sifiliograficas (English Edition), 101(9), 758–770.
https://doi.org/10.1016/S1578-2190(10)70714-9


Liu, Ya., Sun, Q., Xu, H., Ma, G., & Wu. P. (2023). Serum level changes of
inflammatory cytokines in patients with moderate to severe acne vulgaris treated
with dual-wavelength laser. Chinese Journal of Plastic and Reconstructive
Surgery, 5(2), 47–52. https://doi.org/10.1016/j.cjprs.2023.05.001

ARTIFICIAL INTELLIGENCE AND PERSONALIZED MEDICINE


Alex Samoylenko
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: Machine learning, personalized medicine, artificial intelligence.


Introduction. Modern artificial intelligence is saturating various fields of
science, and one of the most promising areas is personalized medicine. This field, also
known as precision or personalized medicine, combines the powerful capabilities of
artificial intelligence with the healthcare industry. The result is a revolution in the way
we think about diagnosis, treatment and patient care.
Objectives. One of the key aspects of using artificial intelligence in personalized
medicine is analysing patients’ genetic data to detect genetic diseases early. AI-
powered tools can quickly and accurately analyse genomic sequences and identify
genetic variants associated with various diseases, including cancer. This allows for the
development of individualised treatment plans for patients based on their genetic
characteristics.
IBM Watson’s Watson for Genomics project is an excellent example of this
approach. This artificial intelligence system is used to analyse the genetic data of
patients, especially those with cancer. Watson helps doctors determine the best
treatment strategies and even suggests clinical trials for new treatments, taking into
account the genetic characteristics of each patient (IBM, 2018). With the help of
artificial intelligence, personalized medicine can improve treatment outcomes and
make them more effective. However, it is important that healthcare professionals are
well versed in the principles of AI and its applications in medicine in order to make the
most of these opportunities for the benefit of their patients.
In the field of predictive analytics, artificial intelligence makes it possible to
generate predictions and analyse patient data, which contributes to more accurate
diagnosis and treatment. Health Catalyst's Predictive Analytics for Healthcare project
is a great example of this approach (Health Catalyst's, 2021). It predicts the risk of
hospitalisation and disease progression, allowing medical professionals to provide
personalized and proactive care. Interpretable AI models and distributed machine-
learning systems are well suited to these tasks: they allow researchers not only to
advance medical science by finding new patterns and racial, gender and age
characteristics of people, but also to form more accurate data on the health status of
the population in specific regions.
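The core of such a risk model can be illustrated with a minimal logistic-regression sketch; the features, weights and bias below are invented for illustration and are not taken from the Health Catalyst product:

```python
import math

def hospitalisation_risk(features, weights, bias):
    """Toy logistic-regression risk score: sigmoid of a weighted feature sum.

    All parameters here are hypothetical placeholders, not values from any
    real clinical model.
    """
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feature vector: [age / 100, prior admissions, chronic conditions]
WEIGHTS = [1.0, 0.8, 1.2]
BIAS = -2.0

low_risk = hospitalisation_risk([0.30, 0, 0], WEIGHTS, BIAS)   # younger, no history
high_risk = hospitalisation_risk([0.80, 3, 2], WEIGHTS, BIAS)  # older, heavy history
```

The score stays in (0, 1), so it can be read as a probability-like risk and thresholded by clinical staff.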
The final aspect is the development of new drugs, where artificial intelligence is
proving to be extremely useful. The Exscientia project uses machine learning and deep
learning algorithms to analyse molecular structures and identify potential links between


molecules and diseases (Exscientia, 2023). This approach can significantly speed up
the process of developing new drugs, reducing time and resources.
Conclusion. Overall, AI-powered personalized medicine shows great potential
to improve health outcomes and expand the possibilities of medicine. This evolution
in the healthcare industry opens up new opportunities for personalised treatment and
improves the overall health of patients. However, along with the potential, there are
also certain threats to consider.
One of the most important threats is the privacy of patient data. When processing
large volumes of medical data to create personalized treatment strategies, other
personal data of patients may be vulnerable to privacy breaches. It is important to
develop robust data protection systems to ensure that patients' personal information
remains confidential and secure.
References
Exscientia. (2023). Exscientia – Artificial Intelligence in Drug Discovery.
https://www.viennabiocenter.org/companies/biotech-companies/exscientia/
Health Catalyst’s. (2021). Predictive Analytics for Healthcare – A 4-Step Framework.
https://www.healthcatalyst.com/insights/predictive-analytics-healthcare-4-step-
framework
IBM Watson. (2018). IBM Watson Genomics at the European Hospital.
https://www.labiotech.eu/trends-news/ibm-watson-genomics-european-hospital/

AEROPONICS: THE FUTURE OF FARMING


Yurii Severyn
Faculty of Applied Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: aeroponics, vertical farming, sustainable agriculture.


Introduction. Today, as the population grows rapidly, soils are depleted, and
more and more people are concerned about the ecological state of the planet, the future
of agriculture is being rethought. Traditional farming methods that rely on tillage are
facing new challenges, from water scarcity to unpredictable weather patterns. Unlike
soil farming, which is inherently resource-intensive, and hydroponics, which requires
a nutrient-rich aqueous solution, aeroponics takes a minimalist approach, growing
plants in an environment where their roots are suspended in air. In this innovative
technique, plants receive the nutrients, water and oxygen they need through a finely
dispersed mist or aerosol solution, providing them in a precisely controlled
microenvironment.
Objectives. The main goal of this study is to comprehensively understand and
highlight the multifaceted potential of aeroponics in shaping the future of sustainable
agriculture. This involves delving into the basic principles of aeroponics, emphasizing
its role in resource efficiency, climate resilience and increased yields.
Methods. The primary research approach used in this study is the analysis and
synthesis method, which is used to comprehensively examine and distill recent research
findings relevant to the topic. Electronic databases, including PubMed, and research


articles from the MDPI platform, were utilized to compile a comprehensive dataset.
The search process used a combination of keywords including “Aeroponics”,
“Sustainable Agriculture” and “Vertical Farming”.
Results. Aeroponic cultivation was reported to achieve greater root, plant and
fruit mass when aeroponic and hydroponic cultivation were compared directly
(Eldridge et al., 2020). In addition, the latest methods use high-tech solutions. The
environment is monitored and controlled by a computer and software, and sensors
inside the cube communicate with the computer to optimize the microclimate.
Aeroponic methods make it possible to create a microclimate free of pests and disease-
causing microorganisms thanks to germicidal ultraviolet lamps and high-efficiency
particulate air (HEPA) filters. It's also worth noting that aeroponics can use
much less water: 30% less than hydroponics and 95% less than outdoor growing (Al-
Kodmany, 2018), thanks to less evaporation from the growing medium installed in the
system and a smaller volume of water in general.
Conclusion. To sum up, the latest research findings provide compelling
evidence that aeroponics is not just a passing trend but a promising revolution in the
world of agriculture. The future of farming is unfolding before our eyes with
aeroponics offering a multitude of benefits. This innovative approach makes the most
of limited space, conserves water, minimizes the use of chemicals, and allows crops to
be grown year-round regardless of geographic location or climate. In addition,
aeroponics promises to reduce the carbon footprint associated with agriculture while
significantly increasing yields. This visionary method promises to change the
agricultural landscape, solving the challenges of today and tomorrow.
References
Al-Kodmany, K. (2018). The Vertical Farm: A Review of Developments and
Implications for the Vertical City. Buildings, 8(2), 24.
https://doi.org/10.3390/buildings8020024
Eldridge, B. M., Manzoni, L. R., Graham, C. A., Rodgers, B., Farmer, J. R., & Dodd,
A. N. (2020). Getting to the roots of aeroponic indoor farming. New Phytologist,
228(4), 1183-1192. https://doi.org/10.1111/nph.16780

COMPARATIVE ANALYSIS OF MODERN HYDROGEL APPLICATIONS
FOR TEETH REGENERATION AFTER CARIES
Elvira Shemena
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: regenerative medicine, teeth, caries, hydrogels.


Introduction. According to the Global Oral Health Status Report by World
Health Organization (WHO), approximately 2 billion individuals worldwide are
affected by dental caries in their permanent teeth, which is considered a critical oral
health issue (WHO, 2022). Classic fillings are a temporary
solution to stop tooth decay, but they do not restore native tissue, which can lead to
future tooth loss due to insufficient strength of the filling or the risk of re-infection.


Nowadays, hydrogel-based materials are highly desirable in tissue engineering for
repair due to their biocompatibility, ability to degrade over time while releasing
drugs slowly, capability to mimic extracellular matrix (ECM) structures accurately,
and provision of a template for mineral deposition. As a result of these exceptional
properties hydrogels have gained significant attention in research on tooth regeneration
and remineralization processes (Roberts et al., 2022).
Objectives. To evaluate the potential of hydrogel in promoting tooth tissue
regeneration of different layers (enamel, dentine, pulp).
Methods. In recent studies, the regenerative potential of hydrogels in promoting
tooth tissue regeneration has been investigated. The literature search was conducted in
electronic databases and journals such as PubMed, MDPI, Springer etc. The main
search criteria were publications no older than three years. These included reviews and
original studies that presented data on the regeneration of different tooth layers,
especially pulp, dentin and enamel.
The availability of a large amount of experimental data makes it possible to
understand the real efficiency and characteristics of the regeneration process for
a particular layer, as well as to find significant knowledge gaps, to further improve
methods or to collect missing data.
Results. Hydrogels have emerged as a superior choice for dental pulp
regeneration due to their exceptional physical and chemical properties, enabling them
to effectively mimic the intricate structures of individualized dental pulp cavities.
These hydrogel materials exhibit excellent biocompatibility with various types of cells,
allowing for the creation of hybrid cell constructs that facilitate the regeneration of pulp
tissue rich in nerves and blood vessels. Additionally, hydrogel microspheres can serve
as efficient sustained-release carriers, ensuring controlled drug delivery over extended
periods of time (Zhang et al., 2023).
The research on hard tissue (dentin, enamel) regeneration is not as extensive and
in-depth as that of pulp tissue. However, the combination of hydrogel and biomimetic
polypeptide based on biomineralization principles shows promise for regenerating
tooth-hard tissue. While hydrogels have demonstrated effectiveness in preventing
caries, dentin hypersensitivity, and pulp capping, there are challenges to their
application in translational research and clinical practices (Yu et al., 2021).
Conclusion. Recent advancements in hydrogel-based materials have shown
great potential for the regeneration of pulp, and hard tissue mineralization. However,
despite the development of various types of hydrogels there are still several challenges
such as: enhancing the biocompatibility of hydrogel materials to more effectively
induce differentiation in odontogenic stem cells; modification of mechanical properties
for better aligning with the desired hardness levels of different tooth structures;
investigation and development of more convenient preservation methods; and
overcoming the limitations of current delivery methods for hydrogel based therapies
(Zhang et al., 2023). In addition, gathering sufficient data and conducting large-scale
clinical trials remain significant.


References
Roberts, W. E., Mangum, J. E., & Schneider, P. M. (2022, February 1). Pathophysiology
of Demineralization, Part I: Attrition, Erosion, Abfraction, and Noncarious
Cervical Lesions. https://doi.org/10.1007/s11914-022-00722-1
WHO’s global oral health status report 2022: Actions, discussion and .... (n.d.).
https://onlinelibrary.wiley.com/doi/full/10.1111/odi.14516
Yu, L., & Wei, M. (2021). Biomineralization of collagen-based materials for hard
tissue repair. International Journal of Molecular Sciences, 22(2), 944.
https://doi.org/10.3390/ijms22020944
Zhang, Z., Bi, F., & Guo, W. (2023, March 20). Research Advances on Hydrogel-
Based Materials for Tissue Regeneration and Remineralization in Tooth.
https://doi.org/10.3390/gels9030245

ENVIRONMENTAL SCIENCE AND NATURE CONSERVATION:
STRATEGIES FOR EFFECTIVE RESOURCE MANAGEMENT
Kateryna Shchur
Faculty of Management and Marketing, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”
Svitlana Volkova
Faculty of Linguistics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: environmental science, nature conservation, strategies, effective
resource management.

Introduction. The contemporary world is grappling with formidable challenges
in the field of environmental science and nature conservation, which primarily arise
from the detrimental impacts of technology and industrial development on the natural
environment. Environmental science and conservation constitute an interdisciplinary
field of research dedicated to the examination of natural systems, their effects on
humanity, and humanity's influence on the environment (LibreTexts.1, 2022;
Protection of the environment, n.d.). This scientific discipline encompasses a broad
spectrum of aspects, ranging from biology and ecology to economics and politics.
There is a growing acknowledgment of the pressing urgency to safeguard natural
resources and biodiversity. This urgency necessitates the refinement of strategies for
managing resources within the realm of environmental science. While the existing
body of literature provides valuable insights, there remains a conspicuous gap in the
identification of comprehensive strategies that seamlessly integrate ecological,
economic, and social dimensions to mitigate environmental degradation while
simultaneously ensuring long-term competitiveness.
Objectives. The primary objective of this study is to formulate and present
a comprehensive set of effective strategies for the management of environmental
resources and the conservation of nature.


Methods. The method of combining analysis, argumentation and
systematization of information was used to support and develop the concept of
effective management of environmental resources and nature conservation.
Results. This undertaking is driven by the recognition that environmental
strategies should encapsulate ecological considerations, harness economic benefits,
and embrace diversity as integral components. These strategies aim to harmonize
environmental and strategic facets to bolster competitiveness, steering endeavors
toward the discovery of potential points of convergence among these diverse
dimensions. Consequently, environmental strategies are instrumental in securing
competitive advantages, which are indispensable for ensuring the enduring viability of
organizations while minimizing adverse impacts on the natural environment across the
entire lifecycle of products (Pakhomova, & Richter, 1999).
To achieve effective management of environmental resources and nature
protection, it is important to develop and implement strategies aimed at preserving and
restoring ecosystems and rational use of natural resources:
1. Legislative Regulation:
 Implementation and strengthening of laws and regulations related to
nature conservation and resource use.
 Imposition of fines and penalties for violators.
2. Economic Instruments:
 Introduction of taxes, levies, and other economic incentives to promote
sustainable resource use and low-carbon production.
 Establishment of carbon pricing markets and emissions trading.
3. Sustainable Resource Use:
 Transition to sustainable forestry, fisheries, and agriculture.
 Application of circular economy principles and recycling.
4. Biodiversity Conservation:
 Creation and protection of ecologically important areas and marine
reserves.
 Programs for ecosystem restoration and rehabilitation.
5. Cooperation and Education:
 Development of collaboration between government, civil society, and
private sectors for joint environmental problem-solving.
 Public education and awareness of the importance of nature conservation.
6. Research and Innovation:
 Funding research and development of new technologies to reduce
environmental impact.
 Exploration of alternative energy sources and materials with lower CO2
emissions.
7. International Cooperation:
 Participation in international treaties and agreements to address global
issues such as climate change and biodiversity.
These approaches demonstrate that effective resource conservation requires coordinated action on several fronts at once.


The study’s conclusions underscore the significance of adopting a holistic
approach to environmental resource management and nature conservation. By
implementing the proposed framework, which includes legislative regulation,
economic instruments, sustainable resource use, biodiversity conservation,
cooperation, education, research, and international collaboration, we can effectively
address the pressing environmental challenges of our time.
These strategies have the potential to minimize environmental harm, preserve
natural resources, and safeguard biodiversity while promoting sustainable practices
and ensuring a harmonious coexistence between human activities and the natural
world.
Furthermore, the study highlights the importance of ongoing research and
innovation in developing technologies and practices that reduce environmental impact.
It emphasizes the need for international cooperation through participation in global
treaties and agreements to tackle overarching issues like climate change and
biodiversity loss.
Conclusion. This research offers a comprehensive roadmap for effective
environmental resource management and nature conservation in today’s world. Its
findings and recommendations provide valuable insights into how we can protect our
environment while sustaining long-term competitiveness and should serve as
a foundation for further studies and practical implementation.
References
LibreTexts. 1. (2022, October 24). Nauka pro navkolyshnje seredovyshhe Peredmova.
[Science about the environment. Preface]. LibreTexts – Ukrayinska.
https://ukrayinska.libretexts.org/Біологія/Екологія/Наука_про_навколишнє_
середовище_(Ha_and_Schleiger)/01%3A_Вступ/1.01%3A_Наука_про_навк
олишнє_середовище_Передмова
Pakhomova, N.V., & Richter, K.K. (1999). Economics of nature use and
environmental management: Textbook.
Protection of the environment. (n.d.). Pharmaceutical encyclopedia.
https://www.pharmencyclopedia.com.ua/article/3147/oxorona-navkolishnogo-
se-redo-vishha

MODELING THE GROWTH PROCESS OF GaAs THIN FILMS


Tanya Siryak
Educational and Research Institute of Physics and Technology, National
Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: numerical modeling, thin films, chemical deposition, kinetics,
LP-MOCVD, PE-CVD, gallium arsenide, solar cell.

Introduction. Considering the fact that with time the world is becoming more
and more technological and developed, the energy needs of our society are growing at
an exponential rate. Today's world requires a large amount of energy to sustain
humanity's vital needs and to sustain our technological research and advances already
made. In this context, solar energy deserves special attention as a clean,


environmentally safe and continuous source of energy available for use in various
fields. One of the key components of thin solar cells is a semiconductor material that
converts solar radiation into electrical energy. Currently, most solar panels are made
on the basis of silicon, however, due to insufficient efficiency, the need for the
development of new materials. One such material is gallium arsenide (Papez, 2021).
Objectives. To model the thermo-gas-dynamic processes inside reactors
designed for the efficient growth of thin GaAs semiconductor films (Vaisman, 2018).
Methods. A review of the literature on thin-film technologies; selection of the
geometry of the research object; choice of a mathematical model that adequately
describes the chemical and physical processes inside the working system designed
for the effective growth of thin semiconductor films; a numerical experiment using
the COMSOL Multiphysics software to establish the dependences of the growth rate
on the substrate temperature, the velocity of the input mixture, the GaAs
concentration and the temperature of the system itself; and investigation of the
modelling results and their verification against experimental data.
Results. Modelling of the LP-MOCVD and PE-CVD thin-film growth methods
was performed using the COMSOL Multiphysics software. Analysis of the results
showed that the simulation is suitable for industrial use, as the obtained data were
verified against a real installation. The parameter values that give the highest growth
rate of the GaAs film were identified. It was established that the LP-MOCVD method
has a limited temperature range within which the highest thin-film growth rate is
achieved. Verification of the simulation results showed deviations of ±5.1% and
±7.3%.
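The kind of substrate-temperature dependence such simulations probe can be sketched with a simple Arrhenius-type rate law; the prefactor and activation energy below are illustrative assumptions, not values fitted to the study:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def growth_rate(t_kelvin, prefactor=1.0, activation_ev=1.5):
    """Arrhenius-type, kinetically limited growth rate (arbitrary units).

    The prefactor and activation energy are hypothetical; a real LP-MOCVD
    model would fit them to reactor data.
    """
    return prefactor * math.exp(-activation_ev / (K_B * t_kelvin))

# In the kinetically limited regime the rate rises steeply with substrate temperature.
rates = [growth_rate(t) for t in (800.0, 900.0, 1000.0)]
```

At higher temperatures real reactors leave this regime (mass transport or desorption limits the rate), which is consistent with the limited optimal temperature range noted above.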
Conclusion. Analysis of simulation results of LP-MOCVD and PE-CVD
methods helps to understand the relationship between system parameters and film
growth rate. This is important information for the improvement and optimization of
production processes related to the deposition of films by LP-MOCVD and PE-CVD
methods.
References
Papez, N. R. D. (2021). Overview of the Current State of Gallium Arsenide-Based
Solar Cells. PubMed Central. 14(2), 1–9.
Vaisman, M. N. J. (2018). GaAs Solar Cells on Nanopatterned Si Substrates. IEEE
Xplore. 18(4),1635–1640.

EFFECT OF ULTRASOUND ON THE BIOSYNTHETIC ACTIVITY AND
STRUCTURE OF BACTERIAL CELLS
Nataliia Snihur
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: ultrasound, microorganisms, cells, bacteria, biosynthetic activity.


Introduction. For many centuries, microorganisms have been used for
producing food, pharmaceutics, biofuels, pesticides, etc. The most interesting property


of these organisms is their ability to form a variety of bioactive compounds that can be
used in the improvement of manufacturing processes.
For a long time, heat treatment, pressure manipulation and other methods were
used to improve the efficiency of microorganisms or to inactivate them, but recently
ultrasound's ability to support the above-mentioned processes has come into practice.
This technique has proven its effectiveness and is now actively used. That is why it is
essential for biotechnologists to know its basic concepts so that they are able to
improve the technology in the future.
Objectives. The main aim is to overview and analyze the influence of ultrasound
on the cells of microorganisms, in particular bacteria, and review works devoted to the
introduction of ultrasound technologies into enzymatic and other processes associated
with increasing the biosynthetic activity of organisms.
Methods. A review of recently published research papers on the influence of
ultrasound on the cells of bacteria and usage of ultrasound technologies to increase the
biosynthetic activity of organisms.
Results. Ultrasound is the vibrational energy produced when electrical energy is
converted into vibrational sound. The ultrasound range is divided into high frequency
low power ultrasound (diagnostic ultrasound) and low frequency high power
ultrasound (used for cleaning and sonochemistry).
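This two-regime split can be summarised in a small helper; the numeric cut-offs (1 MHz, 1 W/cm²) are illustrative assumptions, since the text names the regimes without exact boundaries:

```python
def ultrasound_regime(frequency_hz, intensity_w_cm2):
    """Rough classification into the two ranges named in the text.

    The cut-off values (1 MHz, 1 W/cm^2) are hypothetical; the literature
    draws the line differently depending on the application.
    """
    if frequency_hz >= 1e6 and intensity_w_cm2 < 1.0:
        return "high-frequency, low-power (diagnostic)"
    if frequency_hz < 1e6 and intensity_w_cm2 >= 1.0:
        return "low-frequency, high-power (cleaning, sonochemistry)"
    return "intermediate / application-specific"

# Examples: a 5 MHz diagnostic probe vs. a 20 kHz sonochemistry horn.
diagnostic = ultrasound_regime(5e6, 0.1)
sonochemical = ultrasound_regime(2e4, 10.0)
```

The same distinction recurs in the Conclusion below: the first regime suits analysis and quality control, the second directly alters cell structure and activity.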
Ultrasound influences the cells in many different ways (Figure 1), such as:
thermal (pyrolysis and combustion), non-thermal (cavitation and shearing), chemical
(cavitational induction of radicals) and stress-induced changes (acoustic streaming).
Mostly, those effects are the result of mechanical and hydrodynamic reactions created
by acoustic cavitation and microstreaming (Feng et al., 2011). The main mechanical
effects of cavitation include shock wave damage and microjet shocks. In addition to
the physical effects, there are also chemical ones, such as the sonolysis of water
(H2O → H• + •OH), which leads to the formation of free radicals.
Ultrasound can affect the proliferation of bacterial cells. It promotes the growth
of microbial cells: for example, low-intensity ultrasound creates constant cavitation,
which restores damaged cells and increases the number of metabolic products.
High-intensity ultrasound will not have a similar effect due to its damaging action
on cells. Cleavage of the cell bundles formed during cultivation improves nutrient
and oxygen utilization by the cells. In addition, the pulsation of microbubbles
decreases solid-liquid and gas-liquid mass transfer resistance through the cells.
[Figure 1. Effect of ultrasound on cells (Rokhina et al., 2009)]


Ultrasound is also used to adjust optimal conditions for the growth of
microorganisms; for example, low-intensity ultrasound can accelerate the growth of
S. cerevisiae in the lag phase (Lanchun et al., 2003).
But ultrasound can also damage cytoplasmic membranes and intracellular
components (Kon et al., 2005).
At the molecular level, ultrasound can contribute to the formation and activation
or damage of enzymes. It happens because of protein structure modifications, which
can cause inactivation of enzymes or significantly increase their activity.
Conclusion. As a result of an analytical review of the latest publications, the
main concepts of ultrasound technologies and its influence on various processes in cells
were described. It was found that high-frequency and low-power ultrasound has such
properties and effects that allow it to be used in the analysis of food products and
quality control and low-frequency and high-power ultrasound is used to directly impact
the changes in the structure and activity of microorganisms. Insights into the effect of
ultrasound made it possible to examine the processes that it causes in cells and how it
affects their biosynthetic activity.
References
Feng, H., Barbosa-Cánovas, G. V., & Weiss, J. (Eds.). (2011). Ultrasound technologies
for food and bioprocessing (Vol. 1, p. 599). New York: Springer.
Kon, T., Nakakura, S., & Mitsubayashi, K. (2005). Intracellular analysis of
Saccharomyces cerevisiae using CLSM after ultrasonic
treatments. Nanomedicine: Nanotechnology, Biology and Medicine, 1(2), 159–
163.
Lanchun, S., Bochu, W., Zhiming, L., Chuanren, D., Chuanyun, D., & Sakanishi, A.
(2003). The research into the influence of low-intensity ultrasonic on the growth
of S. cerevisiaes. Colloids and Surfaces B: Biointerfaces, 30(1–2), 43–49.
Rokhina, E. V., Lens, P., & Virkutyte, J. (2009). Low-frequency ultrasound in
biotechnology: state of the art. Trends in biotechnology, 27(5), 298–306.

INVESTIGATION OF BACTERIAL NEGATIVE CHEMOTAXIS WITH THE IN SITU CHEMOTAXIS ASSAY
Yelyzaveta Tokarchuk
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: marine ecosystems, negative chemotaxis, ISCA, indole derivatives, bacterial behavior, biotechnology, environmental science.
Introduction. Marine ecosystems, being significantly structured by microbial
activities, have bacteria playing a pivotal role in biogeochemical cycling, especially
the carbon-flow pathway (Azam & Malfatti, 2007). Chemotaxis, the migration of
organisms in response to chemical gradients, is central to bacteria’s ability to exploit
nutrient hotspots and colonize particles, influencing marine productivity and
biogeochemical cycles dynamically (Raina et al., 2022). Our research focuses on negative chemotaxis, an underexplored aspect, using the in situ chemotaxis assay (ISCA) to analyze the responses of bacterial strains to indole and its derivatives, potent
chemical signals influencing bacterial behavior (Melander et al., 2014). Indole's impact
on bacterial activities underscores its relevance in understanding complex microbial
interactions and behaviors within marine ecosystems. The investigation aims to
uncover the mechanisms and ecological implications of negative chemotaxis,
enhancing our understanding of marine ecosystems' complexity and functionality.
Objectives. The aim of this project is to introduce a novel method for measuring
negative chemotaxis using the ISCA and to evaluate the behavioral responses of four
specific bacterial strains to varying concentrations of indole derivatives.
Methods. The methods center around the application of the ISCA to investigate
the negative chemotaxis of specific marine bacterial strains to three indole derivatives
at varied concentrations (Clerc et al., 2020). Bacterial cultures are meticulously
prepared and optimized for motility before exposure to the chemical compounds. The
ISCA’s structured well array facilitates a precise evaluation of the bacterial responses
to these chemicals.
The data acquired from these experiments undergoes rigorous analysis, utilizing
flow cytometry to measure bacterial concentrations and evaluate their behavioral
responses. Adjustments in chemical concentrations and pH levels are carefully
managed to ensure the accuracy of the results. The comprehensive approach seeks to
offer insights into the nuances of bacterial chemotaxis, bridging the gap between
laboratory findings and real-world applications.
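A common way to quantify ISCA results like these is a chemotactic index: the ratio of cell counts in chemical-filled wells to counts in control wells. The following is a minimal sketch with made-up numbers; the function name and the data are illustrative only, not part of the published assay protocol:

```python
from statistics import mean

def chemotactic_index(treatment_counts, control_counts):
    """Ratio of mean cell counts in chemical-filled ISCA wells to mean counts
    in control wells: values below 1 suggest repulsion (negative chemotaxis),
    values above 1 suggest attraction."""
    return mean(treatment_counts) / mean(control_counts)

# Hypothetical flow-cytometry counts (cells/mL) from replicate wells
indole_1mM = [410, 380, 395, 420]    # wells loaded with 1 mM indole
controls = [980, 1010, 955, 1005]    # filtered-seawater control wells

ic = chemotactic_index(indole_1mM, controls)
print(f"Chemotactic index: {ic:.2f}")  # a value well below 1: repellent effect
```

Averaging over more replicate wells per compound, as done in the study, narrows the spread of this index.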
Results. Our study utilized the In Situ Chemotaxis Assay to examine negative
chemotaxis in four specific bacterial strains in response to varying concentrations of
indole derivatives. The findings reveal a marked repellent effect at a 1 mM concentration, confirming the ISCA’s reliability in chemotaxis studies (Tso & Adler, 1974). High standard deviations were addressed by increasing the number of wells per compound to strengthen the significance of the results. A challenge remains in translating these laboratory
findings to natural settings due to the complexity of real-world chemical gradients and
nutrient availability. Limitations in the ISCA’s time tracking and bacterial responses
to chemical gradients were identified, raising questions about optimal measurement
durations and realistic test conditions (Clerc et al., 2020). Despite these challenges, the
insights gained are instrumental in advancing our understanding of bacterial behaviors
and chemotaxis.
Conclusion. This study illuminates the repellent effect of indole derivatives on
four marine bacterial strains, a discovery made possible by the innovative ISCA device.
Our findings are pivotal for understanding bacterial behavior and are applicable in
fields like biotechnology and environmental science. While these insights are
significant, they pave the way for further research, particularly in exploring the
relationship between chemical properties and bacterial responses, and understanding
the intricate dynamics of microbial chemotaxis in complex environments.
References
Azam, F., & Malfatti, F. (2007). Microbial structuring of marine ecosystems. Nature
Reviews Microbiology, 5(10), 782–791. https://doi.org/10.1038/nrmicro1747


Clerc, E. E., Raina, J.-B., Lambert, B. S., Seymour, J., & Stocker, R. (2020). In Situ
Chemotaxis Assay to Examine Microbial Behavior in Aquatic Ecosystems.
Journal of Visualized Experiments, 159. https://doi.org/10.3791/61062
Melander, R. J., Minvielle, M. J., & Melander, C. (2014). Controlling bacterial
behavior with indole-containing natural products and derivatives. Tetrahedron,
70(37), 6363–6372. https://doi.org/10.1016/j.tet.2014.05.089
Raina, J.-B., Lambert, B. S., Parks, D. H., Rinke, C., Siboni, N., Bramucci, A.,
Ostrowski, M., Signal, B., Lutz, A., Mendis, H., Rubino, F., Fernandez, V. I.,
Stocker, R., Hugenholtz, P., Tyson, G. W., & Seymour, J. R. (2022). Chemotaxis
shapes the microscale organization of the ocean’s microbiome. Nature,
605(7908), 132–138. https://doi.org/10.1038/s41586-022-04614-3
Tso, W.-W., & Adler, J. (1974). Negative Chemotaxis in Escherichia coli. Journal of
Bacteriology, 118(2), 560–576. https://doi.org/10.1128/jb.118.2.560-576.1974

ADVANCEMENTS IN IMMUNOTHERAPEUTIC APPROACHES FOR CANCER TREATMENT: A COMPREHENSIVE ANALYSIS
Anastasiia Tsurkan
Faculty of Informatics and Computer Engineering, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: immunotherapeutic strategy, cancer, cancer treatment, personalized medicine, immune checkpoint inhibitors, precision medicine, innovative.
Introduction. The contemporary landscape of cancer treatment has witnessed remarkable transformations with the advent of immunotherapeutic strategies, particularly immune checkpoint inhibitors and innovative personalized medicine (Roy, Singh, & Misra, 2022). This research aims to provide a thorough and
comprehensive overview of recent advancements in cancer immunotherapy,
accentuating breakthroughs that have significantly reshaped how we approach and treat
different malignancies.
Objectives. This study is designed to achieve several key goals: systematically evaluate the clinical outcomes of immune checkpoint inhibitors in diverse cancer types, rigorously investigate the role of personalized medicine in tailoring immunotherapeutic
interventions, and precisely estimate the long-term influence of immunotherapy on
patient survival and quality of life.
Methods. A systematic literature review was conducted to identify and analyze recent studies on cancer immunotherapy. Selected studies encompassed diverse cancer types and investigated the efficacy of immune checkpoint inhibitors and personalized treatment regimens. Methodologies
varied across studies, including clinical trials, retrospective analyses, and
comprehensive meta-analyses. Data extraction focused on treatment outcomes,
response rates, and unfavorable events.
Results. The synthesis of relevant literature reveals promising results regarding the efficacy of immune checkpoint inhibitors across multiple cancers.
Innovative personalized medicine approaches show potential in optimizing treatment


response, particularly in cases with specific genomic signatures. The analysis also
explicitly highlights the significance of considering biomarkers and patient
characteristics in tailoring immunotherapeutic interventions (Vasileiou, Papageorgiou, & Nguyen, 2023).
Conclusion. This comprehensive analysis underscores the transformative impact of the latest advancements in cancer immunotherapy. The findings strongly support the continued exploration and integration of immune checkpoint inhibitors and personalized medicine in cancer care. As we move further into the era of precision medicine, the potential for markedly improving patient outcomes and prolonging survival becomes increasingly evident, marking a substantial milestone in the ongoing battle against cancer. The issue remains under investigation, with researchers expecting significant further results.
References
Roy, R., Singh, S. K., & Misra, S. (2022, December 27). Advancements in Cancer
Immunotherapies. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9861770/
Vasileiou, M., Papageorgiou, S., Nguyen, N. P. (2023, May 30). Current
Advancements and Future Perspectives of Immunotherapy in Breast Cancer
Treatment. https://www.mdpi.com/2673-5601/3/2/13

THE FUTURE OF ACTUARIAL SCIENCE


Serafym Voloshyn
Faculty of Physics and Mathematics, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: actuarial science, artificial intelligence, artificial neural network, statistics, probability neural networks.
Introduction. As the world changes, actuarial science will face many new challenges. With the development of artificial intelligence, our world could become both safer and different. For this reason, it is worth presenting how artificial intelligence helps in actuarial science, the consequences of combining insurance methods with modern computational capabilities, and the risk of such systems being cracked.
Objectives. To investigate the present and future of actuarial science.
Methods. In the present paper such methods as comparative and complex
analysis, as well as observation were used. The calculations were performed with
Python 3.8, and Excel.
Results. We have analysed the workings of artificial neural networks. The familiar starting point for defining and explaining neural networks is a simple linear regression model, expressed in matrix form:

ŷ = f(x) = a + B^t x

where the predictions ŷ are formed via a linear combination of the features x, using a vector of regression weights B and an intercept term a; different values of x correspond to different values of ŷ. In probability theory, by "regression" we understand the function giving the expectation of event A given that event B has happened. Building on this model, the following equations are useful (Iftikhar et al., 2021):

z1 = σ0(a0 + B0^t x)
ŷ = σ1(a1 + B1^t z1)

which state that an intermediate set of variables, z1, is computed from the feature vector x by applying a non-linear activation function σ0 to a linear combination of the features, itself formed by applying weights B0 and biases a0 to the input feature vector x; the prediction ŷ is then obtained by applying σ1 to a linear combination of z1 with weights B1 and biases a1. A popular activation function applied in such models is the SoftMax (Specht, 1990):

σ(x_i) = e^(x_i) / Σ_i e^(x_i)

We used the SoftMax function for the problem of computing a score α_i for the inputs, so that w_i becomes the weight of each element (meaning any word or sentence the network uses), which gives the attention weight:

σ(x_i) = e^(α(x_i)) / Σ_i e^(α(x_i))
with hidden state vector h_i; σ(h_i) is used in the recursive algorithms of actuarial computation (Yu, 2022). In probabilistic neural networks, such sums are computed for each class to decide which class the vector x most likely belongs to.
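The construction above can be sketched numerically. The following is a minimal NumPy illustration of the one-hidden-layer network and the SoftMax function; all weights are random illustrative values, and tanh is just one possible choice for σ0, not the one prescribed by the cited papers:

```python
import numpy as np

def softmax(x):
    # sigma(x_i) = exp(x_i) / sum_j exp(x_j), shifted by max(x) for stability
    e = np.exp(x - x.max())
    return e / e.sum()

def forward(x, a0, B0, a1, B1):
    """One hidden layer: z1 = tanh(a0 + B0^t x); y_hat = softmax(a1 + B1^t z1)."""
    z1 = np.tanh(a0 + B0.T @ x)       # sigma_0 = tanh, one common choice
    return softmax(a1 + B1.T @ z1)    # sigma_1 = SoftMax over output classes

rng = np.random.default_rng(0)
x = rng.normal(size=3)                # feature vector (e.g. three risk factors)
B0, a0 = rng.normal(size=(3, 4)), rng.normal(size=4)
B1, a1 = rng.normal(size=(4, 2)), rng.normal(size=2)

p = forward(x, a0, B0, a1, B1)
print(p)                              # two class probabilities summing to 1
```

The SoftMax output can be read as the network's probability for each class, which is what a probabilistic neural network compares across classes.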
Conclusion. Probability theory makes it possible to analytically describe the regularities of the functioning of the insurance fund, and artificial neural networks can determine these regularities with sufficiently high accuracy. We are now at the beginning of a new stage in the application of neural networks. In our opinion, this work suggests that a neural network program could be used for planning safe driving routes: with known inputs x encoding indicators such as weather and traffic jams, the network makes the decision according to the most important value, the one with the biggest weight in the sum. For choosing the best class of medical insurance, the kernels K_ni(x, x_ni) can show vulnerability to each type of illness and provide insurance in the most effective way (Rusek et al., 2023).


References
Iftikhar, A., Alam, M. M., Ahmed, R., Musa, S., & Su’ud, M. M. (2021). Risk
prediction by using artificial neural network in global software development.
Computational Intelligence and Neuroscience, 2021, 1–25.
https://doi.org/10.1155/2021/2922728
Rusek, K., Boryło, P., Jaglarz, P., Geyer, F., Cabellos, A., & Chołda, P. (2023).
RISKNET: Neural risk assessment in networks of unreliable Resources. Journal
of Network and Systems Management, 31(3). https://doi.org/10.1007/s10922-023-
09755-y
Specht, D. F. (1990). Probabilistic neural networks. Neural Networks, 3(1), 109–118.
https://doi.org/10.1016/0893-6080(90)90049-q
Yu, Y. (2022). Intrinsic decomposition Method combining deep convolutional neural
network and probability graph model. Computational Intelligence and
Neuroscience, 2022, 1–11. https://doi.org/10.1155/2022/4463918

USING MACHINE LEARNING TO PREDICT ANTIDEPRESSANT TREATMENT OUTCOME
Anton Vorobiov
Faculty of Informatics and Computer Engineering, National Technical
University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: antidepressants, machine learning, mental health.


Introduction. Major depressive disorder and related mood disorders are some
of the most common and costly psychiatric disorders worldwide. Although there are both pharmacological and nonpharmacological treatments for mood disorders, antidepressants remain the most widely used. However, the effectiveness of a drug can vary from one individual to another: it may work well for some people, while others experience small outcomes or none at all. It is well known
that different types of mood disorder symptoms are associated with disruptions of
different neurons and neurotransmitters. It is especially obvious when it comes to
fMRI- and SPECT-based research. There are many models and hypotheses about the
relationship between the efficiency of treatments and clusters of symptoms. For
example, some models predict that only a selective group of drugs will be effective in
managing appropriate symptoms (Drobizhev, 2019), while others focus on the role of
neuroplasticity in mental health issues' development and predict the effectiveness of
any drugs (Serafini, 2012). The variability in treatment outcomes underscores the
importance of personalized medicine in mental health. Machine learning is a powerful
method that provides an opportunity to analyze complex datasets and extract patterns
that are not possible to discover through traditional statistical methods. In the context
of mental disorders, data learning offers the possibility to predict treatment outcomes
for individuals, and tailor and optimize treatment plans based on the scale of their
symptoms (e.g. PHQ-9). While most systematic reviews and meta-analyses focus on
comparing the overall efficacy and tolerability of drugs, they often overlook drug
selectivity and the relationship between receptor function and clinical outcome.


Objectives. To develop a predictive model that can accurately anticipate the treatment outcome (sensitivity, specificity, accuracy) of antidepressants for individuals
with depression and related mood disorders. In this study, we propose to investigate
the relationship between symptoms and treatment response based on receptors and
neurotransmitters that are associated with each drug.
Methods. Using a large dataset of PHQ-9 scale symptoms and a modern view
of neurotransmitters’ role in mood regulation and efficacy of using drugs, find
associations between symptoms and the most effective drug and its dosage to cope with
it. The support vector machine (SVM) is a supervised learning technique well suited to high-dimensional data that could provide the best classification of individual responses and can be used to extract patterns.
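As a hedged sketch of what such an SVM analysis might look like, the example below trains a linear SVM with the Pegasos sub-gradient method on synthetic data. The "symptom profiles", the responder/non-responder clusters, and all numbers are made up for illustration and are not from the study:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with the Pegasos sub-gradient method.
    X: (n, d) feature matrix, y: labels in {-1, +1}. Returns weight vector w."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            w *= (1 - eta * lam)           # regularization shrinkage
            if y[i] * (X[i] @ w) < 1:      # margin violated:
                w += eta * y[i] * X[i]     # hinge-loss gradient step
    return w

# Synthetic "responder" vs "non-responder" profiles (9 PHQ-9-like items)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+1.0, 0.5, (40, 9)),   # responsive cluster
               rng.normal(-1.0, 0.5, (40, 9))])  # non-responsive cluster
y = np.array([+1] * 40 + [-1] * 40)

w = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {accuracy:.2f}")
```

In practice a library implementation with a bias term, kernels, and cross-validation would be used; the point here is only the hinge-loss margin idea underlying the SVM.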
Results. By using an SVM and previous antidepressant efficacy studies, we suggest that it is possible to predict with significant accuracy whether a drug will be effective for an individual and to what extent.
There are several types of neurons and neurotransmitters that are associated with
specific symptoms or clusters of symptoms, which are confirmed by fMRI and SPECT
results (Keren, 2019).
Conclusion. Despite major depressive disorder and related mood disorders
having many differences, they usually have the same or similar symptoms that can be
gathered into clusters and strongly associated with disruptions of specific receptors and
neurotransmitters that can be used for optimizing treatment and future studies for new
methods of therapy.
References
Abramets, I. I., Evdokimov, D. V., & Sidorova, Yu. V. (2015). Research into the neurophysiological and neurochemical mechanisms of depressive states and the search for new directions for their treatment.
http://dspace.nbuv.gov.ua/handle/123456789/148207
Drobizhev, M., Antokhin, E., Palaeva, R., & Kikta, S. (2019). Choosing an
antidepressant based on effectiveness and tolerability.
https://www.lvrach.ru/partners/velaxin/15437178
Keren A., O’Callaghan G., Vidal-Ribas P., Buzzell G., Brotman A., Leibenluft E.,
Pedro M. Pan, Meffert L., Kaiser A., Wolke S., Pine D., Stringaris A. (2019).
A Conceptual and Meta-Analytic Review Across fMRI and EEG Studies.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6345602/
Stahl, S.M. (2008). Stahl’s essential psychopharmacology: neuroscientific basis and
practical application (3rd. ed.). Cambridge University Press.
Serafini, G. (2012). Neuroplasticity and major depression, the role of modern
antidepressant drugs. https://www.wjgnet.com/2220-3206/full/v2/i3/49.htm


APPLICATIONS AND FUTURE PROSPECTS OF CRISPR/CAS9 GENE EDITING
Ruslana Yesypenko
Faculty of Biomedical Engineering, National Technical University of Ukraine
“Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: CRISPR, CRISPR/Cas9, genome, cancer, treatment.


Introduction. There are many viruses in the environment that threaten the survival of prokaryotes and can cause changes (mutations) in the structure of their DNA or RNA. Modern medicine faces significant challenges in treating genetic diseases and cancer. CRISPR/Cas9 genome editing technology opens unprecedented opportunities to fight these diseases and improve patient outcomes. Gene editing technology is sometimes called “God's scalpel” because it can precisely and efficiently trim, cut, replace, or insert DNA or RNA sequences.
Objectives. The main goals of the research are to evaluate the capabilities of
CRISPR/Cas9 in editing genetic diseases with increased accuracy and efficiency and
to study the potential of CRISPR/Cas9 for cancer treatment.
Methods. Review of scientific articles and studies related to the use of
CRISPR/Cas9 in medicine. Analysis of clinical trials and approaches to CRISPR/Cas9
treatment of genetic diseases and cancer. Evaluation of research results and
development of conclusions based on the latest discoveries in the field.
Results. The analysis yields the following results.
Essentially, CRISPR/Cas9 is a pair of molecular scissors that can cut and modify specific regions of DNA with high precision. The technology consists of two main components: the Cas9 enzyme, which acts as the scissors, and a small guide RNA molecule that directs the enzyme to the target site. Together, they form a powerful tool that can be used to edit, insert, or delete genes in a wide range of organisms, from bacteria and plants to humans.
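The guide-RNA targeting just described can be illustrated with a toy script that scans one DNA strand for candidate SpCas9 target sites, i.e. a 20-nucleotide protospacer immediately followed by an "NGG" PAM. This is a deliberate simplification (single strand, SpCas9 PAM only, no off-target scoring) and is not taken from the cited studies:

```python
import re

def find_cas9_targets(dna, protospacer_len=20):
    """Scan one DNA strand for candidate SpCas9 target sites: a 20-nt
    protospacer immediately followed by an 'NGG' PAM, which the
    Cas9/guide-RNA complex requires before it will cut."""
    sites = []
    # Overlapping search: lookahead matches every N-G-G triplet position
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start()
        if pam_start >= protospacer_len:        # enough sequence for a spacer
            protospacer = dna[pam_start - protospacer_len:pam_start]
            pam = dna[pam_start:pam_start + 3]
            sites.append((protospacer, pam))
    return sites

seq = "ATGCGTACCTGAGGATCCGATTACGATCGTTAGGCATCGAA"  # toy sequence
for protospacer, pam in find_cas9_targets(seq):
    print(protospacer, pam)
```

A real design tool would also scan the reverse complement and rank candidates by predicted specificity, but the PAM requirement shown here is what restricts where Cas9 can be directed.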
Over the past few years, the revolutionary CRISPR/Cas9 system has shown great
promise for the treatment of diseases caused by genetic disorders, monogenic diseases,
cancer, cardiovascular diseases, inflammation, neurodegenerative diseases, infectious
diseases, and some of these applications have entered clinical trials (Zhang et al., 2021).
Traditional cancer treatments (e.g., surgery, radiation therapy, chemotherapy) can
delay recurrence and prolong the survival of cancer patients, although tumor recurrence
or drug resistance often results in a poor prognosis. Thus, new cancer treatments are
still needed, and CRISPR/Cas9 technology offers potential revolutionary changes in
cancer treatment.
CRISPR gene editing of human primary T cells can produce allogeneic T cells
with higher antitumor activity and lower adverse reactions, which makes it possible for
universal CAR-T cells to be widely used in clinical practice (Wang et al., 2022).
CRISPR/Cas9 systems not only directly target oncogenes to suppress tumor
growth, but also perform large-scale screening of cancer genes to improve the
efficiency of anticancer drug development (Xu et al., 2021).


PDX and GEMM models are the best tools for characterizing the
histopathological features of patients' tumors. Thus, there is a need to expand research
using PDX models to better understand patient heterogeneity to provide personalized
treatment.
Development of safe and effective in vivo delivery remains the biggest challenge
for widespread clinical use of CRISPR/Cas9 in human therapy.
Conclusion. The transformation of the CRISPR system into a gene-editing tool
has revolutionized the life sciences. CRISPR/Cas9 is a powerful tool for the treatment
of genetic diseases and cancer, and it has already shown great potential when analyzing
the results of many clinical trials. However, in order to achieve maximum results and
guarantee the safety of patients, it is necessary to continue scientific research and take
into account the ethical aspects of using this technology. The future of CRISPR/Cas9
in the treatment of genetic diseases and cancer promises more personalized approaches
and significant improvements in treatment outcomes.
References
Wang, S. W., Gao, C., Zheng, Y. M., Yi, L., Lu, J. C., Huang, X. Y., Cai, J. B., Zhang,
P. F., Cui, Y. H., & Ke, A. W. (2022). Current applications and future
perspective of CRISPR/Cas9 gene editing in cancer. Molecular cancer, 21(1),
57. https://doi.org/10.1186/s12943-022-01518-8
Xu, X., et al. (2021). Nanotechnology-based delivery of CRISPR/Cas9 for cancer treatment. Advanced Drug Delivery Reviews, 176. https://doi.org/10.1016/j.addr.2021.113891
Zhang, S., Shen, J., Li, D., & Cheng, Y. (2021). Strategies in the delivery of Cas9
ribonucleoprotein for CRISPR/Cas9 genome editing. Theranostics, 11(2), 614–
648. https://doi.org/10.7150/thno.47007

TELEMEDICINE AND REMOTE PATIENT MONITORING: ENHANCING HEALTHCARE DELIVERY THROUGH DIGITAL HEALTH TECHNOLOGIES
Daria Zahorska
Faculty of Biomedical Engineering, National Technical University of
Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Keywords: telemedicine, health care technologies, remote patient monitoring, telecommunication.
Introduction. Remote healthcare, a contemporary trend in advancing medical
informatization, incorporates modern information and telecommunication technologies
for distant diagnosis and treatment of diseases, aiding in emergencies, and supporting
the professional development of medical professionals. Telemedicine is not a substitute
for a doctor, nor an alternative to one, but a powerful tool that increases the


efficiency of primary care physicians and realizes the right of every patient, no matter
how remote, to highly specialized medical care (Gensini et al., 2017).
The evolution of telemedicine has gone beyond simple data transmission. In the
past, cumbersome devices were used for basic patient observations, sent to specialists,
and stored in cloud systems. Today's advanced systems are self-intelligent, alerting
patients and doctors to necessary changes in treatment or emergencies. Patients are
instructed, and comfortable, and customizable devices are now prevalent. This
transformation has made telemedicine more accessible, efficient, and adaptable, with
a focus on widespread availability and sustainability.
Objectives. The primary objective is to evaluate how remote healthcare
technologies improve the provision of healthcare services. This evaluation emphasizes
gauging their effectiveness and exploring how they impact patient outcomes and
healthcare accessibility.
Methods. Telemedicine has gained considerable recognition as a viable solution
for remote healthcare and education (Sims, 2018). Ongoing technological
advancements have reshaped how organizations approach the principles of well-being
and healthcare management, emphasizing patient-centered services (Battard and
Liarte, 2019). Additionally, in research conducted by Standing and Cripps (2013), an
analysis was performed to pinpoint critical success factors necessary for the efficient
adoption of Electronic Health Records (EHR). These factors encompass system
categorization, active involvement of key players, alignment of vision and strategy,
effective methods of communication and reporting, seamless integration of processes,
and thorough evaluation of IT infrastructure, among other crucial elements.
In general, when introducing telemedicine into operation, the following tasks
need to be solved:
1. Determination of the direction of medical consultations in the use of
telemedicine.
2. Selection of a legal framework.
3. Development of a security policy with the definition of telemedicine
participants, distribution of their rights, and appropriate cryptographic means of
information protection.
4. Selection or development of new hardware and software, taking into account
the above characteristics.
5. Development of a system for selecting experts.
6. Conducting thorough testing and validation of the telemedicine system before
implementation.
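The "self-intelligent" alerting mentioned above can, at its simplest, be a set of per-metric threshold rules applied to each incoming reading. The following minimal sketch uses invented metric names and limits for illustration only; they are not clinical guidance:

```python
def check_vitals(vitals, limits):
    """Compare one remote-monitoring reading against per-metric (low, high)
    limits and return alert messages for anything out of range."""
    alerts = []
    for metric, value in vitals.items():
        low, high = limits[metric]
        if not low <= value <= high:
            alerts.append(f"{metric}={value} outside [{low}, {high}]")
    return alerts

# Illustrative limits and a single patient reading
limits = {"heart_rate": (50, 110), "spo2": (92, 100), "systolic_bp": (90, 150)}
reading = {"heart_rate": 124, "spo2": 95, "systolic_bp": 160}

for alert in check_vitals(reading, limits):
    print("ALERT:", alert)   # heart_rate and systolic_bp trigger alerts here
```

Production systems layer trend analysis and clinician review on top of such rules, but the threshold check is the basic building block that turns raw telemetry into actionable notifications.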
Results. The study's findings reveal the transformative potential of telemedicine
and remote patient monitoring to reshape the healthcare landscape. To fully realize this
potential, continuous investment in digital infrastructure, extensive training initiatives,
and ongoing research to improve telemedicine technologies are essential. Policymakers
should integrate telemedicine into national healthcare strategies, promoting widespread
adoption and ensuring equal access to high-quality healthcare services for all.
Conclusion. As telemedicine advances, it stands ready to significantly influence
the future of healthcare, enhancing accessibility, efficiency, and patient-centeredness.


The use of telemedicine brings benefits primarily to patients. These benefits include
improving the efficiency and quality of treatment; accelerating the transfer of
information on the results of examinations between different specialized clinics
without transporting the patient (especially in critical cases); conducting remote
consultations by narrow specialists in geographically remote medical institutions;
conducting teleconferences with specialists from medical institutions regardless of
their location; reducing the time for examination.
References
Battard, N., & Liarte, S. (2019, September 4). Including Patient’s Experience in the
Organisation of Care: The Case of Diabetes. Journal of Innovation Economics
& Management, 30(3), 39–57. https://doi.org/10.3917/jie.pr1.0054
Gensini, G. F., Alderighi, C., Rasoini, R., Mazzanti, M., & Casolo, G. (2017, January
1). Value of Telemonitoring and Telemedicine in Heart Failure Management.
Cardiac Failure Review. https://doi.org/10.15420/cfr.2017:6:2
Sims, J. M. (2018). Communities of practice: Telemedicine and online medical
communities. https://ideas.repec.org/a/eee/tefoso/v126y2018icp53-63.html
Standing, C., & Cripps, H. (2013, August 30). Critical Success Factors in the
Implementation of Electronic Health Records: A Two-Case Comparison.
Systems Research and Behavioral Science, 32(1), 75–85.
https://doi.org/10.1002/sres.2209


CONTENTS
ENGINEERING INNOVATIONS
Research adviser Foreign language
teacher
Violeta Akhmedova, Mechanical Properties Larisa Tarasova Iryna Kozubska 3
Larisa Tarasova of the Tendon. Tendon
Recovery after an
Upper Extremity Injury
Anna Bezrodnova, Stone Paper Is an Inna Antonenko 5
Oleksandra Hres Innovative Material
Olexii Bychkov Study of the Influence Yana Zinher Nataliia Chizhova 6
of External Electronics
on the Aerodynamics
of Converted
Ammunition, in
particular of Various
Sensors and Antennas
Kristina Dakhal Sustainability in Mykola Nalyvaichuk Olena Shepeleva 9
Formula One
Oleg Gordiy 3D Modeling as a Nataliia Chizhova 10
Science
Artur Halynskyi Cyber-Physical Oksana Korbut 11
Transformation in the
21st Century Industry
Andrii Horbul, Innovations in Aircraft Mykola Nalyvaichuk Olena Shepeleva 13
Kyrylo Bychko Engineering:
Transforming the Face
of Combat Advantage
and Safety in Modern
Armed Forces
Nikita Horobets Future of Optical Nataliia Chizhova 14
Telescopes
Dmytro Huk, The Future of Radio Oksana Serheieva 16
Ivan Domashenko Engineering
Dmytro Ivanytskyi Additive Iryna Lytovchenko 17
Manufacturing (3D
Printing): New Era in
Modern Technologies
Yaroslav Kaliberda Application of Iryna Lytovchenko 19
Artificial Intelligence
in Manufacturing
Automation

177
Science and Technology of the XXI century Part I

Valentyna Kalytiuk Innovations in the Tetiana Maslova 20


Manufacture of
Perovskite Solar Cells
Maksym Klymenko The New Air Defense Iryna Novikova 21
System of Ukraine
Nadiia Kostenko The Impact of Olena Shepeleva 23
Wearable
Technologies on our
Lives
Sofiia Kulyk The Millennium Stanislav Kopylov Valentyna 24
Problem: the Navier- Lukianenko
Stokes Equation
Alona Kyrylenko Aerospace Technology Olena Shepeleva 26
Trends
Kostiantyn Liakhovoi, Comparing Oleksandr Lobunko Liudmyla Zhygzhytova 27
Oleksandr Petrichenko Characteristics of
Chemical and Ion
Reactive Engines

Vladyslav Logvynenko Supercomputers: Iryna Boiko 29


Unleashing the Power
of Computational
Excellence
Tetiana Luhovets Welding Technology Oksana Serheieva 31
Enhancements for
Optimal Joint Integrity

Oleksandr Matoshyn Automated System for Sergii Vysloukh Svitlana Moiseienko 32


Controlling the Process
of Drilling Holes in
Carbon Fiber
Reinforced Polymer
Parts

Oleksii Oliinyk Boundary Layer Dmytro Konotop Valentyna Lukianenko 34


Control over the upper
Wing Surface
Applying Blowing
Yulian Petkanych Eleсtroporation as a Iryna Kozubska 36
New Technology for
Performing Ablations
to Treat Arrhythmia

178
Science and Technology of the XXI century Part I

Ruslan Prohorchuk Robots in Space: from Iryna Lytovchenko 38


Automation to
Robotics
Bohdan Sharaievskyi Research and Analysis Nikola Benz Oksana Korbut 40
of Polymer Membrane
Material Properties

Hlib Skopyk A Two-Month Nataliia Chizhova 42


Retrospect Regarding
LK-99
Solomiia Skoroplias Engineering Dmytro Zinchenko Valentyna Lukianenko 44
Excellence: a
Theoretical Approach
to Light Aircraft Spar
Design
Ihor Turchaninov Radio Engineering in Vyacheslav Chmelev Nataliya Chizhova 45
Modern
Communication
Systems
Serhii Troshkin Methods of Modeling Serhii Pozdieiev Yuliia Nenko 47
Horizontal Cable
Tunnels Using
Sketchup
Uliana Vinnikova Robotics in Medicine Andrii Petrashenko Olena Shepeleva 49
Pavlo Yakovchuk, Comparative Analysis Sergiy Shukaev Oksana Korbut 50
Illia Nemykin, of Traditional and
Viacheslav Malynskyi Contemporary Critical
Plane Models in
Multiaxial Fatigue
Yaroslav Yaroshovets Bionic Prostheses: Yevgeniya Sulema Olena Shepeleva 52
Improving Lives
Danil Yevdokimov Adaptive Computer Yana Zinher Nataliia Chizhova 54
Vision System for
Detection and Tracking
of Non-Typical Targets
and Optical Navigation
in Unmanned Aerial
Systems
CHEMISTRY AND CHEMICAL TECHNOLOGY
Research adviser Foreign language
teacher
Vladyslav Bielik Magnesium Iryna Kozubska 57
Oxide/Potassium
Catalysis in
Heterogeneous Gas-
Phase Eliminations:
Unveiling Reactivity
and Selectivity
Advancements
Kostyantyn Chemistry of Oil Inna Antonenko 58
Hryshchenko Paints
Evgeniy Kostenko Adsorption of Yuliia Olizko 59
Fluorine Using Iron
Waste
Yuliia Molchan A Comparative Study Tetiana Dontsova Yuliia Olizko 61
of Different Types of
SIC Ceramic
Membranes for Water
Viktoriia Pryhalinska, The Dependence of Svetlana Frolenkova Yuliia Olizko 62
Svetlana Frolenkova the Quality of the
Zinc Coating and the
Rate of its Formation
on the Composition of
the Electrolyte
Andrii Ulianenko Prospects for the Use Andrii Kushko Olena Volkova 64
of Recyclable
Explosives
Viktoriia Yevpak Increasing the Impact Liubov Melnyk Yuliia Olizko 65
Strength of Materials
Based on Ultra-High
Molecular Weight
Polyethylene
ENERGY SAVING
Research adviser Foreign language
teacher
Andrii Chekurda Waste-to-Energy Olena Shepeleva 68
Technologies:
Transforming Trash
into Power
Ivanna Dovbysh Problems Related to Larisa Sviridova 69
Using Solar Panels as
an Energy Source for
UAVs
Danyil Dorosh Molecular-Thermal Ivan Voiteshenko Olena Pysarchyk 71
Linear Electric
Generator
Eugenia Hlushchenko Ukrainian Nuclear Yuliia Olizko 72
Power Plants during
the Russian-Ukrainian
War
Mykola Hrebeniuk Backup Photovoltaic Olexander Gaievskyi Tetiana Maslova 74
Power Plant with
Hydrogen Energy
Storage
Ruslan Keda Use of Optimizers for Tetiana Maslova 75
Increasing the
Efficiency of Solar
Power Plants
Taras Koziupa Steps for Reaching the Yulia Vyshnevska Tetiana Maslova 76
“Nearly Zero Energy
Building” Status of the
Building
Dmytro Melnyk The Role of Artificial Viktoriia Chmel 78
Intelligence in the
Modernization of
Modern Ukrainian
Smart Power Grids:
Development
Directions and
Efficiency
Anna Nebesna Thermonuclear Fusion: Nataliia Chizhova 80
the Future of
Renewable Energy
Victor Petrushyn, Energy Storage Mykola Nalyvaichuk Olena Shepeleva 82
Victoria Us Systems from Used
Electric car Batteries
Danil Trofymov Assessing the Impact Evgenia Sulema Olena Shepeleva 83
of Energy Efficient
Lighting Systems in
Commercial Buildings
Svitlana Yaroshchuk The Use of Renewable Olena Borychenko Mariana Shevchenko 84
Energy Sources in
Construction and
Infrastructure
Anastasiia Company “Ecoflow” Mariana Shevchenko 86
Yevdokimova and their Products as
Energy-Saving
Technologies
Nikita Zarubin Advancing All- Tetiana Maslova 87
Weather Solar Cells: a
Comprehensive Study
Yehor Zymovets Ways to Save Energy Olena Shepeleva 89
ELECTRIC POWER ENGINEERING
Research adviser Foreign language
teacher
Viacheslav Maintaining Capacities Vladimir Bazhenov Tetiana Maslova 91
Okonechnikov, Balance in the United
Ihor Rii Power System under
External Disturbance
AUTOMATION OF ELECTROMECHANICAL SYSTEMS
Research adviser Foreign language
teacher
Mykyta Cherniaiev Automation of Tetiana Maslova 93
Electromechanical
Systems: Use of
Asynchronous Motors
in Electromechanical
Systems
Dmytro Liashko Applying the Concept Andriy Bulashenko Nataliia Chizhova 94
of Internet of Things
and Fog Computing for
Smart Homes
Dmytro Shliaha Overview and Sergiy Peresada Tetiana Maslova 96
Comparison of DFOC
and IFOC Methods for
Induction Motor Drive
ELECTRONICS
Research adviser Foreign language
teacher
Oleksii Onasenko History of Processor Nataliya Chizhova 98
Microarchitecture
Development
Vladyslav Pidruchnyi The System of Remote Olena Shepelieva 99
Start of the Engine
Natan Smolij PPM Decoder as a Mykola Shynkevych Oksana Serheieva 100
Way to Simplify Signal
Decoding
NATURAL SCIENCES
Research adviser Foreign language
teacher
Yana Bachynska CRISPR-CAS9 Gene Tetiana Lutsenko Iryna Kozubska 103
Therapy for Rare
Genetic Diseases
Anastasiia Production of Wound- Tetiana Lutsenko Nataliia Kompanets 104
Baranovska Healing Bio-Ink
Plasters
Olha Borovyk Lens with Variable Oleksandr Oksana Serheieva 106
Optical Characteristics Miroshnychenko,
Volodymyr Amosov
Yelyzaveta Chychuk Mathematics as the Olena Mukhanova 107
Main Chain of Modern
Civilisation
Vlad Frolov, How to Pass Karman Mykola Nalyvaichuk Olena Shepeleva 109
Nadiia Shcherbyna Line

183
Science and Technology of the XXI century Part I

Andrew Goncharov The Effects of Exercise Natalia Kompanets 111


on Brain Plasticity in
Adults
Anastasiia Grabovska Pathophysiology of Olena Shepeleva 113
Depression
Illya Guivan Biomarkers and their Natalia Kompanets 114
Use in Diagnosis and
Prognosis of Diseases
Kostiantyn Hlomozda Effects of Smoking Natalia Kompanets 115
Electronic Cigarettes
Illya Holubiev, Understanding Allergic Taisiia Shumska, 117
Julia Danylchyk Reactions and Iryna Kozubska
Anaphylaxis: Triggers,
Symptoms and
Diagnosis
Maria Honcharenko The Role of Ultrasound Valentyna Motronenko Iryna Kozubska 119
in Interferon
Production
Anton Huselnikov Bionic Prosthesis Mykola Nalyvaichuk Olena Shepeleva 120
Ostap Kalapun Differential Natalia Kompanets 121
Expression Analysis
Role in Drug
Discovery
Dmytro Kharynka Biotechnology in Food Svitlana Volkova 123
Industry
Artem Khilchuk Neurophysiological Iryna Boyko 125
Effects of N-back
Training
Marianna Khvistani The Physical Iryna Kozubska 127
Rehabilitation of
Military Officers of the
Armed Forces of
Ukraine
Dmytro Knysh Nuclear Waste: why Iryna Boyko 128
not Dispose of it in
Space?
Polina Kovalchuk Choosing the Best Tatiana Lutsenko Iryna Kozubska 129
Scaffold for Creating a
Brain-on-a-Chip Model
Vladyslav Kovalchuk Genetic Characteristics Mykhailo Burychenko Tetiana Voitko 131
of the Donor and
Recipient as a
Determining Factor of
Compatibility in Organ
Transplantation
Oksana Krailo Application of Elena Golembiovska Iryna Kozubska 134
Fluorescent Proteins in
Biochemical Analysis
Dmytro Kucherenko Are the Alternative Nataliia Chizhova 135
Ways of Smoking less
Harmful than
Cigarettes?
Anna Kushch Are Electric Cars Liudmyla Zubchenko Oleksandra Bondarenko 137
Really
Environmentally-
Friendly?
Andrii Moskaliuk Transhumanism: Nataliia Kompanets 138
Exploring the Nexus of
Humanity and
Technology
Maksym Nikoliuk Electronic Health Oksana Datsyuk Lidia Shilina 140
Records and
Biopharmaceutical
Research
Andrii Ovsiienko Artificial Intelligence Nataliia Kompanets 141
in Biotechnology
Maksym Pavliushyn Understanding the Evgenia Sulima Olena Shepeleva 143
Coriolis Force and its
Value
Anastasiia Prozor Targeting the Intestinal Olena Bespalova Iryna Kozubska 144
Mucosal Immunity to
Treat Immunoglobulin-
A Nephropathy
Anastasiia Prozor, Overview of Different Tetyana Lutsenko Iryna Kozubska 146
Polina Kovalchuk Methods of Model
Creation Skin-on-Chip
Kateryna Rachek Magnetic Ion Channel Oksana Gorobets Anna Nypadymka 148
Activation Technology:
Controlling Cellular
Processes in Living
Organisms
Olena Ralko, Brazilian Traditional Inna Antonenko 150
Nataliia Hyshchak Medicine: Aspects and
Perspectives
Dmytro Rusiev Ecocide as an Svitlana Volkova 151
Ecological Disaster
against the Background
of Russia’s Criminal
Aggression on the
Territory of Ukraine
Oleksandr Rybka 3D Bioprinting Svitlana Volkova 153
Diana Samoray Laser Therapy in Yevgen Nastenko Taisiia Shumska 156
Acne Treatment
Alex Samoylenko Artificial Intelligence Natalia Kompanets 157
and Personalized
Medicine
Yurii Severyn Aeroponics: the Future Evgenia Sulema Olena Shepeleva 158
of Farming
Elvira Shemena Comparative Analysis Iryna Kozubska 159
of Modern Hydrogel
Applications for Teeth
Regeneration after
Caries
Kateryna Shchur, Environmental Science Svitlana Volkova 161
Svitlana Volkova and Nature
Conservation:
Strategies for Effective
Resource Management
Tanya Siryak Modeling the Growth Kateryna Havrylenko 163
Process of GaAs Thin
Films
Nataliia Snihur Effect of Ultrasound on Tetyana Lutsenko Iryna Kozubska 164
Biosynthetical Activity
and Structure of
Bacterial Cells
Yelyzaveta Tokarchuk Investigation of Jonasz Slomka Iryna Kozubska 166
Bacterial Negative
Chemotaxis with the In
Situ Chemotaxis Assay
(ISCA)
Anastasiia Tsurkan Advancements in Iryna Boyko 168
Immunotherapeutic
Approaches for Cancer
Treatment: a
Comprehensive
Analysis
Serafym Voloshyn The Future of Actuarial Viktor Yuskovych Anna Nypadymka 169
Science
Anton Vorobiov Using Machine Oksana Serheieva 171
Learning to Predict
Antidepressants
Treatment Outcome
Ruslana Yesypenko Applications and Iryna Kozubska 173
Future Prospects of
CRISPR/CAS9 Gene
Editing
Daria Zahorska Telemedicine and Natalia Kompanets 174
Remote Patient
Monitoring: Enhancing
Healthcare Delivery
through Digital Health
Technologies
ЗМІСТ
ІННОВАЦІЇ В ІНЖЕНЕРІЇ
Науковий керівник Викладач
іноземної мови
Віолета Ахмедова, Механічні Лариса Тарасова Ірина Козубська 3
Лариса Тарасова властивості
сухожилля.
Відновлення
сухожилля після
травми верхніх
кінцівок
Анна Безроднова, Кам’яний папір – Інна Антоненко 5
Олександра Гресь інноваційний
матеріал
Олексій Бичков Дослідження впливу Яна Зінгер Наталія Чіжова 6
зовнішньої
електроніки на
аеродинаміку
конвертованого
боєприпасу, зокрема
різних сенсорів та
антен
Крістіна Дахал Екологічність у Микола Наливайчук Олена Шепелєва 9
Формулі 1
Олег Гордій 3Д моделювання як Наталія Чіжова 10
наука
Артур Галинський Кібер-фізична Оксана Корбут 11
трансформація в
індустрії 21 століття
Андрій Горбуль, Інновації в авіаційній Микола Наливайчук Олена Шепелєва 13
Кирило Бичко інженерії:
трансформація
обличчя бойової
переваги та безпеки
в сучасних збройних
силах
Нікіта Горобець Майбутнє оптичних Наталія Чіжова 14
телескопів
Дмитро Гук, Майбутнє радіо Оксана Сергеєва 16
Іван Домашенко інженерії
Дмитро Іваницький Адитивне Ірина Литовченко 17
виробництво (3D
друк): нова ера в
сучасних
технологіях
Ярослав Каліберда Застосування Ірина Литовченко 19
штучного інтелекту
в автоматизації
виробництва
Валентина Калитюк Інновації у Тетяна Маслова 20
виробництві
перовскітових
сонячних елементів
Максим Клименко Нове ППО України Ірина Новікова 21
Надія Костенко Вплив розумного Олена Шепелєва 23
одягу на наше життя
Софія Кулик Задача тисячоліття: Станіслав Копилов Валентина 24
рівняння Навьє Лук’яненко
Стокса
Альона Кириленко Тенденції Олена Шепелєва 26
аерокосмічних
технологій
Костянтин Порівняння Олександр Лобунько Людмила Жигжитова 27
Ляховой, характеристик
Олександр хімічних та іонних
Петріченко реактивних двигунів
Владислав Логвиненко Суперкомп’ютери: Ірина Бойко 29
виключення сили
обчислювальної
досконалості
Тетяна Луговець Удосконалення Оксана Сергеєва 31
зварювальних
технологій для
оптимальної
цілісності з’єднання
Олександр Матошин Автоматизована Сергій Вислоух Світлана Мойсеєнко 32
система керування
процесом
свердління отворів у
деталях з
вуглепластику
Олексій Олійник Контроль Дмитро Конотоп Валентина Лук’яненко 34
граничного шару
над верхньою
поверхнею крила,
застосовуючи
видування
Юліан Петканич Електропорація як Ірина Козубська 36
нова технологія для
проведення абляцій
для лікування
аритмії
Руслан Прохорчук Роботи в космосі: Ірина Литовченко 38
від автоматизації до
робототехніки
Богдан Дослідження та Нікола Бенз Оксана Корбут 40
Шараєвський аналіз властивостей
полімерних
мембранних
матеріалів
Гліб Скопик Двомісячна Наталія Чіжова 42
ретроспектива
стосовно LK-99
Соломія Скоропляс Інженерна Дмитро Зінченко Валентина 44
досконалість: Лук’яненко
теоретичний підхід
до проектування
лонжерона легкого
літака
Сергій Трошкін Методи Сергій Поздєєв Юлія Ненько 45
моделювання
горизонтальних
кабельних тунелів за
допомогою Sketchup
Ігор Турчанінов Радіотехніка в В’ячеслав Чмелев Наталія Чіжова 47
сучасних системах
зв’язку
Уляна Віннікова Робототехніка в Андрій Петрашенко Олена Шепелєва 49
медицині
Павло Яковчук, Порівняльний аналіз Сергій Шукаєв Оксана Корбут 50
Ілля Немикін, традиційних та
В’ячеслав сучасних моделей
Малинський критичної площини
при багатовісній
втомі
Ярослав Ярошовець Біонічні протези: Євгенія Сулема Олена Шепелєва 52
покращення життя
Даніл Євдокімов Адаптивна система Яна Зінгер Наталія Чіжова 54
комп’ютерного зору
для виявлення та
супроводження
нетипових цілей і
оптичної навігації
безпілотних систем
ХІМІЯ ТА ХІМІЧНІ ТЕХНОЛОГІЇ
Науковий керівник Викладач
іноземної мови
Владислав Бєлік Каталіз оксиду Ірина Козубська 57
магнію/калію в
гетерогенних
газофазних
видаленнях:
розкриття
реакційної здатності
та селективності
Костянтин Хімія олійних фарб Інна Антоненко 58
Грищенко
Євгеній Костенко Адсорбція фторид- Юлія Олізько 59
іонів
використовуючи
залізовмісні відходи
Юлія Молчан Порівняльне Тетяна Донцова Юлія Олізько 61
дослідження різних
типів SIC
керамічних мембран
для води
Вікторія Залежність якості Світлана Фроленкова Юлія Олізько 62
Пригалінська, цинкового покриття
Світлана та швидкості його
Фроленкова формування від
складу електроліту
Андрій Ульяненко Перспективи Андрій Кушко Олена Волкова 64
використання
утилізованих
вибухових речовин
Вікторія Євпак Підвищення ударної Любов Мельник Юлія Олізько 65
міцності матеріалів
на основі
надвисокомолекулярного поліетилену
ЕНЕРГЕТИЧНІ ТЕХНОЛОГІЇ
Науковий Викладач
керівник іноземної мови
Андрій Чекурда Технологія Олена Шепелєва 68
перетворення
відходів у енергію:
трансформація
сміття в енергію
Іванна Довбиш Проблеми Лариса Свиридова 69
використання
сонячних панелей
як джерело енергії
для БпЛА
Даниїл Дорош Молекулярно- Іван Войтешенко Олена Писарчик 71
тепловий лінійний
електрогенератор
Євгенія Глущенко Атомні Юлія Олізько 72
електростанції
України під час
Російсько-
української війни
Микола Гребенюк Резервна Олександр Гаєвський Тетяна Маслова 74
фотоелектрична
станція із водневим
акумулюванням
енергії
Руслан Кеда Використання Тетяна Маслова 75
оптимізаторів для
підвищення
ефективності
сонячних
електростанцій
Тарас Козюпа Кроки щодо Юлія Вишневська Тетяна Маслова 76
досягнення статусу
будівлі “Будівля з
майже нульовим
рівнем споживання
енергії”
Дмитро Мельник Роль штучного Вікторія Чмель 78
інтелекту у
модернізації
сучасних
українських
електромереж Smart
grid: напрямки
розвитку та
ефективність
Анна Небесна Термоядерний Наталія Чіжова 80
синтез: майбутнє
відновлювальної
енергії
Віктор Петрушин, Накопичувальні Микола Наливайчук Олена Шепелєва 82
Вікторія Ус системи з вживаних
батарей від
електрокарів
Даніл Трофимов Оцінка впливу Євгенія Сулема Олена Шепелєва 83
енергоефективних
систем освітлення в
комерційних
будівлях
Світлана Ярощук Використання Олена Бориченко Мар’яна Шевченко 84
відновлюваних
джерел енергії в
будівництві та
інфраструктурі
Анастасія Компанія Мар’яна Шевченко 86
Євдокімова «Екофлоу» та її
продукція як
енергозберігаючі
технології
Нікіта Зарубін Розвиток Тетяна Маслова 87
всепогодних
сонячних батарей:
комплексне
дослідження
Єгор Зимовець Способи економії Олена Шепелєва 89
електроенергії
ЕЛЕКТРОЕНЕРГОТЕХНІКА
Науковий Викладач
керівник іноземної мови
Вячеслав Підтримання Володимир Баженов Тетяна Маслова 91
Оконечніков, балансу потужності
Ігор Рій об’єднаної
енергосистеми в
умовах зовнішніх
збурень
АВТОМАТИЗАЦІЯ ЕЛЕКТРОМЕХАНІЧНИХ СИСТЕМ
Науковий Викладач
керівник іноземної мови
Микита Черняєв Автоматизація Тетяна Маслова 93
електромеханічних
систем:
використання
асинхронних
двигунів в
електромеханічних
системах
Дмитро Ляшко Застосування Андрій Булашенко Наталія Чіжова 94
концепції інтернету
речей та туманного
обчислення для
розумних будинків
Дмитро Шляга Огляд та порівняння Сергій Пересада Тетяна Маслова 96
методів DFOC та
IFOC для приводу
асинхронних
двигунів
ЕЛЕКТРОНІКА
Науковий Викладач
керівник іноземної мови
Олексій Онасенко Історія розвитку Наталія Чіжова 98
мікроархітектури
процесорів
Владислав Система для Олена Шепелєва 99
Підручний дистанційного
запуску двигуна
Натан Смолій ФІМ декодер як Микола Шинкевич Оксана Сергеєва 100
метод полегшення
роботи з сигналами
ПРИРОДНИЧІ НАУКИ
Науковий керівник Викладач
іноземної мови
Яна Бачинська Генна терапія Тетяна Луценко Ірина Козубська 103
CRISPR-CAS9 для
рідкісних
генетичних
захворювань
Анастасія Виробництво Тетяна Луценко Наталія Компанець 104
Барановська ранозагоюючих
пластирів на основі
біочорнил
Ольга Боровик Лінза зі змінними Олександр Оксана Сергеєва 106
оптичними Мірошниченко,
характеристиками Володимир Амосов
Єлизавета Чичук Математика як Олена Муханова 107
головний ланцюг
сучасної цивілізації
Владислав Фролов, Як пересікти Лінію Микола Наливайчук Олена Шепелєва 109
Надія Щербина Кармана
Андрій Гончаров Вплив вправ на Наталія Компанець 111
пластичність мозку
у дорослих
Анастасія Патофізіологія Олена Шепелєва 113
Грабовська депресії
Ілля Гуйван Біомаркери та їх Наталія Компанець 114
використання в
діагностиці та
прогнозуванні
захворювань
Костянтин Ефекти від куріння Наталія Компанець 115
Гломозда електронних цигарок
Ілля Голубєв, Розуміння Таїсія Шумська, 117
Юлія Данильчик алергічних реакцій Ірина Козубська
та анафілаксії:
тригери, симптоми
та діагностика
Марія Гончаренко Роль ультразвуку у Валентина Мотроненко Ірина Козубська 119
виробництві
інтерферону
Антон Гусельніков Біонічні протези Микола Наливайчук Олена Шепелєва 120
Остап Калапунь Роль аналізу Наталія Компанець 121
диференціальної
експресії у відкритті
ліків
Дмитро Харинка Біотехнології в Світлана Волкова 123
харчовій продукції
Артем Хільчук Нейрофізіологічні Ірина Бойко 125
ефекти тренування
N-back
Мар’яна Хвістані Фізична реабілітація Ірина Козубська 127
для військовослужбовців
Збройних Сил
України
Дмитро Книш Ядерні відходи: Ірина Бойко 128
чому б не
захоронити їх у
космосі?
Поліна Ковальчук Вибір найкращого Тетяна Луценко Ірина Козубська 129
скаффолду для
створення моделі
«мозок на чіпі»
Владислав Генетичні Михайло Буриченко Тетяна Войтко 131
Ковальчук особливості донора і
реципієнта, як
визначальний
фактор сумісності
при трансплантації
органів
Оксана Країло Застосування Олена Голембіовська Ірина Козубська 134
флуоресцентних
білків у
біохімічному аналізі
Дмитро Кучеренко Альтернативні Наталія Чіжова 135
способи куріння
менш шкідливі, ніж
сигарети?
Анна Кущ Чи дійсно Людмила Зубченко Олександра Бондаренко 137
електромобілі
екологічні?
Андрій Москалюк Трансгуманізм: Наталія Компанець 138
дослідження
взаємозв’язку
людства та
технологій
Максим Ніколюк Електронні медичні Оксана Дацюк Лідія Шиліна 140
записи та
біофармацевтичні
дослідження
Андрій Овсієнко Штучний інтелект у Наталія Компанець 141
біотехнології
Максим Павлюшин Розуміння сили Євгенія Суліма Олена Шепелєва 143
Коріоліса та її
значення
Анастасія Прозор Таргетована Тетяна Луценко Ірина Козубська 144
імунотерапія
слизової кишечника
для лікування
імуноглобуліно-А нефропатії
Анастасія Прозор, Огляд різних Олена Беспалова Ірина Козубська 146
Поліна Ковальчук методів створення
шкіри-на-чипі
Катерина Рачек Технологія активації Оксана Горобець Анна Нипадимка 148
магнітних іонних
каналів: управління
клітинними
процесами в живих
організмах
Олена Ралко, Бразильська Інна Антоненко 150
Наталія Гищак традиційна
медицина: аспекти
та перспективи
Дмитро Русєв Екоцид як Світлана Волкова 151
екологічна
катастрофа на тлі
злочинної агресії
росії на території
України
Олександр Рибка 3D біодрук Світлана Волкова 153
Діана Саморай Лазерна терапія в Євген Настенко Таїсія Шумська 156
лікуванні акне
Олексій Штучний інтелект Наталія Компанець 157
Самойленко та персоналізована
медицина
Юрій Северин Аеропоніка: Євгенія Сулема Олена Шепелєва 158
майбутнє
фермерства
Ельвіра Шемена Порівняльний аналіз Ірина Козубська 159
сучасних
гідрогелевих систем
для регенерації зубів
після карієсу
Катерина Щур, Екологія та охорона Світлана Волкова 161
Світлана Волкова природи: стратегії
ефективного
управління
ресурсами
Тетяна Сіряк Моделювання Катерина Гавриленко 163
процесу росту
тонких плівок GaAs
Наталія Снігур Вплив ультразвуку Тетяна Луценко Ірина Козубська 164
на біосинтетичну
активність та
структуру
бактеріальних
клітин
Єлизавета Дослідження Йонаш Сломка Ірина Козубська 166
Токарчук бактеріального
негативного
хемотаксису за
допомогою аналізу
хемотаксису In Situ
(ISCA)
Анастасія Цуркан Удосконалення Ірина Бойко 168
імунотерапевтичних
підходів до
лікування раку:
комплексний аналіз
Серафим Волошин Майбутнє актуарної Віктор Юськович Анна Нипадимка 169
науки
Антон Воробйов Використання Оксана Сергеєва 171
машинного
навчання для
прогнозування
результату
лікування
антидепресантами
Руслана Єсипенко Застосування та Ірина Козубська 173
майбутні
перспективи
редагування генів за
допомогою
CRISPR/CAS9
Дар’я Загорська Телемедицина та Наталія Компанець 174
віддалений
моніторинг
пацієнтів:
покращення надання
медичної допомоги з
використанням
цифрових
технологій охорони
здоров’я