
Name : Yoana Devita Natasha

NRP : 02111940000073
Short summary of the guest lecture at GLS on SDGs 2021
How Artificial Intelligence Changes the Future

There are four distinct industrial revolutions that the world has either experienced or continues to experience today. The first industrial revolution happened in the early 1800s. During this period, manufacturing evolved from manual labor performed by people, aided by work animals, to a more optimized form of labor performed by people with the help of water- and steam-powered engines and other machine tools. In the last part of the 19th century, the world entered a second industrial revolution with the introduction of steel and the use of electricity in factories. The introduction of electricity enabled manufacturers to increase efficiency and helped make factory machinery more mobile. It was during this phase that mass-production concepts like the assembly line were introduced as a way to boost productivity. In the latter half of the 20th century, a third industrial revolution slowly began to emerge, as manufacturers began incorporating more electronic, and eventually computer, technology into their factories. During this period, manufacturers began experiencing a shift that put less emphasis on analog and mechanical technology and more on digital technology and automation software. Over the past few decades, a fourth industrial revolution has emerged, known as Industry 4.0. Industry 4.0 takes the emphasis on digital technology from recent decades to a whole new level with the help of interconnectivity through the Internet of Things (IoT), access to real-time data, and the introduction of cyber-physical systems.

As presented by Prof. Mohd Najib Bin Mohd Yasin, the central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception, and the ability to move and manipulate objects. General intelligence is among the field's long-term goals. Intelligence involves the ability to interact with the real world: to perceive, understand, and act (e.g., speech recognition, understanding and synthesis, image understanding, and the ability to take actions that have an effect). Intelligence is also about reasoning and planning, that is, modeling the external world from given input, solving new problems, planning, making decisions, and dealing with unexpected problems and uncertainties. Last but not least, intelligence involves learning and adaptation: we are continuously learning and adapting because our internal models are always being "updated".

The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943, when they proposed a model of artificial neurons. Then, in 1956, the term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference, and for the first time AI was established as an academic field. McCarthy organized a two-month workshop for researchers interested in neural networks and the study of intelligence, and an agreement was finally reached to adopt a new name for this field of study: Artificial Intelligence. The period 1952-1969 was the golden years of early enthusiasm for AI. Arthur Samuel, a computer scientist, developed a checkers-playing computer program, the first to independently learn how to play a game. At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high. By 1970, however, researchers realised that such claims were too optimistic. Although a few AI programs could demonstrate some level of machine intelligence on one or two toy problems, almost no AI project could deal with a wider selection of tasks or more difficult real-world problems. Many of the problems that AI attempted to solve were too broad and too difficult. By 1966-1974 a reality check had set in: the euphoria about AI was gone, and most government funding for AI projects was cancelled. AI was still a relatively new field, academic in nature, with few practical applications apart from playing games, so to an outsider its achievements looked like toys, as no AI system at that time could manage real-world problems. The years 1969-1979 saw the birth of knowledge-based or expert systems; the idea was to give AI systems lots of information to start with. In 1980-1988, AI progressed in industry: R1 became the first successful commercial expert system, and some interesting phone-company systems for diagnosing failures of telephone service appeared. In 1997, IBM's Deep Blue beat the world chess champion, Garry Kasparov, becoming the first computer to beat a world chess champion. From the 1990s to the present, computational power has kept increasing (computers are cheaper, faster, and have far more memory than they used to), and computer chess is one example of what this speed enables. AI has now developed to a remarkable level, and concepts such as deep learning, big data, and data science are booming.

At its core, machine learning is simply a way of achieving AI. Machine learning is an application of artificial intelligence (AI) that enables systems to learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it for their own learning. There are four types of machine learning: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Several kinds of problems are solved using machine learning. The first is classification: a computer program is trained on a training dataset and, based on the training, it categorizes the data into different class labels. Classification algorithms are used to predict discrete values such as male|female, true|false, spam|not spam, etc. The second is anomaly detection, a method used to detect something that does not fit the normal behavior of a dataset; in other words, anomaly detection finds data points in a dataset that deviate from the rest of the data. The third is regression: the task of a regression algorithm is to find the mapping function that maps input variables (x) to a continuous output variable (y). Regression algorithms are used to predict continuous values such as price, salary, age, marks, etc. The fourth is clustering, an unsupervised classification; this is exploratory data analysis with no labelled data available. With clustering, we separate unlabeled data into finite and discrete sets of data structures that are natural and hidden. The last is reinforcement learning, a type of machine learning in which the model learns to behave in an environment by performing actions and analyzing the reactions.
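As a brief illustration of the classification, regression, and clustering problems described above, the short Python sketch below uses scikit-learn on tiny made-up datasets; the toy data, feature values, and model choices are illustrative assumptions, not examples given in the lecture.

    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression
    from sklearn.cluster import KMeans

    # Classification: predict a discrete label (e.g. spam / not spam)
    # from two toy numeric features.
    X_cls = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
    y_cls = np.array([0, 0, 1, 1])            # 0 = not spam, 1 = spam
    clf = LogisticRegression().fit(X_cls, y_cls)
    print(clf.predict([[0.85, 0.75]]))        # expected: [1]

    # Regression: predict a continuous value (e.g. price) from one feature.
    X_reg = np.array([[1.0], [2.0], [3.0], [4.0]])
    y_reg = np.array([10.0, 20.0, 30.0, 40.0])
    reg = LinearRegression().fit(X_reg, y_reg)
    print(reg.predict([[5.0]]))               # expected: about [50.0]

    # Clustering: group unlabeled points into two clusters.
    X_clu = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_clu)
    print(km.labels_)                         # e.g. [0 0 1 1]
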
Swarm Intelligence in Communication Engineering

Swarm intelligence (SI), an integral part of the field of artificial intelligence, is gradually gaining prominence as more and more high-complexity problems require solutions that may be sub-optimal yet achievable within a reasonable period of time. Mostly inspired by biological systems, swarm intelligence adopts the collective behaviour of an organized group of animals as they strive to survive. SI has been shown to outperform the original gradient methods and conventional numerical techniques for several reasons: it makes no assumptions about the problem being optimized, it can find high-quality solutions by balancing exploration and exploitation, it does not need gradient information about the problem being optimized, and it is simple and easy to implement. The first SI algorithms were proposed in the 1990s, and their applications have now been widely examined and are relatively mature, including ACO (Ant Colony Optimization) in 1992 and PSO (Particle Swarm Optimization) in 1995. SI algorithms proposed from 2000 to 2010 include Bacterial Foraging Optimization (BFO) in 2002, the artificial fish swarm algorithm in 2002, the artificial bee colony (ABC) algorithm in 2006, and the firefly algorithm (FA) in 2008. Newer and promising SI algorithms include pigeon-inspired optimization (PIO) in 2014, the grey wolf optimizer (GWO) in 2014, and the butterfly optimization algorithm (BOA) in 2015.
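As a minimal sketch of how a swarm balances exploration and exploitation, the Python code below implements a basic particle swarm optimization (PSO) loop; the sphere test function, swarm size, and coefficient values (w, c1, c2) are illustrative assumptions rather than parameters from the lecture.

    import numpy as np

    def pso(objective, dim=2, n_particles=30, n_iter=100,
            w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
        # Minimal particle swarm optimization sketch (minimization).
        rng = np.random.default_rng(0)
        lo, hi = bounds
        pos = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
        vel = np.zeros((n_particles, dim))                    # particle velocities
        pbest = pos.copy()                                    # personal best positions
        pbest_val = np.array([objective(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)].copy()            # global best position
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            # Velocity update: inertia + cognitive (own best) + social (swarm best).
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)                  # exploration step
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val                       # exploitation: keep best finds
            pbest[improved] = pos[improved]
            pbest_val[improved] = vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, pbest_val.min()

    # Example: minimize the sphere function f(x) = sum(x_i^2); the optimum is 0 at the origin.
    best_x, best_f = pso(lambda x: float(np.sum(x ** 2)))
    print(best_x, best_f)
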

An antenna array is a combination of two or more antenna elements that can be placed in a specific geometry. In a linear antenna array, the antenna elements are placed along one axis. The antenna array produces a beam, and this beam can be shaped by changing the geometry (linear, circular, spherical, etc.) and by other parameters, i.e., the inter-element spacing, excitation amplitude, and excitation phase of the individual elements. Most wireless communication requires a more directive antenna with high gain, and an antenna array offers high gain, higher directivity, and spatial diversity. Antenna array synthesis has received attention in the recent past, where various performance goals were considered. For array synthesis, different optimization algorithms, e.g., simulated annealing, ant colony optimization, the GA, and the PSO, have been used in previous research studies. The optimization of a linear antenna array provides a pattern that minimizes the SLL (side-lobe level) and the HPBW (half-power beamwidth), e.g., by using the PSO and the GA. The composite differential evolution (CoDE) algorithm has been applied to optimize the inter-element spacing between two consecutive elements in order to minimize the SLL and to place nulls in the desired directions. A multi-objective optimization approach has been used in Ref. to maximize the directivity and to minimize the SLL of an antenna array in the optimization process. In Ref., a new memetic multi-objective evolutionary algorithm, the memetic generalized differential evolution (MGDE3) algorithm, was presented as an extension of the generalized differential evolution (GDE3) algorithm. In Ref., the presented technique utilized multi-objective functions to optimize the inter-element spacing, excitation currents, and excitation phases, as well as to minimize the SLL and HPBW. A real-coded genetic (RCG) algorithm was employed for a time-modulated linear antenna array to impose nulls in the desired direction by optimizing the spacing and excitation amplitude.
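To make the array-synthesis objective more concrete, the following Python sketch computes the array factor of a linear array and estimates its peak side-lobe level (SLL); the ten-element array, half-wavelength spacing, and uniform excitation are illustrative assumptions, not values given in the lecture, and an optimizer such as the PSO or GA would search over the spacings, amplitudes, or phases to drive this SLL down.

    import numpy as np

    def array_factor(theta, amplitudes, spacing_wl, phases=None):
        # Array factor of a linear array; spacing_wl is the element spacing
        # in wavelengths, theta the angle measured from the array axis.
        amplitudes = np.asarray(amplitudes, dtype=float)
        n = len(amplitudes)
        phases = np.zeros(n) if phases is None else np.asarray(phases, dtype=float)
        k_d = 2 * np.pi * spacing_wl                     # k*d with the wavelength set to 1
        idx = np.arange(n)[:, None]                      # element index along the axis
        psi = idx * k_d * np.cos(theta)[None, :] + phases[:, None]
        return np.abs(np.sum(amplitudes[:, None] * np.exp(1j * psi), axis=0))

    theta = np.linspace(0.0, np.pi, 3601)                # observation angles
    af = array_factor(theta, amplitudes=np.ones(10), spacing_wl=0.5)
    af_db = 20 * np.log10(np.maximum(af, 1e-12) / af.max())   # normalized pattern in dB

    # Rough SLL estimate: highest local maximum outside the main beam (below -0.5 dB).
    interior = af_db[1:-1]
    peaks = (interior > af_db[:-2]) & (interior > af_db[2:])
    sll_db = interior[peaks & (interior < -0.5)].max()
    print(f"Peak side-lobe level is about {sll_db:.1f} dB")  # roughly -13 dB for uniform excitation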
