Maturation of Artificial Intelligence (1943-1952)



o Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts, who proposed a model
of artificial neurons.
o Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons, now
called Hebbian learning (a compact form of the rule is sketched just after this list).
o Year 1950: Alan Turing, an English mathematician and computing pioneer,
published "Computing Machinery and Intelligence", in which he proposed a test of a machine's ability to exhibit
intelligent behaviour equivalent to that of a human, now called the Turing test.
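
In modern notation (a standard textbook rendering, not part of the original article), Hebbian learning strengthens the connection weight between two neurons in proportion to how often they are active at the same time:

    \Delta w_{ij} = \eta \, x_i \, x_j

Here w_{ij} is the connection strength between neurons i and j, x_i and x_j are their activations, and \eta is a small learning rate. Informally: neurons that fire together wire together.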
The birth of Artificial Intelligence (1952-1956)
o Year 1955: Allen Newell and Herbert A. Simon created the Logic Theorist, generally regarded as the first artificial intelligence program.
It proved 38 of 52 mathematical theorems and found new, more elegant proofs for some of them.
o Year 1956: The word "Artificial Intelligence" first adopted by American Computer scientist J
the first time, AI coined as an academic field.

At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very
high.

The golden years – early enthusiasm (1956-1974)


o Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the
first chatbot, ELIZA, in 1966.
o Year 1972: The first intelligent humanoid robot, WABOT-1, was built in Japan.

The first AI winter (1974-1980)


o The period from 1974 to 1980 was the first AI winter. An AI winter refers to a period in which computer scientists dealt
with a severe shortage of government funding for AI research.
o During AI winters, public interest in artificial intelligence declined.

A boom of AI (1980-1987)
o Year 1980: After the first AI winter, AI returned in the form of "expert systems": programs that emulate the decision-
making ability of a human expert.
o Also in 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.

The second AI winter (1987-1993)


o The period from 1987 to 1993 was the second AI winter.
o Investors and governments again stopped funding AI research because of the high cost and disappointing results, even though expert systems such as
XCON had been very cost effective.
The emergence of intelligent agents (1993-2011)
o Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to defeat a reigning world
chess champion.
o Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
o Year 2006: AI reached the business world; companies like Facebook, Twitter, and Netflix began using AI.

Deep learning, big data and artificial general intelligence (2011-present)


o Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex questions and riddles.
Watson proved that it could understand natural language and solve tricky questions quickly.
o Year 2012: Google launched the Android feature "Google Now", which could provide information to the user as predictions.
o Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous Turing test.
o Year 2018: The "Project Debater" from IBM debated on complex topics with two master deb
o Also in 2018, Google demonstrated "Duplex", a virtual assistant that booked a hairdresser's appointment over the phone; the
person on the other end did not notice that she was talking to a machine.

AI has now developed to a remarkable level. Deep learning, big data, and data science are booming, and
companies like Google, Facebook, IBM, and Amazon are working with AI to create impressive products. The future of artificial intelligence is
inspiring and promises ever higher levels of intelligence.


The quest for artificial intelligence (AI) began over 70 years ago, with the idea that computers would one day be able to think like
us. Ambitious predictions attracted generous funding, but after a few decades there was little to show for it.

But, in the last 25 years, new approaches to AI, coupled with advances in technology, mean that we may now be on the brink of realising
those pioneers’ dreams.

1943

WW2 triggers fresh thinking


World War Two brought together scientists from many disciplines, including the emerging fields of neuroscience and computing.

In Britain, mathematician Alan Turing and neurologist Grey Walter were two of the bright minds who tackled the challenges of intelligent
machines. They traded ideas in an influential dining society called the Ratio Club. Walter built some of the first ever robots. Turing went on to
invent the so-called Turing Test, which set the bar for an intelligent machine: a computer that could fool someone into thinking they were
talking to another person.
Watch Grey Walter’s nature-inspired 'tortoise'. It was the world’s first mobile, autonomous robot. Clip from Timeshift (BBC Four, 2009).

1950

Science fiction steers the conversation


In 1950, I, Robot was published – a collection of short stories by science fiction writer Isaac Asimov.

Asimov was one of several science fiction writers who picked up the idea of machine intelligence, and imagined its future. His work was
popular, thought-provoking and visionary, helping to inspire a generation of roboticists and scientists. He is best known for the Three Laws of
Robotics, designed to stop our creations turning on us. But he also imagined developments that seem remarkably prescient – such as a
computer capable of storing all human knowledge, which anyone could ask any question.
See Isaac Asimov explain his Three Laws of Robotics to prevent intelligent machines from turning evil. Clip from Timeshift (BBC Four, 2009).

1956

A 'top-down' approach
The term 'artificial intelligence' was coined for a summer conference at Dartmouth College, organised by a young computer scientist, John
McCarthy.

Top scientists debated how to tackle AI. Some, like influential academic Marvin Minsky, favoured a top-down approach: pre-programming a
computer with the rules that govern human behaviour. Others preferred a bottom-up approach, such as neural networks that simulated brain
cells and learned new behaviours. Over time Minsky's views dominated, and together with McCarthy he won substantial funding from the US
government, which hoped AI might give it the upper hand in the Cold War.

Marvin Minsky founded the Artificial Intelligence Laboratory at Massachusetts Institute of Technology (MIT).

1968
2001: A Space Odyssey – imagining where AI could lead
Minsky influenced science fiction too. He advised Stanley Kubrick on the film 2001: A Space Odyssey, featuring an intelligent computer, HAL
9000.

During one scene, HAL is interviewed on the BBC talking about the mission and says that he is "fool-proof and incapable of error." When a
mission scientist is interviewed he says he believes HAL may well have genuine emotions. The film mirrored some predictions made by AI
researchers at the time, including Minsky, that machines were heading towards human level intelligence very soon. It also brilliantly captured
some of the public’s fears, that artificial intelligences could turn nasty.
Watch thinking machine HAL 9000’s interview with the BBC. From 2001: A Space Odyssey (Stanley Kubrick, MGM 1968)

1969

Tough problems to crack


AI was lagging far behind the lofty predictions made by advocates like Minsky – something made apparent by Shakey the Robot.

Shakey was the first general-purpose mobile robot able to make decisions about its own actions by reasoning about its surroundings. It built
a spatial map of what it saw, before moving. But it was painfully slow, even in an area with few obstacles. Each time it nudged forward,
Shakey would have to update its map. A moving object in its field of view could easily bewilder it, sometimes stopping it in its tracks for an
hour while it planned its next move.

Researchers spent six years developing Shakey. Despite its relative achievements, a powerful critic lay in wait in the UK.

1973

The AI winter
By the early 1970s AI was in trouble. Millions had been spent, with little to show for it.
There was strong criticism from the US Congress and, in 1973, leading mathematician Professor Sir James Lighthill gave a damning health
report on the state of AI in the UK. His view was that machines would only ever be capable of an "experienced amateur" level of chess.
Common sense reasoning and supposedly simple tasks like face recognition would always be beyond their capability. Funding for the
industry was slashed, ushering in what became known as the AI winter.

John McCarthy was incensed by the Lighthill Report. He flew to the UK and debated its findings with Lighthill on a BBC Television live
special.

1981

A solution for big business


The moment that historians pinpoint as the end of the AI winter was when AI's commercial value started to be realised, attracting new
investment.

The new commercial systems were far less ambitious than early AI. Instead of trying to create a general intelligence, these ‘expert systems’
focused on much narrower tasks. That meant they only needed to be programmed with the rules of a very particular problem. The first
successful commercial expert system, known as R1, began operation at the Digital Equipment Corporation, helping to configure orders for
new computer systems. By 1986 it was saving the company an estimated $40m a year.
Ken Olsen, founder of Digital Equipment Corporation, was among the first business leaders to realise the commercial benefit of AI.
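
To make the contrast with general intelligence concrete, here is a minimal, hypothetical sketch of the kind of if-then rule matching an expert system relies on. The component names and rules below are invented for illustration; they are not drawn from R1's actual knowledge base, which contained thousands of hand-written rules of this general shape.

    # Minimal forward-chaining sketch of an expert system for configuring an order.
    # Facts and rules are illustrative only, not R1's real knowledge base.

    facts = {"cpu": "VAX-11/780", "memory_boards": 2, "has_power_supply": False}

    # Each rule pairs a condition on the facts with an action that updates them.
    rules = [
        (lambda f: f["memory_boards"] > 1 and not f.get("has_backplane"),
         lambda f: f.update(has_backplane=True)),
        (lambda f: not f["has_power_supply"],
         lambda f: f.update(has_power_supply=True)),
    ]

    # Keep applying rules until no rule changes the facts (forward chaining).
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(facts):
                before = dict(facts)
                action(facts)
                changed = changed or facts != before

    print(facts)  # the completed configuration

All of the "intelligence" lives in narrowly scoped rules like these, which is why such systems were comparatively cheap to build for one well-defined problem and useless outside it.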

1990

Back to nature for 'bottom-up' inspiration
Expert systems couldn't crack the problem of imitating biology. Then AI scientist Rodney Brooks published a new paper: Elephants Don’t
Play Chess.

Brooks was inspired by advances in neuroscience, which had started to explain the mysteries of human cognition. Vision, for example,
needed different 'modules' in the brain to work together to recognise patterns, with no central control. Brooks argued that the top-down
approach of pre-programming a computer with the rules of intelligent behaviour was wrong. He helped drive a revival of the bottom-up
approach to AI, including the long unfashionable field of neural networks.
Rodney Brooks became director of the MIT Artificial Intelligence Laboratory, a post once held by Marvin Minsky.

1997

Man vs machine: Fight of the 20th Century
Supporters of top-down AI still had their champions: supercomputers like Deep Blue, which in 1997 took on world chess champion Garry
Kasparov.

The IBM-built machine was, on paper, far superior to Kasparov - capable of evaluating up to 200 million positions a second. But could it think
strategically? The answer was a resounding yes. The supercomputer won the contest, dubbed 'the brain's last stand', with such flair that
Kasparov believed a human being had to be behind the controls. Some hailed this as the moment that AI came of age. But for others, this
simply showed brute force at work on a highly specialised problem with clear rules.
Find out why Deep Blue "thinks like God" according to Garry Kasparov. Clip from Andrew Marr’s History of the World (BBC One, 2012).

2002

The first robot for the home


Rodney Brooks' spin-off company, iRobot, created the first commercially successful robot for the home – an autonomous vacuum cleaner
called Roomba.

Cleaning the carpet was a far cry from the early AI pioneers' ambitions. But Roomba was a big achievement. Its few layers of behaviour-
generating systems were far simpler than Shakey the Robot's algorithms, and were more like Grey Walter’s robots over half a century
before. Despite relatively simple sensors and minimal processing power, the device had enough intelligence to reliably and efficiently clean a
home. Roomba ushered in a new era of autonomous robots, focused on specific tasks.
The Roomba vacuum has cleaned up commercially – over 10 million units have been bought across the world.

2005

War machines
Having seen their dreams of AI in the Cold War come to nothing, the US military was now getting back on board with this new approach.

They began to invest in autonomous robots. BigDog, made by Boston Dynamics, was one of the first. Built to serve as a robotic pack animal
in terrain too rough for conventional vehicles, it has never actually seen active service. iRobot also became a big player in this field. Their
bomb disposal robot, PackBot, marries user control with intelligent capabilities such as explosives sniffing. Over 2000 PackBots have been
deployed in Iraq and Afghanistan.

The legs of BigDog contain a number of sensors that enable each limb to move autonomously when it walks over rough terrain.
2008

Starting to crack the big problems


In November 2008, a small feature appeared on the new Apple iPhone – a Google app with speech recognition.

It seemed simple. But this heralded a major breakthrough. Despite speech recognition being one of AI's key goals, decades of investment
had never lifted it above 80% accuracy. Google pioneered a new approach: thousands of powerful computers, running parallel neural
networks, learning to spot patterns in the vast volumes of data streaming in from Google's many users. At first it was still fairly inaccurate but,
after years of learning and improvements, Google now claims it is 92% accurate.

According to Google, its speech recognition technology had an 8% word error rate as of 2015.

2010

Dance bots
At the same time as massive mainframes were changing the way AI was done, new technology meant smaller computers could also pack a
bigger punch.

These new computers enabled humanoid robots, like the NAO robot, which could do things predecessors like Shakey had found almost
impossible. NAO robots used lots of the technology pioneered over the previous decade, such as learning enabled by neural networks. At
Shanghai's 2010 World Expo, some of the extraordinary capabilities of these robots went on display, as 20 of them danced in perfect
harmony for eight minutes.
Find out how close we are to enabling robots to learn with mathematician Marcus Du Sautoy. Clip from Horizon: The Hunt for AI (BBC Two,
2012).

2011
Man vs machine: Fight of the 21st Century
In 2011, IBM's Watson took on the human brain on US quiz show Jeopardy.

This was a far greater challenge for the machine than chess. Watson had to answer riddles and complex questions. Its makers used a
myriad of AI techniques, including neural networks, and trained the machine for more than three years to recognise patterns in questions and
answers. Watson trounced its opposition – the two best performers of all time on the show. The victory went viral and was hailed as a
triumph for AI.

Watson is now used in medicine. It mines vast sets of data to find facts relevant to a patient’s history and makes recommendations to
doctors.

2014

Are machines intelligent now?


Sixty-four years after Turing published his idea of a test that would prove machine intelligence, a chatbot called Eugene Goostman finally
passed.

But very few AI experts saw this as a watershed moment. Eugene Goostman was seen as 'taught to the test', using tricks to fool the judges. It
was other developments in 2014 that really showed how far AI had come in 70 years. From Google's billion dollar investment in driverless
cars, to Skype's launch of real-time voice translation, intelligent machines were now becoming an everyday reality that would change all of
our lives.
Across four states in America it is legal for driverless cars to take to the road.
