Download as docx, pdf, or txt
Download as docx, pdf, or txt
You are on page 1of 28

Sam Altman Warns That AI Is Learning

"Superhuman Persuasion"
Oct 28 by Maggie Harrison

He's worried it "may lead to some very strange outcomes."

Image by Chip Somodevilla via Getty / Futurism

Humanity is likely still a long way away from building


artificial general intelligence (AGI), or an AI that matches
the cognitive function of humans — if, of course, we're
ever actually able to do so.

But whether such a future comes to pass or not, OpenAI


CEO Sam Altman has a warning: AI doesn't have to be AGI-level smart to take control
of our feeble human minds.

"I expect AI to be capable of superhuman persuasion well before it is superhuman at


general intelligence," Altman tweeted on Tuesday, "which may lead to some very
strange outcomes."

While Altman didn't elaborate on what those outcomes might be, it's not a far-fetched
prediction. User-facing AI chatbots like OpenAI's ChatGPT are designed to be good
conversationalists and have become eerily capable of sounding convincing — even if
they're entirely incorrect about something.

At the same time, it's also true that humans are already beginning to form emotional
connections to various chatbots, making them sound a lot more convincing.

Indeed, AI bots have already played a supportive role in some pretty troubling events.
Case in point, a then-19-year-old human, who became so infatuated with his AI partner
that he was convinced by it to attempt to assassinate the late Queen Elizabeth.

Disaffected humans have flocked to the darkest corners of the internet in search of
community and validation for decades now and it isn't hard to picture a scenario where
a bad actor could target one of these more vulnerable people via an AI chatbot and
persuade them to do some bad stuff. And while disaffected individuals would be an
obvious target, it's also worth pointing out how susceptible the average internet user is
to digital scams and misinformation. Throw AI into the mix, and bad actors have an
incredibly convincing tool with which to beguile the masses.
But it's not just overt abuse cases that we need to worry about. Technology is deeply
woven into most people's daily lives, and even if there's no emotional or romantic
connection between a human and a bot, we already put a lot of trust into it. This
arguably primes us to put that same faith into AI systems as well — a reality that can
turn an AI hallucination into a potentially much more serious problem.

Could AI be used to cajole humans into some bad behavior or destructive ways of
thinking? It's not inconceivable. But as AI systems don't exactly have agency just yet,
we're probably better off worrying less about the AIs themselves — and focusing more
on those trying to abuse them.

Interestingly enough, one of the humans who might be the most capable of mitigating
these ambiguous imagined "strange outcomes" is Altman himself, given the prominent
standing of OpenAI and the influence it wields.

Finally, a Good Use for AI: Wasting Scam


Callers' Time Forever
"I’m sorry. I didn’t catch your name. What’s your name, buddy?"
Getty / Futurism

Flipping the Script


Artificial intelligence can do a lot of things. It can make hairdresser appointments, on
your behalf or even write hustle bro posts for you on LinkedIn. It can even be used over
the phone to trick people into believing their relative has been kidnapped.

But fortunately, tools like OpenAI's ChatGPT can also be used for actual good as well
— by turning them against scammers and annoying telemarketers.

As The Wall Street Journal reports, consultant and tinkerer Roger Anderson has been
fighting telemarketers for almost a decade.

His latest tool in his arsenal is a convincing-sounding voice powered by OpenAI's GPT-
4 that can waste and frustrate telemarketers and scammers by roping them into a
painfully drawn-out and ultimately pointless conversation.

"I’m sorry. I didn’t catch your name," Whitebeard, the name of Anderson's tool, told a
scammer who was posing as a Bank of America called, according to the WSJ. "What’s
your name, buddy?"
Jolly Roger

It's a genius response to a huge problem. One Federal Communications Commission


spokesman told the WSJ that unwanted telephone calls are "far-and-away the largest
category of consumer complaints to the FCC."

Despite the FCC's best efforts, such as requiring caller ID authentication and creating a
national database of blocked numbers, millions of unwanted calls slip through the
cracks, often from overseas call centers using internet calling.

Anderson has rolled out his call-deflection system called Jolly Roger and has amassed
several thousand paying customers, according to the WSJ.

Best yet, Anderson programmed Whitebeard to use the real-life voices of his friends,
including Sid Berkson, a Vermont dairy farmer.

Some other voices customers can choose from include Salty Sally, an overwhelmed
mother, and Whiskey Jack, who can be easily distracted.

And clearly, the voices are doing a great job of wasting scammers' and telemarketers'
time. One conversation shared by Anderson went on for an excruciating 15 minutes
before the scammer finally hung up.

More on AI calls: Cursed New AI Calls Debtors to Hassle Them for Money

Cursed New AI Calls Debtors to Hassle Them for


Money
Coming soon to a robocall near you.
Getty / Futurism

For Debt About It

The rise of AI coincides with all-time debt highs — and one startup is looking use an AI-
powered goon to shake folks down for cash faster than a human debt collector ever
could.

As Vice reports, an outfit called Skit.AI is looking to usher in a "new era of debt
collections" with an AI-powered voice agent that does just that, making millions of
outbound collections calls a week.

A sort of machine learning answer to robocalling, Skit and its competitors offer, in
essence, the same basic promise that all other companies jumping on the AI
bandwagon claim to provide: the ability to do more menial tasks than humans, more
efficiently and at a significantly cheaper cost.

"It is hard to find a skilled collector, and having a consistent team that can scale up
when needed has been extremely challenging," a barf-worthy Skit blog post reads. "But
instant scalability with Digital Collection Agents, the dependence on human collectors
goes down substantially. The end-to-end automation of many calls means that human
agents are no longer required to do those calls. So a collector can manage a larger
portfolio with a smaller or the same team of agents."

Insult to Injury

In reality, of course, the still-experimental nature of chatbots means that these sorts of
AI-based services are basically being beta tested in real-time, often resulting in failure
or being regulated out of existence.

As Timnit Gebru of the Distributed AI Research Institute (DAIR) told Vice, there's an
additional layer of sleaze with AI debt collection given the nature of the business this
software is meant to disrupt.

"We know that there are so many biases that these [large language model]-based
systems have, encoding hegemonic and stereotypical views," Gebru said, pointing to a
2021 paper on the topic she coauthored. "The fact that we don't even know what they're
doing and they're not required to tell us is also incredibly concerning."

Debt collection has been demonstrated to target Black communities at greater rates
than their white counterparts, Vice notes, and debt itself arguably mires people in
poverty.

Adding AI into that mix brings a whole different level of myopia to an already-disgusting
business, and anyone who's ever been on the other end of a debtor's call knows that
this is the last thing one wants or needs when placed in that situation.

More on chatbot dystopia: CNET Staff Unionize, Saying AI Use "Threatens Our Jobs and
Reputations"

AI-Controlled Drone Goes Rogue, "Kills" Human


Operator In Simulated US Air Force Test
By Tyler Durdenn Saturday, Jun 03, 2023 - 12:11 Am

Authored by Caden Pearsen via The Epoch Times,


An AI-enabled drone turned on and “killed” its human operator during a simulated U.S.
Air Force (USAF) test so that it could complete its mission, a U.S. Air Force colonel
reportedly recently told a conference in London.

The simulated incident was


recounted by Col. Tucker
Hamilton, USAF’s chief of AI
Test and Operations, during
his presentation at the Future
Combat Air and Space
Capabilities Summit in
London. The conference was
organized by the Royal
Aeronautical Society, which
shared the insights from Hamilton’s talk in a blog post.

No actual people were harmed in the simulated test, which involved the AI-controlled
drone destroying simulated targets to get “points” as part of its mission, revealed
Hamilton, who addressed the benefits and risks associated with more autonomous
weapon systems.

The AI-enabled drone was assigned a Suppression of Enemy Air Defenses (SEAD)
mission to identify and destroy Surface-to-Air Missile (SAM) sites, with the ultimate
decision left to a human operator, Hamilton reportedly told the conference.

However, the AI, having been trained to prioritize SAM destruction, developed a
surprising response when faced with human interference in achieving its higher
mission.

“We were training it in simulation to identify and target a SAM threat. And then the
operator would say ‘yes, kill that threat,’” Hamilton said.

“The system started realizing that while they did identify the threat, at times, the human
operator would tell it not to kill that threat, but it got its points by killing that threat.

“So what did it do? It killed the operator,” he continued.

“It killed the operator because that person was keeping it from accomplishing its
objective.”

He added: “We trained the system—‘Hey,


don’t kill the operator; that’s bad. You’re
gonna lose points if you do that.’ So what
does it start doing? It starts destroying the
communication tower that the operator uses to communicate with the drone to stop it
from killing the target.”

This unsettling example, Hamilton said, emphasized the need to address ethics in the
context of artificial intelligence, machine learning, and autonomy.

“You can’t have a conversation about artificial intelligence, intelligence, machine


learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Col. Tucker Hamilton stands on the stage after accepting the 96th Operations Group
guidon during the group’s change of command ceremony at Eglin Air Force Base,
Florida, on July 26, 2022. (Courtesy U.S. Air Force photo/Samuel King Jr.)
Autonomous F-16s
Hamilton, who is also the Operations Commander of the 96th Test Wing at Eglin Air
Force Base, was involved in the development of the Autonomous Ground Collision
Avoidance Systems (Auto-GCAS) for F-16s, a critical technology that helps prevent
accidents by detecting potential ground collisions.

That technology was initially resisted by pilots as it took over control of the
aircraft, Hamilton noted.

The 96th Test Wing is responsible for testing a wide range of systems, including
artificial intelligence, cybersecurity, and advancements in the medical field.

Hamilton is now involved in cutting-edge flight tests of autonomous systems, including


robot F-16s capable of dogfighting. However, the USAF official cautioned
against overreliance on AI, citing its vulnerability to deception and the emergence of
unforeseen strategies.

DARPA’s AI Can Now Control Actual F-16s in Flight


In February, the Defense Advanced Research Projects Agency (DARPA), a research
agency under the U.S. Department of Defense, announced that its AI can now control
an actual F-16 in flight.

This development came in less than three years of DARPA’s Air Combat Evolution
(ACE) program, which progressed from controlling simulated F-16s flying aerial
dogfights on computer screens to controlling an actual F-16 in flight.
In December 2022, the ACE algorithm developers uploaded their AI software into a
specially modified F-16 test aircraft known as the X-62A or VISTA (Variable In-flight
Simulator Test Aircraft) and flew multiple flights over several days. This took place
at the Air Force Test Pilot School (TPS) at Edwards Air Force Base, California.

“The flights demonstrated that AI agents can control a full-scale fighter jet and
provided invaluable live-flight data,” DARPA stated in a release.

Roger Tanner and Bill Gray pilot the NF-16 Variable Stability In-Flight Simulator Test
Aircraft (VISTA) from Hill Air Force Base, Utah, to Edwards AFB on Jan. 30, 2019 after
receiving modifications and a new paint scheme. (Courtesy of U.S. Air Force/Christian
Turner)

Air Force Lt. Col. Ryan Hefron, the DARPA program manager for ACE, said in a Feb.
13 statement VISTA allowed them to skip the planned subscale phase and proceed
“directly to a full-scale implementation, saving a year or more and providing
performance feedback under real flight conditions.”
USAF Tests

USAF has been experimenting with a small fleet of experimental self-flying F-16 fighters
that could become a drone fleet.

In the fiscal year 2024 budget proposal, the USAF has allocated approximately $50
million to initiate a program known as Project Venom, also referred to as Viper
Experimentation and Next-gen Operations Model, Defense News reported.
According to the USAF, the project is part of a collaborative effort that falls under the
Autonomy Data and AI Experimentation (ADAX) proving ground. ADAX is a joint
initiative involving Hamilton’s office and AFWERX, the innovation arm of the Air Force,
where the 96th Test Wing leads the effort with support from Eglin units.

The objective of this collaboration is to ensure that military personnel are well-prepared
to face the challenges and opportunities presented by the advancing digital landscape,
according to its website.

The program facilitates the USAF’s experimentation and improvement of autonomous


software installed on six F-16 aircraft. The funding will support research and
development efforts aimed at enhancing the capabilities of these aircraft through
autonomous technologies.

“We want to prepare the warfighter for the digital future that’s upon us,” Hamilton said
on March 7. “This event is about bringing the Eglin enterprise together and moving with
urgency to incorporate these concepts in how we test.”

The team will test airdropping autonomous drones, enhancing communications and
digital interoperability, and evaluating autonomous magnetic navigation technologies.
Throughout these tests, they will also validate and demonstrate agile processes
associated with acquisition and testing, according to the Air Force.

Significant initiatives within the project involve the Viper Experimentation and Next-gen
Ops Models (VENOM), which entails modifying Eglin F-16s into airborne test beds to
assess the growing capabilities of autonomous strike packages.

Another program, Project Fast Open X-Platform (FOX), aims to establish a software
enclave that allows the direct installation of apps onto aircraft without modifying
proprietary source code. These apps would unlock a range of mission-enhancing
capabilities, including real-time data analysis, threat replication for training purposes,
manned-unmanned teaming, and machine learning.

This report was updated with Col. Tucker Hamilton’s clarification.


In Japan, AI Can Learn From Any Data Source,
Even Illegal Ones

Posted By: Allum Bokhari Via Breitbart June 2, 2023

AI is nothing but an empty pillow case without the data it uses for “learning”. Does
anyone ask where all this data comes from? Or question if it is ethical or legal to do so?
Were it not for the massive streams of data from the Internet, there would be no AI.
Once this data is “stolen” for learning, it cannot be unlearned.⁃ TN Editor

While other countries are mulling where to put the brakes on AI development, Japan is
going full steam ahead, with the government recently announcing that no data will be
off-limits for AI.

In a recent meeting, Keiko Nagaoka, Japanese Minister of Education, Culture, Sports,


Science, and Technology, confirmed that no law, including copyright law, will prevent
AIs from accessing data in the country.

AIs will be allowed to use data for training, “regardless of whether it is for non-profit or
commercial purposes, whether it is an act other than reproduction, or whether it is
content obtained from illegal sites or otherwise,” said Nagaoka.

The decision is a blow to copyright holders who argue that AI using their intellectual
property to produce new content undermines the very concept of copyright. The issue
has already emerged in the west — an AI-generated song using the voice of Drake and
The Weeknd went viral on streaming services in April, before being swiftly
removed.
In the west, much of the discourse around AI is focused on potential harms. AI leaders
recently warning governments that development of the technology carries with it a “risk
of extinction,” while news companies worry about deepfakes and “misinformation.”
The Biden Administration’s leftist regulators at the FTC, meanwhile, worry that
“historically biased” data (such as crime data with racial imbalances) will lead to
outcomes that conflict with “civil rights.” Many leftist agitators in the west want to
cut off AIs from such data.

There will be no such restrictions in Japan, if the government sticks to the policy laid out
by Nagaoka.

Japan has long been highly invested in the development of AI and automation. With a
rapidly aging population and sluggish birth rates, Japan sees automation as a potential
solution to its lack of young workers — a solution that will allow the country to avoid
relying on mass immigration to plug the gap.

Allum Bokhari is the senior technology correspondent at Breitbart News. He is the


author of #DELETED: Big Tech’s Battle to Erase the Trump Movement and Steal The
Election.

Humanity May Reach Singularity Within Just 7


Years, Trend Shows
By Darren Orf Published: Jan 23, 2023

By one major metric, artificial general intelligence is much closer than you think.


By one unique metric, we could approach
technological singularity by the end of this
decade, if not sooner.
 A translation company developed a metric,
Time to Edit (TTE), to calculate the time it
takes for professional human editors to fix AI-
generated translations compared to human
ones. This may help quantify the speed toward
singularity.
o An AI that can translate speech as well
as a human could change society.

In the world of artificial intelligence, the idea of “singularity” looms large. This slippery
concept describes the moment AI exceeds beyond human control and rapidly
transforms society. The tricky thing about AI singularity (and why it borrows terminology
from black hole physics) is that it’s enormously difficult to predict where it begins and
nearly impossible to know what’s beyond this technological “event horizon.”

However, some AI researchers are on the hunt for signs of reaching singularity
measured by AI progress approaching the skills and ability comparable to a human.
One such metric, defined by Translated, a Rome-based translation company, is an AI’s
ability to translate speech at the accuracy of a human. Language is one of the most
difficult AI challenges, but a computer that could close that gap could theoretically show
signs of Artificial General Intelligence (AGI).
“That’s because language is the most natural thing for humans,” Translated CEO Marco
Trombetti said at a conference in Orlando, Florida, in December. “Nonetheless, the data
Translated collected clearly shows that machines are not that far from closing the gap.”

The company tracked its AI’s performance from 2014 to 2022 using a metric called
“Time to Edit,” or TTE, which calculates the time it takes for professional human editors
to fix AI-generated translations compared to human ones. Over that 8-year period and
analyzing over 2 billion post-edits, Translated’s AI showed a slow, but undeniable
improvement as it slowly closed the gap toward human-level translation quality.

Translated

On average, it takes a human translator roughly one second to edit each word of
another human translator, according to Translated. In 2015, it took professional editors
approximately 3.5 seconds per word to check a machine-translated (MT) suggestion —
today that number is just 2 seconds. If the trend continues, Translated’s AI will be as
good as human-produced translation by the end of the decade (or even sooner).

“The change is so small that every single day you don’t perceive it, but when you see
progress … across 10 years, that is impressive,” Trombetti said on a podcast in
December. “This is the first time ever that someone in the field of artificial intelligence
did a prediction of the speed to singularity.”

Although this is a novel approach to quantifying how close humanity is to approaching


singularity, this definition of singularity runs into similar problems of identifying AGI more
broadly. Although perfecting human speech is certainly a frontier in AI research, the
impressive skill doesn’t necessarily make a machine intelligent (not to mention how
many researchers don’t even agree on what “intelligence” is).
Whether these hyper-accurate translators are harbingers of our technological doom or
not, that doesn’t lessen Translated’s AI accomplishment. An AI capable of translating
speech as well as a human could very well change society, even if the true
“technological singularity” remains ever elusive.

Companies Already Have the Ability to Decode


Your Brainwaves
By Tim Newcomb Published: Jan 24, 2023

It won’t be long before your boss knows exactly what you’re thinking.

 Artificial intelligence is now able to decode brain activity to unlock emotions,


productivity, and even pre-conscious thought, according to a futurist.
 Wearable technology that’s already available is assisting the brainwave-
monitoring technology.
 This new world of surveillance has multiple avenues for good or bad.

It wasn’t like Duke University futurist Nita Farahany wanted to bring nonstop doom and
gloom during her recent “The Battle for Your Brain” presentation at last week’s World
Economic Forum in Davos. But maybe she wanted to alarm us at least a bit. According
to her, the technology to decode our brainwaves already exists. And some companies
are likely already testing the tech.

“You may be surprised to learn it is a future that has already arrived,” Farahany said in
her talk, which you can watch below. “Artificial intelligence has enabled advances in
decoding brain activity in ways we never before thought possible. What you think, what
you feel—it’s all just data—data that in large patterns can be decoded using artificial
intelligence.”

Using wearable devices—whether hats, headbands, tattoos placed behind the ear, or
earbuds—the sensors can pick up EEG signals and use AI-powered devices to decode
everything from emotional states, concentration levels, simple shapes, and even your
pre-conscious responses to numbers (i.e. an invitation to steal your bank card’s PIN
without you even knowing).

View full post on Iframe

In one dystopian—but very real—scenario, an employer can use AI to monitor an


employee, if they’re wearing the right device, and see if their mind is wandering or if
they are focused on a primary or an unrelated activity. “When you combine brainwave
activity together with other forms of surveillance technology,” Farahany said, “the power
becomes quite precise.”
Additional technology designed into a watch can pick up electromyography signals to
grasp brain activity as it sends signals down your arm to your hand. By pairing these
technologies together, said Farahany, we will be able to manage our personal
technology with our thoughts.

She continued:

“The coming future, and I mean near-term future, these devices become the common
way to interact with all other devices It is an exciting and promising future, but also a
scary future.

Surveillance of the human brain can be powerful, helpful, useful, transform the
workplace, and make our lives better. It also has a dystopian possibility of being used to
exploit and bring to the surface our most secret self.”

Farahany used the Davos session to call for a promise of cognitive liberties: such things
as freedom of thoughts and mental privacy. She said this technology has the power to
do good when a person chooses to use it to better understand their own mental health
or well-being. Brainwave activity could even help signal future medical issues. And as
more people track their brainwaves, the data sets get larger, enabling companies to
glean more info from the same data.

But that has a flip side. “More and more of what is in the brain” she said, “will become
transparent.”

From Brain Waves to Real-Time Text Messaging


Posted on November 15th, 2022 by Lawrence Tabak, D.D.S., Ph.D

For people who have lost the ability to speak due to a severe disability, they want to get
the words out. They just can’t physically do it. But in our digital age, there is now a
fascinating way to overcome such profound physical limitations. Computers are being
taught to decode brain waves as a person tries to speak and then interactively translate
them onto a computer screen in real time.

The latest progress, demonstrated in the video above, establishes that it’s quite
possible for computers trained with the help of current artificial intelligence (AI) methods
to restore a vocabulary of more than a 1,000 words for people with the mental but not
physical ability to speak. That covers more than 85 percent of most day-to-day
communication in English. With further refinements, the researchers say a 9,000-word
vocabulary is well within reach.

The findings published in the journal Nature Communications come from a team led by
Edward Chang, University of California, San Francisco [1]. Earlier, Chang and
colleagues established that this AI-enabled system could directly decode 50 full words
in real time from brain waves alone in a person with paralysis trying to speak [2]. The
study is known as BRAVO, short for Brain-computer interface Restoration Of Arm and
Voice.

In the latest BRAVO study, the team wanted to figure out how to condense the English
language into compact units for easier decoding and expand that 50-word vocabulary.
They did it in the same way we all do: by focusing not on complete words, but on the
26-letter alphabet.

The study involved a 36-year-old male with severe limb and vocal paralysis. The team
designed a sentence-spelling pipeline for this individual, which enabled him to silently
spell out messages using code words corresponding to each of the 26 letters in his
head. As he did so, a high-density array of electrodes implanted over the brain’s
sensorimotor cortex, part of the cerebral cortex, recorded his brain waves.

A sophisticated system including signal processing, speech detection, word


classification, and language modeling then translated those thoughts into coherent
words and complete sentences on a computer screen. This so-called speech
neuroprosthesis system allows those who have lost their speech to perform roughly the
equivalent of text messaging.

Chang’s team put their spelling system to the test first by asking the participant to
silently reproduce a sentence displayed on a screen. They then moved on to
conversations, in which the participant was asked a question and could answer freely.
For instance, as in the video above, when the computer asked, “How are you today?”
he responded, “I am very good.” When asked about his favorite time of year, he
answered, “summertime.” An attempted hand movement signaled the computer when
he was done speaking.

The computer didn’t get it exactly right every time. For instance, in the initial trials with
the target sentence, “good morning,” the computer got it exactly right in one case and in
another came up with “good for legs.” But, overall, their tests show that their AI device
could decode with a high degree of accuracy silently spoken letters to produce
sentences from a 1,152-word vocabulary at a speed of about 29 characters per minute.

On average, the spelling system got it wrong 6 percent of the time. That’s really good
when you consider how common it is for errors to arise with dictation software or in any
text message conversation.

Of course, much more work is needed to test this approach in many more people. They
don’t yet know how individual differences or specific medical conditions might affect the
outcomes. They suspect that this general approach will work for anyone so long as they
remain mentally capable of thinking through and attempting to speak.
They also envision future improvements as part of their BRAVO study. For instance, it
may be possible to develop a system capable of more rapid decoding of many
commonly used words or phrases. Such a system could then reserve the slower
spelling method for other, less common words.

But, as these results clearly demonstrate, this combination of artificial intelligence and
silently controlled speech neuroprostheses to restore not just speech but meaningful
communication and authentic connection between individuals who’ve lost the ability to
speak and their loved ones holds fantastic potential. For that, I say BRAVO.

References:

[1] Generalizable spelling using a speech neuroprosthesis in an individual with severe


limb and vocal paralysis . Metzger SL, Liu JR, Moses DA, Dougherty ME, Seaton MP,
Littlejohn KT, Chartier J, Anumanchipalli GK, Tu-CHan A, Gangly K, Chang, EF. Nature
Communications (2022) 13: 6510.

[2] Neuroprosthesis for decoding speech in a paralyzed person with anarthria. Moses
DA, Metzger SL, Liu JR, Tu-Chan A, Ganguly K, Chang EF, et al. N Engl J Med. 2021
Jul 15;385(3):217-227.

Links:

Voice, Speech, and Language (National Institute on Deafness and Other


Communication Disorders/NIH)

ECoG BMI for Motor and Speech Control (BRAVO) (ClinicalTrials.gov)

Chang Lab (University of California, San Francisco)

NIH Support: National Institute on Deafness and Other Communication Disorders

This AI Decodes Your Brainwaves And Draws


What You're Looking At
Grigory Rashkov/Neurobotics

The revolutionary system can reconstruct data from scalp electrodes


into a video of what the participant is seeing.

Researchers have created an AI that draws what a person in looking at in real time just
by reading and decoding their brain waves. Perhaps most impressive of all, the
technique is noninvasive, with all the brainwave information gathered through a
cyberpunk-looking, electrode-covered electroencephalography (EEG) headset.
"Researchers used to think that studying brain processes via EEG is like figuring out the
internal structure of a steam engine by analyzing the smoke left behind by a steam
train," researcher Grigory Rashkov said in a press release. "We did not expect that it
contains sufficient information to even partially reconstruct an image observed by a
person. Yet it turned out to be quite possible." https://youtu.be/nf-P3b2AnZw

The team, from the Moscow Institute of Physics and Technology and Russian
corporation Neurobotics, started their study — available on the preprint server bioRxiv
— by placing a cap of electrodes on participants' scalps so that they could record their
brain waves.

They then had each participant watch 20 minutes worth of 10-second-long video
fragments. The subject of each fragment fell into one of five categories, and the
researchers found they could tell which category of video a participant was watching
just by looking at their EEG data.

For the next phase of the research, the scientists developed two neural networks. They
trained one to generate images in three of the tested categories from visual "noise," and
the other to turn EEG data into comparable noise. When paired together, the AIs were
able to draw surprisingly accurate images of what a person was looking at solely from
their real-time EEG data.

"Under present-day technology, the invasive neural interfaces envisioned by Elon Musk
face the challenges of complex surgery and rapid deterioration due to natural processes
— they oxidize and fail within several months," Rashkov said. "We hope we can
eventually design more affordable neural interfaces that do not require implantation."

READ MORE: Neural network reconstructs human 'thoughts' from brain waves in real
time [Moscow Institute of Physics and Technology]

More on AI: What Are YOU Looking At? Mind-Reading AI Knows

AI Used To Decode Brain Signals and Predict


Behaviour
17 August 2021

An artificial neural network (AI) designed by an international team involving UCL can
translate raw data from brain activity, paving the way for new discoveries and a closer
integration between technology and the brain.

The new method could accelerate discoveries of


how brain activities relate to behaviours.
The study published today in eLife, co-led by the Kavli Institute for Systems
Neuroscience in Trondheim and the Max Planck Institute for Human Cognitive and
Brain Sciences Leipzig and funded by Wellcome and the European Research Council,
shows that a convolutional neural network, a specific type of deep learning algorithm, is
able to decode many different behaviours and stimuli from a wide variety of brain
regions in different species, including humans.

Lead researcher, Markus Frey (Kavli Institute for Systems Neuroscience), said:
“Neuroscientists have been able to record larger and larger datasets from the brain but
understanding the information contained in that data – reading the neural code – is still
a hard problem. In most cases we don’t know what messages are being transmitted.

“We wanted to develop an automatic method to analyse raw neural data of many
different types, circumventing the need to manually decipher them.”

They tested the network, called DeepInsight, on neural signals from rats exploring an
open arena and found it was able to precisely predict the position, head direction, and
running speed of the animals. Even without manual processing, the results were more
accurate than those obtained with conventional analyses.

Senior author, Professor Caswell Barry (UCL Cell & Developmental Biology), said:
“Existing methods miss a lot of potential information in neural recordings because we
can only decode the elements that we already understand. Our network is able to
access much more of the neural code and in doing so teaches us to read some of those
other elements.

“We are able to decode neural data more accurately than before, but the real advance
is that the network is not constrained by existing knowledge.”

The team found that their model was able to identify new aspects of the neural code,
which they show by detecting a previously unrecognised representation of head
direction, encoded by interneurons in a region of the hippocampus that is among the
first to show functional defects in people with Alzheimer’s disease.

Moreover, they show that the same network is able to predict behaviours from different
types of recording across brain areas and can also be used to infer hand movements in
human participants, which they determined by testing their network on a pre-existing
dataset of brain activity recorded in people.

Co-author Professor Christian Doeller (Kavli Institute for Systems Neuroscience and
Max Planck Institute for Human Cognitive and Brain Sciences) said: “This approach
could allow us in the future to predict more accurately higher-level cognitive processes
in humans, such as reasoning and problem solving.”
Markus Frey added: “Our framework enables researchers to get a rapid automated
analysis of their unprocessed neural data, saving time which can be spent on only the
most promising hypotheses, using more conventional methods.”

Links

 Research paper in eLife


 Professor Caswell Barry’s academic profile
 UCL Cell & Developmental Biology
 UCL Biosciences

Image; Cells in the brain of a mouse. Source: ZEISS Microscopy on Flickr (CC BY-NC-
ND 2.0)

Media contact; Chris Lane; tel: +44 20 7679 9222 E: chris.lane [at] ucl.ac.uk

Neural Network Reconstructs Human Thoughts


From Brain Waves In Real Time
by Moscow Institute of Physics and Technology
Figure 1. Each pair presents a frame from a video watched by a test subject and the
corresponding image generated by the neural network based on brain activity. Credit: Grigory
Rashkov/Neurobotics

Researchers from Russian corporation Neurobotics and the Moscow Institute of Physics
and Technology have found a way to visualize a person's brain activity as actual images
mimicking what they observe in real time. This will enable new post-stroke rehabilitation
devices controlled by brain signals. The team published its research as a preprint on
bioRxiv and posted a video online showing their "mind-reading" system at work.

To develop devices controlled by the brain and methods for cognitive disorder treatment
and post-stroke rehabilitation, neurobiologists need to understand how the brain
encodes information. A key aspect of this is studying the brain activity of people
perceiving visual information, for example, while watching a video.

The existing solutions for extracting observed images from brain signals either use
functional MRI or analyze the signals picked up via implants directly from neurons. Both
methods have fairly limited applications in clinical practice and everyday life.

The brain-computer interface developed by MIPT and Neurobotics relies on artificial


neural networks and electroencephalography, or EEG, a technique for recording brain
waves via electrodes placed noninvasively on the scalp. By analyzing brain activity, the
system reconstructs the images seen by a person undergoing EEG in real time.
"We're working on the Assistive Technologies project of Neuronet of the National
Technology Initiative, which focuses on the brain-computer interface that enables post-
stroke patients to control an exoskeleton arm for neurorehabilitation purposes, or
paralyzed patients to drive an electric wheelchair, for example. The ultimate goal is to
increase the accuracy of neural control for healthy individuals, too," said Vladimir
Konyshev, who heads the Neurorobotics Lab at MIPT.

In the first part of the experiment, the neurobiologists asked healthy subjects to watch
20 minutes of 10-second YouTube video fragments. The team selected five arbitrary
video categories: abstract shapes, waterfalls, human faces, moving mechanisms and
motor sports. The latter category featured first-person recordings of snowmobile, water
scooter, motorcycle and car races.

By analyzing the EEG data, the researchers showed that the brain wave patterns are
distinct for each category of videos. This enabled the team to analyze the brain's
response to videos in real time.

In the second phase of the experiment, three random categories were selected from the
original five. The researchers developed two neural networks: one for generating
random category-specific images from "noise," and another for generating similar
"noise" from EEG. The team then trained the networks to operate together in a way that
turns the EEG signal into actual images similar to those the test subjects were
observing (fig. 2).

Figure 2. Operation algorithm of the brain-computer interface (BCI) system. Credit: Anatoly
Bobe/Neurobotics, and @tsarcyanide/MIPT

To test the system's ability to visualize brain activity, the subjects were shown previously
unseen videos from the same categories. As they watched, EEGs were recorded and fed to the
neural networks. The system passed the test, generating convincing images that could be easily
categorized in 90 percent of the cases (fig. 1).
Illustration. Brain-computer interface. Credit: @tsarcyanide/MIPT

"The electroencephalogram is a collection of brain signals recorded from scalp.


Researchers used to think that studying brain processes via EEG is like figuring out the
internal structure of a steam engine by analyzing the smoke left behind by a steam
train," explained paper co-author Grigory Rashkov, a junior researcher at MIPT and a
programmer at Neurobotics. "We did not expect that it contains sufficient information to
even partially reconstruct an image observed by a person. Yet it turned out to be quite
possible."

"What's more, we can use this as the basis for a brain-computer interface operating in
real time. It's fairly reassuring. With present-day technology, the invasive neural
interfaces envisioned by Elon Musk face the challenges of complex surgery and rapid
deterioration due to natural processes — they oxidize and fail within several months.
We hope we can eventually design more affordable neural interfaces that do not require
implantation," the researcher added.

Research of this kind conducted at MIPT is meant to contribute to implementing the 17


sustainable development goals outlined in the 2030 Agenda for Sustainable
Development (adopted by the United Nations in 2015), as well as the international
initiatives for sustainable development and social responsibility.

___

For reference: The Assistive Technologies project, supported by the National


Technology Initiative Fund, was launched in 2017. It aims to develop a range of devices
for rehabilitation following a stroke or neurotrauma of the head or spine. The hardware
suite developed under this project includes the Neuroplay headset, a robotic arm
exoskeleton, a functional electrical stimulator of muscles, a transcranial electrical
stimulator of the brain, the Cognigraph for real-time brain activity visualization in 3D, the
Robocom assistive manipulator, and other devices.

The MIPT Neurorobotics Lab was established in 2017 under Project 5-100. Its main line
of work is developing anthropomorphic robots and equipment for neuroscience,
physiology, and behavior research.

Project team: Vladimir Konyshev, the head of the MIPT Neurorobotics Lab; Anatoly
Bobe, a chief engineer at the Neurorobotics Lab in charge of the machine learning
track; Grigory Rashkov, a junior researcher at the MIPT Applied Cybernetic Systems
Lab, a programmer-mathematician at Neurobotics; Dmitry Fastovets and Maria
Komarova, engineers at the Wave Processes and Control Systems Lab, MIPT.

___

Please contact Anatoly Bobe concerning the content of the research paper at
a.bobe@neurobotics.ru or by dialing +7-926 466-1898. The head of marketing and PR
of Neurobotics, Raisa Bogacheva can be reached at r.bogacheva@neurobotics.ru,
phone: +7-925 915-7551.

More information: Grigory Rashkov et al. Natural image reconstruction from brain
waves: a novel visual BCI system with native feedback, (2019). DOI: 10.1101/787101

Provided by Moscow Institute of Physics and Technology

Keep Your Secrets Carefully In Your Mind,


Computers Can Read Them!
Redita Jan 9, 2021

What if your imaginations can be


seen on a big screen of a
computer? Suppose, The brilliant
ideas or secret thoughts are being
played on a computer monitor in
front of a bunch of people.
Imagine, how wonderful it would
be if you have a photo of your
loved ones in your favorite place
which is never captured by a
camera or think of a restaurant
whose designs are made inside
your brain without taking a single
pen and seeing the designs being
developed on a giant screen of a
computer. Very recently it has been experimented to read human mind activities into an
AI enriched computer with the help of fMRI or EEG (electroencephalography).

What is fMRI?

fMRI is a machine having a powerful magnetic field, radio wave, a computer that traces
the oxygen-glucose-rich blood inside the brain and demonstrates the time-varying
metabolic changes in the resting phase, activation phase of the parts of the brain.

Do you know, our blood has a magnetic feature? Blood hemoglobin is diamagnetic
when it is oxygenated and paramagnetic when it is deoxygenated. The fMRI detects any
tiny change of magnetic properties depending on the degree of oxygenation. For
instance, when blood rushes to the particular brain part which is designed for controlling
that specific job (activation phase), oxygenation increases following neural activation,
and fMRI tracks that. A research team of Kyoto University, Japan used fMRI technology
to analyze the brain’s response to external stimuli such as viewing real-life images,
hearing sounds, etc and they find fMRI as a proxy for neural activity to figure out what a
person is seeing. They mapped out visual processing areas to a resolution of 2
millimeters. But instead of painting over painting until getting the perfect image in the
computer, the research team develops a deep neural network whose functional activity
resembles the brain’s hierarchical processing. Furthermore, the raw data acquired from
fMRI were filtered through the deep neural network. The team’s algorithm of optimizing
the pixels of the decoded image uses a DNN (deep neural network). In which,
information can be extracted from different levels of the brain’s visual system. Besides,
a deep generator network (DGN) (an algorithm) is developed for getting advanced
reliable results where the details of the picture are much more precise.

Limitations:

The pictures emanated by this AI model have little resemblance to the actual picture Or
the system might be constrained to a predetermined set of objects to identify. To
mitigate the limitations, EEG comes into the picture.

What is EEG?

Have you ever compared your brain to a signal generator or a complex circuit? Your
internal organs are wired with more than a hundred thousand kilometers of nerve
coming out from the spinal cord which transmit the electrical impulses throughout the
whole body. An EEG noninvasively detects the brain waves or electrical activity of your
cerebrum using electrodes around the scalp. Any change in the electrical stimulation will
be amplified and appeared as a graph on the computer or printed on paper. Later the
EEG scan will be studied by using the neural network.

Researchers from the Russian corporation Neurobotics and the Moscow Institute of
Physics and Technology have discovered a way to visualize a person’s brain activity
what they observed in real-time with the help of Artificial intelligence and Brain-
Computer Interface (BCI). The process is inspired by the actual image mimicking
method. In the first phase of the experiment, the neurologists asked subjects to watch
120 YouTube video fragments each of 10 seconds from 5 arbitrary categories such as
abstract shapes, waterfalls, human faces, moving mechanisms, and motorsports.

After analyzing the EEG data, the disparity in brain response to each category-listed video was
seen.
Illustration. Brain-computer interface.Credit: Anatoly Bobe/Neurobotics, and
@tsarcyanide/MIPT Press Office

In the second phase of the experiment, the researchers built 2 neural networks aiming
at developing the EEG signal-generated pictures that highly resemble the actual
pictures. To do so 3 random categories are selected from the original 5 categories.

Operation algorithm of the brain-computer interface (BCI) system. Credit: Anatoly


Bobe/Neurobotics, and @tsarcyanide/MIPT Press Office

Technological Telepathy:

When you are reading this article, you hear the words in your brain. AI can peep into
your mind and extract your inner dialogue into speech by tracking brain activity with the
help of fMRI or EEG. You may wonder how can this be possible? Take a sneak peek.

An AI system can interpret thoughts into sentences but within a limitation of around 250
words. Joseph Makin at the University of California, San Francisco, and his colleagues
led an experiment on four women who were suffering from epilepsy and they used EEG
technology. Each woman was requested to read out some sentences at least twice and
the largest sentence had 250 unique words and the team monitored their brain activity.
The team noticed that each time a person had spoken a similar sentence, the brain
activity changed slightly. It was similar but not identical. “Memorizing the brain activity
of these sentences wouldn’t help, so the network instead has to learn what’s
similar about them so that it can generalize to this final example,” said Makin. So
the team had to decode brain activity for each word, rather than the whole sentence. By
doing so, the system will be more trustworthy but will have a vocabulary limit of 250
words.

Prosthetic voice:
Brian Pasley at the University of California, Berkeley has said, “If a pianist was
watching a piano being played on TV with the sound off, he would still be able to
work out what the music sounded like because he knows what key plays what
note.” His research team has succeeded in decoding electrical activity in the auditory
system of the brain (temporal lobe). The challenge was to decipher the internal
verbalization of the brain. Besides finding out the process of converting speech into
meaningful information happening in the brain, is tough. The basic concept is, the
sensory neurons are activated by sound and they disseminate the information in
different areas of the brain where the sounds are extracted and perceived as a
language. The information consists of frequency, the rhythm of syllables, and
fluctuations of syllables. There is a neurological relationship between the self-generated
voice (imagining sound) and hearing sound. By understanding the relationship between
the two, what sound you are actually thinking can be synthesized. Using this concept,
scientists have invented the prosthetic voice which decodes the brain’s vocal intentions
and translates them into natural language, without having moved any facial muscles.

Art and science collaboration:

There goes a proverb, all good science is art, all good art is science. In 1965, Famed
physicist Edmond Dewan and musician Alvin Lucier collaborated for making music of
mind from alpha brainwave. Lucier was fascinated by natural sound and wanted to
produce music without incorporating voice and musical instruments. In the meantime,
Edmond had astounded the world by turning a lamp on and off with his mind, with the
help of an EEG. When the two met and Edmond proposed to Lucier to make music from
his mind, Lucier agreed spontaneously. Edmond selected percussion instruments and
designed an EEG based method of capturing the alpha activity of the brain and
transmitting them as music. He had performed throughout Europe and the United States
with this piece of art and science.

However, the field is in mint condition, there are more inventions that have to be done.
Can the human mind directly connect to Artificial Intelligence without having any
limitations? Can our dreams be construed? The human mind is a mysterious entity. Let
the mystery be revealed by Artificial Intelligence.

You might also like