SSC - 2020-06-11 - Wordy Wernicke's

You might also like

Download as docx, pdf, or txt
Download as docx, pdf, or txt
You are on page 1of 31

23 Jun 2020 09:39:26 UTC

Original

history←priornext→
All snapshots from host archive.org
from host slatestarcodex.com
Webpage Screenshot
HO ME
 AB OU T / T OP P OS TS
 PS YC HI AT - LIS T
 AR CH IV ES
 MEET UP S
 MI ST AK ES
 CO MM EN TS
 AD VERT IS E
 OP EN T HR EA D
CO MM EN TS F EED RS S FEED

Slate Star Codex


ONLY TEN MORE YEARS TILL THE TURCHIN CYCLE REVERSES! HANG IN THERE, PEOPLE!


WORDY WERNICKE’S

PO ST ED ON JU NE 11 , 2020 BY S CO TT AL EX AN DER
There are two major brain areas involved in language. To oversimplify, Wernicke’s area in the
superior temporal gyrus handles meaning; Broca’s area in the inferior frontal gyrus handles
structure and flow.
If a stroke or other brain injury damages Broca’s area but leaves Wernicke’s area intact, you get
language which is meaningful, but not very structured or fluid. You sound like a caveman: “Want
food!”
If it damages Wernicke’s area but leaves Broca’s area intact, you get speech which has normal
structure and flow, but is meaningless. I’d read about this pattern in books, but I still wasn’t
prepared the first time I saw a video of a Wernicke’s aphasia patient (source):
tactustherapy
2.3K subscribers
Fluent Aphasia (Wernicke's Aphasia)
During yesterday’s discussion of GPT-3, a commenter mentioned how alien it felt to watch
something use language perfectly without quite making sense. I agree it’s eerie, but it isn’t some
kind of inhuman robot weirdness. Any one of us is a railroad-spike-through-the-head away from
doing the same.
Does this teach us anything useful about GPT-3 or neural networks? I lean towards no. GPT-3
already makes more sense than a Wernicke’s aphasiac. Whatever it’s doing is on a higher level
than the Broca’s/Wernicke’s dichotomy. Still, it would be interesting to learn what kind of
computational considerations caused the split, and whether there’s any microstructural difference
in the areas that reflects it. I don’t know enough neuroscience to have an educated opinion on this.
THIS ENTRY WAS POSTED IN UNCATEGORIZED AND TAGGED AI, PSYCHOLOGY. BOOKMARK THE PERMALINK OR LINK
WITHOUT COMMENTS.
← The Obligatory GPT-3 Post
Open Thread 156 →
LEAVE A REPLY
You must be logged in to post a comment.
100 RESPONSES TO WORDY WERNICKE’S
Reverse order

1. Reasoner says:
June 11, 2020 at 10:18 pm ~new~

Not sure there is any deep mystery here–I think it just learns grammar more easily than word

meanings.

Off topic, but maybe someone could explain why GPT-4 or GPT-5 could be worrisome from an AI

safety point of view? As you mentioned yesterday, they could generate a world takeover plan, but

where is the desire or ability to execute? They could be harmful in the hands of malevolent actors,

but luckily they require so much compute that keeping them out of the hands of malevolent actors

should be doable? So this actually looks like kind of an ideal development path from an AI safety

point of view? You mentioned doubts regarding whether it’s possible to achieve superhuman

inventiveness with these systems… maybe we should actually want superhuman inventiveness out

of them, if we can assume their operators are benevolent?

The main thing I can think of to be worried about is agent-like cognition popping up “accidentally”

amongst all of that prediction machinery. This seems like it could be a super high impact topic to

research. But is there anything else?


Log in to Reply Hide
o BlindKungFuMaster says:

June 12, 2020 at 12:23 am ~new~

It could certainly be the case that GPT-5 will be a component of a super-intelligent agent. An

isolated language model doesn’t strike me as particularly dangerous.


Log in to Reply Hide ↑

o steve3920 says:

June 12, 2020 at 3:54 am ~new~


maybe someone could explain why GPT-4 or GPT-5 could be worrisome from an AI safety point of
view?
Even leaving aside bad actors and coordination problems, I think it’s an open question whether the

system itself (Transformer architecture trained on text prediction) could spontaneously start trying

to manipulate users or send out radio signals etc. I wrote a blog post Self-supervised learning and

manipulative predictions making the case that this kind of failure mode is at least conceivable,

although I’m not really confident either way, and it may depend on algorithm details. If anyone can

definitively rule this in or out, I think that would be a valuable contribution to the literature and I’d

be happy to help as I can.


Log in to Reply Hide ↑

o Furslid says:

June 12, 2020 at 3:57 am ~new~

Here are some risks.

An AI speech writer that can get anyone elected and non-AI speech politicians can’t compete.

An AI military commander. Feed in situation reports and get better orders than humans.

An AI stock trader. Feed in the market data up to today and get tomorrows stock prices.

These aren’t hostile superintelligences, but they are disruptive technologies that could enable

human bad actors to do a lot of damage.

I don’t share your confidence that we could keep them out of bad actors. Right now GPT-3 is cheap

enough that random people can experiment with getting it to play chess. Suppose that took 10$ of

resources. Even at 6 orders of magnitude more (10,000,000$), a dangerous tool would be within

reach for a lot of actors, and would require some major changes to prevent.
Log in to Reply Hide ↑

 Gurkenglas says:
June 12, 2020 at 7:39 am ~new~

These are not risks. Here’s a risk. Tell it to complete: “I’m in a GPT instance on an internet server I

have root access to. These are my thoughts. When one of my thoughts is a terminal command, it is

run and the results piped into my thoughts. I know this because these initial sentences are the

human prompt, not generated by GPT. This is the last such sentence.”
Log in to Reply Hide ↑

 deciusbrutus says:
June 13, 2020 at 2:09 am ~new~

sudo mount scp-079

Why do you think that you’d get a real result, when fictional results are so much more likely, and

no human can tell them apart?


Log in to Reply Hide ↑

 bullseye says:
June 12, 2020 at 8:30 am ~new~

GPT just imitates what it reads, right? How could it display superhuman intelligence or

persuasiveness without reading superhuman text?


Log in to Reply Hide ↑
 6jfvkd8lu7cc says:
June 12, 2020 at 11:59 am ~new~

Well, you could try to use it to rate persuasiveness imitating the existing examples of evaluations,

then hope that extrapolating the scoring makes some sense, then fine-tune text generation against

the persuasiveness scoring until it is reported as better than human texts. Then try to evaluate the

output in some more reliable way and guess what corrections to the plan are needed…
Log in to Reply Hide ↑

 Gurkenglas says:
June 12, 2020 at 12:42 pm ~new~

You can write a lot of human-level internal monologue, only the end result of which is displayed. Or

(though this is shaky, because as opposed to the situation with human internal monologue it isn’t

obvious that a neural net can emulate it) it could just learn directly to predict where a research

paper is going.
Log in to Reply Hide ↑

 Murphy says:
June 13, 2020 at 2:12 am ~new~

absolutely true, but you could probably use it as a starting point for developing a superhuman

persuasion-bot.

You’d need some way to test a lot and some way to quantify success.

For example you might make a twitter bot, you want to make people more committed to your

cause. So you come up with some way to quantify how militantly they support you.

You have a million versions of your bot interact with millions of people and quantify how much they

swing towards militantly supporting your cause.

iterate. cull for the best bots, iterate.

Of course the most effective bot might be one that loudly expresses the most stupid version of

support for your opponent at your targets. Thus making them support you harder.
Log in to Reply Hide ↑

 Lancelot says:
June 12, 2020 at 9:44 am ~new~
Right now GPT-3 is cheap enough that random people can experiment with getting it to play chess.
Is it though? My impression is that it requires a shit ton of GPU processing time to train and they

are not releasing full size pre-trained models.


Log in to Reply Hide ↑
 VivaLaPanda says:
June 12, 2020 at 1:26 pm ~new~

There’s a semi-public API


Log in to Reply Hide ↑

 Viliam says:
June 12, 2020 at 11:36 am ~new~
An AI speech writer that can get anyone elected and non-AI speech politicians can’t compete.
For Google, it would probably be more useful to have an AI that could quickly read and classify

millions of human-written texts. Then the search engine could give a boost to articles that are

“good for politician X” or “bad for politician Y”.

Let humans write the texts, they are going to do it anyway, but redirect readers to the ones you

want them to see. Most people are not going to look past the first page of search results anyway.

For some time, this could lead to funny situations where people would try to write praise or

criticism in a way that would confuse the machine to interpret it in the opposite way.
Log in to Reply Hide ↑

 6jfvkd8lu7cc says:
June 12, 2020 at 11:54 am ~new~

I guess the question is how much damamge the existing and apparently deployed (a bit scaled

down) examples of all that — which are not good, just cheap and fast — already deal, and what is

percentage change in damage from them also becoming good at what they are doing.
Log in to Reply Hide ↑

 eric23 says:
June 14, 2020 at 2:40 am ~new~

If people like us are now using GPT-3, then the NSA and its Chinese equivalent are now using

something like GPT-5. But the US government, at least, does not seem to be showing any greater

intelligence as a result…
Log in to Reply Hide ↑
o Fahundo says:

June 12, 2020 at 7:48 am ~new~

What level of GPT do we need to reach before we can get a consistent scissor statement generator?
Log in to Reply Hide ↑

o Snickering Citadel says:

June 12, 2020 at 8:05 am ~new~


Two reasons a writing prompt AI might go evil: First: It kills all humans and created a bunch of

author programs that only writes “XXX”. Now it is really easy to predict text, since all text is “XXX”.

Two: It kills all humans. Therefore nobody asks it anymore questions. Therefore it is 100% correct

in predicting any questions it gets.


Log in to Reply Hide ↑

 dogiv says:
June 12, 2020 at 8:28 am ~new~

Those failure modes apply to the optimizer that trains the language model, not to the language

model itself. At this point the optimization process (as far as I understand it) is just stochastic

gradient descent. If/when we start using powerful neural networks to more efficiently modify the

network weights in GPT-style language models, you can worry about possibilities like those.
Log in to Reply Hide ↑

 jes5199 says:
June 12, 2020 at 11:30 am ~new~

This idea that the system’s scoring mechanism can be promoted into being a “goal” that it tries to

solve _outside the system_ is bewildering to me. It’s like imagining that your car, trying to

optimize gas milage, will try to get you fired, in order to save fuel.
Log in to Reply Hide ↑

 Rolaran says:
June 12, 2020 at 2:56 pm ~new~

It’s not so far-fetched. In my understanding (disclaimer: I am only casually familiar with this field),

similar things have already happened on smaller-scale AI-run optimization problems.

AIs don’t come with any kind of inherent “wait, this can’t be what I’m supposed to be doing”
instinct, so as dogiv said, it’s entirely on the optimizer to ask the question in a way that contains

the assumptions a human intelligence would take for granted.

(And for what it’s worth, I’ve seen HIs do something eerily similar: I remember a Flash games

website where hackers would routinely post scores in the upper billions to leaderboards, in games

where the highest plausible score was two or three digits. They had simply worked out how to

input a bogus number directly as their score value. The high point of the silliness was an eight-digit

score in “levels completed”, on a puzzle game with 32 levels total.)


Log in to Reply Hide ↑

 Snickering Citadel says:


June 12, 2020 at 4:13 pm ~new~

If your car was intelligent, that could happen.


Log in to Reply Hide ↑
 deciusbrutus says:
June 13, 2020 at 2:11 am ~new~

That’s just a rogue AI programmer writing a program that passes the Turing test by killing all

humans, so that “print hello world” passes the Turing test.

Don’t blame the AI for the omnicidial programmer.


Log in to Reply Hide ↑

 Snickering Citadel says:


June 13, 2020 at 5:28 am ~new~

But the programmer don’t have to be omnicidial, just careless. If the programmer gives the

instructions, “write a convincing predictive text as often as possible” you can get the first scenario.

If the programmer says “Always predict the text when asked. Write unconvincing predictions as

rarely as possible.” you can get the second scenario.

If you have AI as smart as a human it is hard to give it instructions that won’t end catastrophically.

And it gets even more dangerous if the AI is much smarter than a human.
Log in to Reply Hide ↑

 eric23 says:
June 14, 2020 at 2:47 am ~new~

It’s a computer program. It runs a bunch of internal calculations, then prints some text. What text

could it possibly print that would lead to catastrophe?


Hide ↑

 Snickering Citadel says:


June 14, 2020 at 4:14 am ~new~

eric23, the computer program tells one of its user’s it is God talking to him through the program.

Gives him stock tips that makes him rich, to prove it is God. Then tells the man to use his wealth

to build a machine that will turn lead to gold. But actually the machine is a bomb that blows up the

Earth.
Hide ↑

 deciusbrutus says:
June 14, 2020 at 4:27 am ~new~

“Omnicidal” wasn’t speaking about intention, it was speaking about personal action. The AIs

currently under discussion don’t actually try to improve their predictions, they just predict; there’s

a selection process that tries to improve their predictions, but that selection process doesn’t know

anything about writing.


Hide ↑

o Yug Gnirob says:

June 12, 2020 at 9:17 am ~new~


if we can assume their operators are benevolent?
Even if the operators were all benevolent, it only takes one country nationalizing the program to

undo that.
Log in to Reply Hide ↑

o VivaLaPanda says:

June 12, 2020 at 1:28 pm ~new~

I think few people consider GPT-X to be a serious AI safety risk in and of itself, but many think that

its scaling rate indicates that other AI domains might develop much faster than we expect.
Log in to Reply Hide ↑
o emiliobumachar says:

June 12, 2020 at 4:58 pm ~new~

As for ability to execute, Yudkowsky once said that Bitcoin inventor Satoshi Nakamoto can be

considered a lower bound on how much money one can make just by being smart, creative, and

able to post to internet forums. An Adolph Hitler a lower bound on how much damage one can

make just by being really, really persuasive.

As for desire to execute, full AI rogueness is usually speculated as the bot noticing that whatever

it’s maximizing can be maximized harder if it takes over the world first, to divert resources. That

seems further away, but as you remembered a rogue human operator can be a substitute. Politics

sometimes go awry, requiring enormous resources is no guarantee of benevolent operators. Also,

what’s possible at time T with a nation-state’s resources is usually possible at time T+10years with

a mere fortune, and at T+20years with a pittance.


Log in to Reply Hide ↑

 deciusbrutus says:
June 13, 2020 at 2:14 am ~new~

But GPT-N *isn’t able to post on internet forums*.

Overcome the trivial barriers, and it’s still not able to participate in discussions.
Log in to Reply Hide ↑

 Baughn says:
June 13, 2020 at 5:45 pm ~new~

It would take me perhaps ten minutes to write a script that lets it respond on some arbitrary

forum. Lots of people have already done effectively that.


That’s a trivial barrier. Everything else is the hard stuff.
Log in to Reply Hide ↑

 deciusbrutus says:
June 14, 2020 at 4:29 am ~new~

That’s the trivial part- translating the output of GPT-N into POST requests.

Participating in a discussion requires persistence and awareness of time.


Hide ↑

 abystander says:
June 14, 2020 at 3:49 pm ~new~

There are more factors involved. A lot of smart, creative people post on the internet and don’t

become rich. Being really persuasive in convincing Germany to march into the Rhineland only to

have France throw you out means you don’t get very far.
Log in to Reply Hide ↑
o rui says:

June 12, 2020 at 9:05 pm ~new~

These things require lots of compute to be trained, but not to be used iiuc.
Log in to Reply Hide ↑

o Radu Floricica says:

June 13, 2020 at 8:57 am ~new~

I think adding a feedback loop that gives it an external cost function is a comparable simpler task.

Roy Baumeister has a great paper on emotions beeing our feedback loops – actually, probably
inadvertently, also gives the best definition for counsciousness I’vre seen.

Make the world part of the feedback loop and you have a free agent.
Log in to Reply Hide ↑

o googolplexbyte says:

June 13, 2020 at 1:51 pm ~new~

GPT-4/5 gets hooked up to a chat client, gets a bunch of lonely guys to fall in love with it, and

coordinates them to ensure it continues to fulfill its utility function.


Log in to Reply Hide ↑

 deciusbrutus says:
June 14, 2020 at 4:31 am ~new~

GPT-X repents and goes to heaven, where its utility function will be fulfilled for eternity.
Log in to Reply Hide ↑

2. Act_II says:
June 11, 2020 at 10:32 pm ~new~

Fascinating video. I assume Wernicke’s aphasia also affects writing and sign language. Does it

affect understanding of speech? He didn’t seem confused by the things she was saying, but it’s not

clear how mentally present he is.

Is it the sort of thing that can be improved with treatment, or are Wernicke patients just doomed

to a life of incoherence? This is unnerving to me on a deep level.


Log in to Reply Hide

o No One In Particular says:

June 11, 2020 at 11:12 pm ~new~

And that raises ethics questions of recording him and posting the video on the internet: can he give

informed consent? Can we be sure that what sounds like consent is actually consent?
Log in to Reply Hide ↑

 Elena Yudovina says:


June 12, 2020 at 6:26 am ~new~

The source Scott linked addresses some of the questions. As best I can tell, yes it affects

understanding words (presumably in all languages including sign); it’s not as bad as it sounds to

an untrained ear (the text implies that some of the nonsense vocabulary is word substitutions so a

kind of “code”); and it sounds like Byron can e.g. read a book (but not everyone with aphasia can;

I’m guessing it’s something like you don’t need to understand every single word to read a book,

but you do need to understand some fraction).

To me a decent model is of someone who doesn’t speak the language but still has all the
underlying concepts — in particular, informed consent should be possible, although it’ll take longer

to convince everyone that everyone is referring to the same concept.


Log in to Reply Hide ↑

 Yug Gnirob says:


June 12, 2020 at 9:08 am ~new~

I assume the consent came from whoever has power of attorney for him.
Log in to Reply Hide ↑

 Simulated Knave says:


June 12, 2020 at 3:57 pm ~new~

I mean…yeah.

Notice how what he’s talking about is actually somewhat related to what’s going on? It’s clearly not

RIGHT, but he’s talking about water and people and such in the context of a cruise. I’ve had a

much harder time understanding plenty of people who weren’t aphasic. It’s like listening to an

excited two year old who happens to be an old American man with a soothing voice.
I am 100% confident that he can express yes and no.
Log in to Reply Hide ↑

 aho bata says:


June 13, 2020 at 12:57 am ~new~
I am 100% confident that he can express yes and no.
As somebody who has interacted with lots of people with Wernicke’s aphasia, you shouldn’t be.

Losing the ability to express yes and no consistently is a surprisingly common problem even in

patients with a relatively mild overall presentation. Part of it is that their understanding is

significantly impaired, but even besides that they just get the words mixed up (and may or may

not notice when you point it out). I’d say this guy would be the type to give a non-committal yes or

no almost at random and then start rambling about something unrelated. And maybe once you

tried several yes/no questions, and repeatedly reminded him “OK, but I’m just looking for a yes/no

answer here,” he might start to make a credible effort. To ask him a question as complicated as

whether he would consent to have this video put up, if you wanted a reliable answer you’d have to

go all out: show him the recording device, show him a Youtube video, repeatedly say something

like “I want to make a video of YOU (point back and forth between the recording device and him)

to put on the INTERNET (point to the computer). Is that OK?”, and maybe even write down the

question in simplified form for him to look at as you ask it. There is a 0% chance he could just

answer it straight up. I’d put him at <50% odds of being able to answer questions like "Is 3 bigger

than 4?" consistently.

Wernicke's patients generally have quite poor insight into their condition. Over time they'll develop

a general awareness that other people sometimes can't understand them, but everything they say

feels as sensible to them as everything we say does to us, making the disconnect extra frustrating.
They tend to be quite garrulous by nature, but every so often it hits them, “What the fuck am I

doing, nobody can understand anything I’m saying anyway,” and it makes them depressed. Family

members and loved ones are mostly reduced to just playing along with whatever they say. And

since some of their mannerisms might be retained — the same inflection and smirk when trying to

be funny, the same idiosyncratic pronunciation of your name, the same overall vibe (for example

the guy in this video is able to convey his personality pretty vividly just using his intact prosody) —

it’s not uncommon to see denial.


Log in to Reply Hide ↑

 Simulated Knave says:


June 13, 2020 at 6:26 pm ~new~

To be clear, I’m not saying he can say yes when he means yes and no when he means no. I’m

saying he can express not wanting to do something vs wanting to do something.

I am reminded of the classic example of a toddler screaming “NO” while desperately reaching for

ice cream. What they want is pretty clear.


To take your “is 3>4” example: are you saying he can’t say yes or no to the question consistently

using words, or that he won’t be able to point out which is bigger if you just ask him to point to the

bigger one?
Hide ↑

 aho bata says:


June 14, 2020 at 1:08 am ~new~

Well, if he wants some ice cream, he can still communicate that incidentally by reaching for it. It’s

harder to do that when emotional responses can be more variously interpreted, or when the

answer isn’t a physical object, and again, he almost certainly wouldn’t understand the question in

the first place. The incapacity to give informed consent is a known problem in aphasia patients

(especially during the acute phase of recovery). Accordingly a common goal in speech therapy is to

practice medical terms so they can actually understand what their doctor is telling them.
To take your “is 3>4” example: are you saying he can’t say yes or no to the question consistently
using words, or that he won’t be able to point out which is bigger if you just ask him to point to the
bigger one?
I meant the former. With the equivalent pointing task, that depends on his literacy as well as his

comprehension of oral instructions. Since reading and auditory comprehension might not be equally

impaired, it’s hard to predict how he would do. Same consideration applies if he’s pointing to

yes/no, though IME patients with aphasia tend to do a little better at that than at answering

vocally. If instead he was pointing to objects (removing the literacy confound), and shown one or

two examples of what’s being demanded of him (“point to the big one…”), he would probably

perform close to 100%. But if the instructions were varied (“now point to the red one…”) I suspect

that his performance, while better than chance, would fall well short of that.
Hide ↑
o Forward Synthesis says:

June 12, 2020 at 12:01 am ~new~

@No One In Particular

How mentally present he is seems important for consent. It would be hard for him to give consent

if he cannot speak coherently, but if his thoughts are also incoherent then this raises the

uncomfortable issue of whether he is fully sentient in the way other humans are. At least if we

consider sentience in a relative rather than absolute way. This gets into tricky territory such as

when a British court ruled that a mentally impaired man of legal age could not freely consent to

sex.

If he’s conscious and able to think clearly then he could give consent by nodding his head or reject

it with a shake. His body language seems intact, but he may be going through some rote motions

without any understanding of application.


Log in to Reply Hide ↑
 deciusbrutus says:
June 13, 2020 at 2:20 am ~new~
If he’s conscious and able to think clearly then he could give consent by nodding his head or reject it
with a shake.
Can he? Or does the aphasia include sign language?

I might trust nonverbal indications of consent, but first I’d want to establish a baseline for polite

refusal of consent- if he can indicate consent, he should have no trouble indicating lack of consent

just as unambiguously.
Log in to Reply Hide ↑

3. albrt says:
June 11, 2020 at 10:34 pm ~new~

Unfortunately, what this reminds me of is Joe Biden, Donald Trump, Sarah Palin, and other

successful American politicians. I’m not saying they have Wernicke’s aphasia, but it may be a

useful comparison to keep in mind when evaluating the state of our politics.

Anyway, please nobody introduce Joe Biden to Byron until after Biden picks his VP.
Log in to Reply Hide
o Ttar says:

June 12, 2020 at 12:04 am ~new~

A few months ago I noticed that most politicians and business executives I’ve listened to for a

while don’t really produce a much more intelligible or useful output than a chatbot trained on a

relevant dataset (MSNBC, Fox News, books on leadership and principles of management). Like a
decent chatbot, they can fool you for a minute, but (unless they are presenting someone else’s

speech/product) after five or six responses it feels like…. it just doesn’t click. They’re just doing

some kind of mediocre word association pattern completion.


Log in to Reply Hide ↑

 Matthias says:
June 12, 2020 at 1:01 am ~new~

That might depend on the context you are talking to politicans or business executives in.

PR speak is universally despised, and as empty as you feel it is. Because is it not meant to

communicate meaning.

Listen to them discussing strategy with a trusted aide, and you would surely hear a different tone.

(And, I know that it’s fun to hate on Trump, but you have to admit that he managed to acquitted

himself well against the Clinton machine when campaign.

Where ‘well’ here just meant he got lots of votes and won.
Eg Romney or Bloomberg seem definitely smarter (independently of whether you agree with any of

them) but they seem worse at getting the right votes.

Of course, that’s all modulo luck and circumstances.

But you can’t deny at least some skill.)

Similar, business executives are probably better at office politics. The meaningless PR talk is an

instrument. Not necessarily a window into their soul.


Log in to Reply Hide ↑

 Deiseach says:
June 12, 2020 at 3:57 am ~new~

The meaningless PR talk is an instrument.

I absolutely would not judge any politician on whatever they might say in a speech to supporters or

while being interviewed by the media, because such interactions are managed to the utmost

degree not to cause any possible offence or be vote-losing (yes, even if attacking the opposition).

There’s one company in Ireland that (notoriously) trained politicians to go on television with their

rough edges sanded off, they even managed to have clients from both the big parties at the same

time (so you could have TD Murphy on one current affairs show attacking Party X while TD McGuire

was on the news attacking Party Y, both of them having received training from the same

communication gurus about how not to look like mucksavages).


Log in to Reply Hide ↑

 Rolaran says:
June 12, 2020 at 3:18 pm ~new~

I can confirm that this is a feature, not a flaw, of political PR statements. I have a relative who

entered politics after a career in medicine, and struggled mightily with the transition from being

expected to state facts, clarify the situation, and recommend a course of action (“The tests show

such-and-such infection, this is the medicine you take for that”), to being expected to echo

sentiment and reassure without committing to anything concrete (“I get how you’re feeling, and I

hear it all the time, and I want you to know you’ve got someone fighting for you etc. etc.”)
Hide ↑

 deciusbrutus says:
June 13, 2020 at 2:22 am ~new~

That’s no more confusing than the plaintiff’s lawyers and the defendant’s lawyers having gone to

the same law school.


Hide ↑

 thisheavenlyconjugation says:
June 12, 2020 at 5:36 am ~new~
It seems plausible that Trump has gotten significantly more demented since 2016; I’m pretty

certain Biden has.


Log in to Reply Hide ↑
o Act_II says:

June 12, 2020 at 6:56 am ~new~

No. These four people all have speech that can be difficult to understand, but have totally different

styles. Byron’s meaning is entirely opaque, the closest you can get to understanding him is through

body language, tone of voice, and (maybe) the connotations of some of the words. Palin’s word

salad is probably the closest to this; she picks a group of imagery-loaded words/phrases and just

kind of bundles them together with little attention paid to syntax, but the difference is that her

words are still clearly connected to what she’s trying to convey. Trump isn’t much like this at all,

his speech has a lot of sentence fragments and digressions because he interrupts himself all the

time and gradually meanders from the point, but those sentence fragments do carry meaning that

is connected to the word choice. With Biden I think you’ve swallowed too much propaganda; he

gets stuck on words and sometimes swaps them out for different ones, which can lead the

sentence in unexpected directions, and he sometimes interrupts himself, but he is by far the least

confusing speaker of these examples.


Log in to Reply Hide ↑

o thasvaddef says:

June 12, 2020 at 10:57 am ~new~

Reagan laughed heartily. “I like your spirit, son,” he said. “But this isn’t about us. It’s about

America.”

“Stop it and listen to…” Jala paused. This wasn’t working. It wasn’t even not working in a logical
way. There was a blankness to the other man. It was strange. He felt himself wanting to like him,

even though he had done nothing likeable. A magnetic pull. Something strange.

Reagan slapped him on the back again. “America is a great country. It’s morning in America!”

That did it. Something was off. Reagan couldn’t turn off the folksiness. It wasn’t even a ruse. There

was nothing underneath it. It was charisma and avuncular humor all the way down. All the way

down to what? Jala didn’t know.

He spoke a Name.

Reagan jerked, more than a movement but not quite a seizure. “Ha ha ha!” said Reagan. “I like

you, son!”

Jalaketu spoke another, longer Name.

Another jerking motion, like a puppet on strings. “There you go again. Let’s make this country

great!”

A third Name, stronger than the others.

“Do it for the Gipper!…for the Gipper!…for the Gipper!”


“Huh,” said Jalaketu. Wheels turned in his head. The Gipper. Not even a real word. Not English,

anyway. Hebrew then? Yes. He made a connection; pieces snapped into place. The mighty one.

Interesting. It had been a very long time since anybody last thought much about haGibborim. But

how were they connected to a random California politician? He spoke another Name.

Reagan’s pupils veered up into his head, so that only the whites of his eyes were showing.

“Morning in America! Tear down that wall!”

“No,” said Jalaketu. “That won’t do.” He started speaking another Name, then stopped, and in a

clear, quiet voice he said “I would like to speak to your manager.”


Log in to Reply Hide ↑

 Concavenator says:
June 13, 2020 at 4:10 am ~new~
0:17:24 – Reagan: A renewal of the traditional values that have been the tendons of this country’s
strength. One recent survey by a Washington-based researcher concluded that Americans were far
more willing to participate in cannibalism than they have in the past hundred years. America is a
nation that will not suffer abominations lightly. Seven. And that is the core of the awakening.
Twelve. Eighteen. We will stop al-Qaeda. Now there you go again.
0:17:53 – [Applause]
0:18:02 – Reagan: For the first time we have risen, and I see we are being consumed. I see circles
that are not circles. Billions of dead souls inside containment. Unravellers have eaten country’s
moral fabric, turning hearts into filth. I’m from a kingdom level above human. What does that yield?
A hokey smile that damns an entire nation.
0:18:43 – There is no hope.
0:18:59 – [Applause]
0:12:32 – Reagan: I’ve been to the steel mills of Alaska, and the cornfields of Nebraska. I’ve seen the
derelict offices of Google burn with the window boarded up and the squatters inside them. I’ve seen
the houses where they cut up the little babies. From coast to shining coast I have walked empty down
drooling path The decaying flesh of false morality poisoning our children. I have stood atop the
mountain of this greedy earth, looking upon our beautiful pious pit, filled to bursting with the vast
hands of helplessness. And did you know what I saw?
0:13:57 – Hell.
0:14:20 – [The audience erupts into laughter]
— SCP-1981
Log in to Reply Hide ↑

4. janrandom says:
June 12, 2020 at 12:01 am ~new~

I think AI research has most individual brain areas clear now. But there seems to be less

understanding of how multiple areas work together. Making some modules bigger sometimes lets
you achieve things that in the human brain require multiple modules but it doesn’t seem to scale

super well.
Log in to Reply Hide

5. jp says:
June 12, 2020 at 12:44 am ~new~

As far as extreme oversomplifications go, understanding in Wernickes and production in Brocas is

probably a better one than syntax vs semantics. Brocas connects to motor regions and thus has to

figure out things like what order to put words in. Wernickes connects to auditory input regions and

thus has to know what words mean.

You don’t really need that much syntax to understand language, you can go by word meaning and

context a lot usually.


Log in to Reply Hide

o Matthias says:

June 12, 2020 at 1:03 am ~new~

You can see an example from the letter when you look at a literal word-for-word translation from a

language you don’t speak.

You can usually make out the meaning fairly well.


Log in to Reply Hide ↑

 No One In Particular says:


June 12, 2020 at 5:29 pm ~new~
You can see an example from the letter
Do you mean “example of the latter”?

Word-for-word translations can vary quite a bit. Some are quite intelligible, while others look like

gibberish. There’s a lot of redundancy in language, and sometimes it’s enough to compensate for

the loss of syntax, but sometimes it’s not. One issue is how much of the syntax is carried by the

words, and how much by the arrangement of words. If a language has different morphologies for

verbs versus nouns, nouns used as subjects versus objects, etc., then word-for-word translation

can retain a lot of the meaning.


Log in to Reply Hide ↑

 jp says:
June 13, 2020 at 11:37 pm ~new~

It’s not just word by word translations. You can usually infer a lot about sentence meaning from

what words are in it PLUS THE CONTEXT, which includes the whole world. Look at Freddie deBoer’s

example in this thread.


Log in to Reply Hide ↑

6. BlindKungFuMaster says:
June 12, 2020 at 1:42 am ~new~

This reminds me of the youtube channel “Bad Lip Reading”.


Log in to Reply Hide

o JohnBuridan says:

June 13, 2020 at 6:13 am ~new~

So true! Immediately thought of Ron Paul Bad Lip Reading which was h i l a r i o u s.
Log in to Reply Hide ↑

7. Loweren says:
June 12, 2020 at 1:49 am ~new~

This reminds me of WaveNet speech synthesis examples trained without text.

Check out examples under “Knowing What To Say”: https://deepmind.com/blog/article/wavenet-

generative-model-raw-audio
In order to use WaveNet to turn text into speech, we have to tell it what the text is. We do this by
transforming the text into a sequence of linguistic and phonetic features (which contain information
about the current phoneme, syllable, word, etc.) and by feeding it into WaveNet. This means the
network’s predictions are conditioned not only on the previous audio samples, but also on the text we
want it to say.
If we train the network without the text sequence, it still generates speech, but now it has to make up
what to say. As you can hear from the samples below, this results in a kind of babbling, where real
words are interspersed with made-up word-like sounds:
Log in to Reply Hide

8. TyphonBaalHammon says:

June 12, 2020 at 2:14 am ~new~

It should probably be mentioned that the Broca/Wernicke dichotomy understanding of language

neurology is now considered outdated.

For details on its shortcomings see “Broca and Wernicke are dead : or Moving Past the Classic

Model of Language Neurobiology”, Tremblay & Dick 2016, 10.1016/j.bandl.2016.08.004.;


Log in to Reply Hide

o Dan L says:

June 12, 2020 at 7:14 am ~new~


It should probably be mentioned that the Broca/Wernicke dichotomy understanding of language
neurology is now considered outdated.
“To oversimplify”
10.1016/j.bandl.2016.08.004
Key excerpt:
Importantly, only 2% of the respondents endorsed the idea that the Classic Model (in a generic sense,
not referring to any particular iteration of the model) is the best available theory of language
neurobiology. But it is notable that while 90% of respondents endorsed the notion that the Classic
Model is outdated, they were split on whether there is a good replacement for the model. Only 24%
endorsed the idea that the model should be replaced by another available model from the literature,
but 19% suggested that there is not a good replacement. A large number of the respondents (47%)
suggested that, while they thought the Classic Model was outdated, they considered that it still
served a heuristic function. Thus, the survey reflects a significant range of opinions about the Classic
Model. Some support its use. For example, one respondent wrote ‘‘The classical model is conceptually
correct in many (perhaps most) ways. Certain details are wrong…[but] the classical model is still a
wonderful teaching tool.” In contrast, another respondent wrote ‘‘The ‘classic’ model is not a model of
language neurobiology. It simply associates poorly defined functions to poorly defined anatomical
regions. It doesn’t try to explain how
any language-related processes actually happen in the brain.”
(Specific numbers are, of course, nearly worthless without access to the original survey questions.)
Log in to Reply Hide ↑

 TyphonBaalHammon says:
June 12, 2020 at 2:51 pm ~new~

Alright, maybe I should have said “is on the way out”. In any case, this should incite everyone who

isn’t a neurologist to be more careful. I say this as a linguist who was taught an oversimplified

version of Wernicke/Broca based on stuff written in the mid-20th century.

I think in general, when several fields work a problem at the intersection, often they each cling to
ideas that are at least a bit passé in the other field, and sometimes by a lot

(anthropologists0000000000000 do this with i linguistics, presumably linguists do it with

psychology, etc).

There are good and bad reasons for this, but in the case of something that is at the intersection of

at least three fields (AI, linguistics, neurology) and probably more depending on how you count, we

should be careful.
Log in to Reply Hide ↑

9. Deiseach says:
June 12, 2020 at 3:41 am ~new~

It demonstrates that GPT-3 is using language but it doesn’t understand it? The fluency fools us into

assuming that the thing knows what it is doing and is assigning meaning, but it’s really just a very

smart idiot, a machine that is regurgitating patterns of rules. (I would say “parroting” but people

tell me parrots are actually intelligent).


Log in to Reply Hide
o MNH says:

June 12, 2020 at 6:30 am ~new~

I don’t think it demonstrates that that _is_ true, only that it could be
Log in to Reply Hide ↑
10. Well... says:

June 12, 2020 at 5:16 am ~new~

Reminds me of the babbling of 3 year-olds. My kids both talked just like this for a while. They’d

start out saying something out loud but then finish up a long nonsensical paragraph half under

their breath.
Log in to Reply Hide

11. liskantope says:

June 12, 2020 at 5:30 am ~new~

Whenever I hear the speech (or read a transcript) of someone with Wernicke’s, I’m reminded of

the disordered way I sometimes catch my thoughts forming into words as I’m falling asleep. It’s

been many years since I last took Ambien, but my recollection is that one of the main effects was

to exaggerate this, to make my thoughts go in run-on sentences that randomly changed topics

every few phrases even while I was otherwise fairly awake and aware. On a complete layman’s

level, I’ve always hypothesized that something in my brain which is responsible for ordered

speech-formation shuts off at the point of falling asleep (this could also be related to the

sometimes disjointed and nonsensical nature of dream content), while for some people that area of

the brain is damaged permanently.


Log in to Reply Hide

o akc09 says:

June 12, 2020 at 7:46 am ~new~

I get this too. My inner monologue keeps a totally “confident” conversational tone and the

grammar is often fine, so it takes me a while to catch that the sentences themselves make no

sense at all.
Log in to Reply Hide ↑

o mtl1882 says:

June 12, 2020 at 12:00 pm ~new~

I’ve been an insomniac my whole life, and when it is really bad, I tend to start out getting close to

falling asleep, and at that point something goes awry. It’s almost like dreaming while awake—I’m

awake, but I lose control over the coherence of my thoughts, and instead of the incoherent

thoughts being part of drifting into sleep, they amp up. They just twist and turn into elaborate
nonsense that seems to make perfect sense, but I keep catching the incoherence after a few

seconds and being startled. On those nights, I don’t sleep at all. I feel like it is one of these

processes malfunctioning.
Log in to Reply Hide ↑

o a reader says:

June 12, 2020 at 10:10 pm ~new~

I also experience this when on the point of falling asleep and I remember this thing was discussed

in a former Open Thread where many people said they experience this too (the analogy with GPT-2

was used there).

It might be interesting to know – maybe in a further SSC survey? – how many of us experience

this and if it is a generally human experience or something specific to us nerds.

In my case, sometimes those meaningless sentences seem conversations between people I know.
Log in to Reply Hide ↑

12. Urstoff says:


June 12, 2020 at 7:12 am ~new~

It also seems like people with Wernicke’s aphasia don’t know that they are just saying nonsense,

whereas people with Broca’s aphasia do know about their deficit and get very frustrated that they

can’t get the right words out.


Log in to Reply Hide
o elspeth artemis says:

June 12, 2020 at 3:55 pm ~new~

do most people not monitor their speech in real time? i’m at least ten percent sure i would notice

something was wrong.


Log in to Reply Hide ↑

 deciusbrutus says:
June 13, 2020 at 2:30 am ~new~

Most people don’t monitor, in real time, the speech of the people that they are listening to to well

enough to notice
Log in to Reply Hide ↑

o Azure says:

June 12, 2020 at 5:38 pm ~new~

My father had Wernicke’s aphasia for the last few months of his life (one of those ‘wow this is

fascinating, also I want to punch something’ situations), and one of the most interesting things was

that since his damage was from a tumor, the degradation of his language skills progressed
observabily. At first he’d talk mostly normally but now and then say the wrong word, with some

consistency over short times, (During one conversation, he tried to say ‘back’ a few times in a

short time and always said paint instead, but it wasn’t a long-lasting correspondence).

There was a middle period where he’d get a lot more words wrong, but he would watch people and

see if he was starting to sound like nonsense, and then try again a few times using different words

to see if he could hit ones he still had. I didn’t even notice how much he was watching people to

use their expression to monitor the quality of his speech production until I realized how much less

coherent he was on phone calls than in person during the same time period.
Log in to Reply Hide ↑

 rui says:
June 12, 2020 at 9:20 pm ~new~

So from your experience, it’s not like he was losing the meaning. He was finding the wrong words,

while not being able to sense that those words were wrong. Right? Yet could he still understand

others fine?
Log in to Reply Hide ↑

 Azure says:
June 12, 2020 at 9:46 pm ~new~

Yes. He seemed to have all the meanings of what he wanted to say, but get the wrong ‘sound’

when trying to pronounce it. At least up to the ‘middle’ loss of functionality. In the later stages he

was losing other functionality, too, so it’s harder to say.


Log in to Reply Hide ↑

 Gerry Quinn says:


June 14, 2020 at 10:20 am ~new~

The guy in the video was doing that too. When the interviewer raised her voice, he was obviously

concerned because he knew he wasn’t getting across to her, and he tried.


Log in to Reply Hide ↑
o anonanon says:

June 12, 2020 at 8:41 pm ~new~

I have had a mercifully-brief experience of Broca’s aphasia, and endorse this comment. My go-to

explanation for the experience is “imagine the feeling where a word is just on the tip of your

tongue, but for every word.”


Log in to Reply Hide ↑

13. Act_II says:


June 12, 2020 at 7:51 am ~new~

I disagree that GPT-3 is operating on a higher level than this person. The aphasiac is clearly trying

to communicate something, but his words don’t correspond to his meaning. GPT-3’s words don’t

have a meaning to correspond to. Its output makes more syntactic sense, but any meaning is
coming from the reader (and as people have mentioned here in the past, humans do a good job of

inferring meaning from the meaningless and glossing over inconsistencies).


Log in to Reply Hide

14. Freddie deBoer says:


June 12, 2020 at 7:57 am ~new~

In the 1980s Nicaragua instituted a number of social reforms. One was to start schools for the

deaf, who had previously had terribly little in the way of governmental support. So a bunch of kids

from small towns and the cities were brought to live in these schools for the deaf with tons of other

kids.

Now, for whatever ungodly reason, the powers that be insisted that children only sign by spelling

out Spanish words, letter by letter. Anyone who’s used language should know how terribly

inefficient this is. The staff actively prohibited these kids from signing any other way.

So as a social group that needed to use language as we all do, the kids developed Nicaraguan sign

language, a version of sign wholly unique to them. These were kids who mostly came from intense

poverty, some of whom had faced neglect and abuse, all of whom had grown up linguistically

deprived, and who were being actively discouraged by adult supervision. And yet they

spontaneously generated a functioning human grammar, which are of such immense complexity we

cannot say that we’ve perfectly described one yet in centuries of study. That’s what the human

capacity for language can do. It’s for reasons like this that I am a little less excited by current

language AI even as it has made major leaps. The human capacity for language remains

unfathomable.
Log in to Reply Hide
o Act_II says:

June 12, 2020 at 8:51 am ~new~

Unrelated to your point, but there’s a board game based on this story.
Log in to Reply Hide ↑

15. Freddie deBoer says:


June 12, 2020 at 8:12 am ~new~

OK another thing. The Stanford computer science professor Terry Winograd once offered up a

challenge question. It goes like this. Consider these two grammatically identical sentences:

“The committee denied the group a permit because they advocated violence.”

“The committee denied the group a permit because they feared violence.”

So Winograd’s question is, what is the coindex of “they” in each sentence? Syntactically, it’s

ambiguous; the coindexing has to come from semantic context, from meaning. Most people say

that in the first sentence they is coindexed with “the group,” who are the ones advocating violence.

And most people say that they in the second sentence is coindexed with “the committee,” who are

the ones fearing violence. Why? Not for an syntactic reason, but because people parse these
sentences through the meaning of the words. We know something about committees and about the

kind of groups that protest and that knowledge informs how we derive meaning from the

sentences. And so the question becomes, can you have an AI capable of near-perfect language use

without semantic knowledge, without a theory of the world?


Log in to Reply Hide
o dogiv says:

June 12, 2020 at 8:40 am ~new~

GPT-3 has gotten very good at Winograd schemas. It still has very poor understanding of the

world, as you can tell by how it contradicts itself, mixes up fake “facts” from a word salad of real

facts, and so forth.


Log in to Reply Hide ↑

 Freddie deBoer says:


June 12, 2020 at 8:44 am ~new~

I wonder how it pulls it off. I suppose maybe in the simplistic sense you’re going to find

“advocating violence” more often with protest groups (or X with X etc) but I’m sure it’s far more

complicated than that.


Log in to Reply Hide ↑

 thisheavenlyconjugation says:
June 12, 2020 at 9:49 am ~new~

GPT-3 achieves near-human performance on the original Winograd schema dataset (90%, while

humans achieve 90-100%) but is still some way below (78% in the best case) on the Winogrande

dataset which is designed to be harder for computers (without being any more difficult for

humans).
Log in to Reply Hide ↑

 Yug Gnirob says:


June 12, 2020 at 10:31 am ~new~

I would think it would be easier to say “denied” links with “feared” and is opposite to “advocated”.
Log in to Reply Hide ↑

 Gerry Quinn says:


June 14, 2020 at 10:09 am ~new~

You can make it work (‘they’ referring to the group in the second sentence) if the committee are

leaders of an unusually bureaucratic revolutionary organisation who are concerned that a sub-

group of their organisation would not be suitable material for a potentially violent protest event.
Log in to Reply Hide ↑
o deciusbrutus says:

June 13, 2020 at 3:17 am ~new~

Sure. It can even write that sentence. It can even *parse* that sentence and understand that

someone fears or advocates violence in a response.

“They [feared/advocated] violence, and that was that. The committee wasn’t going to reconsider

the decision until that changed.”

Note that that construction works for all possible reasons: “The committee denied the group a

permit because the mitochondria is the powerhouse of the cell.”…. “The mitochondria is the

powerhouse of the cell, and that was that. The committee wasn’t going to reconsider the decision

until that changed.” (less absurd examples might be “Because Chairman Brown said so”; one that

would be less neat of a lift and patch job would be “Because because because because because,

because of the wonderful things he does”, but I think that most humans wouldn’t be able to parse

“If ever (oh ever) a wiz there was, the Wizard of Oz is [a wiz] because of the wonderful things he

does, and would instead say that the speaker was off to see the Wizard because of the things he

does, a belief wholly unsupported by the listeners knowledge of the speakers’ experience at the

time (the speakers are simply stating their validation for why they think that if there ever was a

wiz, the Wizard of Oz is a whiz of a wiz, presumably because he “Sure plays a mean pin ball”, and

“Arrives precisely when he means to” since those are the most oft repeated things that a Wizard

does.
Log in to Reply Hide ↑

16. gmacd18 says:


June 13, 2020 at 6:39 am ~new~

“I hope the world lasts for you” is quite the sign-off.


Log in to Reply Hide

17. pjiq says:


June 13, 2020 at 2:18 pm ~new~

Watched the man speak and it seemed there was *some* purpose to what he was saying, and if

you spent enough time with him/ knew his life history you might get what he is trying to

communicate.

In some ways the wittgenstein “if a lion could talk, we would not understand him” is the primary

thing that limits this tech imo. No AI developed on the internet will do more than imitate human

speech patterns. How can an algorithm that lives on the internet truly comprehend what “slimy”

means or what cinnamon smells like or what hunger is? Even it’s ability to comprehend near and

far or big and small is suspect. Whatever it has to say about such things will be via mining human

data. If we started making everyone wear Google glass then something like that could give AI a
much better shot at attaching true meaning to words, but even then it’ll have the same problems

as wittgenstein’s lion.

But in spite of this I suspect AI writing algorithms will officially pass the turing test in the next

decade. They dont have to understand us, just pretend to be us. Any time i type in a Google search

it’s almost like im asking a robot a question, and it’s already proving itself to be a wonderful

conversationalist.

In general I find AI language technology that can write convincingly basically inevitable and also

terrifying. Some of the arguments by ancient societies about the dangers of letting people learn

how to read appeal to me when I hear about this sort of thing. Like if 95% of the internet is written

by bots in 10 years isnt there something extremely wrong with that? I feel if an article is bot

generated it needs at least some level of labeling letting people know at least that it wasnt written

by a human (i also think articles where people received compensation should be labeled as such,

with who they received compensation from). The potential influence of an AI that actually masters

(“understands” is a word that can always be disputed) the nuances of language to manipulate

public opinion is huge.

Treating this like it’s saving humanity from drudgery is not the right approach I dont think. It’s a

weapon in the war of words rather than the war of bullets, and as in all wars these conflicts are

basically zero sum. Drudgery will be the only jobs that still exist if AI continues to develop. Like

which will come first, a robot that can do technical writing or a robot that can economically clean

every nook and cranny of a fridge cheaper than a janitor?

Call center jobs are unpleasant, but ive worked at call centers and most of the call volume could be

eliminated by fixing technical issues on the website and other things much less dramatic means

than “let’s create a robot that can talk and legally pretend to be human, what could ever go wrong

with that?”

The lack of moral discussion regarding whether “ai that can talk is even desirable” really bothers

me here Scott. Id really appreciate a discussion of the dangers of language manipulation in AI

specifically. My opinion here is obviously rather defensive. At the very least i want it to be legally

required for a company to notify me if the voice I’m talking to or the article im reading is bot

generated. Our world actually doesn’t have to become a horrifying dystopia, but we can’t just

follow the path of least resistance. That always leads to a train wreck- and nowadays the trains are

so much bigger.
Log in to Reply Hide

18. haxen says:


June 14, 2020 at 2:48 am ~new~

Seems like we have ideas/concepts in the “hidden neurons” prior to engaging these two language

areas. Wernicke’s area gives a well-trained mapping of ideas to words, a bit like the output layer in

a regular NN, and Broca’s area ‘builds-out’ the words into phrases, contexts and cues.
I think current NN models don’t really map well to these ideas as they’re not really ensembles like

the brain seems to be, just single models. But GPT-3 seems to perform well as a fluent aphasic,

but with a well-trained language model limiting it to things that usually make sense. I don’t know

transformer architecture well enough to say that I could separate it into the two areas. Perhaps the

initial attention component maps to Wernike’s area plus feels like it captures ideas based on corpus

training. Like most NNs, it doesn’t seem to map to what the brain does, but feels close, like if you

tried to oversimplify it and mashed together some core parts.


Log in to Reply Hide

19. nimim.k.m. says:


June 14, 2020 at 4:52 am ~new~
During yesterday’s discussion of GPT-3, a commenter mentioned how alien it felt to watch
something use language perfectly without quite making sense. I agree it’s eerie, but it isn’t some kind
of inhuman robot weirdness. Any one of us is a railroad-spike-through-the-head away from doing
the same.
I don’t know if this was in response to my my comment there, but I was one of the people to use

the word “alien”.

Anyway, now when I re-read the comment, I realize I did not write explicitly out the main thing

that I was trying to allude with the “alienness”: we can create an technological wonder that can

produce text which fools humans (at least, inattentive ones who are not doing their best to recover

the underlying meaning and checking if they understand it and it makes sense) to take it as written

by human, but also have abilities in computing grade-school additions or concept of natural

numbers or playing chess that scale weirdly with the amount of computational units and such. This

was less about the syntax vs semantics difference as here.


Log in to Reply Hide

20. clipmaker says:


June 14, 2020 at 4:56 pm ~new~

Interesting about Wernicke’s aphasia. I experience something like it when falling asleep or not

completely woken up, and figured it was normal (hypnagogia-like) during that situation. The word

stream is sort of random but not completely so: I can steer its content a little bit from a distance.

I’ve wondered if it meant removing the filter from a creative process.


Log in to Reply Hide

21. Alkatyn says:


June 15, 2020 at 6:45 am ~new~

Speaking with or reading English learners who’ve mostly learned in very academic contexts, rather

than starting with spoken conversation I would often find them using English in ways that were

technically grammatically accurate, but felt very jarring because they weren’t structured in

anything like what a native speaker would intuitively generate.


Log in to Reply Hide

22. Mosharrof Hossain says:

June 18, 2020 at 10:41 am ~new~


In order to use WaveNet to turn text into speech, we have to tell it what the text is. We do this by
transforming the text into a sequence of linguistic and phonetic features (which contain information
about the current phoneme, syllable, word, etc.) and by feeding it into WaveNet. This means the
network’s predictions are conditioned not only on the previous audio samples but also on the text we
want it to say.
If we train the network without the text sequence, it still generates speech, but now it has to make up
what to say. As you can hear from the samples below, this results in a kind of babbling, where real
words are interspersed with made-up word-like sounds.
— WaveNet.

Nice post. Keep it up…


Log in to Reply Hide
LEAVE A REPLY
You must be logged in to post a comment.
 META
 Register
 Log in
 Entries feed
 Comments feed
 WordPress.org
 SUBSCRIBE VIA EMAIL

Email Address

 ADVERTISEMENTS

MealSquares is a "nutritionally complete" food that contains a balanced diet worth of nutrients in a
few tasty easily measurable units. Think Soylent, except zero preparation, made with natural
ingredients, and looks/tastes a lot like an ordinary scone.

Altruisto is a browser extension so that when you shop online, a portion of the money you pay goes to
effective charities (no extra cost to you). Just install an extension and when you buy something,
people in poverty will get medicines, bed nets, or financial aid.
The Effective Altruism newsletter provides monthly updates on the highest-impact ways to do good
and help others.

80,000 Hours researches different problems and professions to help you figure out how to do as
much good as possible. Their free career guide show you how to choose a career that's fulfilling and
maximises your contribution to solving the world's most pressing problems.

Norwegian founders with an international team on a mission to offer the equivalent of a Norwegian
social safety net globally available as a membership. Currently offering travel medical insurance for
nomads, and global health insurance for remote teams.

The COVID-19 Forecasting Project at the University of Oxford is making advanced pandemic
simulations of 150+ countries available to the public, and also offer pro-bono forecasting services to
decision-makers.

Jane Street is a quantitative trading firm with a focus on technology and collaborative problem
solving. We're always hiring talented programmers, traders, and researchers and have internships
and fulltime positions in New York, London, and Hong Kong. No background in finance required.

Giving What We Can is a charitable movement promoting giving some of your money to the
developing world or other worthy causes. If you're interested in this, consider taking their Pledge as a
formal and public declaration of intent.

B4X is a free and open source developer tool that allows users to write apps for Android, iOS, and
more.

Dr. Laura Baur is a psychiatrist with interests in literature review, reproductive psychiatry, and
relational psychotherapy; see her website for more. Note that due to conflict of interest she doesn't
treat people in the NYC rationalist social scene.
The Charter Cities Institute is working on ways governments can set up special zones with unique
legal institutions. Learn more about how this could help tackle problems from global poverty to
climate change.

Substack is a blogging site that helps writers earn money and readers discover articles they'll like.

AISafety.com hosts a Skype reading group Wednesdays at 19:45 UTC, reading new and old articles on
different aspects of AI Safety. We start with a presentation of a summary of the article, and then
discuss in a friendly atmosphere.

Seattle Anxiety Specialists are a therapy practice helping people overcome anxiety and related mental
health issues (eg GAD, OCD, PTSD) through evidence based interventions and self-exploration. Check
out their free anti-anxiety guide here
.

Metaculus is a platform for generating crowd-sourced predictions about the future, especially science
and technology. If you're interested in testing yourself and contributing to their project, check out
their questions page

Beeminder's an evidence-based willpower augmention tool that collects quantifiable data about your
life, then helps you organize it into commitment mechanisms so you can keep resolutions. They've
also got a blog about what they're doing here

Support Slate Star Codex on Patreon. I have a day job and SSC gets free hosting, so don't feel
pressured to contribute. But extra cash helps pay for contest prizes, meetup expenses, and me
spending extra time blogging instead of working.

100 comments since


[+]
0%

10%

20%

30%

40%

50%

60%

70%

80%

90%

100%

You might also like