Memorandum

To: Republican Commerce Committee LAs
From: Republican Commerce Committee Staff
Date: June 28, 2024
RE: Paul Christiano, Head of AI Safety, Artificial Intelligence Safety Institute, NIST

Executive Summary

While at Camp David last fall, President Biden tried to relax by screening the summer blockbuster,
Mission: Impossible – Dead Reckoning Part One. Yet as he watched the film’s AI villain become
sentient, go rogue, and attack government entities, the president “saw plenty . . . to worry about”
regarding AI. 1 White House Deputy Chief of Staff Bruce Reed later explained that this movie spurred Biden
to move forward with a sweeping AI Executive Order on October 30, 2023. 2 Two days later, the
Department of Commerce stood up the Artificial Intelligence Safety Institute (AISI) within the National
Institute of Standards and Technology (NIST) “to lead the U.S. government’s efforts on AI safety and
trust, particularly for evaluating the most advanced AI models” and “support the responsibilities
assigned to the Department of Commerce under the historic executive order.” 3

Whom did the administration select to lead “AI Safety” at this new, powerful government entity charged
with setting national standards for AI development? Paul Christiano—an AI Doomer who wants more
government control over computer technology because he estimates “a 10-20 percent chance of [an] AI
takeover” that results in humans dying, and “a 50-50 chance of doom shortly after you have AI systems
that are human level.” 4 Christiano has argued that governments should “implement [AI] regulation
immediately” and enact AI “policies that will often lead to temporary pauses or slowdowns in
practice.” 5

Christiano has built his career on the unscientific prophecy that AI poses an existential risk to future
societies. Based on this belief, Christiano created tools to measure the “alignment” of AI systems to
certify their “safety.” The problem with this approach is that the assessor decides what values, ethics,
and goals must guide an AI system, and therefore, whether AI is adhering to the assessor’s subjective
definition of “safety.” In order for the federal government to assess AI alignment—as Christiano
proposes—it must adopt sweeping powers and tools to determine what information is “safe.” Such tools
have enormous potential to facilitate the same kind of government-Big Tech censorship collusion seen
during the pandemic and 2020 election. Moreover, if the government requires new AI developers
seeking to enter the marketplace to pass its safety assessments—which Big Tech and its allies, like
Christiano, build—the government will not only enable regulatory capture, but also promote
marketplace consolidation.

As Congress considers legislation to codify AISI and empower this new regulator to set national
standards for the nascent AI industry, lawmakers should consider whether Christiano and Biden
administration allies will use their newfound authority to champion alarmist AI policies that stifle
competition, consolidate AI development into the hands of a few companies (including one Christiano
personally helped to create), and further government censorship of Americans.

This memo provides information regarding (1) Christiano’s radical statements on AI, (2) his
professional background, including his time as a senior scientist at OpenAI and close association with
the controversial Effective Altruism and longtermism philosophies, and (3) his appointment to lead
AISI.

I. CHRISTIANO HAS MADE NUMEROUS APOCALYPTIC AND UNGROUNDED PREDICTIONS ABOUT THE RISK OF AI TO HUMANITY.

• Christiano follows in the grand tradition of stoking anxieties to exert control over industry. In the
mid-19th century, Great Britain limited early automobiles to a speed of 2 miles per hour
in response to horse owners’ claims about the dangers of the new “horseless carriage.” 6 In the
1970s, “peak oil” advocates pumped fears that the world would soon run out of oil so
environmentalists could use taxpayers’ money to subsidize competing energy sources. 7 Now,
Christiano and his ilk claim that the only way to prevent AI from destroying humanity is to grant
government bureaucrats sweeping new regulatory powers.
• Last year, in a podcast episode titled “How We Prevent the AI’s from Killing Us with Paul
Christiano,” Christiano prophesied that “there’s something like a 10–20 percent chance of [an]
AI takeover” that kills humans, and “overall, maybe you’re getting more up to a 50-50 chance
of doom shortly after you have AI systems that are human level.” 8
o He further claimed that “[t]he most likely way we die involves—not AI comes out of the blue
and kills everyone—but involves we have deployed a lot of AI everywhere . . . [and] if for
some reason, God forbid, all these AI systems were trying to kill us, they would
definitely kill us.”
• Over the past decade, Christiano has authored several blog posts on the Effective Altruism Forum
that further reveal his personal anxieties about AI development and the need for government control.
o In an October 2023 blog post titled, “Thoughts on Responsible Scaling Policies and
Regulations,” he advocated for “[a] durable, global, effectively enforced, and hardware-
inclusive pause on frontier AI development.” 9 He further stated that:
▪ “I think the safest action in an ideal world would be pausing [AI development]
immediately until we were better prepared.”
▪ “[If] governments shared my perspective on risk then I think they should
already be implementing domestic policies that will often lead to temporary
pauses or slowdowns in practice.”
▪ “In reality I think the expected risk is large enough (including some risk of a
catastrophe surprisingly soon) that a sufficiently competent state would
implement regulation immediately.”
o In an eerie 2014 blog post titled “On Progress and Prosperity,” Christiano wrote
that he considered “the collective welfare of future people to be substantially more
important than the welfare of existing people.” 10
o In a 2013 blog post titled, “Three Impacts of Machine Intelligence,” he explained “that
machine intelligences can make a plausible case that they deserve equal moral standing
. . . and that an increasingly cosmopolitan society will be hesitant about taking drastic anti-machine
measures (whether to prevent machines from having ‘anti-social’ values, or
reclaiming resources or denying rights to machines with such values).” 11
• Much like the Wilsonian administrative statists who believed specialized power should be
concentrated in the hands of a few technocrats, Christiano also believes AI development should be
limited to a few, established actors.
o During a 2018 podcast, Christiano stated “to the extent that we have a coordination problem
amongst developers of AI, to the extent that the field is hard to reach agreements or
regulate, as there are more and more actors, then [I] almost equally prefer not to have a
bunch of new actors.” 12 He added that “it’s nicer to have a smaller number of more pro-
social actors than to have a larger number of actors with uncertain . . . or even a similar
distribution of motivations.”

II. CHRISTIANO’S PROFESSIONAL BACKGROUND AS AN ACOLYTE OF EFFECTIVE ALTRUISM AND LONGTERMISM EXPLAINS HIS STATEMENTS.

• Christiano is closely associated with two controversial philosophies: Effective Altruism and
longtermism. 13
o Effective Altruism is a 21st century philosophy that advocates “using evidence and reason to
figure out how to benefit others as much as possible, and taking action on that basis.” 14
▪ Effective Altruists believe that AI is an existential threat to humanity. 15
▪ The philosophy gained notoriety after the collapse of FTX and indictment of Sam
Bankman-Fried, the FTX founder and Effective Altruism advocate. 16
o The closely related philosophy, longtermism, posits that “the long-term future” of human
potential “matters more than anything else, so we should . . . ensure not only that it exists, but
that it’s utopian.” 17
▪ Longtermists believe that we should use advanced technologies to reengineer our
bodies and brains to create a “superior” race of radically enhanced “posthumans,”
which is necessary for humanity to achieve its full potential. 18
▪ Critics of longtermism argue that the philosophy is “rooted in eugenics” and is
“eugenics under a different name.” 19
o Proponents of these philosophies are sometimes referred to as “AI Doomers,” because they
believe AI poses existential risks to humanity. 20 Therefore, as a former longtermist
explained, when they refer to AI “risks,” they are referring to “any possibility of [human]
potential being destroyed” centuries from now. 21 Critics of longtermism have noted its
striking resemblance to other utopian movements that advocated for reprehensible violence
to accomplish their goals.
• After completing his PhD in theoretical computer science in 2017, Christiano began his career as a
fellow at the Future of Humanity Institute (FHI)—a controversial, Oxford University-based
research group dedicated to studying existential risks and advancing longtermism. 22
o An ex-longtermist labeled FHI’s work as a “noxious ideology” and “eugenics on steroids.” 23
▪ Throughout its existence, FHI was led by Professor Nick Bostrom, who argued in a
2007 paper “that there are no compelling reasons to resist the use of genetic
intervention to select the best children.” 24
o FHI also advocated for government control of AI. In a 2017 paper, Bostrom argued in favor
of increasing the “openness” about the “capabilities, goals, and plans” for AI development,
because it could “increas[e] the probability of government control of advanced AI.” 25
▪ Bostrom thanked Christiano for his contributions to this paper.
o Oxford shut down FHI in April 2024 after Bostrom’s past use of racist language and
statements came to light, among other controversies. 26 In an “apology,” Bostrom failed to
withdraw his comments and seemed to make a partial defense of eugenics. 27
• Christiano worked from 2017 to 2021 at OpenAI (which developed ChatGPT), leading its
language model alignment team. 28
• From around 2016 to 2021, Christiano served as a technical advisor at the Open Philanthropy
Project, a left-wing grantmaking foundation controlled by Facebook co-founder Dustin Moskovitz
that issues grants based on Effective Altruism. 29
o For instance, in 2019 the Project donated $750,000 to the Real Justice PAC—which works to
elect pro-crime prosecutors; 30 in 2021 it donated approximately $42 million to Georgetown’s
Center for Security and Emerging Technology—a think tank that warns about the existential
threat of AI; 31 and in 2022 it gave over $10 million to the Clinton Health Access Initiative—
a Clinton Foundation program. 32
o The Open Philanthropy Project has also given hundreds of millions of dollars to think tanks
and programs that place staffers in the federal government. 33
• Christiano left OpenAI in 2021 to start Open Philanthropy-funded nonprofits that provide AI
companies with risk and safety assessments and write safety standards that AISI—where
Christiano now heads AI Safety—relies upon to develop federal guidelines. 34
o He first started the Alignment Research Center (ARC), a non-profit focused on developing
techniques to identify and test whether an AI model is potentially dangerous. 35
▪ ARC received $1.5 million from Open Philanthropy and $1.25 million from Sam
Bankman-Fried’s FTX Foundation. 36
o In September 2023, ARC spun out a new nonprofit, Model Evaluation and Threat
Research (METR), which provides AI risk and safety assessments to companies. 37
▪ In February 2024, METR published an AI standard (used by the UK AI Safety
Institute and other organizations) to evaluate AI safety. 38
o METR and ARC’s work focuses on testing whether an AI is “aligned” and therefore “safe.”
Alignment refers to an AI producing outputs that are aligned with the AI developer’s values,
goals, and ethics. When government bureaucrats seek to set AI alignment in the name of
“safety,” they necessarily engage in judgment calls about science, bias, history, and
misinformation.
o METR is a member of the NIST AI Safety Institute Consortium, which is housed under AISI,
and a partner of the UK AI Safety Institute, including the Frontier AI Taskforce. 39 In these
roles, METR assists the U.S. and U.K. governments in “developing or deploying AI in safe,
secure, and trustworthy ways.” 40
▪ Given that Christiano created METR, it is likely that, as Head of AI Safety at AISI,
he will give METR’s recommendations undue weight.
• In 2023, Christiano served on the advisory board of the UK Frontier AI Taskforce—a
government “start-up” dedicated to evaluating the risks of AI. 41 In October 2023, the Taskforce
published a discussion paper on AI risks that calls for censorship and echoes the ideas FHI
championed. 42
o The paper warns that there is a “significant” risk AI will “degrad[e] the information
environment” and could “be misused to deliberately spread false information.”
o It also posits that “future advanced AI systems will seek to increase their own influence and
reduce human control, with potentially catastrophic consequences.”
• From 2023 until April 2024, Christiano served as a trustee for Anthropic’s “Long-Term
Benefit Trust”—which manages the AI company—along with Jason Matheny, CEO of the RAND
Corporation and a former Biden White House aide. 43

III. NIST EMPLOYEES VIEW CHRISTIANO’S APPOINTMENT TO LEAD THE ARTIFICIAL INTELLIGENCE SAFETY INSTITUTE AS PROBLEMATIC.

• In response to the AI Executive Order, Commerce Secretary Gina Raimondo formed AISI within
NIST to establish AI safety standards and guidance for the private sector and government agencies.
o NIST’s mission is “to promote US innovation and industrial competitiveness by advancing
measurement science, standards, and technology in ways that enhance economic security and
improve our quality of life.” 44 It has a reputation for being a non-partisan, scientific agency. 45
• Secretary Raimondo appointed Christiano as Head of AI Safety at AISI in April 2024. 46 NIST
employees threatened to resign over Christiano’s appointment because they feared that his
association with Effective Altruism and longtermism could compromise the institute’s
objectivity and integrity. 47

1. https://apnews.com/article/biden-ai-artificial-intelligence-executive-order-cb86162000d894f238f28ac029005059
2. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
3. https://www.commerce.gov/news/press-releases/2023/11/direction-president-biden-department-commerce-establish-us-artificial
4. https://www.youtube.com/watch?v=GyFkWb903aU&ab_channel=Bankless
5. https://www.alignmentforum.org/posts/dxgEaDrEBkkE96CXr/thoughts-on-responsible-scaling-policies-and-regulation
6. https://www.mentalfloss.com/article/71555/ridiculous-uk-traffic-laws-yore
7. https://www.wsj.com/articles/why-peak-oil-predictions-haven-t-come-true-1411937788
8. https://www.youtube.com/watch?v=GyFkWb903aU&ab_channel=Bankless
9. https://forum.effectivealtruism.org/posts/cKW4db8u2uFEAHewg/thoughts-on-responsible-scaling-policies-and-regulation
10. https://forum.effectivealtruism.org/posts/L9tpuR6ZZ3CGHackY/on-progress-and-prosperity
11. https://forum.effectivealtruism.org/posts/KdxGwxwY3t7iw9xjB/three-impacts-of-machine-intelligence
12. https://forum.effectivealtruism.org/posts/fmk8xJG2TPBc2W7zo/paul-christiano-on-how-openai-is-developing-real-solutions
13. https://venturebeat.com/ai/nist-staffers-revolt-against-potential-appointment-of-effective-altruist-ai-researcher-to-us-ai-safety-institute/
14. https://www.centreforeffectivealtruism.org/ceas-guiding-principles
15. https://www.politico.com/news/2023/12/30/ai-debate-culture-clash-dc-silicon-valley-00133323
16. https://www.bbc.com/worklife/article/20231009-ftxs-sam-bankman-fried-believed-in-effective-altruism-what-is-it
17. https://www.vox.com/future-perfect/23298870/effective-altruism-longtermism-will-macaskill-future
18. https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
19. https://www.truthdig.com/articles/nick-bostrom-longtermism-and-the-eternal-return-of-eugenics-2/
20. https://www.thedailybeast.com/the-ai-doomers-have-infiltrated-washington
21. https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
22. https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes; https://forum.effectivealtruism.org/topics/paul-christiano
23. https://www.theguardian.com/technology/2024/apr/28/nick-bostrom-controversial-future-of-humanity-institute-closure-longtermism-affective-altruism
24. https://nickbostrom.com/ethics/human-enhancement
25. https://nickbostrom.com/papers/openness.pdf
26. https://www.thedailybeast.com/nick-bostrom-oxford-philosopher-admits-writing-racist-n-word-email
27. https://nickbostrom.com/oldemail.pdf
28. https://paulfchristiano.com/
29. https://www.openphilanthropy.org/grants/?q=%22effective+altruism%22; https://www.influencewatch.org/person/dustin-moskovitz-and-cari-tuna-wife/; https://forum.effectivealtruism.org/topics/paul-christiano
30. https://www.openphilanthropy.org/grants/real-justice-pac-criminal-justice-reform-october-2019/; https://realjusticepac.org/endorsements/
31. https://www.openphilanthropy.org/grants/?organization-name=center-for-security-and-emerging-technology; https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362
32. https://www.openphilanthropy.org/grants/clinton-health-access-initiative-chai-incubator/#:~:text=Open%20Philanthropy%20recommended%20a%20grant,was%20made%20on%20GiveWell's%20recommendation
33. https://www.politico.com/news/2023/12/30/ai-debate-culture-clash-dc-silicon-valley-00133323
34. https://ai-alignment.com/announcing-the-alignment-research-center-a9b07f77431b
35. https://www.alignment.org/
36. https://www.openphilanthropy.org/grants/alignment-research-center-general-support-november-2022/; https://openphilanthropy.org/grants/alignment-research-center-general-support/; https://www.wsj.com/articles/ftx-seeks-to-recoup-sam-bankman-frieds-charitable-donations-11673049354
37. https://metr.org/
38. https://metr.org/blog/2024-02-29-metr-task-standard/
39. https://metr.org/
40. https://www.nist.gov/aisi/artificial-intelligence-safety-institute-consortium-aisic
41. https://www.gov.uk/government/publications/frontier-ai-taskforce-first-progress-report/frontier-ai-taskforce-first-progress-report
42. https://www.gov.uk/government/publications/frontier-ai-capabilities-and-risks-discussion-paper/frontier-ai-capabilities-and-risks-discussion-paper
43. https://www.anthropic.com/news/the-long-term-benefit-trust; https://www.vox.com/future-perfect/23794855/anthropic-ai-openai-claude-2
44. https://www.nist.gov/about-nist
45. https://ww2.aip.org/fyi/2017/nist-director-nominee-discusses-his-vision-agency; https://www.nist.gov/speech-testimony/developing-nist-privacy-framework-how-can-collaborative-process-help-manage-privacy
46. https://www.nist.gov/news-events/news/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety
47. https://venturebeat.com/ai/nist-staffers-revolt-against-potential-appointment-of-effective-altruist-ai-researcher-to-us-ai-safety-institute/
