
We just hosted our 2nd annual Sequoia Capital AI Ascent, a gathering of the 100 top founders and researchers in AI.

Big thank you to our amazing speakers including Sam Altman, William Peebles, and Noam Brown of OpenAI, Arthur Mensch of Mistral AI, Daniela Amodei of Anthropic, Dylan Field of Figma, and heavy hitters like Andrej Karpathy, Andrew Ng, Harrison Chase, and more.

Here are some of the most thought-provoking ideas from the day:

Idea #1: LLMs as Agents


- LLMs have the potential to be powerful agents, defined as (1) choosing a sequence of actions to take (through reasoning/planning or hard-coded chains) and (2) executing that sequence of actions
- Per Andrew Ng and Harrison Chase: some agent capabilities are robust (tool use, reflection, chaining together actions in LangChain) while others are emergent (planning/reasoning, multiple agents, memory)
- Examples: Zapier or Glean for actions, Cognition for reasoning
- Andrej Karpathy offered an elegant prediction: self-contained agents are roughly where we are headed as AGI works its way into the different nooks and crannies of the economy.
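The two-part definition above (choose a sequence of actions, then execute it) can be sketched in a few lines. Everything here is a hypothetical illustration: the tools, the keyword-based planner, and the names are stand-ins, not any speaker's or vendor's actual API.

```python
# Minimal sketch of the agent definition: (1) choose a sequence of
# actions, (2) execute that sequence. All names here are hypothetical.

def search_web(query: str) -> str:
    """Stand-in 'tool': pretend to look something up."""
    return f"results for {query!r}"

def calculator(expr: str) -> str:
    """Stand-in 'tool': evaluate simple arithmetic safely-ish."""
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"search": search_web, "calc": calculator}

def plan(task: str) -> list[tuple[str, str]]:
    """Step (1): choose actions. A real agent would use an LLM's
    reasoning/planning here; this hard-coded chain just keys off
    whether the task looks like arithmetic."""
    if any(ch.isdigit() for ch in task):
        return [("calc", task)]
    return [("search", task)]

def execute(actions: list[tuple[str, str]]) -> list[str]:
    """Step (2): execute the chosen sequence of actions in order."""
    return [TOOLS[name](arg) for name, arg in actions]

print(execute(plan("2 + 3")))          # routes to the calculator tool
print(execute(plan("weather in SF")))  # routes to the search tool
```

Swapping the hard-coded `plan` for an LLM call is exactly the robust-vs-emergent split noted above: tool use and chaining are reliable today, while open-ended planning is still emerging.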

Idea #2: Planning & Reasoning


- Planning & reasoning were a major emphasis at our event and a close cousin of the "agents" topic
- In the comparison to AlphaGo, Step 1 (pre-training/imitation learning) only takes you so far, while Step 2 (reinforcement learning, search) is what actually made those AIs superhuman. A similar analogy holds for LLMs.
- These are broad-sweeping lessons from 70 years of AI research. The two methods that scale
arbitrarily and generally are search and learning (see Richard Sutton's The Bitter Lesson).
Exciting times for AI research...
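A toy way to see how search compounds with learning is best-of-N selection at inference time: a learned model proposes candidates and a simple search picks among them. The generator and scorer below are stand-ins I made up for illustration, not real models or any method discussed at the event.

```python
# Toy "search + learning" illustration: a learned generator proposes
# candidates; best-of-N search over those candidates picks the winner.
# Both the generator and the scorer are hypothetical stand-ins.

import random

def generate(prompt: str, rng: random.Random) -> str:
    """Stand-in for a learned model: propose a noisy candidate answer."""
    return prompt + " -> " + str(rng.randint(0, 9))

def score(candidate: str) -> int:
    """Stand-in verifier/value function: higher is better."""
    return int(candidate.rsplit(" ", 1)[-1])

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    """Search at inference time: sample n candidates, keep the best.
    With a fixed seed, larger n can only match or improve the result."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

print(score(best_of_n("q", 1)), "<=", score(best_of_n("q", 32)))
```

The point of the sketch is the Bitter Lesson in miniature: spending more compute on search over a fixed learned model monotonically improves the best result found, which is why search and learning are the two methods that scale.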

Idea #3: Practical AI Use in Production


- Smaller/cheaper but still "pretty smart" models were a consistent theme at our event
- In addition, we discussed speed/latency, expanding context windows/RAG, AI safety, interpretability, and the rise of the CIO as the key buyer for AI that makes enterprises more efficient internally.

Idea #4: What to Expect from the Foundation Model Companies


- Bigger smarter models
- Less big, less expensive, pretty smart models
- More developer platform capabilities
- Different focus areas: Mistral AI on developer and open-source, Anthropic
on the enterprise

Idea #5: Implications for AI Startups!


- The model layer is changing rapidly and getting better, faster, cheaper
- Smart: focus on building applications that will get better as the models get smarter
- Not smart: patching holes in the current models; those holes will disappear as the models themselves improve, so this is not a durable place to build a company

Assume that the foundation models will get smarter and cheaper, and that all the little nuances (latency, etc.) will smooth out... what great applications can you build?

I feel lucky for what an exceptional AI community we are part of (or an AI "coral reef," as Andrej Karpathy called it), and how information-dense the day was.
