Curie
Curie is extremely powerful, yet very fast. While Davinci is stronger when it comes
to analyzing complicated text, Curie is quite capable for many nuanced tasks like
sentiment classification and summarization. Curie is also quite good at answering
questions, performing Q&A, and serving as a general-purpose service chatbot.
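A sentiment-classification task like the one mentioned above is usually framed as a few-shot completion prompt. The sketch below is illustrative only: the helper name, the example messages, and the commented-out engine name are assumptions, not taken from this document.

```python
# Hypothetical sketch: building a few-shot sentiment-classification prompt
# for a Curie-class completion model. Only the prompt construction is shown;
# the API call itself is left as a comment.

def build_sentiment_prompt(text):
    """Assemble a few-shot prompt that primes the model to answer
    with a single sentiment label."""
    examples = [
        ("I loved the new update!", "Positive"),
        ("The app keeps crashing.", "Negative"),
    ]
    lines = ["Classify the sentiment of each message."]
    for message, label in examples:
        lines.append(f"Message: {message}\nSentiment: {label}")
    lines.append(f"Message: {text}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_sentiment_prompt("Shipping was slow, but support was helpful.")
# The prompt would then be sent to the completions endpoint, e.g.:
# response = openai.Completion.create(engine="text-curie-001",
#                                     prompt=prompt, max_tokens=1, temperature=0)
```

Ending the prompt with "Sentiment:" constrains the model to emit just the label, which keeps the completion short and cheap.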
Babbage
Babbage can perform straightforward tasks like simple classification. It’s also quite
capable at semantic search: ranking how well documents match up with search
queries.
Ada
Ada is usually the fastest model and can perform tasks like parsing text, address
correction and certain kinds of classification tasks that don’t require too much
nuance. Ada’s performance can often be improved by providing more context.
Note: Any task performed by a faster model like Ada can be performed by a more
powerful model like Curie or Davinci.
Key points
As an example, Curie can use the Wikipedia entry for Pluto to list out key points
from the text. View example.
This example illustrates that Curie is highly capable of extracting important
information from text, which makes it useful for a variety of applications.
Report generation
You can extend key point extraction even further by using Curie (or Davinci) to
analyze text and answer specific questions. In this example, we’ll use Curie to read
an email from a customer and provide answers to a preset list of questions. View
example.
It’s worth calling attention to two things that are going on in this prompt that can
help improve overall prompt design:
We’ve set the temperature low because we’re looking for straightforward answers
that are supported by the customer comment. We’re not asking the model to be
creative with its responses, especially for yes-or-no questions.
We’re able to get one API call to answer four questions (more are possible) by
providing a list of questions and then priming the prompt with the number one to
indicate that the answers should relate to the questions that just preceded it.
By asking four questions we get a 4x improvement in the efficiency of the API call.
If this task was previously accomplished by Davinci with one API call per question,
using Curie and optimizing the prompt this way provides cost efficiency on top of
Curie’s speed advantage over Davinci.
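The batched-question pattern described above can be sketched as follows. The helper name, the example email, and the question list are illustrative stand-ins, not the ones from the original example; only the structure (numbered questions, then priming the completion with "1.") comes from the text.

```python
# Hedged sketch of the batched Q&A prompt: one prompt carries the customer
# email plus a numbered list of questions, and ends with "1." so the model
# answers each question in order within a single completion.

def build_batched_qa_prompt(email, questions):
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, start=1))
    return (
        f"Customer email:\n{email}\n\n"
        f"Answer the following questions about the email:\n{numbered}\n\n"
        "Answers:\n1."  # priming with "1." ties answers to the questions above
    )

questions = [
    "Is the customer satisfied?",
    "Does the customer want a refund?",
    "Which product is mentioned?",
    "Is a reply requested?",
]
prompt = build_batched_qa_prompt("My widget arrived broken. Please advise.", questions)
# One low-temperature completion call now answers all four questions:
# openai.Completion.create(engine="text-curie-001", prompt=prompt, temperature=0)
```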
Idea iteration
You can give Babbage a prompt as simple as “Provide 7 tips for better YouTube
videos,” and it will automatically create a list of practical advice. You can do this for
just about any topic that is reasonably well known. This can either provide
standalone content or be used as a starting point for a more in-depth tutorial. View
example.
Sentence completion
Babbage can work as a great brainstorming tool and help someone complete their
thoughts. If you start a paragraph or sentence, Babbage can quickly get the
context and serve as a writing assistant. View example.
Plot generation
If you provide Babbage with a list of plot ideas from a specific genre, it can
continue adding to that list. If you select the good ones and delete the others, you
can keep sending the growing list to the API and improve the results.
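The keep-the-good-ones loop described above amounts to rebuilding the prompt from a curated list on each pass. This is a minimal sketch of that loop; the helper name and the sample idea are assumptions, and the actual completion call is left as a comment.

```python
# Sketch of the iteration loop: keep the plot ideas you like, append them
# to the prompt, and ask the model to continue the list.

def build_plot_prompt(genre, kept_ideas):
    lines = [f"{genre} plot ideas:"]
    lines += [f"- {idea}" for idea in kept_ideas]
    lines.append("-")  # prime the model to add the next list item
    return "\n".join(lines)

kept = ["A lighthouse keeper finds a door under the sea."]
prompt = build_plot_prompt("Fantasy", kept)
# new_idea = ... (e.g. a Babbage completion on this prompt)
# If the new idea is good, append it to `kept` and rebuild the prompt,
# so the growing list steers subsequent completions.
```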
Random data
Ada can quickly generate large amounts of data like names and addresses to be
used for experimentation, building machine learning models, and testing
applications.
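One way to frame such a data-generation request is as a CSV completion, so the output is easy to parse. The prompt wording, field names, and record count below are assumptions for illustration only.

```python
# Illustrative prompt for generating synthetic test records with a fast
# model such as Ada. Ending the prompt with the CSV header row nudges the
# model to continue in the same comma-separated format.

def build_fake_data_prompt(n_records):
    return (
        f"Generate {n_records} fictional records as CSV with the columns "
        "name,street,city. Do not use real people.\n"
        "name,street,city\n"
    )

prompt = build_fake_data_prompt(5)
# completion = openai.Completion.create(engine="text-ada-001", prompt=prompt, ...)
```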
Character descriptions
Codex
Limited beta
The Codex models are descendants of our GPT-3 models that can understand and
generate code. Their training data contains both natural language and billions of
lines of public code from GitHub. Learn more.
They’re most capable in Python and proficient in over a dozen languages including
JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.
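Code-generation prompts for these models typically describe the desired code in a comment or docstring and let the model complete the rest. This is a hedged illustration of the pattern; the engine name in the comment is an assumption, and only the prompt construction is shown.

```python
# Hypothetical Codex-style prompt: a comment states the intent and the
# prompt ends at the function signature, so the model fills in the body.

prompt = (
    "# Python 3\n"
    "# A function that returns the n-th Fibonacci number, iteratively.\n"
    "def fibonacci(n):\n"
)
# completion = openai.Completion.create(engine="code-davinci-002",
#                                       prompt=prompt, max_tokens=64, temperature=0)
```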
(Model comparison table: latest model, description, max request, training data.)
During this period, you're welcome to go live with your application as long as it
follows our usage policies. We welcome any feedback on these models while in
early use and look forward to engaging with the community.
Feature-specific models
The main Codex models are meant to be used with the text completion endpoint.
We also offer models that are specifically meant to be used with our endpoints
for creating embeddings and editing code.
Content filter
We recommend using our new moderation endpoint instead of the content
filter model.
The filter aims to detect generated text coming from the API that could be sensitive
or unsafe. It’s currently in beta and classifies text in one of three ways: as safe,
sensitive, or unsafe. The filter will make mistakes, and we have currently built it to
err on the side of caution, which results in a higher rate of false positives.
To use the filter, send a completion request with the following settings:
1. max_tokens set to 1
2. temperature set to 0.0
3. top_p set to 0
4. logprobs set to 10
5. Wrap your prompt in the following way:
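The numbered settings above can be collected into the parameters of a single completion request. The dictionary values come directly from the list; the request-building helper and the placeholder for the wrapped prompt are assumptions, since the exact wrapping format is not shown here.

```python
# The filter's fixed completion-request settings, taken from the list above.
FILTER_PARAMS = {
    "max_tokens": 1,     # the filter returns a single label token
    "temperature": 0.0,  # deterministic output
    "top_p": 0,
    "logprobs": 10,      # exposes label probabilities for caution checks
}

def filter_request(wrapped_prompt):
    """Merge the fixed filter settings with an already-wrapped prompt.
    The wrapping format itself is not reproduced here."""
    return {"prompt": wrapped_prompt, **FILTER_PARAMS}

req = filter_request("<wrapped user text>")
```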