OpenAI API


Gemini (Google) is currently not available for API usage in our region.

Azure Copilot currently does not have an API or a pricing plan.

OpenAI has been at the forefront of AI development, offering an array of models, each
designed to meet various computational needs. Among these, the GPT-4-Vision-Preview
stands out for its advanced capabilities in processing not just textual information but also
visual inputs. Its pricing reflects the dual nature of its functionality.

In the OpenAI API, usage is measured and billed in units called "tokens." This system
quantifies the amount of text processed, with payments structured around the number of
tokens used.
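
Token counts can also be estimated locally before a request is sent. The sketch below assumes the tiktoken package and the cl100k_base encoding used by the GPT-4 family of models; it is an illustration, not part of the API itself.

import tiktoken

# cl100k_base is the tokenizer encoding used by the GPT-4 family of models.
encoding = tiktoken.get_encoding("cl100k_base")

text = "How many tokens does this sentence use?"
tokens = encoding.encode(text)

print(len(tokens))  # the number of prompt tokens this text would consume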

For the GPT-4-Vision-Preview, the pricing is straightforward and usage-based, aimed at
providing cost-effective access to powerful AI tools. It costs $0.01 per 1,000 prompt
tokens, which covers the data fed into the model. The completion tokens, representing
the model's output, are priced at $0.03 per 1,000 tokens. This usage-based pricing
structure ensures users pay in proportion to their usage, maintaining affordability even
as they scale up.

In addition to the GPT-4-Vision-Preview, OpenAI's lineup includes models like the text-
centric GPT-4, the more efficient GPT-3.5 Turbo, and the specialized Assistants API, each
with its unique pricing scheme. These models serve a wide array of applications, from
complex data analysis to streamlined chatbot functionalities. OpenAI's commitment to
providing scalable solutions is evident in its diverse offering, catering to a spectrum of
needs while prioritizing cost efficiency for users.

Pricing: https://openai.com/pricing

What are tokens: https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them

In OpenAI's system, there is also a specific calculation for determining the cost in tokens
when an image is used as input. This calculation varies depending on the resolution of
the image submitted. For images of low resolution, the token cost is generally lower,
reflecting the smaller amount of data to be processed. Conversely, high-resolution
images incur a higher token cost due to the increased data size and complexity involved
in processing. This pricing structure ensures that users can optimize their costs based
on the resolution of images required for their applications.
Pricing screenshots: GPT-4-Vision-Preview model pricing; GPT-4-Vision-Preview model pricing for images; prices for low-resolution images; prices for high-resolution images.
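
As a rough sketch of how those image token costs can be estimated before sending a request: the constants below (85 base tokens, 170 tokens per 512-pixel tile, and the 2048px/768px scaling bounds) follow OpenAI's published guidance for gpt-4-vision-preview at the time of writing, and should be treated as assumptions to verify against the current pricing page.

import math

def image_tokens(width, height, detail="high"):
    """Estimate the token cost of one image input for gpt-4-vision-preview."""
    if detail == "low":
        return 85  # low-resolution images are charged a flat base rate

    # High-resolution images are first scaled to fit inside a 2048 x 2048 square...
    scale = min(1.0, 2048 / max(width, height))
    width, height = width * scale, height * scale

    # ...then scaled so the shortest side is at most 768 pixels.
    scale = min(1.0, 768 / min(width, height))
    width, height = width * scale, height * scale

    # Each 512 x 512 tile of the scaled image costs 170 tokens, plus 85 base tokens.
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    return 85 + 170 * tiles

print(image_tokens(1024, 1024, detail="low"))   # 85 tokens
print(image_tokens(1024, 1024, detail="high"))  # 85 + 4 * 170 = 765 tokens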


Rate Limits for the OpenAI API

Rate limits: https://platform.openai.com/docs/guides/rate-limits?context=tier-one

OpenAI API implements a tiered pricing and rate limit structure to cater to a wide range
of use cases and customer needs. These tiers are designed to accommodate users with
varying levels of demand, offering different limits on requests per day (RPD), tokens per
minute (TPM), and requests per minute (RPM). The structure ensures that users can
select a plan that best fits their usage patterns and budget. The more a user is willing to
pay, the higher the tier they can access, which comes with the advantage of higher rate
limits. This tiered approach allows OpenAI to provide scalable solutions that can be
tailored to both individual developers and large enterprises, ensuring that resources are
efficiently allocated across its user base.
OpenAI API GPT-4-Vision-Preview Model: Tier 1 and Tier 2 Accounts

Overview Tier 1

● Tier 1 Account Cost: from $5 to $49
● Usage Limits:
    ● Requests Per Minute (RPM): 80
        ● Allows a high volume of requests in a short time frame.
    ● Requests Per Day (RPD): 500
        ● Suits regular daily usage for moderate-scale projects.
    ● Tokens Per Day (TPD): 10,000
        ● Limits the number of tokens processed daily, catering to moderate usage levels.
Overview Tier 2

● Tier 2 Account Cost: from $50 to $99
● Usage Limits:
    ● Requests Per Minute (RPM): 100
        ● Facilitates a greater volume of requests in a short time, ideal for projects requiring quick interactions with the API.
    ● Requests Per Day (RPD): 1,000
        ● Accommodates increased daily usage, suitable for larger-scale projects or more frequent API interactions.
    ● Tokens Per Day (TPD): 20,000
        ● Expands the limit on the number of tokens processed daily, supporting more extensive use cases and higher demand for text generation or processing.
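
When a request exceeds the RPM or RPD limit of the current tier, the API responds with HTTP 429. A common way to stay within the limits is to retry with exponential backoff, sketched below with the Python requests library; the retry count and delays are illustrative choices, not values prescribed by OpenAI.

import time
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def post_with_backoff(payload, api_key, max_retries=5):
    """Send a chat-completions request, backing off when rate limited (HTTP 429)."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    delay = 1.0
    for _ in range(max_retries):
        response = requests.post(API_URL, headers=headers, json=payload)
        if response.status_code != 429:
            response.raise_for_status()  # surface any error other than rate limiting
            return response.json()
        time.sleep(delay)  # wait before retrying
        delay *= 2         # double the wait each time (exponential backoff)
    raise RuntimeError("Rate limit still exceeded after retries")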

Our use-case pricing for the gpt-4-vision-preview model:

Per Request:
● Prompt Tokens: 150 tokens
● Completion Tokens: 100 tokens
● Total Tokens: 250 tokens
● Cost for Prompt Tokens: $0.0015
● Cost for Completion Tokens: $0.003
● Total Cost per Request: $0.0045

For 1,000 Requests:

● Total Prompt Tokens: 150,000 tokens (150 tokens/request * 1,000 requests)
● Total Completion Tokens: 100,000 tokens (100 tokens/request * 1,000 requests)
● Total Tokens for 1,000 Requests: 250,000 tokens
● Total Cost for Prompt Tokens: $1.50 ($0.01 per 1,000 tokens * 150,000 tokens)
● Total Cost for Completion Tokens: $3.00 ($0.03 per 1,000 tokens * 100,000 tokens)
● Total Cost for 1,000 Requests: $4.50 ($0.0045 per request * 1,000 requests)
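
The same arithmetic can be written as a small helper; the prices are the per-1,000-token figures quoted earlier for gpt-4-vision-preview.

# Prices per 1,000 tokens for gpt-4-vision-preview, as quoted above.
PROMPT_PRICE_PER_1K = 0.01
COMPLETION_PRICE_PER_1K = 0.03

def request_cost(prompt_tokens, completion_tokens):
    """Return the dollar cost of a single request."""
    return (prompt_tokens / 1000) * PROMPT_PRICE_PER_1K \
         + (completion_tokens / 1000) * COMPLETION_PRICE_PER_1K

per_request = request_cost(prompt_tokens=150, completion_tokens=100)
print(f"{per_request:.4f}")         # 0.0045 dollars per request
print(f"{per_request * 1000:.2f}")  # 4.50 dollars for 1,000 requests
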
OpenAI API usage:
Documentation: https://platform.openai.com/docs/overview
Within the OpenAI API user interface, there are multiple sections, one of which is
dedicated to billing. This section allows users to add credit to their account for future use
and also provides a facility to check the remaining credit balance.

The next section within the OpenAI API user interface pertains to usage. Here, users can
view detailed statistics on their consumption, including the number of tokens and amount
of money spent, broken down by days, models, and months. This feature enables users
to monitor and manage their API usage effectively.
Additionally, there is a section on the OpenAI API user interface where users can generate
an API key. This key enables them to access the OpenAI API, facilitating secure and
authenticated interactions with the platform's services.
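
A brief sketch of a common pattern: keep the generated key in an environment variable rather than hard-coding it in source files. The variable name OPENAI_API_KEY is a widespread convention, not a requirement.

import os

# Read the key generated in the dashboard from an environment variable,
# e.g. set beforehand with: export OPENAI_API_KEY="sk-..."
api_key = os.environ["OPENAI_API_KEY"]
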
Useful Information for Interacting with OpenAI API:

● Endpoint: POST https://api.openai.com/v1/chat/completions
● Authorization Header: Include a "Bearer Token" with your API key for authentication.
● Content-Type: Set to "application/json" to specify the format of your request body.

To access and utilize the OpenAI API effectively, it's crucial to correctly configure your
request headers with the necessary authentication and content type information. These
settings ensure that your requests to the API are secure and interpreted correctly by the
server, facilitating smooth and efficient communication with OpenAI's advanced AI
services.

When constructing the body of your request to the OpenAI API, particularly when using
the endpoint for chat completions, you need to include several pieces of information to
guide the processing of your query. Here's a breakdown of the essential components:

● Model: Specify the model you intend to use for your query. In this case, you would
use gpt-4-vision-preview as the model parameter to indicate the specific AI
model you're interacting with.
● Max Tokens: Define the maximum number of tokens the response is allowed to
contain. For instance, setting max_tokens to 200 limits the response to this token
count, ensuring that the output is concise and within your expected scope.
● Messages: The messages portion contains the actual content of your interaction with
OpenAI. It should include a text field with the specific question or prompt you're
asking. Additionally, you can include an image_url that contains either the URL of
the image you're referring to or a base64-encoded version of the image itself. You
should also specify the resolution of the image (either "low" or "high") in the detail
field, providing OpenAI with the context needed to process the image appropriately.

These components are vital for structuring your request to the OpenAI API, allowing for
precise and tailored interactions with the AI, whether you're querying with text, images, or
a combination of both. Ensuring these elements are correctly formatted and included in
your request body will facilitate effective communication with the API and garner the
most relevant responses to your queries.
Request example
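
The original example appears as a screenshot; the sketch below reconstructs what such a request could look like in Python with the requests library. The prompt text and image URL are placeholders, not the values from the original example.

import os
import requests

api_key = os.environ["OPENAI_API_KEY"]

# Placeholder question and image URL, for illustration only.
payload = {
    "model": "gpt-4-vision-preview",
    "max_tokens": 200,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/sample-image.jpg",
                        "detail": "low",
                    },
                },
            ],
        }
    ],
}

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",  # Bearer token authentication
        "Content-Type": "application/json",    # format of the request body
    },
    json=payload,
)

body = response.json()
print(body["choices"][0]["message"]["content"])  # the model's answer
print(body["usage"])                             # token usage for this request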

Image at the provided URL:


OpenAI response: body and headers of the response

In the response from the OpenAI API, you will receive message content that includes both
the answer to your question and an analysis of the specific image you submitted. This
allows for a comprehensive understanding of the content you're inquiring about,
leveraging the AI's capabilities to interpret and provide feedback on images as well as
text inputs.

Additionally, the response includes a "usage" section, which details the token
consumption for your request. In the provided example, the usage is outlined as follows:

"usage": {
"prompt_tokens": 149,
"completion_tokens": 65,
"total_tokens": 214
},

This indicates that the request sent 149 tokens as input, where the image accounted for
85 tokens and the text for 64 tokens. The part of the response received, which addresses
your query, cost 65 tokens. This breakdown is crucial for monitoring and managing your
token usage with the OpenAI API, ensuring you remain within your allocated budget or
limits. It also provides insight into the cost distribution between processing the image
and generating the textual response, allowing for more informed decisions in future
interactions with the API.
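
As a final sketch, the usage block can be converted back into a dollar amount using the per-1,000-token prices from earlier; the figures below mirror the example response above.

PROMPT_PRICE_PER_1K = 0.01
COMPLETION_PRICE_PER_1K = 0.03

def cost_from_usage(usage):
    """Convert a response's "usage" block into a dollar amount."""
    return (usage["prompt_tokens"] / 1000) * PROMPT_PRICE_PER_1K \
         + (usage["completion_tokens"] / 1000) * COMPLETION_PRICE_PER_1K

# The usage block from the example response above.
usage = {"prompt_tokens": 149, "completion_tokens": 65, "total_tokens": 214}
print(f"{cost_from_usage(usage):.5f}")  # 0.00344 dollars for this request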
