
How the price for using LLM Responses endpoints is calculated

The pricing system for the LLM Responses API follows the same pay-as-you-go principle implemented across our APIs: you pay only for the endpoints and features you use and the data you receive. However, our system also interacts with third-party LLM APIs to send user requests and retrieve response data, which requires payment for the tokens processed and for any additional features used. The price of using the LLM Responses API therefore includes these additional expenses. Here is how it is calculated.

For endpoints using the Live data retrieval method, the cost is calculated by the following formula:

Base task price ($0.0006) + Cost charged by the LLM API

Here, the cost charged by an LLM API consists of:

The cost of token processing. This covers the charges for input and output tokens processed by the selected LLM. Token processing rates vary by AI model: the newest and most advanced LLMs usually charge more than legacy models. You can choose the model you want to use by specifying the model_name parameter in the API request.
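As a minimal sketch, a Live task payload selecting a model might look like the following. The model_name parameter is the one named in this article; the user_prompt field and the "gpt-4o-mini" value are illustrative assumptions, so check the endpoint documentation for the exact request schema.

```python
# Hypothetical Live task payload; only model_name is confirmed by
# this article -- the other field name and values are placeholders.
payload = [{
    "user_prompt": "What is the capital of France?",  # assumed field name
    "model_name": "gpt-4o-mini",  # the model selection parameter
}]
```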

The cost of using AI features. This covers the charges for any special features the LLM uses to generate a response. For LLM Responses endpoints, you will be charged additionally for the web search feature, which allows the AI model to access and cite current web information. You can enable it by setting the web_search parameter to true in the request.

The web search cost also depends on the AI model used. Note that some AI models don't support web search at all, while in Perplexity Sonar models it is always enabled by default.
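Putting the pieces above together, the total Live cost can be sketched as the base task price plus per-token charges plus an optional web search fee. The per-token and web search rates below are placeholder assumptions, not actual provider pricing; only the $0.0006 base task price comes from this article.

```python
BASE_TASK_PRICE = 0.0006  # Live base task price, per this article

# Hypothetical per-model rates in USD -- check the LLM provider's
# pricing page for real values.
MODEL_RATES = {
    "example-model": {
        "input_per_1m_tokens": 3.00,
        "output_per_1m_tokens": 15.00,
        "web_search_per_request": 0.01,
    },
}

def estimate_live_cost(model_name, input_tokens, output_tokens, web_search=False):
    """Base task price + token processing + optional web search fee."""
    rates = MODEL_RATES[model_name]
    cost = BASE_TASK_PRICE
    cost += input_tokens / 1_000_000 * rates["input_per_1m_tokens"]
    cost += output_tokens / 1_000_000 * rates["output_per_1m_tokens"]
    if web_search:
        cost += rates["web_search_per_request"]
    return round(cost, 6)
```

For example, with these placeholder rates, a request processing 1,000 input and 500 output tokens with web search enabled would cost $0.0006 + $0.003 + $0.0075 + $0.01 = $0.0211.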

Below are links to the pricing pages for supported LLM APIs, where you can find information about token processing costs and feature usage charges:

Additionally, our system supports the Standard (POST-GET) data retrieval method for the ChatGPT and Claude LLMs. When using the Standard method, the total cost is calculated as follows:

Base task price ($0.0002) + Advance payment ($0.01)

Our system requires an advance payment of $0.01 to execute the task and ensure sufficient funds are available to cover the LLM API costs. The advance payment is charged automatically when the task is set.

After task completion, if the cost charged by the LLM API is less than $0.01, the difference is refunded to your account. If it exceeds $0.01, the additional amount is deducted from your balance.
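The Standard-method settlement described above can be sketched as follows: a $0.01 advance is charged up front, and after completion the difference between the advance and the actual LLM API cost is either refunded or charged on top. The function name is illustrative; only the $0.0002 base price and $0.01 advance come from this article.

```python
BASE_TASK_PRICE = 0.0002  # Standard base task price, per this article
ADVANCE_PAYMENT = 0.01    # charged automatically when the task is set

def settle_standard_task(llm_api_cost):
    """Return (total_charged, balance_adjustment) after task completion.

    balance_adjustment > 0 means a refund to the account;
    balance_adjustment < 0 means an extra charge on top of the advance.
    """
    total = BASE_TASK_PRICE + llm_api_cost
    adjustment = ADVANCE_PAYMENT - llm_api_cost
    return round(total, 6), round(adjustment, 6)
```

For instance, if the LLM API charges $0.004, the account is refunded $0.006 of the advance; if it charges $0.015, an extra $0.005 is deducted.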

Tasks using the Standard method may take up to 72 hours to complete. If the task is not completed within this time, it is marked as failed, and the $0.01 advance is refunded. It is also important to note that if your account balance is negative, you will not receive the results even if the task is completed successfully.

Where can I find spending information in the API response?

The API response has two fields that display spending information:

Additionally, the result array contains input_tokens and output_tokens fields, which display the number of input and output tokens the LLM processed, respectively.
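As a minimal sketch, the input_tokens and output_tokens fields can be summed across the result array like this. The nested structure of sample_task is a simplified assumption of the response shape, not a verbatim schema.

```python
def extract_token_usage(task):
    """Sum input/output token counts across the task's result array."""
    input_total = 0
    output_total = 0
    for item in task.get("result") or []:
        input_total += item.get("input_tokens", 0)
        output_total += item.get("output_tokens", 0)
    return input_total, output_total

# Simplified, assumed response shape for illustration only.
sample_task = {
    "result": [
        {"input_tokens": 120, "output_tokens": 480},
    ]
}
```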

For more information on LLM Responses API pricing, visit the Pricing page. Moreover, don’t hesitate to contact our 24/7 customer support if you have any additional questions.
