HELP CENTER

Best practices for handling Keywords Data API requests

General recommendations (only for Google Ads)

  • Try to bring the number of keywords in each task closer to 1,000, and group the keywords by location. This way, you will make fewer requests and spend less.
  • When grouping keywords for a task to the Search Volume or Keywords for Keywords endpoint, try to avoid synonymous queries within the same task: the Google Ads API can group such keywords and return a single result for the whole group. Standard string-similarity functions available in most programming languages (for example, similar_text in PHP) can help here. When adding a keyword to a task's payload, compare it with the keywords already present in it; if it is more than 30% similar to one of them, put the keyword in another task.
  • When generating the payload, use regular expressions (RegEx) to validate the keywords. If the payload contains certain disallowed characters, Google returns an error stating that the payload contains an invalid character.
  • Do not set the language when collecting data – the results do not depend on it. Keyword Planner no longer allows setting it, and this parameter will most likely be deprecated in the future.
  • If you need to collect the most recent data (it is usually updated in the middle of each month), use the Status endpoint to monitor the status of the update.
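
The batching, similarity, and validation advice above can be sketched as follows. This is an illustrative sketch, not the exact rules Google applies: the character whitelist and the use of difflib's similarity ratio as the "more than 30% synonymous" check are assumptions you should tune to your own data.

```python
import re
from difflib import SequenceMatcher

# Illustrative whitelist: letters, digits, spaces, hyphens, apostrophes.
# Adjust it to the characters your payloads actually get rejected for.
VALID_KEYWORD = re.compile(r"^[a-z0-9 \-']+$", re.IGNORECASE)

def is_synonymous(keyword, batch, threshold=0.3):
    """Heuristic: treat a keyword as synonymous with a batch if it is
    more than 30% similar to any keyword already in that batch."""
    return any(
        SequenceMatcher(None, keyword, other).ratio() > threshold
        for other in batch
    )

def build_batches(keywords, batch_size=1000):
    """Drop invalid keywords, keep near-synonyms out of the same task,
    and fill each task with up to `batch_size` keywords."""
    batches = [[]]
    for kw in keywords:
        if not VALID_KEYWORD.match(kw):
            continue  # would trigger an "invalid character" error
        # pick the first batch that has room and no near-synonym of kw
        target = next(
            (b for b in batches
             if len(b) < batch_size and not is_synonymous(kw, b)),
            None,
        )
        if target is None:
            target = []
            batches.append(target)
        target.append(kw)
    return [b for b in batches if b]
```

Here "seo tool" would be routed to a second batch because it is far more than 30% similar to "seo tools", while "rank tracker" stays in the first one.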

Live requests

Unless high data throughput is critical, we do not recommend collecting data through this endpoint. The Google Ads API has a load limit, so we enforce strict rate limits on our side, and the Google Ads API itself may return errors like:

	"status_code": 50301,
	"status_message": "Too many requests.",


When you use the schemes described below, we balance the load ourselves and perform tasks at the optimal speed, so you will not receive errors related to the rate limit.
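
If you do rely on the live endpoint anyway, a defensive client should back off when it sees the rate-limit error above. A minimal sketch, where send_request is a stand-in for your actual HTTP call:

```python
import time

def call_with_backoff(send_request, max_retries=5, base_delay=1.0,
                      sleep=time.sleep):
    """Retry a live request while the API reports rate limiting
    (status_code 50301), doubling the wait between attempts."""
    response = send_request()
    for attempt in range(max_retries):
        if response.get("status_code") != 50301:
            break
        sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
        response = send_request()
    return response
```

Injecting the sleep function keeps the retry logic testable; in production the default time.sleep is used.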

Handling low-volume Keyword Data API payload

From the user perspective, a low-volume payload means relatively infrequent use of the API – a few thousand requests a day – or a one-time data collection.

Setting API tasks

  • We advise setting several tasks at once: you can set up to 100 tasks in a single Task Post request.
  • You don’t have to use callbacks, such as pingbacks and postbacks.
  • After setting a task, make sure that it was, in fact, set. The response for a successfully set task will contain the following values:

    	"status_code": 20100,
    	"status_message": "Task Created.",


    With such a task, you can proceed further.

  • If a task returns an error, it is not processed further. In this case, implement an algorithm that re-sets the task after some time, or label such tasks in the database for later processing. If the error concerns the API service, we recommend re-setting the failed task in a few minutes, for example:

    	"status_code": 50000,
    	"status_message": "Internal Error.",


  • If the error refers to the validation of input parameters, check the payload submitted in the POST request:

    	"status_code": 40501,
    	"status_message": "Invalid Field",


    Also, refer to the Errors endpoint to get a full list of possible errors.

  • For your convenience, we provide the tag string parameter in the POST request. Use it to attach additional information to a task and, consequently, make the retrieval of task results more convenient.
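
Putting the bullet points above together, one way to sort a Task Post response into "created", "retry later", and "fix the payload" buckets. The status codes come from the examples above; the tasks array and id field reflect the usual shape of the response, so verify them against your actual responses:

```python
def triage_tasks(response):
    """Sort the tasks in a Task Post response by outcome:
       20100 -> created, safe to proceed;
       5xxxx -> API-side error, re-set the task in a few minutes;
       4xxxx -> validation error, inspect the payload
                (see the Errors endpoint for the full list)."""
    created, retry_later, invalid = [], [], []
    for task in response["tasks"]:
        code = task["status_code"]
        if code == 20100:
            created.append(task["id"])
        elif code >= 50000:
            retry_later.append(task["id"])
        else:
            invalid.append(task["id"])
    return created, retry_later, invalid
```

The retry_later and invalid id lists can be stored in your database for the re-setting algorithm described above.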

Retrieving API task results

Launch a single-threaded worker service on your server. It should make requests to the Tasks Ready endpoint, obtain the ids of completed tasks, and then collect and process the results. Set the frequency of launching the worker:

  • Normal execution priority (default): every 5-10 minutes;
  • High execution priority: every 1-3 minutes.
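
A single pass of such a worker might look like this. Here fetch_ready and fetch_result are hypothetical wrappers around your HTTP calls to the Tasks Ready and Task Get endpoints; schedule the pass with cron or a timer at the frequency above:

```python
def worker_pass(fetch_ready, fetch_result, process):
    """One polling cycle: list completed tasks via Tasks Ready,
    download each result by id via Task Get, and hand it to your
    processing code."""
    for ready in fetch_ready():             # Tasks Ready endpoint
        result = fetch_result(ready["id"])  # Task Get endpoint
        process(result)
```

Passing the fetchers in as parameters keeps the loop independent of any particular HTTP library.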

Handling high-volume Keyword Data API payload

High-volume payload means frequent use of the API with many thousands of requests a day, or periodic use with large volumes of collected data.

Setting API tasks

  • Your server should be set up for communication with external services, API services in particular.
  • We strongly advise setting 50-100 tasks in every single Task Post request.
  • You should be using callbacks, such as pingbacks or postbacks. If your server permits, it is better to use the postback method – the results will be delivered to your server as soon as the task is completed. However, if you prefer not to load the server with data, you can use pingbacks and control the processing of data on your end.
  • After setting a task, make sure that it was, in fact, set. The response for a successfully set task will contain the following values:

    	"status_code": 20100,
    	"status_message": "Task Created.",


    With such a task, you can proceed further. We recommend storing the id parameters of tasks in the database. This way, you will be able to see a task's id and when it was submitted for processing.

  • If a task returns an error, it is not processed further. In this case, implement an algorithm that re-sets the task after some time, or label such tasks in the database for later processing. If the error concerns the API service, we recommend re-setting the failed task in a few minutes, for example:

    	"status_code": 50000,
    	"status_message": "Internal Error.",


    If an error refers to the validation of input parameters, check the payload submitted in the POST request:

    	"status_code": 40501,
    	"status_message": "Invalid Field",


    Also, refer to the Errors endpoint to get a full list of possible errors.

  • For your convenience, we provide the tag string parameter in the POST request. Use it to attach additional information to a task and, consequently, make the retrieval of task results more convenient. Values provided in the tag field are also substituted into the $tag variable of the &tag=$tag parameter if it is specified in the URL of the callback. The tag parameter can therefore be used for transferring certain labels and implementing the logic of your software solution.
  • If you are using pingbacks, specify the ?id=$id parameter, and our system will substitute the id of the completed task into the $id variable.
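
The callback-related fields above can be combined when assembling a task. In this sketch the field names keywords, tag, pingback_url, and postback_url follow the conventions described in this section but should be checked against the endpoint's documentation, and the URL is a placeholder:

```python
def build_task(keywords, tag, pingback_url=None, postback_url=None):
    """Assemble a single task entry for a Task Post request. The value
    of `tag` comes back with the result and is substituted into the $tag
    placeholder of the callback URL; $id receives the completed task's id."""
    task = {"keywords": keywords, "tag": tag}
    if pingback_url:
        task["pingback_url"] = pingback_url
    if postback_url:
        task["postback_url"] = postback_url
    return task

# placeholder callback URL with both substitution variables
task = build_task(
    ["seo tools", "rank tracker"],
    tag="batch-2024-07",
    pingback_url="https://example.com/ping?id=$id&tag=$tag",
)
```

Storing the tag you sent alongside the task id in your database makes it easy to match incoming callbacks to your own batches.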

Retrieving API task results

A worker service that communicates with a database should be running on your server.

If you are using pingbacks:

  1. Your web server receives a GET request in which the id parameter contains the id of the completed task.
  2. Completed tasks should be appropriately labeled.
  3. The worker identifies completed tasks continuously or periodically, depending on the capabilities of your server. Completed tasks are collected through the Task Get endpoint using the id parameters of these tasks. You can regulate the workload of the server by adjusting the frequency of launching the worker.
  4. After all the previous steps are completed, you can process the retrieved results.
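
Step 1 boils down to reading the id (and, optionally, your tag) from the query string of the incoming GET request, for example:

```python
from urllib.parse import parse_qs, urlparse

def parse_pingback(url):
    """Extract the completed task's id, plus the optional tag, from the
    pingback GET request sent to your server."""
    params = parse_qs(urlparse(url).query)
    return params["id"][0], params.get("tag", [None])[0]
```

The same parsing works in any web framework; only the way you obtain the request URL differs.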

If you are using postbacks:

  1. Your web server receives a POST request that contains the results of the completed task (note that the data is sent in the compressed .gz format and thus requires decompression).
  2. After the previous step is completed, you can process the results.
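
Because the postback body arrives gzip-compressed, decompress it before parsing. A minimal sketch, assuming the decompressed body is JSON:

```python
import gzip
import json

def decode_postback(body: bytes):
    """Decompress the .gz-encoded POST body and parse the JSON result."""
    return json.loads(gzip.decompress(body).decode("utf-8"))
```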

Additional recommendations for using webhooks

  • Watch a video with a detailed explanation of how to use pingbacks and postbacks.
  • If your web server responded with an error, you can get a list of such tasks using the Tasks Ready endpoint, but only if you didn't collect the task's result separately through the Task Get endpoint. After the operation of your server is resumed, you can re-send the tasks using the Webhook Resend endpoint.
