If you’ve been in SEO long enough, you know that Google likes to keep us on our toes. This time, though, they’ve shaken the ground under everyone who tracks keyword rankings.
Recently, Google quietly removed support for the num=100 search parameter – the one that allowed viewing 100 search results per page. This parameter also made it possible to collect large sets of ranking data quickly and efficiently. Now, with results capped at 10 per page, more requests are needed to collect the same amount of data as before, which translates to higher resource usage and, naturally, higher data costs.
Even so, there are smart ways to adapt and optimize your workflows to keep your data accurate without overspending. In this blog post, we’ll break down the changes we’ve had to introduce at DataForSEO, and what specific strategies you can implement to achieve maximum cost efficiency in your rank-tracking with SERP API.
Contents
How DataForSEO Adapted As Google Dropped num=100
Cost-Effective Approaches to Rank Tracking with SERP API
1. Optimize Your SERP Crawl Depth
2. Adjust Your Tracking Frequency
3. Fetch Only the Pages That Matter
4. Automate Savings with Stop-on-Match in SERP API
5. Lean on DataForSEO for Fair Billing and Automatic Refunds
Rank Tracking Cost Optimization in Real-Life Scenarios
Takeaways for Smarter SERP API Spend
How DataForSEO Adapted As Google Dropped num=100
At DataForSEO, we realized something was off early, so we jumped into action right away. Our team worked long shifts over that September weekend, tuning our crawling system, resources and loads to ensure uninterrupted performance for our clients. In parallel, we began developing a new pricing model that would reflect the real resource usage under the new conditions, while remaining cost-effective. And of course, we gave our clients an early heads-up so they could prepare before the changes took effect.
So, what exactly changed in the DataForSEO SERP API?
To keep things straightforward, our SERP API now charges based on the actual “depth” of results retrieved. The first page (typically up to 10 results) is billed at the full rate, and every additional page is discounted by 25%.
This approach mirrors the reality of crawling. Page 1 is generally the most resource-heavy, packed with all the ads, AI Overviews, and other SERP features, which require more processing power. Subsequent pages are lighter and simpler, so they cost less to collect.
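To make the math concrete, here’s a minimal sketch of how per-task cost scales with depth under this model. The per-page prices used ($0.0006 for the first page and 25% less, i.e. $0.00045, for each additional page) are the illustrative figures that also appear in the savings tables below, not a billing reference.

```python
import math

# Illustrative per-page prices (see the savings tables below); not a billing reference.
FIRST_PAGE_COST = 0.0006          # page 1 is billed at the full rate
EXTRA_PAGE_COST = 0.0006 * 0.75   # every additional page is discounted by 25%

def estimated_task_cost(depth: int, results_per_page: int = 10) -> float:
    """Estimate the cost of a single SERP task for a given crawl depth."""
    pages = math.ceil(depth / results_per_page)
    return FIRST_PAGE_COST + max(pages - 1, 0) * EXTRA_PAGE_COST

for depth in (10, 30, 50, 100):
    print(f"depth={depth:>3} -> ~${estimated_task_cost(depth):.5f} per keyword")
```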
Google’s update also served as a catalyst for aligning the pricing rules across all Organic search engine endpoints of SERP API, including Google, Bing, Yahoo, Baidu, Seznam, and Naver. This approach ensures the cost matches the true resource usage required to retrieve paginated results, while keeping billing consistent and transparent for every customer.
Ultimately, the new model helps you reduce costs when collecting deeper pages without compromising data completeness and reliability. So, how do you make the most of it?
Cost-Effective Approaches to Rank Tracking with SERP API
Data accuracy and cost efficiency can absolutely go hand in hand. You don’t have to choose between them. With a few thoughtful adjustments to your rank-tracking strategy, you can achieve greater savings without compromising on quality.
Now that Google limits each page to 10 results, it makes sense to focus your workflow optimization on three key areas: depth, frequency, and targeting. These are not just savings tactics, but smarter ways to navigate rank tracking in today’s evolving SERPs.
To maintain operational stability, it’s equally important to have sound cost-saving automation features and balance-protection mechanisms. With DataForSEO, you can rely on stop_crawl_on_match and automatic refunds for controlled and predictable spending.
But let’s explore everything step by step.
1. Optimize Your SERP Crawl Depth
One of the simplest ways to make your rank tracking more cost-effective is by fine-tuning your crawl depth. If you’ve been pulling the full top-100 results for every keyword, narrowing that range can immediately lower your data costs. For instance, going from 100 to 50 results cuts your spend roughly in half, while still giving you a clear picture of the competitive search landscape.
In most cases, real competition sits within the top 30 results. By fetching three pages instead of ten, you instantly cut your costs by roughly 70%, while getting a focused view of the competitors that matter most in terms of traffic and performance.
To make depth control even more predictable, you can also use the max_crawl_pages parameter. As you may already know, search engines tend to return varying numbers of results on page one, so it’s impossible to guess what depth will stay within that page. The max_crawl_pages parameter solves that uncertainty by letting you limit how many pages the API should crawl. Learn more about this feature here.
For example, if you send a request "depth": 15, and Google shows 14 results on page 1 and 10 on page 2, you’d normally receive data from two pages and pay more for the extra crawl. But with "max_crawl_pages": 1, the API will only return the first page’s 14 results, helping you minimize costs.
Below you can find an example SERP API request with depth and max_crawl_pages settings.
[
    {
        "language_code": "en",
        "location_code": 2840,
        "keyword": "albert einstein",
        "max_crawl_pages": 3,
        "depth": 30
    }
]
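If you work with the API programmatically, a task like this takes only a few lines to send. The snippet below is a minimal sketch assuming the Live Advanced endpoint of the Google Organic SERP API and basic authentication with your DataForSEO login and password; adjust the endpoint, credentials, and error handling to your own setup.

```python
import requests

AUTH = ("login", "password")  # replace with your DataForSEO API credentials
URL = "https://api.dataforseo.com/v3/serp/google/organic/live/advanced"

payload = [
    {
        "language_code": "en",
        "location_code": 2840,
        "keyword": "albert einstein",
        "max_crawl_pages": 3,   # never crawl more than 3 pages
        "depth": 30             # ask for up to 30 results
    }
]

response = requests.post(URL, auth=AUTH, json=payload)
response.raise_for_status()
result = response.json()["tasks"][0]["result"][0]

# Each organic item carries its SERP position.
for item in result["items"] or []:
    if item.get("type") == "organic":
        print(item["rank_absolute"], item.get("domain"))
```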
And here’s how reducing the number of SERPs you pull translates to savings.
| Depth Strategy | Pages Crawled | Cost | Savings vs 10 pages (100 results) |
|---|---|---|---|
| Full depth, 100 results (baseline) | 10 pages | $0.00465 | 0% |
| Fetch top 50 results | 5 pages | $0.0006 + 4 × $0.00045 = $0.0024 | 48.4% |
| Fetch top 30 results | 3 pages | $0.0006 + 2 × $0.00045 = $0.0015 | 67.7% |
| First page only | 1 page | $0.0006 | 87.1% |
2. Adjust Your Tracking Frequency
Check frequency is one of the most important levers for optimizing your rank tracking costs. When you step back and evaluate how often different keywords truly need attention, it becomes easier to define the right rhythm for tracking them.
If we’re speaking of your high-value keywords (those directly tied to revenue, brand visibility, or critical campaigns), these absolutely deserve daily monitoring. That’s where even small fluctuations matter, and keeping a close eye on such terms allows for quick reactions and tweaks.
But not all keywords require the same scrutiny. For secondary, long-tail terms, consider shifting from daily checks to every two or three days. A slightly wider monitoring window will still capture the most important trends, but it will also help you reduce costs dramatically without losing any meaningful insights.
From an implementation standpoint, you could group your keywords by priority labels (e.g., “core”, “supporting”, “long-tail”) and assign different update intervals to each group in your task scheduler or no-code workflow, as sketched below.
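For illustration, here’s a minimal sketch of that idea: keywords sit in hypothetical priority tiers, each tier has its own check interval, and a keyword is only queued for a new SERP task when it’s due. The tier names, intervals, and keywords are made up – plug in whatever grouping fits your workflow.

```python
from datetime import date, timedelta

# Hypothetical priority tiers and their check intervals, in days.
TIER_INTERVALS = {"core": 1, "supporting": 2, "long-tail": 3}

keywords = [
    {"keyword": "email marketing tools", "tier": "core", "last_checked": date(2025, 1, 1)},
    {"keyword": "newsletter ideas", "tier": "supporting", "last_checked": date(2025, 1, 1)},
    {"keyword": "b2b newsletter timing", "tier": "long-tail", "last_checked": date(2025, 1, 2)},
]

def due_today(kw: dict, today: date) -> bool:
    """Return True if the keyword should be re-checked today, based on its tier."""
    return today - kw["last_checked"] >= timedelta(days=TIER_INTERVALS[kw["tier"]])

today = date.today()
tasks = [
    {"keyword": kw["keyword"], "language_code": "en", "location_code": 2840, "depth": 30}
    for kw in keywords
    if due_today(kw, today)
]
print(f"{len(tasks)} keyword(s) scheduled for a SERP check today")
```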
Lowering the check frequency for non-critical keywords can cut costs dramatically in the long run.
| Frequency Strategy | Checks / Month | Relative Cost | Savings |
|---|---|---|---|
| Daily (baseline) | 30 | 100% | 0% |
| Every 2 days | 15 | 50% | 50% |
| Every 3 days | 10 | 33% | 66.7% |
3. Fetch Only the Pages That Matter
When you already know roughly where a target website tends to rank, you can leverage another smart option: instead of crawling the SERPs from the top every time, fetch only the pages you need.
Let’s say your target website typically ranks around position 34. That’s on page four. Rather than fetching all 100 results, you could only scan a narrower set of results where your site is likely to appear: in our example, these would be pages 3, 4, and 5. In addition to significantly cutting your costs by skipping unnecessary pages, this approach will also speed up your workflow.
From a practical perspective, with DataForSEO SERP API, you can easily do this using the search_param field with the start parameter. For instance, &start=31 jumps straight to the first result on page four. From there, you can set depth to 30 or max_crawl_pages to 3 to limit how far the crawl goes.
Note that rank values will count the results starting from the first page of the crawl, so with &start=31 applied, "rank_absolute": 1 will mean the actual rank is 31.
Here’s an example workflow with all the necessary implementation steps:
- Determine expected rank ranges based on historical data for a domain.
- Set "search_param": "&start=X" to start crawling search results at the closest position range.
- Add "max_crawl_pages" or "depth" to set a rational limit for the crawl.
- Process results using the following formula for smooth analytics: Actual rank = rank_absolute + start - 1

Replace start with the number you set as N in "&start=N". For example, if start = 31 and rank_absolute = 5, the actual position is 5 + 31 - 1 = 35.
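If you post-process results in code, a tiny helper keeps this conversion consistent. A minimal sketch, where start is simply the N you passed in "&start=N":

```python
def actual_rank(rank_absolute: int, start: int) -> int:
    """Convert a rank from an offset crawl back to the real SERP position.

    `start` is the N passed in "search_param": "&start=N";
    `rank_absolute` is counted from the first crawled page.
    """
    return rank_absolute + start - 1

# With "&start=31", the 5th result of the crawl sits at position 35 in the full SERP.
assert actual_rank(rank_absolute=5, start=31) == 35
```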
SERP API request example
[
    {
        "language_code": "en",
        "location_code": 2840,
        "keyword": "albert einstein",
        "max_crawl_pages": 3,
        "depth": 30,
        "search_param": "&start=31"
    }
]
Overall, this option enables precise, targeted data collection that lets you focus on the pages you really need, avoid unnecessary page scans, and save significantly.
| Targeted Crawl Strategy | Cost | Savings |
|---|---|---|
| Full crawl, 10 pages, 100 results (baseline) | $0.00465 | 0% |
| Fetch only 3 specific pages | $0.0015 | 67.7% |
| Fetch only 2 specific pages | $0.00105 | 77.4% |
| Fetch only 1 specific page | $0.0006 | 87.1% |
4. Automate Savings with Stop-on-Match in SERP API
The next essential tactic is based on a new feature we’ve added to the DataForSEO SERP API – stop_crawl_on_match. Rank tracking often involves collecting multiple pages just to confirm whether a specific domain appears in some of them. But once the target is found, your objective has been achieved, so everything beyond that point becomes unnecessary spending.
The stop_crawl_on_match array helps you avoid exactly that: it lets you specify up to 10 target domains and/or URLs that act as stopping signals. By default, the crawl automatically stops as soon as the first match is found for one of your specified targets, preventing unnecessary page fetches and API usage. To make this mechanism even more flexible, we’ve added three parameters that help you control where and when crawling stops:
- target_search_mode – decide whether the crawl should stop after finding any single target or only after all targets are located.
- find_targets_in – restrict target detection to specific SERP elements such as organic results or featured snippets.
- ignore_targets_in – exclude rankings in unwanted SERP elements like paid ads, images, or videos from the matching process.
These options let you shape crawl behavior around your specific analytical needs. You can learn more about using them in our dedicated Help Center article.
Note that if the target is not found, the crawl will continue as usual up to the indicated depth.
Here’s how to add the stop_crawl_on_match array with additional settings to your API request body.
[
    {
        "language_code": "en",
        "location_code": 2840,
        "keyword": "email marketing tools",
        "depth": 100,
        "stop_crawl_on_match": [
            {
                "match_value": "mailchimp.com",
                "match_type": "domain"
            },
            {
                "match_value": "constantcontact.com",
                "match_type": "domain"
            },
            {
                "match_value": "sendinblue.com",
                "match_type": "domain"
            }
        ],
        "target_search_mode": "all",
        "find_targets_in": [
            "organic",
            "featured_snippet"
        ]
    }
]
The match_type parameter supports three options:
- domain – match the specified domain exactly (without subdomains);
- with_subdomains – match the domain and all its subdomains;
- wildcard – match by pattern.
Read about this and other settings for the stop_crawl_on_match parameter in more detail here. For maximum control and efficiency, combine stop_crawl_on_match with parameters like depth, max_crawl_pages, and "search_param": "&start=N", as in the sketch below.
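For instance, here’s a minimal sketch that combines stop_crawl_on_match with depth and max_crawl_pages in a single request to the assumed Live Advanced endpoint (same endpoint and credentials as in the earlier sketch), then checks where the target domain was found.

```python
import requests

AUTH = ("login", "password")  # your DataForSEO API credentials
URL = "https://api.dataforseo.com/v3/serp/google/organic/live/advanced"

payload = [
    {
        "language_code": "en",
        "location_code": 2840,
        "keyword": "email marketing tools",
        "depth": 50,                    # upper limit if no match is found
        "max_crawl_pages": 5,           # never crawl more than 5 pages
        "stop_crawl_on_match": [
            {"match_value": "mailchimp.com", "match_type": "domain"}
        ],
        "find_targets_in": ["organic"]  # ignore matches in ads, images, etc.
    }
]

response = requests.post(URL, auth=AUTH, json=payload)
response.raise_for_status()
result = response.json()["tasks"][0]["result"][0]

# Look for the target domain among the returned organic results.
for item in result["items"] or []:
    if item.get("type") == "organic" and item.get("domain") == "mailchimp.com":
        print("Target found at position", item["rank_absolute"])
        break
else:
    print("Target not found within the crawled pages")
```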
Automated stopping maximizes your rank-tracking efficiency by eliminating wasted crawls. Savings depend on when the target domain is found.
| Target Found On | Cost | Savings |
|---|---|---|
| Page 1 | $0.0006 | 87.1% |
| Page 2 | $0.00105 | 77.4% |
| Page 3 | $0.0015 | 67.7% |
| Page 4 | $0.00195 | 58.1% |
| Not found (full crawl, 10 pages) | $0.00465 | 0% |
5. Lean on DataForSEO for Fair Billing and Automatic Refunds
Last but not least, we’ve got your back. The DataForSEO system includes a built-in safety net for your balance protection.
If you ever ask for more results than actually exist, you will get an automatic refund of the difference back to your account balance. For example, when you set “depth” to 14, you are initially billed for two pages, but if a single page contains all 14 results once it’s fetched, the charge for the extra page(s) is returned to your balance automatically.
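To put numbers on that example, here’s a small sketch of how the initial charge and the refund relate, using the same illustrative per-page prices as in the savings tables above.

```python
import math

FIRST_PAGE_COST = 0.0006           # illustrative figure from the tables above
EXTRA_PAGE_COST = 0.0006 * 0.75    # additional pages are billed 25% lower

def pages_cost(pages: int) -> float:
    """Cost of crawling a given number of pages under the depth-based model."""
    return FIRST_PAGE_COST + max(pages - 1, 0) * EXTRA_PAGE_COST

requested_depth = 14
billed_pages = math.ceil(requested_depth / 10)   # initially billed for 2 pages
actual_pages = 1                                 # all 14 results fit on one page

refund = pages_cost(billed_pages) - pages_cost(actual_pages)
print(f"Charged ${pages_cost(billed_pages):.5f}, refunded ${refund:.5f} back to the balance")
```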
Besides that, if a page contains more than 10 results (let’s say 15), and you set depth to 10 or omit the depth field, you will receive all 15 results at no additional cost.
This logic ensures users never overpay and can confidently rely on built-in refunds.
Rank Tracking Cost Optimization in Real-Life Scenarios
Businesses of all sizes can undoubtedly benefit from the rank-tracking optimization techniques discussed above; however, their cost-saving impact depends heavily on scale.
➤ For freelance SEOs and small teams working with a limited number of clients, every API request directly affects profitability. So, the biggest win here comes from avoiding unnecessary data collection. The best tactic is to adjust your depth carefully and use stop-on-match so you only pay for results that actually appear in client reports. The savings may seem small per keyword at first, but they add up over time, resulting in healthier margins and allowing you to offer more competitive rates or reinvest in growth.
➤ Large SEO agencies handling thousands of keywords across multiple clients face a different challenge: scale. This means even minor inefficiencies can lead to significant financial impact. On the other hand, it also means implementing small adjustments in depth, frequency, and crawl behavior can translate into significant savings. To achieve the best outcomes, combine those tweaks with smart client segmentation. For instance, you could maintain deeper top-100 tracking for priority clients or very competitive niches, while reducing depth to 30 or 50 results for smaller accounts and less volatile markets. Tracking frequency can be tiered as well: daily for top accounts, every two or three days for others. Working together, these strategies will help you balance the depth of insights with long-term sustainability.
➤ If you’re a developer or an SEO software provider, the main cost-saving opportunity lies in system-level optimization. You can build configurable rank-tracking options directly into your platform, giving users full control over how many pages their dashboard covers and how frequently it gets updated. This way, you can turn your rank tracking into an adaptive, customer-first solution where users can clearly see how their settings affect cost. Behind the scenes, leverage additional features like stop-on-match to avoid unnecessary data processing. This mixed approach will help you not only improve performance and reduce data costs, but also increase transparency and brand trust.
Takeaways for Smarter SERP API Spend
Change in SEO is inevitable. Search engines will always challenge us to optimize, refine, and future-proof our workflows. But with the right approach, change doesn’t have to be a disruption – like swapping a single letter, it can turn into a chance.
The removal of num=100 is certainly a real limitation, but it also opens the door to smarter rank tracking. With reliable data and the right strategy, even major shifts like this become opportunities to innovate, optimize workflows, and build more sustainable systems.
At DataForSEO, our mission has always been to keep your data accurate and affordable, and your costs predictable and transparent. That’s why we’ve compiled all of these cost-saving opportunities into practical strategies you can apply right away.
Here’s your quick recap:
- Control your depth – track only what matters most.
- Adjust frequency – prioritize your most valuable keywords.
- Target pages smartly – skip directly to where your site ranks.
- Use stop_crawl_on_match – avoid unnecessary crawling easily.
- Rely on automatic refunds – never pay for unused crawls.
Each of these steps helps you keep your search monitoring budget-friendly without sacrificing data quality.
If you’re ready to turn this shift into a strategic advantage, explore the updated DataForSEO SERP API docs and see how you can streamline your workflow.
