Live POST /v1/search 1 credit per search

Web Search as an API.
Multi-Engine. Structured.

Query the web programmatically. Get back structured results — titles, URLs, snippets, and ranks — across multiple search engines. No browser. No scraping SERP pages. One API call.

terminal
curl -X POST https://api.crawlhq.dev/v1/search \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "CrawlHQ alternatives 2025",
    "num_results": 10,
    "engines": ["google", "bing"]
  }'
response
{
  "results": [
    {
      "title": "Best web scraping APIs 2025",
      "url": "https://example.com/article",
      "snippet": "...",
      "rank": 1
    }
  ],
  "credits_used": 1
}
200 OK

What makes it production-grade

Every module is built for pipelines that run without you watching.

🔍

Multi-Engine Coverage

Query Google, Bing, DuckDuckGo, and more in one call. Combine results for broader coverage and cross-validate rankings.

📦

Structured Output

Every result includes title, URL, snippet, rank, and source engine. No HTML parsing. Drop results directly into your pipeline.
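A minimal sketch of consuming that structured output in Python. The `title`, `url`, `snippet`, and `rank` fields come from the example response above; the per-result `engine` field name is an assumption (the docs say each result carries its source engine, but the exact key may differ).

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    url: str
    snippet: str
    rank: int
    engine: str = "google"  # source engine per result; field name assumed

def parse_results(payload: dict) -> list[SearchResult]:
    """Turn a raw /search JSON payload into typed records."""
    return [
        SearchResult(
            title=r["title"],
            url=r["url"],
            snippet=r["snippet"],
            rank=r["rank"],
            engine=r.get("engine", "google"),
        )
        for r in payload.get("results", [])
    ]

# Example payload mirroring the response shown above
payload = {
    "results": [
        {
            "title": "Best web scraping APIs 2025",
            "url": "https://example.com/article",
            "snippet": "...",
            "rank": 1,
        }
    ],
    "credits_used": 1,
}
results = parse_results(payload)
```

Because results are already structured, there is no HTML to parse: the typed records drop straight into a database insert or an LLM prompt.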

🌐

Region & Language Control

Set the country and language for localised results. Search Indian sources in Hindi, or target specific markets globally.
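A sketch of a localised request body. The `country` and `language` field names are assumptions for illustration; check the API reference for the exact parameter names.

```python
import json

# Hypothetical request body: target Indian sources in Hindi.
# "country" and "language" are assumed field names (ISO codes).
payload = {
    "query": "sarkari yojana 2025",
    "num_results": 10,
    "country": "in",
    "language": "hi",
}
body = json.dumps(payload)  # send as the POST body with Content-Type: application/json
```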

📰

News Mode

Filter to news results only. Get the latest coverage on any topic, company, or event — structured and ready for summarisation.

🎯

Domain Filtering

Restrict results to specific domains, or exclude domains from results. Search only within gov.in sites, or exclude Wikipedia.
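Two sketched request bodies for the scenarios above. The `include_domains` and `exclude_domains` field names are assumptions; consult the API reference for the exact keys.

```python
# Restrict results to gov.in sites only ("include_domains" is an assumed field name)
restrict = {
    "query": "digital india policy",
    "num_results": 10,
    "include_domains": ["gov.in"],
}

# Search broadly but drop Wikipedia ("exclude_domains" is an assumed field name)
broad = {
    "query": "mughal empire history",
    "num_results": 10,
    "exclude_domains": ["wikipedia.org"],
}
```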

⚙️

Combine with read

Chain search + read for a full intelligence pipeline: find the top 10 URLs for a query, then read each one into clean Markdown.
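The search-then-read chain can be sketched with the standard library alone. The `/search` request shape matches the curl example above; the `/read` request and response shape (`{"url": ...}` in, `{"markdown": ...}` out) is an assumption, so check the `/read` docs for the exact fields.

```python
import json
import urllib.request

API = "https://api.crawlhq.dev/v1"

def post(path: str, payload: dict, api_key: str) -> dict:
    """POST a JSON payload to a CrawlHQ endpoint and decode the JSON response."""
    req = urllib.request.Request(
        f"{API}{path}",
        data=json.dumps(payload).encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def top_urls(search_response: dict, n: int = 10) -> list[str]:
    """Pull the top-n URLs out of a /search response, in rank order."""
    ranked = sorted(search_response["results"], key=lambda r: r["rank"])
    return [r["url"] for r in ranked[:n]]

def search_and_read(query: str, api_key: str, n: int = 10) -> list[str]:
    """Find the top-n URLs for a query, then read each one into Markdown.
    The /read request/response shape here is an assumed sketch."""
    found = post("/search", {"query": query, "num_results": n}, api_key)
    return [
        post("/read", {"url": u}, api_key).get("markdown", "")
        for u in top_urls(found, n)
    ]
```

One call to `search_and_read("your topic", api_key)` yields a list of clean Markdown documents, ready for chunking and embedding.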

Use Cases

What teams build with search

RAG Source Discovery

Search for the most relevant URLs on a topic, then pass those URLs to /read to build a live, grounded RAG knowledge base.

Brand Mention Monitoring

Search your brand name daily. Track which sites are mentioning you, which articles are ranking, and what the sentiment looks like.

Research Automation

Automate the 'Google it' step in any research workflow. Search for information, get structured results, pipe into an LLM for synthesis.

Lead Discovery

Search for companies in a specific vertical, city, or segment. Get a list of URLs to enrich with /scrape + /extract.

Content Gap Analysis

Search your target keywords. See what's ranking. Feed the URLs into /read and use an LLM to identify what content you're missing.

Competitive Intelligence

Search competitor names daily. Track new press mentions, product launches, and funding announcements as they appear in search results.

Frequently asked questions

Which search engines are supported?
Currently: Google, Bing, DuckDuckGo, Brave Search, and Yahoo. You can specify one or multiple engines per request.
How is this different from SerpAPI or Serper.dev?
CrawlHQ search is one of 9 modules under a single API key — you get search, scraping, extraction, and monitoring with the same auth. SerpAPI is search-only. CrawlHQ also offers INR pricing, which is significantly cheaper for Indian teams.
Can I get more than 10 results?
Yes. Set num_results up to 100 per query. Results beyond the first 10 are available on the Starter plan and above.
Is the search truly multi-engine in one call?
Yes. Set "engines": ["google", "bing"] and get merged, de-duplicated results with source attribution per result. You see which engine returned which result.
Can I use this for news tracking?
Yes. Set "type": "news" in the request to filter to news results. Combine with a scheduler to build automated daily briefs.

Start using search in minutes

2,500 free credits. No credit card. One API key for all 9 modules.

Get API Key Free →