Live POST /v1/scrape 1–2 credits per request

Fetch Any Page.
Beat Every Bot Block.

Residential proxy rotation, headless Chrome JS rendering, and automatic retry — all in a single API call. If the page loads in a browser, CrawlHQ can fetch it.

terminal
curl -X POST https://api.crawlhq.dev/v1/scrape \
  -H "X-API-Key: $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "render_js": true,
    "wait_for": ".main-content"
  }'
response
{
  "status": "success",
  "html": "<!DOCTYPE html>...",
  "url": "https://example.com",
  "status_code": 200,
  "credits_used": 2
}
200 OK
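The same call from Python, using only the standard library. This is a sketch: the endpoint, headers, and payload fields mirror the curl example above, and the network send is left commented out so you can supply your own key.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # replace with your real key

# Same payload as the curl example above.
payload = {
    "url": "https://example.com",
    "render_js": True,
    "wait_for": ".main-content",
}

req = urllib.request.Request(
    "https://api.crawlhq.dev/v1/scrape",
    data=json.dumps(payload).encode(),
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    method="POST",
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)  # {"status": "success", "html": "...", ...}
```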

What makes it production-grade

Every module is built for pipelines that run without you watching.

🔄

Rotating Residential Proxies

Every request routes through a fresh residential IP. No datacenter IPs that sites recognize and block instantly.

🖥️

Full JS Rendering

Headless Chrome fully renders React, Vue, and Angular apps. SPAs, lazy-loaded content, infinite scroll — all captured.

⏱️

Smart Wait Conditions

Wait for a CSS selector, a network idle state, or a fixed delay. Get the page after your target content loads, not before.
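The three wait modes as request payloads. Only wait_for appears in the example above; "wait_until" and "wait_ms" are hypothetical field names for the network-idle and fixed-delay modes, so check the API reference for the exact spelling.

```python
# Wait for a CSS selector ("wait_for" is the documented field):
by_selector = {"url": "https://example.com", "render_js": True,
               "wait_for": ".price-table"}

# Wait for network idle -- "wait_until" is a hypothetical field name:
by_network_idle = {"url": "https://example.com", "render_js": True,
                   "wait_until": "networkidle"}

# Fixed delay in milliseconds -- "wait_ms" is a hypothetical field name:
by_fixed_delay = {"url": "https://example.com", "render_js": True,
                  "wait_ms": 3000}
```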

🔁

Automatic Retry

Failed requests are retried automatically with a fresh proxy. You're only charged credits on successful 2xx responses.

🌍

Geo-Targeting

Specify a country for the exit node. Scrape geo-restricted content, test regional pricing, or bypass country-level blocks.
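A geo-targeted payload might look like this. The "country" field name (an ISO 3166-1 alpha-2 code) is an assumption, so confirm it against the API reference.

```python
import json

payload = {
    "url": "https://example.com/pricing",
    "render_js": True,
    "country": "IN",  # hypothetical field: request an Indian exit node
}
print(json.dumps(payload))
```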

📋

Custom Headers & Cookies

Pass custom headers, cookies, and user-agent strings. Authenticate as a logged-in user, bypass paywalls, mimic any browser.
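A sketch of an authenticated request. The "headers" field name is an assumption; per the FAQ, session cookies travel in a standard Cookie header.

```python
import json

payload = {
    "url": "https://example.com/account",
    "headers": {  # hypothetical field name -- check the API reference
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
        "Cookie": "session_id=abc123; theme=dark",
    },
}
print(json.dumps(payload))
```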

Use Cases

What teams build with scrape

Competitor Monitoring

Scrape competitor product pages, pricing tables, and feature announcements. Detect changes before your sales team does.

Lead Data Collection

Fetch company pages, LinkedIn profiles, and directory listings. Feed structured data into your enrichment pipeline.

News & Media Aggregation

Pull articles from news sites that block RSS. Build custom feeds with exactly the sources you care about.

E-Commerce Price Tracking

Scrape product pages with JS-rendered prices from Amazon, Flipkart, and D2C stores. Track stock and availability.

Real Estate & Classified Listings

Fetch listings from MagicBricks, 99acres, Housing.com — even as they update their HTML.

Government & Public Data

Scrape ECI affidavits, court records, tender portals, and regulatory filings. All public data, fully automated.

Frequently asked questions

Does it handle JavaScript-heavy sites like React apps?
Yes. CrawlHQ uses headless Chrome with full JS execution. You can also specify a CSS selector to wait for before returning the HTML, ensuring dynamic content has fully rendered.
What anti-bot systems does it bypass?
CrawlHQ handles Cloudflare, Akamai, PerimeterX, DataDome, and most other commercial anti-bot systems through a combination of residential proxies, browser fingerprint management, and behavioural mimicry.
Am I charged if the scrape fails?
No. Credits are only deducted on successful 2xx responses. If a request fails after retries, you pay nothing.
Can I scrape pages that require login?
Yes. Pass session cookies in the request headers. CrawlHQ will include them with the request, allowing you to scrape authenticated pages.
What's the difference between 1 credit and 2 credits?
Simple HTTP fetches (no JS rendering) cost 1 credit. Requests with render_js: true cost 2 credits due to the additional compute for headless Chrome execution.
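The pricing rule above reduces to a one-liner, handy for budgeting a batch job:

```python
def credit_cost(render_js: bool) -> int:
    """1 credit for a plain HTTP fetch, 2 when render_js is on."""
    return 2 if render_js else 1

# Budgeting example: 1,000 plain fetches plus 500 JS-rendered pages.
total = 1000 * credit_cost(False) + 500 * credit_cost(True)
print(total)  # 2000 credits
```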
Is there a rate limit?
Rate limits scale with your plan. Free tier: 10 requests/minute; Starter+: 60; Growth+: 300; Scale: custom.
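A minimal client-side throttle to stay under your plan's limit. This sketch simply spaces requests evenly, which assumes steady pacing is acceptable for your workload.

```python
import time

def make_throttle(max_per_minute: int):
    """Return a wait() function that spaces calls evenly."""
    interval = 60.0 / max_per_minute
    last_call = [0.0]

    def wait():
        now = time.monotonic()
        remaining = last_call[0] + interval - now
        if remaining > 0:
            time.sleep(remaining)
        last_call[0] = time.monotonic()

    return wait

wait = make_throttle(60)  # Starter plan: 60 req/min -> ~1 request/second
# Call wait() before each API request in your loop.
```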

Start using scrape in minutes

2,500 free credits. No credit card. One API key for all 9 modules.

Get API Key Free →