What Are ATS Integrations? (2026 Architecture & Strategy Guide)
ATS integrations sync hiring data between your app and Greenhouse, Lever, or Workable. A technical guide to the real engineering costs and architecture decisions.
Your sales team just spent three months working a six-figure enterprise deal. Technical validation is done, the economic buyer is on board, and procurement is reviewing the contract. Then a talent acquisition operations manager asks a simple question: "Does this sync with Greenhouse?"
You check the roadmap. It doesn't. Engineering says it's a two-week project. You've heard this before — maybe with HRIS integrations or CRM integrations — and you already know why that estimate is wrong. Two months later, the integration is stuck in staging, token refreshes are failing silently, and the prospect has moved on to a competitor.
If you build B2B SaaS for HR, IT, or operations teams, you will inevitably hit this roadblock. Buyers refuse to adopt software that creates data silos in their recruiting pipelines.
This guide breaks down the technical realities of ATS integrations: the specific API challenges of the top platforms, the underlying data models, and how to architect a system that prevents your engineering team from becoming full-time integration maintainers.
What Are ATS Integrations?
An ATS integration is a programmatic connection between your SaaS application and an Applicant Tracking System — syncing candidate records, job postings, interview stages, scorecards, and hiring outcomes through REST APIs, webhooks, or event-driven pipelines. It is a customer-facing integration that your enterprise buyers expect to work on day one.
At a high level, these integrations allow distinct systems to read and write recruiting data without manual data entry. An AI sourcing tool might push parsed resumes into an ATS. An IT provisioning platform might listen for a "Hired" webhook to automatically create a new employee's Google Workspace account.
The initial HTTP request to fetch a candidate takes an afternoon. Operating an integration at scale across hundreds of mutual customers requires:
- Authentication State Management: Securely storing and continuously rotating OAuth 2.0 access tokens or API keys.
- Data Transformation: Mapping your internal database schema to the highly variable schemas of different ATS vendors.
- Traffic Control: Implementing resilient retry logic with exponential backoff and jitter to handle rate limits and temporary API outages.
- Event Listening: Maintaining reliable webhook receivers that can process inbound events, verify cryptographic signatures, and handle out-of-order delivery.
That's where the real months go.
Why Enterprise Buyers Demand ATS Integrations in 2026
The ATS market is large and accelerating. According to MarketsandMarkets, the global applicant tracking system market is projected to grow from USD 3.28 billion in 2025 to USD 4.88 billion by 2030, at a CAGR of 8.2%. That growth means more companies buying more ATS platforms, and every one of them expects your product to plug in.
Your prospects are not all on the same system. Greenhouse dominates structured hiring at companies like DoorDash and Wayfair. Lever combines ATS and CRM functionality for mid-market sourcing teams. Workable appeals to growing companies that want faster deployment. If you sell into talent acquisition teams, your pipeline has prospects on all three — and they won't compromise.
The fragmentation is painful. A Joveo report found that 88% of talent acquisition leaders use multiple point solutions, with 40% juggling four or more platforms daily. Meanwhile, according to 360 Research Reports, around 43% of businesses face issues integrating ATS platforms with existing HR systems or payroll software. This fragmentation leads to siloed data, lost candidates, and compliance risks.
When an enterprise buyer evaluates your SaaS product, they're not just assessing your feature set. They're evaluating how much friction your tool adds to their existing workflow. If your product requires a recruiter to manually export a CSV of candidates from Lever and import it into your system, you've already lost the deal. Your software must act as an invisible extension of their system of record.
The Hidden Engineering Costs of ATS APIs
The "two-week estimate" your engineering lead gave you is based on the happy path: make a GET request, parse some JSON, render it in the UI. Here's what actually eats the time.
OAuth Token Lifecycles That Break Silently
Each ATS vendor handles authentication differently, and the differences are not trivial.
Greenhouse's Harvest API currently uses HTTP Basic Auth with API keys — straightforward until you realize that Harvest v1 and v2 are being deprecated and removed on August 31, 2026. The replacement, Harvest v3, moves to OAuth 2.0, which means every existing Greenhouse integration needs a full auth migration this year.
**Greenhouse Deprecation Alert:** Harvest API v1 and v2 will be unavailable after August 31, 2026. If you're building a new integration, go straight to Harvest v3. If you have an existing integration on v1/v2, start your migration now — the auth model is fundamentally different.
Lever requires the OAuth 2.0 Authorization Code flow, and access tokens expire after just one hour. Your system needs to reliably refresh those tokens before they expire, and deal with refresh tokens that expire after one year or after 90 days of inactivity. Miss a refresh window at 3 AM and your customer's data sync goes silent — often without any visible error.
Here's where it gets worse: if two background workers attempt to refresh the same token simultaneously, a race condition occurs. One worker succeeds, the other fails, the old token is invalidated, and the integration breaks silently. You're now forced to build distributed locks just to maintain authentication state. For a deeper dive into this specific vendor, see our Lever API engineering guide.
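One common mitigation is to coalesce concurrent refreshes so that only one refresh per account is ever in flight. Here's a single-process sketch; a multi-worker deployment would need a distributed lock (e.g. in Redis) instead, and `doRefresh` is a stand-in for your actual call to the vendor's token endpoint:

```javascript
// Sketch: coalesce concurrent token refreshes per account so two
// workers never race to use the same refresh token.
const inFlightRefreshes = new Map();

async function refreshToken(accountId, doRefresh) {
  // If a refresh for this account is already running, await that one
  // instead of starting a second, competing refresh.
  if (inFlightRefreshes.has(accountId)) {
    return inFlightRefreshes.get(accountId);
  }
  const refreshPromise = doRefresh(accountId).finally(() => {
    inFlightRefreshes.delete(accountId);
  });
  inFlightRefreshes.set(accountId, refreshPromise);
  return refreshPromise;
}
```

The same pattern generalizes: whatever the mechanism, refreshes for a given account must be serialized, because a successful refresh typically invalidates the old refresh token.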
Workable supports both API keys and OAuth 2.0, with rate limits that differ based on access type. Each vendor's auth flow requires its own token storage, rotation logic, and failure handling.
Rate Limits That Throttle Your Bulk Syncs
ATS platforms aggressively protect their infrastructure. They don't care if your sync job is important; exceed their limits and they'll drop your requests.
| ATS Platform | Rate Limit | Window | Enforcement |
|---|---|---|---|
| Greenhouse Harvest | 50 requests | 10 seconds | Fixed-window, HTTP 429 with Retry-After header |
| Lever | ~10 requests/sec (general), ~2 req/sec (submissions) | Per second | HTTP 429 |
| Workable | Varies by access type | 10 seconds | Per-account basis |
Greenhouse returns rate limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset) on every response. For specific endpoints like the audit log API, the limit drops to just 3 paginated requests per 30 seconds. Sounds manageable until you're doing a bulk sync of 10,000 candidates across 200 customer accounts and need to queue, throttle, and retry without dropping records.
Your engineering team can't simply write a while loop to ingest data. They need an asynchronous queuing system with exponential backoff and jitter. When a 429 error occurs, the system must parse the Retry-After header, pause the worker, and re-enqueue the job without blocking other customer syncs.
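A minimal sketch of that retry policy, assuming a response object with a `status` and a lowercase `headers` map (`makeRequest` is a placeholder for your HTTP client call):

```javascript
// Sketch: retry a request on HTTP 429 with exponential backoff and
// "full jitter". Honors the server's Retry-After header (in seconds)
// when present; otherwise backs off 1s, 2s, 4s, ... with jitter.
async function requestWithBackoff(makeRequest, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await makeRequest();
    if (res.status !== 429) return res;

    const retryAfter = Number(res.headers?.['retry-after']);
    const base = 1000 * 2 ** attempt;
    const delayMs = Number.isFinite(retryAfter) && retryAfter > 0
      ? retryAfter * 1000
      : Math.random() * base; // full jitter: uniform in [0, base)
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Rate limited: retries exhausted');
}
```

In production this loop would live inside a queue worker, so a rate-limited job sleeps and re-enqueues rather than blocking the syncs of other customer accounts.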
Pagination That Breaks Your Data Pipeline
If you want to pull a complete list of jobs or candidates, you have to page through the results. The ATS industry has refused to standardize how this works.
Greenhouse uses Link header-based pagination. The response includes a Link header with rel="next" and rel="last" URLs containing opaque, Base64-encoded cursor payloads. You can't construct the next page URL yourself — you must parse it from the header.
```javascript
// Custom logic required to extract the next page URL
// from a Greenhouse Link header
function getNextPageUrl(linkHeader) {
  if (!linkHeader) return null;
  const links = linkHeader.split(',');
  const nextLink = links.find((link) => link.includes('rel="next"'));
  if (!nextLink) return null;
  // Non-greedy match so we capture only the URL between < and >
  const match = nextLink.match(/<(.*?)>/);
  return match ? match[1] : null;
}
```

Lever uses cursor-based pagination with offset tokens returned in the response body's next field. You can only pass an offset returned from a previous request — fabricating offsets will fail silently or return unexpected results.
Workable uses query parameter-based pagination with since_id or created_after filters for incremental syncs.
If you're building a generic data ingestion layer, you need three different pagination implementations just for the top three ATS vendors. For more on how this problem compounds across dozens of APIs, see our pagination normalization guide.
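One way to keep those three implementations from leaking into the rest of your codebase is to hide each scheme behind a common async iterator. A sketch, where `fetchPage` stands in for your HTTP client and `getNext` is the per-vendor cursor extractor (Link header parsing for Greenhouse, the body's `next` offset for Lever, `since_id` params for Workable):

```javascript
// Sketch: one async generator pages through any API, given a
// vendor-specific function that extracts the next-page request
// (cursor, offset, or URL) from a response, or null when done.
async function* paginate(fetchPage, firstRequest, getNext) {
  let request = firstRequest;
  while (request) {
    const response = await fetchPage(request);
    yield* response.items;
    request = getNext(response);
  }
}
```

Callers then consume one `for await` loop regardless of vendor, and adding a fourth ATS means writing one new `getNext` function rather than a fourth sync pipeline.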
The Core ATS Data Model: Candidates, Jobs, and Applications
Despite vendor differences, every ATS integration revolves around a common set of entities. Understanding this data model is essential for designing your integration layer — whether you build it yourself or use a unified API.
**The Golden Rule of Integration Data Modeling:** Never store raw vendor JSON directly in your core application tables. Always map third-party payloads to an intermediate, standardized schema first. This isolates your business logic from external API changes.
```mermaid
flowchart LR
  A["Organization<br>(Company/Tenant)"] --> B["Jobs<br>(Open Roles)"]
  B --> C["Applications<br>(Candidate + Job)"]
  D["Candidates<br>(People)"] --> C
  C --> E["Interview Stages<br>(Pipeline Steps)"]
  E --> F["Interviews<br>(Scheduled Events)"]
  F --> G["Scorecards<br>(Evaluations)"]
  C --> H{"Outcome"}
  H --> I["Offer"]
  H --> J["Rejection"]
```

The key entities break down into four domains:
Organizational Setup — Companies, departments, offices, and the jobs (requisitions) they're hiring for. Each job defines its own pipeline of interview stages and may include custom application form fields.
Candidate Pipeline — Candidates are people; applications are the transactional records linking a candidate to a specific job. An application moves through interview stages and can be advanced, moved, or rejected programmatically.
Evaluation & Outcomes — Interviews are scheduled with specific users (interviewers), who submit scorecards after each session. An application culminates in either an offer (with salary, equity, start date) or a rejection with a specific reason.
Operational Metadata — Activities (notes, audit logs), attachments (resumes, cover letters), tags, and EEOC compliance data all attach to candidates and applications throughout the process.
The challenge is that each vendor models these entities differently:
| Concept | Greenhouse | Lever | Workable |
|---|---|---|---|
| A person in the system | Candidate | Contact (within Opportunity) | Candidate |
| A person's application | Application (separate entity) | Opportunity (combines candidate + application) | Candidate (tied to a job) |
| Pipeline stages | Job Stages (per-job) | Pipeline Stages | Stages (per-job) |
| Rejection reasons | Reject Reasons (standardized list) | Archive Reasons | Disqualification Reasons |
Lever's data model is the outlier that burns the most engineering time. In Lever, the Opportunity is the primary entity — it combines what Greenhouse separates into Candidate and Application. The deprecated /candidates endpoint still exists alongside the /opportunities endpoint you should actually use. If you're building against all three vendors, you need a normalization layer that maps these conceptual differences into a single schema.
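A sketch of what that normalization layer looks like in miniature. The field names on both the vendor and unified sides are simplified illustrations, not the vendors' actual payload shapes:

```javascript
// Sketch: normalize vendor payloads into one internal application
// record. Real payloads carry far more fields; the mapping below only
// illustrates the structural difference between the two models.
function normalizeApplication(vendor, payload) {
  switch (vendor) {
    case 'greenhouse':
      // Greenhouse separates Candidate and Application entities.
      return {
        candidateName: payload.candidate.name,
        jobId: String(payload.application.job_id),
        stage: payload.application.current_stage,
      };
    case 'lever':
      // Lever's Opportunity combines candidate + application in one.
      return {
        candidateName: payload.name,
        jobId: String(payload.posting),
        stage: payload.stage,
      };
    default:
      throw new Error(`No mapping for vendor: ${vendor}`);
  }
}
```

Everything downstream of this function sees one shape, which is exactly what makes adding a fourth vendor a mapping exercise rather than a backend rewrite.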
By mapping all incoming data to a unified structure, your core application logic only needs to understand one schema. When you add a new ATS vendor, you write a new mapping configuration rather than rewriting your backend. For a detailed walkthrough of these specific schema mismatches, see our guide on integrating multiple ATS platforms simultaneously.
Build vs. Buy: Solving the ATS Integration Bottleneck
You have three realistic options. Each has real trade-offs, and the right choice depends on where you are as a company.
Option 1: Build Point-to-Point Integrations In-House
Best for: Single-vendor support where you only need Greenhouse (or only Lever), and you have dedicated integration engineers with bandwidth.
The real cost: Budget 6 to 10 weeks per vendor for production-quality work. That includes OAuth lifecycle management, pagination implementation, rate limit handling with exponential backoff, webhook subscriptions, error monitoring, and the ongoing maintenance when vendors ship breaking changes. Greenhouse's v1/v2 deprecation this year is a perfect example — every team that built against the old API now has a forced migration on their roadmap.
Building in-house gives you absolute control over the code, but every sprint spent debugging token failures and updating webhook signatures is a sprint not spent building your core product. An integration is never "done." APIs deprecate endpoints, vendors change pricing models, and data schemas evolve. For a full breakdown of the hidden costs, read our build vs. buy analysis.
Option 2: Use an Embedded iPaaS or Workflow Tool
Best for: Simple, low-frequency data syncs where your customers configure their own mappings.
The limitation: These tools are designed for workflow automation, not deep programmatic access. You'll hit walls when you need to move a candidate between stages, submit scorecards, or handle custom fields that vary per customer account.
Option 3: Use a Unified ATS API
Best for: Multi-vendor ATS support where you need Greenhouse, Lever, Workable, Ashby, and others behind a single API contract.
A unified API platform normalizes the authentication, pagination, rate limiting, and data model differences across vendors into a single interface. Instead of building three (or ten) separate integrations, you build once against a standardized schema and route requests to the appropriate vendor by specifying a connected account ID.
```shell
# Fetch candidates from any ATS — same endpoint, same schema
curl -X GET "https://api.truto.one/unified/ats/candidates?integrated_account_id=GREENHOUSE_ACCT_123" \
  -H "Authorization: Bearer YOUR_TRUTO_TOKEN"

# Same call, different vendor — just change the account ID
curl -X GET "https://api.truto.one/unified/ats/candidates?integrated_account_id=LEVER_ACCT_456" \
  -H "Authorization: Bearer YOUR_TRUTO_TOKEN"
```

The platform handles OAuth token refreshes (including Lever's one-hour expiry), normalizes pagination across Link headers and offset tokens, and applies vendor-specific rate limiting with automatic backoff — so your application code stays clean.
Modern unified API architectures provide specific advantages that are difficult to replicate internally:
- Zero-Storage Compliance: Enterprise buyers are highly protective of candidate PII. A pass-through architecture means the integration platform never stores sensitive candidate data at rest, allowing you to pass enterprise security reviews.
- Automated Authentication: The platform automatically handles complex OAuth token lifecycles, transparently refreshing tokens before they fail, eliminating race conditions.
- Declarative Integration Mappings: Each vendor's API quirks (Lever's Opportunities vs. Greenhouse's Candidates/Applications) are handled through configuration-driven mappings rather than custom code. When a vendor changes their API, the mapping updates without requiring you to redeploy your application.
Honest trade-off: A unified API gives you speed and coverage, but it's an abstraction. You lose direct control over vendor-specific features that aren't part of the normalized schema. Good platforms offer a Proxy API that lets you drop down to the raw vendor API when you need something the unified layer doesn't cover — like Greenhouse's EEOC endpoints or Lever's confidential posting filters.
What Your ATS Integration Strategy Should Look Like
Stop thinking about ATS integrations as a one-time engineering project. They're a product capability that directly impacts revenue.
Here's a practical decision framework:
- You need one ATS vendor and have engineering bandwidth — Build it in-house. Go straight to the vendor's latest API version (Harvest v3 for Greenhouse, OAuth for Lever). Budget 6 to 10 weeks for production readiness.
- You need 2+ ATS vendors or can't dedicate engineers — Use a unified API. The schema normalization and auth management alone will save you months. You can always drop to the raw API for edge cases.
- You're already on Greenhouse Harvest v1/v2 — Start your migration to v3 now. The August 2026 deadline is closer than it looks, and the authentication model is fundamentally different.
Whichever path you choose, don't let the initial simplicity of a GET /candidates call fool you. The real work lives in the long tail: token refreshes at 3 AM, pagination edge cases during concurrent modifications, rate limit storms during bulk syncs, and data model differences that surface only after your third enterprise customer connects their account.
That's where ATS integrations live or die.
Frequently Asked Questions
- What is an ATS integration?
- An ATS integration is a programmatic connection that synchronizes hiring data — candidates, jobs, applications, interview stages, and outcomes — between a SaaS application and an Applicant Tracking System through REST APIs, webhooks, or event-driven pipelines.
- How long does it take to build an ATS integration?
- A production-quality ATS integration typically takes 6 to 10 weeks per vendor, covering OAuth token management, pagination, rate limiting, webhook handling, error monitoring, and testing. The initial API call takes hours; the operational reliability work takes months.
- What are the main challenges of ATS API integrations?
- The three biggest challenges are managing OAuth token lifecycles (Lever tokens expire every hour), handling inconsistent rate limits (Greenhouse enforces 50 requests per 10 seconds), and normalizing data models where vendors model the same concepts differently (e.g., Lever's Opportunities vs. Greenhouse's separate Candidates and Applications).
- Should I build ATS integrations in-house or use a unified API?
- If you only need one ATS vendor and have dedicated engineering bandwidth, building in-house is viable. If you need multiple vendors (Greenhouse, Lever, Workable), a unified API saves months by abstracting auth, pagination, and schema normalization into a single interface.
- Is the Greenhouse Harvest API being deprecated?
- Yes. Greenhouse Harvest API v1 and v2 will be deprecated and unavailable after August 31, 2026. All integrations must migrate to Harvest v3, which uses OAuth 2.0 instead of Basic Auth.