How News Orgs Verify Claims at Scale

News organizations use fact-checking APIs to verify thousands of claims daily. Learn how Reuters, AFP, and BBC use tools like ClaimBuster and Google Fact Check.

[Image: Newsroom workflow diagram showing claims flowing through verification APIs and returning cited verdicts]
Teja Thota

Building Webcite, the fact-checking and citation API for AI applications.

Reuters Fact Check publishes hundreds of verified reports each month across 16 languages, according to Reuters, 2025. The Duke Reporters’ Lab counted 443 active fact-checking projects worldwide in its 2025 census, operating in 116 countries and more than 70 languages, according to Reporters’ Lab, 2025. At this scale, manual verification alone cannot keep pace. This article examines how major news organizations use APIs and automated tools to verify claims, and where a verification API fits into that workflow.

Key Takeaways
  • Full Fact's automated tools process roughly 330,000 sentences per weekday across broadcast, news, and social media.
  • Factiverse tracked 1,123 claims in real time during the 2024 US presidential and VP debates.
  • Google Fact Check Tools API searches over 200,000 existing fact checks from 800+ publishers worldwide.
  • ClaimBuster uses NLP models to score which political claims are most worth checking.
  • A verification API like Webcite can slot into existing newsroom pipelines to add source-backed verdicts in seconds.
Newsroom Claim Verification: The process of systematically checking factual statements in news content against authoritative sources using automated tools, APIs, and human editorial judgment to confirm accuracy before or after publication.

The Scale Problem in Modern Fact-Checking

The volume of claims that newsrooms must evaluate has outpaced what human teams can handle manually. According to the Poynter Institute’s 2024 State of the Fact-Checkers report, nearly 60 percent of fact-checking teams said they split their work roughly evenly between internet-based and political misinformation. Meanwhile, a 2024 study by Indiana University found that just 0.25 percent of X (formerly Twitter) users were responsible for 73 to 78 percent of all low-credibility content shared on the platform, according to DemandSage, 2026.

A single political debate generates hundreds of verifiable claims in 90 minutes. A major election cycle produces thousands of shareable claims per day across social media, broadcast, and print. The International Fact-Checking Network (IFCN) lists 170 member organizations as of 2024, according to Poynter IFCN, 2024, but even collectively, these organizations cannot manually review every claim at the speed social media distributes them.

This is why the newsroom fact-checking stack has shifted from purely editorial to API-assisted. The tools described below represent how leading organizations have automated parts of their verification pipeline without sacrificing editorial rigor.

Google Fact Check Tools API

The Google Fact Check Tools API is the most widely used starting point for programmatic fact-checking. It provides two core capabilities: a Claim Search endpoint that queries existing fact checks from hundreds of publishers, and a ClaimReview Markup API that lets publishers tag their own fact checks with structured data, according to Google Developers, 2025.

The Claim Search endpoint is straightforward. You send a query string and get back matching fact checks:

const response = await fetch(
  "https://factchecktools.googleapis.com/v1alpha1/claims:search?" +
  new URLSearchParams({
    query: "climate change causes flooding",
    languageCode: "en",
    key: "YOUR_GOOGLE_API_KEY"
  })
)

const data = await response.json()
// data.claims[0].claimReview[0].textualRating: "Mostly True"
// data.claims[0].claimReview[0].publisher.name: "PolitiFact"

However, academic research from the University of Michigan revealed a significant limitation: in a study of 1,000 false claims, 842 (84.2 percent) returned no fact-check results, according to arXiv, 2024. The 158 claims that did return results had a 94.46 percent relevance rate. This means the Google Fact Check Tools API works well for claims that a publisher has already checked, but it cannot verify novel claims.

This is the gap where tools like ClaimBuster, Factiverse, and Webcite operate: they verify claims that have not been previously reviewed.
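Given that limitation, a natural first step in any pipeline is a routing check: surface the existing rating when Claim Search finds one, and hand the claim to a downstream verifier only when it comes back empty. The sketch below is illustrative, not part of any of these APIs; it operates on the Claim Search response shape shown above, and the "novel" route stands in for whichever verifier you choose.

```javascript
// Decide where to send a claim based on a Google Claim Search response.
// Illustrative sketch: the response shape matches the example above.
function routeClaim(searchResponse) {
  const claims = searchResponse.claims ?? []
  if (claims.length > 0) {
    // An existing fact check was found; surface its published rating.
    const review = claims[0].claimReview[0]
    return { route: "existing", rating: review.textualRating }
  }
  // No prior review (the 84.2 percent case): send the claim on to a
  // verification API such as ClaimBuster, Factiverse, or Webcite.
  return { route: "novel" }
}
```

In practice the empty-response branch is the common one, so the downstream verifier, not Claim Search, carries most of the load.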

ClaimBuster: Identifying What to Check

ClaimBuster, developed at the University of Texas at Arlington, approaches the problem from a different angle. Rather than verifying claims, it identifies which claims are worth verifying, according to UT Arlington IDIR Lab, 2025.

The tool uses natural language processing to score sentences on a 0-to-1 scale based on how “check-worthy” they are. A statement like “unemployment fell to 3.7 percent last quarter” scores high because it contains a specific, verifiable number. A statement like “we need to do better” scores low because it is opinion.

ClaimBuster’s API sends daily email alerts to professional fact-checkers at organizations including CNN, PolitiFact, and FactCheck.org with the most check-worthy claims from TV transcripts and social media. Post-hoc analysis showed a strong positive correlation between ClaimBuster’s rankings and the claims that journalists independently chose to check, according to VLDB, 2017.

The API is free to use after registration:

const response = await fetch(
  "https://idir.uta.edu/claimbuster/api/v2/score/text/",
  {
    method: "POST",
    headers: {
      "x-api-key": "YOUR_CLAIMBUSTER_KEY",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      input_text: "The senator claimed GDP grew 4.2 percent in Q3."
    })
  }
)

const data = await response.json()
// data.results[0].score: 0.89 (highly check-worthy)

ClaimBuster solves the prioritization problem. In a debate with 200 factual statements, it tells the newsroom which 15 to check first. But it does not perform the verification itself. That requires a different tool.

Full Fact: Automation at Broadcast Scale

Full Fact, the United Kingdom’s largest independent fact-checking organization, built its own automated monitoring system that processes roughly 330,000 sentences on a typical weekday, according to Full Fact, 2025. The system monitors broadcast TV, radio, social media, and online news in near-real time.

Through 2024, Full Fact’s tools supported fact-checkers monitoring 12 national elections across Africa, Europe, and Asia, according to Full Fact, 2024. In South Africa, Nigeria, Liberia, the Democratic Republic of the Congo, and Senegal, the tools helped local fact-checking organizations track claims across TV, radio, YouTube, and social media simultaneously.

Full Fact’s system works in three stages. First, it transcribes and ingests content from multiple sources. Second, it uses NLP to identify claims that match patterns of known misinformation or contain specific verifiable assertions. Third, it flags those claims for human reviewers with relevant context and potential sources.

The scale is significant. A human fact-checker can realistically review 5 to 10 claims per day with thorough sourcing. Full Fact’s automated pipeline narrows 330,000 daily sentences down to the handful that require human attention. This is the difference between automated fact-checking and manual review: automation handles volume, humans handle judgment.
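Full Fact's actual models are far more sophisticated than anything shown here, but the funnel's second stage can be illustrated with a deliberately naive heuristic: keep only sentences that contain a specific figure, since those tend to be verifiable assertions. Everything in this sketch is an assumption for illustration, not Full Fact's method.

```javascript
// A deliberately naive stand-in for a check-worthiness filter:
// keep only sentences containing a digit (a number, percentage,
// or year), which tend to be specific, verifiable assertions.
function filterCheckWorthy(sentences) {
  const hasFigure = /\d/
  return sentences.filter((sentence) => hasFigure.test(sentence))
}

const sample = [
  "We need to do better.",                            // opinion, no figure
  "Unemployment fell to 3.7 percent last quarter."    // specific and checkable
]
// Only the second sentence survives the filter.
```

A production system would replace the regex with an NLP model, but the shape is the same: a cheap filter that discards the bulk of the stream before anything expensive runs.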

Factiverse: Real-Time Claim Verification

Factiverse, a Norwegian startup that secured 1 million euros in funding, provides real-time fact-checking that combines claim detection with actual verification, according to Factiverse, 2025. Unlike ClaimBuster (which only identifies claims) or Google Fact Check Tools (which only searches existing reviews), Factiverse performs end-to-end verification.

During the 2024 US presidential and VP debates, Factiverse tracked 1,123 claims in real time, according to Factiverse, 2024. The system searched Bing, Google, Wikipedia, Semantic Scholar, and its own database of over 300,000 existing fact checks to verify each claim as it was spoken.

NRK, Norway’s state broadcaster, uses Factiverse daily for resource-critical events like elections and climate summits. Viestimedia, Finland’s second-largest media company, integrated the Factiverse API into its content management system. TjekDet, a Danish fact-checking organization, used it to uncover a misleading claim by a politician buried in hours of radio content, a claim that would have gone unnoticed in a manual review.

The API supports CMS integration, allowing newsrooms to verify claims without leaving their existing editorial tools.

const response = await fetch("https://api.factiverse.ai/v1/verify", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_FACTIVERSE_KEY",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    claim: "Norway generates 98% of its electricity from hydropower",
    language: "en"
  })
})

const data = await response.json()
// data.verdict: "supported"
// data.sources: [{ title: "IEA Norway Report", url: "..." }]

BBC Verify: Combining OSINT with API-Driven Checks

BBC Verify, launched in 2023, represents a different model: a dedicated verification unit staffed by over 60 journalists who combine open-source intelligence (OSINT), satellite imagery, forensic analysis, and data verification, according to BBC, 2025. In June 2025, the BBC launched Verify Live, a real-time blog showing audiences what claims are being investigated and how they are being checked.

According to the UK communications regulator Ofcom, 2024, 26 percent of UK adults used a fact-checking website or tool at least once in 2024, with BBC Verify being the most recognized and most used. The Reuters Institute for the Study of Journalism considers it one of the most trusted fact-checking sources in the UK.

BBC Verify also participates in Project Origin, an alliance with Microsoft, CBC, Media City Bergen, Telegraaf, and IPTC to establish a chain of trust from publisher to consumer, according to WAN-IFRA, 2024. This initiative focuses on content authentication technology that verifies the provenance of media assets using cryptographic signatures.

The BBC model shows that even with substantial human resources, automated tools are necessary for monitoring the volume of claims across platforms. Their approach uses APIs for initial claim detection and source discovery, then routes flagged content to specialist journalists for deep verification.

ClaimReview: The Standard That Connects Everything

All of these tools share a common data format: ClaimReview, the Schema.org structured data standard that encodes fact checks in a machine-readable format, according to Schema.org, 2025. Over 38,000 fact-check articles were tagged with ClaimReview in the first five months of 2025, according to Reporters’ Lab, 2025.

ClaimReview matters because it makes fact checks discoverable. When PolitiFact rates a claim “Mostly False” and tags it with ClaimReview markup, that rating becomes searchable through Google Fact Check Tools API, surfaceable in Google Search results, and available to any application that queries the ClaimReview ecosystem.

AFP (Agence France-Presse), one of the world’s three largest news agencies, publishes fact checks in over 80 countries and tags them with ClaimReview for cross-platform distribution. The Associated Press (AP) does the same, making its verification verdicts available to the hundreds of local news outlets that syndicate AP content.

For newsrooms building their own verification pipelines, ClaimReview is the integration point. You can query existing ClaimReview data through the Google Fact Check Tools API and publish your own fact checks in the same format.
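A minimal ClaimReview record looks like the object below, which a publisher would embed in a fact-check page as JSON-LD. The URLs, names, and rating values are illustrative placeholders, not real fact checks; the property names (claimReviewed, itemReviewed, reviewRating) follow the Schema.org ClaimReview vocabulary.

```javascript
// A minimal ClaimReview record, as it would appear inside a
// <script type="application/ld+json"> tag on a fact-check article.
// All values here are illustrative placeholders.
const claimReview = {
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  url: "https://example.org/fact-checks/sea-level-claim",
  claimReviewed: "Global sea levels rose 3.6mm per year between 2006 and 2015",
  itemReviewed: {
    "@type": "Claim",
    author: { "@type": "Person", name: "Example Speaker" }
  },
  author: { "@type": "Organization", name: "Example Fact Check Desk" },
  reviewRating: {
    "@type": "Rating",
    ratingValue: 5,
    bestRating: 5,
    worstRating: 1,
    alternateName: "True"      // the human-readable verdict
  }
}
```

Once a page carries this markup, the verdict becomes queryable through the Google Fact Check Tools API and eligible to surface in search results.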

How Webcite Fits Into Newsroom Workflows

The tools above each solve part of the problem. ClaimBuster identifies which claims to check. Google Fact Check Tools searches existing reviews. Full Fact monitors broadcast at scale. Factiverse verifies claims in real time. But many newsrooms need a simpler, single-API solution that takes a claim and returns a verdict with sources.

Webcite provides this. A newsroom can send any claim to the Webcite API and receive a structured response with a verdict, confidence score, and cited sources:

const response = await fetch("https://api.webcite.co/api/v1/verify", {
  method: "POST",
  headers: {
    "x-api-key": "your-api-key",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    claim: "Global sea levels rose 3.6mm per year between 2006 and 2015",
    include_stance: true,
    include_verdict: true
  })
})

const result = await response.json()
// result.verdict.result: "supported"
// result.verdict.confidence: 91
// result.citations: [
//   { title: "IPCC AR6", url: "...", stance: "supports" },
//   { title: "NASA Sea Level", url: "...", stance: "supports" }
// ]

The integration pattern for a newsroom CMS is straightforward:

  1. A journalist writes or receives a story containing factual claims
  2. The CMS extracts claims from the text (using ClaimBuster or simple NLP)
  3. Each claim is sent to the Webcite API for verification
  4. The API returns verdicts with sources that the journalist can review
  5. Supported claims get citation links; unsupported claims get flagged for manual review

This fits naturally alongside existing tools. ClaimBuster can prioritize which claims to send to Webcite. Google Fact Check Tools can check whether a claim has already been reviewed. Webcite handles the novel claims that neither tool covers.

The free tier provides 50 credits per month, enough for a small newsroom to verify roughly 12 claims per day. The Builder plan at $20/month provides 500 credits. For large newsrooms processing hundreds of claims daily, Enterprise plans start at 10,000 credits.

Building a Complete Verification Pipeline

A newsroom that wants to implement API-driven fact-checking does not need to choose one tool. The most effective approach combines multiple tools in a pipeline:

Stage 1: Ingest and monitor. Use Full Fact’s approach of monitoring multiple content sources (broadcast, social media, news feeds) or build a simpler RSS-based ingestion system.

Stage 2: Prioritize. Run incoming claims through ClaimBuster’s API to score check-worthiness. Focus human and API resources on claims that score above 0.7.

Stage 3: Check existing reviews. Query Google Fact Check Tools API to see if any publisher has already reviewed the claim. If a credible organization has rated it, surface that rating.

Stage 4: Verify novel claims. For claims with no existing review, send them to a verification API like Webcite or Factiverse for automated source checking.

Stage 5: Human review. Route low-confidence results and high-stakes claims to human fact-checkers for final judgment.

Stage 6: Publish and tag. Publish fact checks with ClaimReview markup so they enter the shared ecosystem and become searchable by other newsrooms and platforms.

This pipeline handles the scale problem. The 330,000 daily sentences that Full Fact processes would overwhelm any human team. But with automated prioritization, existing-review lookups, and API-based verification, the human team only sees the claims that genuinely require editorial judgment.
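Stages 2 through 6 can be sketched as a single control-flow function. Every helper name here is a hypothetical placeholder for one of the APIs described above; only the routing logic is the point, and the 0.7 and 80 thresholds are the illustrative cutoffs discussed in this article, not vendor defaults.

```javascript
// A sketch of stages 2-6 of the pipeline. Each helper is a
// hypothetical wrapper around one of the APIs discussed above.
async function processClaim(claim, helpers) {
  const { scoreCheckWorthiness, searchExistingReviews,
          verifyClaim, queueForHumanReview, publishWithClaimReview } = helpers

  // Stage 2: prioritize; skip claims that are not check-worthy.
  const score = await scoreCheckWorthiness(claim)      // e.g. ClaimBuster
  if (score < 0.7) return { status: "skipped", score }

  // Stage 3: reuse an existing review if one exists.
  const existing = await searchExistingReviews(claim)  // e.g. Google Fact Check Tools
  if (existing) return { status: "existing", review: existing }

  // Stage 4: verify the novel claim against sources.
  const verdict = await verifyClaim(claim)             // e.g. Webcite or Factiverse

  // Stage 5: low-confidence results go to a human fact-checker.
  if (verdict.confidence < 80) {
    await queueForHumanReview(claim, verdict)
    return { status: "needs-review", verdict }
  }

  // Stage 6: publish and tag with ClaimReview markup.
  await publishWithClaimReview(claim, verdict)
  return { status: "published", verdict }
}
```

The early returns are what make the economics work: most claims exit at stage 2 or 3, so the expensive verification and human-review stages only ever see a small fraction of the stream.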

In 2024, 30 percent of professional fact-checkers integrated AI into their workflows, and the AI fact-checking market reached $1.52 billion, according to Full Fact AI, 2025. As misinformation volumes continue to grow and 86 percent of global citizens report having been exposed to misinformation, according to DemandSage, 2026, API-driven verification is no longer optional for newsrooms that want to maintain credibility at scale.

Frequently Asked Questions

What APIs do news organizations use for fact-checking?

News organizations use several APIs including Google Fact Check Tools API for searching existing fact checks, ClaimBuster for identifying checkable claims, Factiverse for real-time verification, and Full Fact’s automated tools for monitoring broadcast and social media content. Many organizations combine multiple APIs in their workflows.

How many claims can fact-checking APIs process per day?

Full Fact’s automated tools process roughly 330,000 sentences on a typical weekday across broadcast, social media, and news sources. ClaimBuster continuously monitors political speeches and debates. Factiverse tracked 1,123 claims in real time during the 2024 US presidential debates alone.

What is ClaimReview and why does it matter for fact-checking?

ClaimReview is a structured data schema developed by Schema.org that standardizes how fact checks are tagged and shared. Over 38,000 fact-check articles were tagged with ClaimReview in the first five months of 2025. Google, Bing, and Meta use ClaimReview markup to surface fact checks in search results.

Can smaller newsrooms afford automated fact-checking tools?

Yes. Google Fact Check Tools API is free. ClaimBuster offers free API access for registered users. Webcite provides a free tier with 50 credits per month. These tools let smaller newsrooms automate claim detection and verification without the budgets of organizations like Reuters or the BBC.

How does a verification API differ from manual fact-checking?

Manual fact-checking requires a journalist to research each claim individually, which can take 30 minutes to several hours per claim. A verification API processes claims in seconds by automatically searching sources, scoring credibility, and returning a structured verdict with citations. APIs handle volume; humans handle nuance and context.