Automate AI Fact-Checking for Flawless Content
Reduce content revision time by up to 70%: instantly identify factual inaccuracies and generate detailed error reports.
Manually fact-checking articles against source documents is slow and error-prone, and it delays publication. This workflow automates the comparison of claims in an article against provided factual documents, instantly identifying inaccuracies and generating a comprehensive error report, so published content is verified and high quality.

Documentation
AI-Powered Fact-Checking Workflow
This n8n workflow automates content verification by comparing claims from an article against a reference document. It’s ideal for journalists, content marketers, researchers, and anyone who needs to verify the factual accuracy of written material quickly and efficiently.
Key Features
- Automated Sentence Splitting: Intelligently breaks articles into individual sentences while preserving critical context such as dates and list items (see the sketch after this list).
- Intelligent Claim Verification: Utilizes a specialized Large Language Model (LLM) via Ollama to assess the factual accuracy of each sentence against a provided source document.
- Granular Error Identification: Pinpoints exact sentences that contain factual discrepancies, making revisions straightforward.
- Comprehensive Error Summary: Generates a structured report detailing the number of errors, a list of incorrect statements, and an overall accuracy assessment.
- Configurable LLM Integration: Easily switch between different Ollama models for varied fact-checking capabilities.
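For illustration, here is a minimal sketch of the context-preserving splitting step. The shielding heuristics (protecting common abbreviations and numbered list markers) are assumptions about how such a splitter might work, not the workflow's exact rules.

```typescript
// Sketch of a context-preserving sentence splitter (assumed heuristics,
// not the workflow's exact rules).
function splitIntoSentences(article: string): string[] {
  const shielded = article
    // Shield common abbreviations so "e.g." does not end a sentence.
    .replace(/\b(e\.g|i\.e|etc|Dr|Mr|Ms|vs)\./g, "$1<DOT>")
    // Shield numbered list markers ("1. First item") at line starts.
    .replace(/^(\d+)\./gm, "$1<DOT>");

  return shielded
    .split(/(?<=[.!?])\s+/) // split after ., !, or ? followed by whitespace
    .map((s) => s.replace(/<DOT>/g, ".").trim())
    .filter((s) => s.length > 0);
}
```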
How It Works
The workflow is started either by a manual trigger (for testing) or by input from another workflow, which supplies the article text to be checked and a source document containing the facts. It then proceeds in four steps:
1. Sentence splitting: The article text is split into individual sentences.
2. Claim verification: Each sentence is passed, together with the full source document, to a LangChain-powered LLM (a specialized Ollama model), which answers "yes" if the claim is supported by the source document and "no" otherwise.
3. Filtering: Only the sentences judged factually incorrect ("no") are kept and aggregated.
4. Report generation: A second LLM generates a comprehensive summary report of the identified inaccuracies and classifies the article's overall factual state.
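A minimal sketch of the verification and filtering steps (steps 2 and 3) is below. It calls Ollama's REST API directly with a hypothetical model name and prompt wording; the actual workflow wires this through n8n's LangChain nodes rather than raw HTTP.

```typescript
// Sketch of claim verification against a local Ollama instance.
// Model name, prompt wording, and endpoint are assumptions.
async function verifyClaim(sentence: string, sourceDoc: string): Promise<boolean> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // hypothetical model name
      stream: false,
      prompt:
        `Source document:\n${sourceDoc}\n\n` +
        `Claim: "${sentence}"\n\n` +
        `Is the claim supported by the source document? Answer only "yes" or "no".`,
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response.trim().toLowerCase().startsWith("yes");
}

// Filtering: keep only the sentences the model rejected.
async function findInaccuracies(sentences: string[], sourceDoc: string): Promise<string[]> {
  const flagged: string[] = [];
  for (const s of sentences) {
    if (!(await verifyClaim(s, sourceDoc))) flagged.push(s);
  }
  return flagged;
}
```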
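The report-generation step (step 4) could look like the sketch below. The report contents (error count, list of incorrect statements, overall assessment) follow the description above, while the prompt text and model name are assumptions.

```typescript
// Sketch of the summary-report step. Prompt wording and model name are
// assumptions, not the workflow's actual configuration.
async function summarizeErrors(flagged: string[]): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // hypothetical model name
      stream: false,
      prompt:
        `These statements from an article were found to be factually incorrect:\n` +
        `- ${flagged.join("\n- ")}\n\n` +
        `Write an error report: state the number of errors, list the incorrect ` +
        `statements, and give an overall accuracy assessment of the article.`,
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}
```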