How MIT Taught AI to Read Like a Human with Recursive Language Models (RLM)

Have you ever asked an AI to analyze a long report or a big document, only to get a summary that misses the most important details? It’s a common problem. Even the most powerful AIs today can get lost when you give them too much information at once. They start to “forget” key facts buried in the text, making their answers unreliable. This has been a major roadblock, forcing us to break large documents into smaller pieces and feed them to the AI one at a time.

But what if there was a smarter way? Imagine an AI that could read a massive document like a human researcher skimming for important sections, searching for keywords, and then diving deep to find the exact information it needs. That’s the revolutionary idea behind a new AI design from MIT called a Recursive Language Model (RLM), and it’s changing what’s possible with artificial intelligence.

From Reading Everything to Smart Investigation

Most AIs today work by trying to stuff as much information as possible into their short-term memory. The more you give them, the more diluted their attention becomes, and they start making mistakes. It’s like trying to drink from a fire hose; you’re bound to miss a lot.

RLMs take a completely different approach. Instead of just reading a document from start to finish, the AI acts like a detective investigating a case. It treats the document as a crime scene to be actively explored.

Here’s a simple breakdown of how it works:

  1. The Document Becomes a Searchable Space: The entire document is made available to the AI, but it doesn’t read it all at once. It’s more like having a huge library at its disposal.
  2. The AI Becomes a Problem-Solver: The main AI gets the user’s question (e.g., “Find the total revenue in the financial report”). It then thinks about the best way to find the answer in the library.
  3. A Team of Helper AIs: The main AI can delegate smaller tasks to a team of “helper” AIs. For example, it might tell one helper to search for the word “revenue,” another to find all the tables, and a third to read the summary section. It’s like a lead detective assigning different tasks to junior detectives.
  4. Putting the Clues Together: The main AI gathers all the reports from its helpers, pieces together the clues, and comes up with a final, accurate answer.
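The delegation loop above can be sketched in a few lines of Python. This is a toy illustration, not MIT’s actual implementation: `ask_helper` stands in for a real sub-model call and simply does keyword matching.

```python
def ask_helper(question: str, chunk: str) -> str:
    """Stand-in for a helper-AI call: report lines mentioning the last keyword."""
    keyword = question.split()[-1].lower()          # e.g. "revenue"
    hits = [line for line in chunk.splitlines() if keyword in line.lower()]
    return "\n".join(hits)

def root_model(question: str, document: str, chunk_size: int = 500) -> str:
    """Root model: split the document, delegate each chunk, combine the clues."""
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    reports = [ask_helper(question, chunk) for chunk in chunks]
    return " | ".join(report for report in reports if report)

doc = "Quarterly update.\nTotal revenue: $12M.\nPage footer."
print(root_model("Find the total revenue", doc))   # -> Total revenue: $12M.
```

A real RLM would use an actual language model for both roles and synthesize the reports into prose rather than joining them, but the divide-delegate-combine shape is the same.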

This clever process allows the AI to focus its brainpower on the most relevant parts of the document, rather than getting bogged down by unnecessary details. This diagram shows how the main AI works with its team of helpers:

Diagram illustrating a recursive language model architecture, showing user input, large context handling with over 10 million tokens, interaction with root and sub-language models, execution in a REPL environment, and the flow leading to the final answer.

By breaking down a big problem into smaller, manageable steps, the AI can solve incredibly complex questions that would stump other systems.

Flowchart illustrating a seven-step process for querying a language model, including loading context, receiving a query, generating Python code, searching context, calling sub-models, combining results, and finalizing the answer.
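Those seven steps can be mimicked in miniature. In the sketch below, the “model-generated” Python is hard-coded (in a real RLM, the root model would write this search code itself); the system then executes it in a namespace where the full context lives as an ordinary variable, outside the prompt.

```python
# Step 1: the long context is loaded as a plain Python variable, not into the prompt.
context = "\n".join(f"line {i}: filler text" for i in range(10_000))
context += "\nline 10000: SECRET=42"

# Step 2: the query arrives.
query = "What is the value of SECRET?"

# Step 3: in a real RLM the root model writes this snippet; here it is hard-coded.
generated_code = """
matches = [line for line in context.splitlines() if "SECRET" in line]
answer = matches[0].split("=")[1]
"""

# Steps 4-6: execute the model-written search code against the context.
namespace = {"context": context}
exec(generated_code, namespace)

# Step 7: finalize the answer.
final_answer = namespace["answer"]
print(final_answer)   # -> 42
```

The key point is that the model only ever sees the small slices its own code surfaces, never the full ten thousand lines at once.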

The Surprising Results: Smaller, Smarter, and Better

The most amazing part of the MIT research is that this new method works incredibly well. In a head-to-head challenge on a long-context benchmark, an RLM built on the smaller, cheaper GPT-5-mini beat the much larger, more expensive GPT-5 by over 114%.

Bar chart comparing OOLONG benchmark performance scores for GPT-5 (30.2 points), GPT-5-mini (20.3 points), and RLM(GPT-5-mini) (64.7 points), highlighting a 114% improvement for RLM over GPT-5.

This shows that a smarter approach is far more effective than just building a bigger AI. The RLM’s advantage grows even larger when dealing with enormous documents. While other AIs get confused and their performance drops, the RLM stays sharp, even when searching through more than 10 million tokens of text.

Bar graph comparing performance percentages of Traditional LLMs and RLM across different context sizes (tokens), highlighting the issue of 'context rot' for Traditional LLMs at 5M tokens.

In one test that required finding information across more than 1,000 separate documents, the RLM found the correct answer every single time, while other methods failed.

Bar chart comparing accuracy percentages of different models on the BrowseComp-Plus Deep Research Task, showing the accuracy of GPT-5 (truncated) at 40%, GPT-5 + BM25 at 60%, ReAct + GPT-5 + BM25 at 80%, and RLM (GPT-5) at 100%.

See It for Yourself: A Fun, Hands-On Demo

To help everyone understand this technology, I built a web app that lets you see an RLM in action. The app gives the AI a classic “needle in a haystack” challenge: find a secret number hidden somewhere in one million lines of text.

Screenshot of an RLM demo interface created by Lawrence Teixeira, showing input and execution log sections with configuration settings.

You can watch on the screen as the AI works through the problem, delegating tasks and narrowing down its search until it finds the hidden number. It’s a great way to see this new kind of AI thinking in real-time.
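You can reproduce the demo’s setup at a smaller scale. The sketch below hides a secret number in 100,000 generated lines (the app uses a million) and closes in on it by repeatedly halving the haystack, loosely mirroring how the RLM narrows its search; `contains_secret` stands in for asking a helper model about a chunk.

```python
import random

N_LINES = 100_000
secret_line = random.randrange(N_LINES)
lines = [f"line {i}: nothing to see here" for i in range(N_LINES)]
lines[secret_line] = f"line {secret_line}: the secret number is 7319"

def contains_secret(chunk: list[str]) -> bool:
    """Stand-in for asking a helper model whether its chunk holds the needle."""
    return any("secret number" in line for line in chunk)

def narrow(chunk: list[str]) -> str:
    """Recursively halve the haystack, descending into the half with the needle."""
    if len(chunk) == 1:
        return chunk[0]
    mid = len(chunk) // 2
    left, right = chunk[:mid], chunk[mid:]
    return narrow(left) if contains_secret(left) else narrow(right)

print(narrow(lines))   # the single line containing the secret number
```

Only about 17 halvings are needed to isolate one line out of 100,000, which is why this style of search stays fast even on huge inputs.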

Why This Matters: More Power and Privacy for Everyone

This new approach does more than just improve performance. It gives more people access to powerful AI and helps solve some of the biggest problems with AI today.

  1. It Solves the “Forgetting” Problem: The AI no longer gets lost in long documents.
  2. It Protects Your Privacy: Because this method is so efficient, it can run on your own computer. This means you can analyze sensitive financial or medical records without your data ever leaving your control.
  3. You’re in Charge: You don’t have to rely on big tech companies to use powerful AI. You can run it yourself, on your own terms.

For businesses, this is a game-changer. Imagine an AI that can review thousands of legal contracts for risks, or a programmer’s assistant that can find a single bug in millions of lines of code. These are the kinds of powerful tools that RLMs make possible.

What’s the limitation of RLM?

The main limitation of RLM is that its power comes with overhead and complexity. When the input is short and the task is simple, using the base model directly is often faster and more efficient, since RLM adds extra steps like environment interaction and recursive calls.

In its current form, RLM relies on synchronous, blocking submodel calls, which increase end-to-end latency and can slow responses. The paper also notes that system prompts are fixed and not tailored to different task types, leaving performance gains on the table.
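The latency point is easy to demonstrate. In this sketch each sub-call is a stub that just sleeps for 50 ms; three blocking calls in a row take roughly three times as long as three issued concurrently, which is the kind of optimization the current synchronous design leaves unexploited.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def sub_call(task: str) -> str:
    """Stub sub-model call: pretend each one takes 50 ms of wall-clock time."""
    time.sleep(0.05)
    return f"result for {task}"

tasks = ["search tables", "search summary", "search footnotes"]

start = time.perf_counter()
sync_results = [sub_call(t) for t in tasks]          # blocking, one after another
sync_elapsed = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    async_results = list(pool.map(sub_call, tasks))  # all three in flight at once
async_elapsed = time.perf_counter() - start

print(f"sequential: {sync_elapsed:.2f}s, concurrent: {async_elapsed:.2f}s")
```

With real model calls taking seconds rather than milliseconds, the gap between the two strategies grows accordingly.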

Finally, letting the model write and execute code inside a REPL introduces real engineering challenges, especially around security isolation, safety, and predictable behavior.

In short, RLM is powerful for hard, large-scale problems, but it is heavier, slower, and more complex than standard models for simple tasks.

Read the Official Research Paper

If you want to dive deeper into the technical details behind Recursive Language Models, the MIT researchers have published their full findings in an official paper. You can read the complete research, including all the experiments and results, on arXiv:

Official Paper: Recursive Language Models – The full academic paper by Alex Zhang and the MIT CSAIL team.

Conclusion

Scaffolding to handle extremely long contexts is becoming increasingly important for LLMs, and context folding is a promising approach in this direction. We currently believe that the Recursive Language Model is the best method for context folding, due to its simplicity and, at the same time, great flexibility and extensibility. The future of AI isn’t just about raw power; it’s about intelligence, efficiency, and a new, recursive way of solving problems.

That’s it for today!

Sources

[1] Recursive Language Models (RLM): A New Paradigm for Retrieval-Augmented Language Modeling. (2026). Manus AI Internal Document.

[2] Zhang, A. (2025, October 15). Recursive Language Models. Alex L. Zhang.

[3] Kohli, V. (2026, January 8). Breaking the Context Window: How Recursive Language Models Handle Infinite Input. GetMaxim.ai.

[4] Gibbons, P. (2026, January 19). The MIT RLM: How to Build Powerful Sovereign AI at Home. Think Bigger Think Better.

From Locked PDFs to Limitless AI: The Plain Text Revolution You Can’t Ignore

In today’s world, we’re surrounded by data. From company reports and legal and intellectual property documents to academic papers and scanned invoices, a vast amount of our collective knowledge is stored in PDF files. For decades, PDFs have been the digital equivalent of a printed page, easy to share and view, but incredibly difficult to work with. This has created a massive bottleneck in the age of Artificial Intelligence (AI).

As a technology leader, you’re constantly looking for ways to leverage AI to drive business value. But what if your most valuable data is trapped in a format that AI can’t understand? This is the challenge that a new wave of technology is solving, and it all starts with a surprisingly simple solution: plain text.

The Surprising Power of Plain Text: What is Markdown?

If you’ve ever written a quick note on your computer or sent a text message, you’ve used plain text. Markdown is a plain-text markup language that uses characters you already know to add simple formatting. For example, you can create a heading by putting a # in front of a line, or make text bold by wrapping it in **asterisks**.

This might not sound revolutionary, but it’s a game-changer for AI. Unlike complex file formats like PDFs or Word documents, which are filled with hidden formatting code, Markdown is clean, simple, and easy for both humans and computers to read. It separates the meaning of your content from its appearance, which is exactly what AI needs to understand it.

Markdown
---
title: "Markdown.md in 5 minutes (with a real example)"
author: "Your Name"
date: "2026-01-11"
tags: [markdown, docs, productivity]
---

# Markdown.md in 5 minutes ✅

Markdown (`.md`) is a plain-text format that turns into nicely formatted content in places like GitHub, GitLab, docs sites, and note apps.

> Tip: Keep it readable even **without** rendering. That’s the magic.

---

## Table of contents

- [Why Markdown?](#why-markdown)
- [Formatting essentials](#formatting-essentials)
- [Lists](#lists)
- [Task list (GFM)](#task-list-gfm)
- [Links and images](#links-and-images)
- [Code blocks](#code-blocks)
- [Tables (GFM)](#tables-gfm)
- [Mini “README” section](#mini-readme-section)
- [Resources](#resources)

---

## Why Markdown?

- **Fast** to write
- **Portable** (works across tools)
- **Version-control friendly** (diffs are clean)

Use cases:
- README files
- technical docs
- meeting notes
- product specs
- blog posts

---

## Formatting essentials

This is **bold**, this is *italic*, and this is `inline code`.

This is ~~strikethrough~~ (supported on many platforms like GitHub).

### Headings

- `# H1`
- `## H2`
- `### H3`

### Blockquote

> “Markdown is where docs and code finally get along.”

### Horizontal rule

---

## Lists

### Unordered list

- Item A
- Item B
  - Nested item B1
  - Nested item B2

### Ordered list

1. Step one
2. Step two
3. Step three

---

## Task list (GFM)

- [x] Write the first draft
- [ ] Add screenshots
- [ ] Publish the post

---

## Links and images

### Link

Read more: [My project page](https://example.com)

### Image

![Alt text describing the image](https://placehold.co/1200x630/png?text=Markdown+Example)

> Tip: If your platform doesn’t allow external images, use local paths:
> `![Diagram](images/diagram.png)`

---

## Code blocks

### Python (syntax-highlighted)

```python
def summarize_markdown(text: str) -> str:
    return f"Markdown length: {len(text)} chars"
```

Why AI Loves Markdown: A Non-Technical Guide to Token Efficiency

To understand why AI prefers Markdown, we need to talk about something called “tokens.” You can think of tokens as the words or parts of words that an AI reads. Every piece of information you give to an AI, whether it’s a question or a document, is broken down into these tokens. The more tokens there are, the more work the AI has to do, which means more time and more cost.

This is where Markdown shines. Because it’s so simple, it uses far fewer tokens than other formats to represent the same information. This means you can give the AI more information for the same cost, or process the same information much more efficiently.

A bar graph comparing token efficiency of different file formats including JSON, XML, HTML, and Markdown, indicating that Markdown uses 30-60% fewer tokens than JSON.

As you can see, Markdown is significantly more efficient than other formats. This isn’t just a technical detail—it has real-world implications. It means you can analyze more documents, get faster results, and ultimately, build more powerful AI applications.
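You can get a feel for this with a toy tokenizer. Real LLM tokenizers are BPE-based and behave differently, and the exact savings depend on the data, so treat the counts below as illustrative only, not as the source of the chart’s 30-60% figure.

```python
import json
import re

# The same two-row table expressed in JSON and in Markdown.
records = [
    {"product": "Widget", "revenue": 1200},
    {"product": "Gadget", "revenue": 3400},
]
as_json = json.dumps(records, indent=2)
as_markdown = (
    "| product | revenue |\n"
    "|---|---|\n"
    "| Widget | 1200 |\n"
    "| Gadget | 3400 |"
)

def rough_tokens(text: str) -> int:
    """Crude proxy for an LLM tokenizer: split into words and punctuation marks."""
    return len(re.findall(r"\w+|[^\w\s]", text))

print("JSON tokens:    ", rough_tokens(as_json))
print("Markdown tokens:", rough_tokens(as_markdown))
```

Even this crude count shows the pattern: JSON repeats keys, quotes, and braces for every record, while the Markdown table states the column names once.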

The “PDF Problem”: Why You Can’t Just Copy and Paste

So, why can’t we just copy text from a PDF and give it to an AI? The problem is that PDFs were designed for printing, not for data extraction. A PDF only knows where to put text and images on a page; it doesn’t understand the structure of the content.

When you try to extract text from a PDF, especially one with columns, tables, or complex layouts, you often end up with a jumbled mess. The reading order gets mixed up, tables become gibberish, and important context is lost. For an AI, this is like trying to read a book that’s been torn apart and shuffled randomly.

Side-by-side comparison of an original PDF monthly financial report and its traditional OCR output, highlighting errors in the OCR extraction process.

This is the “PDF problem” in a nutshell. The valuable information is there, but it’s locked away in a format that’s hostile to AI.
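The reading-order failure is easy to simulate. Imagine a page laid out in two columns; a naive extractor that sweeps each visual row from left to right interleaves them into nonsense:

```python
# Two logical columns as they appear side by side on the page.
left_column = ["Revenue grew 12%", "driven by cloud,", "offset by churn."]
right_column = ["Costs rose 8%", "due to hiring", "and data centers."]

# A naive extractor reads each visual row left to right, mixing the columns.
naive_extraction = []
for left, right in zip(left_column, right_column):
    naive_extraction.append(left)
    naive_extraction.append(right)

print(" ".join(naive_extraction))
# -> "Revenue grew 12% Costs rose 8% driven by cloud, due to hiring ..."

# A structure-aware reader keeps each column's sentences together.
print(" ".join(left_column + right_column))
```

The words are all present in both outputs; only the first one has destroyed the meaning, which is exactly what happens to an AI fed naively extracted PDF text.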

The Solution: How Modern AI Unlocks Your PDFs

Fortunately, a new generation of AI, called Vision Language Models (VLMs), is here to solve this problem. These models can see a document just like a human does. They can understand the layout, recognize tables and headings, and transcribe the content into a clean, structured format like Markdown.

This is where a tool like MarkPDFDown comes in. It uses these powerful VLMs to convert your PDFs and images into AI-ready Markdown, unlocking the knowledge within them.

Flowchart illustrating the process of converting a PDF document into Markdown using Vision Language Models (VLM). The diagram includes icons representing a PDF, images, a VLM, and Markdown.
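In outline, that flow looks like the sketch below. Both functions are stand-ins I made up for illustration: `render_pages` pretends to rasterize a PDF into page images and `vlm_transcribe` pretends to be a Vision Language Model; neither reflects MarkPDFDown’s actual API.

```python
def render_pages(pdf_name: str) -> list[str]:
    """Hypothetical step: rasterize a PDF into one image per page."""
    return [f"{pdf_name}-page-{i}.png" for i in range(1, 3)]

def vlm_transcribe(page_image: str) -> str:
    """Hypothetical step: a real VLM would look at the image and emit Markdown."""
    return f"## Transcription of {page_image}\n\n| item | value |\n|---|---|\n"

def pdf_to_markdown(pdf_name: str) -> str:
    """PDF -> page images -> VLM transcription -> one Markdown document."""
    pages = render_pages(pdf_name)
    return "\n".join(vlm_transcribe(page) for page in pages)

md = pdf_to_markdown("report.pdf")
print(md)
```

The important design point is that the VLM works from the rendered page, so layout, tables, and reading order survive the conversion instead of being lost in raw text extraction.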

Introducing MarkPDFDown: Your Bridge from PDF to AI

MarkPDFDown is a powerful yet simple tool that makes it easy to convert your documents into Markdown. It’s designed for anyone who wants to make their information accessible to AI, without needing a team of data scientists.

User interface of MarkPDFDown tool displaying options to convert PDF files and images into Markdown format.
MarkPDFDown – PDF/Image to Markdown Converter

With MarkPDFDown, you can:

  • Convert PDFs and images to Markdown: Unlock the data in your scanned documents, reports, and other files.
  • Preserve formatting: Keep your headings, lists, tables, and other important structures intact.
  • Process documents in batches: Convert multiple files at once to save time.
  • Choose your AI model: Select from a range of powerful AI models to get the best results for your documents.

The Script Behind the Magic

To give you a peek behind the curtain, here is a snippet of the Python code that powers MarkPDFDown. This script handles file conversion, using the powerful LiteLLM library to interface with various AI models.

Python
import streamlit as st
import os
from PIL import Image
import zipfile
from io import BytesIO
import base64
import time
from litellm import completion

# --- Helper Functions ---

def get_file_extension(file_name):
    return os.path.splitext(file_name)[1].lower()

def is_pdf(file_extension):
    return file_extension == ".pdf"

def is_image(file_extension):
    return file_extension in [".png", ".jpg", ".jpeg", ".bmp", ".gif"]

# ... (rest of the script)

This script is a great example of how modern AI tools are built—by combining powerful open-source libraries with the latest AI models to create simple, effective solutions to complex problems.
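To see those helpers in action, here they are again in a standalone snippet (repeated so it runs without streamlit, PIL, or litellm installed):

```python
import os

# File-type helpers from the snippet above, reproduced for a runnable example.
def get_file_extension(file_name):
    return os.path.splitext(file_name)[1].lower()

def is_pdf(file_extension):
    return file_extension == ".pdf"

def is_image(file_extension):
    return file_extension in [".png", ".jpg", ".jpeg", ".bmp", ".gif"]

for name in ["report.PDF", "scan.jpeg", "notes.txt"]:
    ext = get_file_extension(name)
    kind = "pdf" if is_pdf(ext) else "image" if is_image(ext) else "other"
    print(name, "->", kind)   # report.PDF -> pdf, scan.jpeg -> image, notes.txt -> other
```

Note that lowercasing the extension first is what lets `report.PDF` match the `.pdf` check.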

The Future is Plain Text

The shift from complex, proprietary formats to simple, plain text is more than just a technical trend—it’s a fundamental change in how we interact with information. By making our data more accessible, we’re paving the way for a new generation of AI-powered tools that can understand our knowledge, answer our questions, and help us make better decisions.

As a leader, you don’t need to be a programmer to understand the importance of this shift. By embracing tools like MarkPDFDown and the principles of AI-ready data, you can unlock the full potential of your organization’s knowledge and stay ahead of the curve in the age of AI.

That’s it for today!

Sources

Boosting AI Performance: The Power of LLM-Friendly Content in Markdown

Why Markdown is the best format for LLMs

Improved RAG Document Processing With Markdown

MarkPDFDown GitHub Repository

Lawrence Teixeira’s Blog – Tech News & Insights