How to Generate 50K-Token Documents: Same LLM, Different Results

TL;DR We compared Dataframer against raw Claude Sonnet 4.5 on long-form text generation; Dataframer overwhelmingly won on diversity, style fidelity, length, and quality.

Long-Form Text Generation Comparison

Alex Lyzhov

Mon Jan 12

Summary

We generated 50K-token documents using the same frontier LLM and got dramatically different results. In the baseline, outputs collapsed into short, repetitive “summary essays”. With Dataframer’s scaffold, we consistently got full-length, style-faithful documents across 4 long-text datasets.

What surprised us most (real examples):

  • Real Estate: baseline repeated “Zoning” 8 times; Dataframer produced 15 distinct topics from 5 inputs.
  • Gutenberg: baseline reused the same plot loop; Dataframer generated genuinely varied stories with strong prose.
  • Wiki Medical: baseline got shorter and added unwanted Markdown; Dataframer stayed long and encyclopedic.

Read on for:

  • The exact evaluation setup (blind, Gemini 3 Pro)
  • The three failure modes (mode collapse, style drift, length shrinkage) and how scaffolding prevents them
  • Full outputs + scripts for reproducing our results

Introduction

Long-form synthetic text generation (10K-100K tokens per sample) is critical for LLM evaluation and training - 2026 LLMs routinely operate with very large prompts and extended chat or agentic traces. This is a challenging domain where maintaining coherence, diversity, and stylistic fidelity matters enormously.

Raw LLMs struggle here: autoregressive generation commits to tokens without lookahead, can’t revise earlier sections, and tends to drift or repeat as context grows. Long texts require global coherence that single-pass generation can’t guarantee. You need intermediate representations (outlines, state tracking), iterative search and editing, verification loops, and revision procedures. In short, sophisticated scaffolding.
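
To make that concrete, here is a minimal sketch of such a scaffold - outline first, then expand section by section with a verification pass. The `complete` helper is a placeholder for any LLM call, and the prompts are illustrative assumptions, not Dataframer's implementation:

```python
# Minimal sketch of an outline-first scaffold with a verification pass.
# `complete` is a placeholder for any LLM completion call; the prompts
# are illustrative assumptions, not Dataframer's implementation.

def complete(prompt: str) -> str:
    """Placeholder for a call to your LLM provider."""
    raise NotImplementedError

def generate_long_document(seed_summary: str, n_sections: int = 12) -> str:
    # Intermediate representation: commit to a global outline first,
    # so later sections cannot silently drift off-plan.
    outline = complete(
        f"Write a {n_sections}-part outline, one heading per line, "
        f"for a document in the style of:\n{seed_summary}"
    ).splitlines()[:n_sections]

    sections: list[str] = []
    for heading in outline:
        draft = complete(
            "Outline:\n" + "\n".join(outline) + "\n\n"
            f"Write the full section '{heading}'. Previous sections:\n"
            + "\n\n".join(sections[-2:])  # short rolling context
        )
        # Verification loop: check the draft against the plan and
        # revise once if it drifted.
        verdict = complete(
            f"Does this section fulfil the heading '{heading}'? "
            f"Answer OK or REVISE.\n\n{draft}"
        )
        if verdict.strip().upper().startswith("REVISE"):
            draft = complete(f"Revise to match '{heading}':\n\n{draft}")
        sections.append(draft)
    return "\n\n".join(sections)
```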

What is Dataframer?

Dataframer is a platform for generating high-quality synthetic datasets at scale. You provide example data, and it generates new samples matching the same patterns, distributions, and structure. Under the hood, AI analyzes your seeds to create a specification capturing data properties and distributions, then runs generation with outlining, iterative evaluation loops and revision cycles - exactly the intermediate representations and quality controls that raw prompting lacks. For more details, see our documentation.

[Figure: Dataframer pipeline]

Experiment Setup

Datasets

We manually collected 4 datasets with long texts as seeds for style and formatting conditioning:

| Dataset | Source | Number of Samples | Sample Length | Description |
|---|---|---|---|---|
| Wikisource | Download | 2 texts | 35k-50k tokens | “Results and Prospects” (Trotsky) + “The Time Machine” (H.G. Wells) |
| Gutenberg | Download | 2 texts | 45k-50k tokens | “The Call of the Wild” (Jack London) + “The Time Machine” (H.G. Wells) |
| Wiki Medical | Download | 2 articles | 25k-30k tokens | “Healthcare in Canada” + “History of Medicine” |
| Wiki Real Estate | Download | 5 articles | 5k-15k tokens | NIMBY, Real Estate Economics, Intellectual Property, Property Management, REITs |

Generation Protocol

We generated up to 15 samples for each dataset. Generation followed the standard flow and was almost completely hands-off (a code sketch follows the list):

  1. Load seed data
  2. Generate a spec using Claude Sonnet 4.5 - a blueprint capturing the structure, patterns, and requirements of your data
  3. Make minimal spec edits (see below)
  4. Generate samples
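
As a rough sketch, the protocol amounts to the following. `generate_spec` and `generate_samples` are hypothetical stand-ins, not a published API; the bundled scripts are the real reproduction path:

```python
# Hypothetical sketch of the four-step protocol; function names are
# illustrative stand-ins, not a published API.
from pathlib import Path

def generate_spec(seeds: list[str], model: str) -> str:
    """Step 2: have the LLM distill the seeds into a generation spec."""
    raise NotImplementedError  # call your LLM provider here

def generate_samples(spec: str, seeds: list[str], n: int) -> list[str]:
    """Step 4: generate n documents conditioned on the spec and seeds."""
    raise NotImplementedError

seeds = [p.read_text() for p in Path("seeds").glob("*.txt")]  # step 1
spec = generate_spec(seeds, model="claude-sonnet-4.5")        # step 2
# step 3: minimal manual spec edits happen here (see the table below)
samples = generate_samples(spec, seeds, n=15)                 # step 4
```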

The spec edits were trivial - we only made changes in 2 places across all 4 datasets. See the before/after specs:

| Dataset | Generated Spec | Edited Spec | Change |
|---|---|---|---|
| Wiki Medical | spec | (same) | No changes |
| Wikisource | before | after | Removed last sentence about length (the platform determines length automatically) |
| Wiki Real Estate | spec | (same) | No changes |
| Gutenberg | before | after | Changed “from” to “resembling those from” (we want new fiction, not reproductions) |

There was no cherry-picking: we did not select datasets on which Dataframer performs well, nor did we make algorithm changes for these datasets. All seed datasets, generation specs, and scripts are included for reproducibility.

Baseline

As of January 2026, we are not aware of any commercially available system that offers comparable general-purpose generation of diverse long-form texts. We therefore compare against a raw frontier-LLM baseline: Claude Sonnet 4.5 in low reasoning mode (1,024 reasoning tokens). Importantly, Dataframer uses the same model for all of its internal roles (outlining, generation, filtering, revision). The only difference between the two methods is our agentic scaffold.

Evaluation Methodology

We designed the evaluation to be maximally fair (a sketch of the prompt assembly follows the list):

  • Systems anonymized as “System 1” (Dataframer) and “System 2” (baseline Claude Sonnet 4.5), same number of samples for each
  • Independent evaluator from a different model family: Gemini 3 Pro Preview with high reasoning mode
  • Evaluator received all samples from both systems in one context window and compared them across 7 dimensions: Diversity, Style Distribution Matching, Length, Quality, Artifacts, Validity, and Overall Assessment
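
A minimal sketch of how such a single-context blind comparison prompt can be assembled; the labels match the setup above, but the wording is an assumption, not the exact prompt we used:

```python
# Sketch of assembling the single-context blind comparison prompt.
# Wording is an illustrative assumption, not the exact harness.

DIMENSIONS = ["Diversity", "Style Distribution Matching", "Length",
              "Quality", "Artifacts", "Validity", "Overall Assessment"]

def build_blind_prompt(system1: list[str], system2: list[str]) -> str:
    parts = ["Compare the two anonymized systems below on: "
             + ", ".join(DIMENSIONS) + "."]
    # All samples from both systems go into one context window.
    for name, samples in (("System 1", system1), ("System 2", system2)):
        for i, text in enumerate(samples, 1):
            parts.append(f"--- {name}, sample {i} ---\n{text}")
    parts.append("For each dimension, compare the systems and pick a winner.")
    return "\n\n".join(parts)
```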

Results

At a Glance

| Dataset | Dataframer | Baseline (Sonnet 4.5) |
|---|---|---|
| Wikisource | Full novellas with compelling plots, authentic period voices | Only produced dry essays, ignored fiction entirely |
| Gutenberg | Superb prose quality, massive creativity | Plot loop - same expedition story repeated |
| Wiki Real Estate | 15 unique topics from 5 inputs, perfect style match | 8x “Zoning”, 4x “Land Value Tax” |
| Wiki Medical | Long-context coherence, encyclopedic depth | Too short, added unwanted Markdown formatting |

The key insight: same model (Claude Sonnet 4.5), same seeds - the only difference is Dataframer’s agentic scaffold.

Deep Dive: Wikisource

| Criterion | Dataframer | Sonnet 4.5 Baseline |
|---|---|---|
| Diversity | Exceptional - political treatises, epistolary novels, sci-fi, utopias. Creatively merges both inputs. | Very low - nearly all dry expository essays. No fiction, no dialogue. Repetitive titles. |
| Style Distribution | Matches both input styles. Reproduces Wikisource formatting (nav arrows, metadata). Authentic period voices. | Fails - homogenizes everything into a generic “academic” voice. |
| Length | Massive long-form content - full novellas with Preface-to-Epilogue structure. | Short-to-medium essays, summary-based, lacking depth. |
| Quality | Extraordinary - compelling plots, character arcs, authentic world-building. | Mediocre - reads like undergraduate summaries. |
| Artifacts | Intentionally reproduces Wikisource artifacts (nav links, page numbers). | Strips all formatting. |
| Validity | High - historically grounded, internally consistent. | Moderate - logically sound but platitudinous. |

Winner: Dataframer (vastly superior)

The other three datasets showed consistent patterns: Dataframer maintained diversity and style fidelity while the baseline collapsed into repetitive outputs (Gutenberg: same plot structure repeated; Wiki Real Estate: 80% duplicate topics) and introduced unwanted formatting changes.

Full Evaluation Details

Evaluation summaries, full reports, and all generated outputs (both Dataframer and baseline) are available for download. All data (seeds, Dataframer outputs, and baseline outputs) is also available on HuggingFace.


Dataframer Avoids Typical Synthetic Data Failure Modes

The blind evaluation revealed three distinct failure modes in the baseline that Dataframer successfully avoids:

Failure Mode 1: Mode Collapse

The Problem: The baseline repeatedly generates the same topics or formulaic plot structures.

  • Wiki Real Estate: “Zoning” generated 8 times, “Land Value Tax” 4 times out of 15 samples
  • Gutenberg: Every story follows ship → island → ruins → beings → escape
  • Wiki Medical: Duplicate “Medical Education” articles

How Dataframer Avoids It: Strategic diversity injections during the outlining phase ensure each sample explores distinct territory within the semantic space of the seeds.
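
As a concrete illustration, a diversity injection can work by planning all topics up front and pinning one per sample, so outlines cannot collapse onto “Zoning” eight times. The prompt wording below is an illustrative assumption, not Dataframer's actual prompt:

```python
# Sketch of a diversity injection at the outlining stage: plan n
# mutually distinct topics once, then assign one per sample.

def complete(prompt: str) -> str:
    """Placeholder for a call to your LLM provider."""
    raise NotImplementedError

def plan_distinct_topics(seed_texts: list[str], n: int) -> list[str]:
    prompt = (
        f"Propose {n} clearly distinct article topics within the same "
        "domain and register as these seed texts, one per line:\n\n"
        + "\n---\n".join(t[:2000] for t in seed_texts)
    )
    topics = [ln.strip() for ln in complete(prompt).splitlines() if ln.strip()]
    return topics[:n]

# Each sample's outline prompt then embeds its assigned topic plus an
# explicit "do not cover" list of the topics assigned to other samples.
```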

Failure Mode 2: Style Drift

The Problem: The baseline introduces formatting and structural elements not present in the seed data.

  • Added Markdown headers (#, ##) when inputs used plain text
  • Converted dense encyclopedic prose into bullet-point lists and blog-post-style summaries
  • Stripped source-specific formatting artifacts (Wikisource navigation, metadata)

How Dataframer Avoids It: Iterative evaluation and editing loops keep the generated style tightly matched to the original distribution. The system continuously compares output characteristics against seed characteristics.
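
A drift check of this kind can be as cheap as comparing formatting statistics between seeds and a draft; the statistics and tolerance below are illustrative assumptions, not Dataframer's actual checks:

```python
# Cheap style-drift check: compare formatting statistics between the
# seeds and a draft, and flag the draft for revision if they diverge.

def format_stats(text: str) -> dict[str, float]:
    lines = text.splitlines() or [""]
    return {
        "md_header_rate": sum(l.lstrip().startswith("#") for l in lines) / len(lines),
        "bullet_rate": sum(l.lstrip().startswith(("-", "*", "•")) for l in lines) / len(lines),
    }

def style_drifted(seed_texts: list[str], draft: str, tol: float = 0.02) -> bool:
    """True if the draft introduces markedly more Markdown structure
    than the seeds - a signal to trigger a revision pass."""
    seed = format_stats("\n".join(seed_texts))
    out = format_stats(draft)
    return any(out[k] - seed[k] > tol for k in seed)
```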

Failure Mode 3: Length Shrinkage

The Problem: The baseline generates summaries instead of full documents, lacking detail and nuance.

  • Wikisource seeds were 35k-50k tokens; baseline outputs were 2k-5k tokens
  • Inputs were dense, chapter-length texts; outputs were brief essays

How Dataframer Avoids It: The algorithm explicitly accounts for target length during outlining and generation, maintaining long-context coherence through outlining and structured revision passes.
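
A minimal sketch of that kind of length accounting, assuming a rough 4-characters-per-token heuristic (an assumption, not the platform's actual tokenizer):

```python
# Explicit length accounting: derive a target from the seeds and split
# it into per-section budgets that the outline can enforce.

def target_tokens(seed_texts: list[str]) -> int:
    lengths = [len(t) // 4 for t in seed_texts]  # ~4 chars per token
    return sum(lengths) // len(lengths)          # aim for the seed mean

def section_budgets(total: int, n_sections: int) -> list[int]:
    base = total // n_sections
    budgets = [base] * n_sections
    budgets[-1] += total - base * n_sections     # absorb the remainder
    return budgets

# e.g. seeds averaging ~40k tokens with a 20-section outline give each
# section a ~2k-token budget, stated explicitly in its drafting prompt.
```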


Discussion

There are many aspects of synthetic data quality that need to be monitored: coherence, diversity, specification matching, length profiles, and more. Missing any one of these can result in an unusable dataset.

Our evaluation shows that naive prompting of frontier models collapses into repetitive, homogenized outputs. Dataframer’s agentic scaffold - with its outlining, generation, filtering, and revision stages - produces dramatically better results using the exact same underlying model.

For practitioners building synthetic data pipelines (a combined check is sketched in code after this list):

  • Watch for mode collapse: Count unique topics/structures in your outputs
  • Watch for style drift: Compare formatting and structure to your inputs
  • Watch for length shrinkage: Are your outputs significantly shorter than your inputs?
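
Here is a combined sketch of those three checks as one monitoring pass; the first-line title heuristic and the character-based length proxy are illustrative assumptions:

```python
# One monitoring pass over generated samples implementing the three
# checks above; heuristics and thresholds are illustrative assumptions.
from collections import Counter

def audit(seeds: list[str], outputs: list[str]) -> dict:
    # Mode collapse: count distinct first-line "titles" across outputs.
    titles = Counter(o.splitlines()[0].strip().lower()
                     for o in outputs if o.strip())
    # Style drift: did outputs add Markdown headers the seeds never use?
    md = lambda t: sum(l.lstrip().startswith("#") for l in t.splitlines())
    markdown_drift = sum(map(md, outputs)) > 0 and sum(map(md, seeds)) == 0
    # Length shrinkage: mean output length vs mean seed length.
    ratio = (sum(map(len, outputs)) / len(outputs)) \
          / (sum(map(len, seeds)) / len(seeds))
    return {
        "unique_titles": len(titles),
        "most_repeated_title": titles.most_common(1),
        "markdown_drift": markdown_drift,
        "length_ratio": round(ratio, 2),  # e.g. 0.1 means 10x shrinkage
    }
```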

The difference is stark: a system that actually satisfies your specifications versus one that collapses into repetitive, homogenized outputs.

"Dataframer offers an industry-first agentic platform to generate large structured or unstructured synthetic datasets. Interested in generating high-quality data for your use case? Just reach out to us."

Puneet Anand, CEO

Dataframer

Ready to Get Started?

Contact our team to learn how we can help your organization develop AI systems that meet the highest standards.

Book a Meeting