Don’t Let AI Write For You

Letting large language models (LLMs) ghostwrite your documents—PRDs, technical specs, essays—skips the hard part of thinking. It delivers text fast but erodes your understanding and credibility. In tech and finance, where flawed plans cost millions, this matters. A 2023 study by Stanford found LLMs hallucinate facts in 27-52% of responses on benchmarks like medical QA. You get polished prose hiding potential garbage.

Writing forces clarity. You pose a question like “What should we build?” for a PRD, or “How do we secure this chain?” for a crypto spec. Answering pulls you through uncertainty into structure. Each draft tests whether you’re solving the right problem. Skip this, and you stay murky. Data backs it: deliberate practice, like wrestling ideas into words, builds expertise. A meta-analysis in Psychological Science (2014) shows reflection during tasks boosts retention by 20-30% over passive review.

LLMs short-circuit this. Tools like GPT-4 or Claude spit out 1,000-word docs in seconds, mimicking expertise. But they remix training data—billions of tokens from the web—without true comprehension. OpenAI admits models “lack human-like understanding.” Result: surface-level output that sounds right but crumbles under scrutiny. In a 2024 arXiv paper, researchers tested LLM-generated code specs; 40% failed real-world implementation due to overlooked edge cases humans catch via iteration.

Why Trust Breaks

Peers spot LLM slop. Repetitive transitions (“In addition,” “Furthermore”) and vague fluff trip detectors; GPTZero accurately flags 80-90% of pure generations. When your spec reeks of AI, colleagues think: “Did they think this through, or just prompt-engineer platitudes?”
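
For intuition, here’s a toy version of that tell. This is nothing like GPTZero’s actual method (which is proprietary); the phrase list and threshold are made-up illustrations of how boilerplate-saturated prose gives itself away.

```python
import re

# Stock transitions that saturate a lot of LLM-generated prose.
# The phrase list and threshold are illustrative guesses, not
# GPTZero's method or calibrated values.
STOCK_TRANSITIONS = [
    "in addition", "furthermore", "moreover", "additionally",
    "it is important to note", "in conclusion",
]

def transition_density(text: str) -> float:
    """Stock-transition hits per 100 words."""
    lowered = text.lower()
    hits = sum(lowered.count(p) for p in STOCK_TRANSITIONS)
    words = len(re.findall(r"\w+", text))
    return 100.0 * hits / max(words, 1)

def looks_templated(text: str, threshold: float = 1.5) -> bool:
    """Crude heuristic: flag prose drowning in boilerplate transitions."""
    return transition_density(text) >= threshold

sample = ("In addition, the system is robust. Furthermore, it scales well. "
          "Moreover, it is important to note that performance is excellent.")
print(looks_templated(sample))  # True: four stock phrases in 20 words
```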

This hits leadership hard. In finance, a dodgy risk assessment from an LLM could greenlight a $10M bad trade. At crypto firms, half-baked security docs invite exploits; recall the 2022 Ronin hack, where $625M was lost partly because specs were ignored. Sending AI text signals you didn’t grapple with the risks. Humans build trust by showing scars from the thinking process: revisions, trade-offs, doubts. AI docs? They scream “I outsourced my brain.”

Oxide Computer’s policy nails it: ban LLMs as primary writers. They use models for brainstorming or polishing, not origination. Adam Leventhal’s post there details how this preserves authenticity—readers sense real contention with ideas.

Use LLMs Right, or Don’t Bother

LLMs excel at grunt work. Prompt for 10 feature ideas; discard nine trash ones, refine the gem. Research? They summarize papers fast—Claude handles 200K-token contexts. Editing? Paste your draft; it flags inconsistencies. Transcription or boilerplate? Perfect.
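
As a concrete division of labor, here’s a minimal sketch using the official OpenAI Python client (the model name and prompt wording are assumptions, not a prescribed setup): the model emits raw candidates, and the discarding and refining stay human.

```python
# Minimal sketch: LLM as brainstorm partner, not ghostwriter.
# Assumes the official openai client (pip install openai) with
# OPENAI_API_KEY set; the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def brainstorm(topic: str, n: int = 10) -> list[str]:
    """Ask for raw candidate ideas; judging them stays with the human."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "One idea per line. No prose, no drafting."},
            {"role": "user",
             "content": f"List {n} candidate features for: {topic}"},
        ],
    )
    text = response.choices[0].message.content
    return [line.strip("-*0123456789. ")
            for line in text.splitlines() if line.strip()]

for idea in brainstorm("a hardware wallet's key-recovery flow"):
    print(idea)  # you still pick the one worth thinking through
```

The system prompt pins the model at the idea stage; the draft that tests your thinking is still yours to write.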

But full generation? Risky. Hallucinations persist: a 2024 Vectara benchmark clocked factual errors at 3-27% across models. In security intelligence, such as threat models, one wrong assumption cascades. Why it matters: software delivery speeds up 20-50% with AI aids (GitHub Copilot stats), but thoughtful humans must steer. Firms that ignore this face “AI washing”: docs that look good but fail badly.

Counter the hype. LLMs won’t replace thinkers; they amplify them. Next time a deadline looms, write the zero draft yourself. Wrestle the unknown. Your edge in tech, finance, crypto? Depth others fake. Use AI to lift you higher, not as a crutch. Miss this, and you’re just another prompt jockey in a sea of generated noise.

April 1, 2026 · 3 min
