Why I am Skeptical of AGI, but you should use AI
I'm not some Luddite who thinks the old ways are better. Quite the opposite, actually. I build and run my own local GPU clusters because I want to understand these systems from the metal up, not just parrot the hype. I know AI is powerful because I use it every day. But the AGI narrative being sold right now? It doesn't match the engineering reality I see.
We're hitting a ceiling. And the numbers back it up.
1. The Data Feedback Loop
Seriously, how is AGI possible when we've already scraped damn near everything off the internet? There's nothing left. What's showing up now as "fresh" data is increasingly AI-generated slop.
Train a model on its own output enough times and it gets dumber. Researchers call this Model Collapse: a feedback loop where models eat their own tainted output and gradually lose the variance and chaos of actual human reality.
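You can see the mechanism in a toy sketch. This isn't the paper's experiment, just an assumed minimal analogue: repeatedly fit a Gaussian to samples drawn from the previous generation's fit, so each "model" trains only on its predecessor's output. The spread of the distribution drifts away from the original, collapsing over enough generations.

```python
import random
import statistics

def generations(n_samples=50, n_gens=1000, seed=0):
    """Toy model-collapse loop: each generation fits a Gaussian
    to data sampled from the previous generation's fitted Gaussian,
    never seeing the original 'real world' distribution again."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0              # the "real world" distribution
    sigmas = [sigma]
    for _ in range(n_gens):
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(data)   # refit on generated data only
        sigma = statistics.stdev(data)
        sigmas.append(sigma)
    return sigmas

hist = generations()
print(f"sigma at generation 0: {hist[0]:.3f}")
print(f"sigma at generation {len(hist) - 1}: {hist[-1]:.4f}")
```

The estimation noise at each step compounds with a downward drift, so the variance the chain preserves shrinks: the tails (the "chaos") vanish first, which is exactly the failure mode the Model Collapse papers describe at much larger scale.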
An engineering caveat: I'll admit I'm wrong when they solve two problems: synthetic data (AI generating training data for AI) and self-play. The bet is that AI can teach itself without going insane. But until that's proven to work at scale without degrading quality, it's a hypothesis, not a product.
2. The Economic Reality
Right now, the race for bigger models makes no economic sense. The ROI isn't there for the amount of compute being burned.
Don't get me wrong. I'm not anti-AI. I think it's amazing as a tool. But the real competitive advantage for companies will come when they start using their own data and training for their specific needs.
3. Where the Actual Money Is
Forget AGI. The companies that will actually win aren't the ones waiting around for a god-like AI. They're taking the current obsession with "General" intelligence and flipping it to "Specific" intelligence: smaller models trained on their own proprietary data to solve boring, real problems. A generic model your competitors can rent just as easily is not a competitive advantage. It's table stakes.
That's where the money is. Everything else is CEO marketing.
4. The Human Premium (and Your Job Security)
Ironic, isn't it? To fix the "AI-tainted" data loop, we need humans. High-quality human data is becoming a luxury asset. The only way to stop Model Collapse is expensive human labor: curators and experts who verify truth.
This blows up the "AI is cheap" argument, but it creates a massive opportunity. Companies that decide to self-host their own AI will need armies of Data Analysts, DevOps, and Systems Integrators to build and maintain clean data pipelines. We're not being replaced. We're moving from "doing the work" to "ensuring the machine does the work correctly."
Academic Sources & Data
Data Wall Projection: Villalobos, P., et al. (2022). "Will we run out of ML data? Evidence from projecting dataset size trends."
Updated Scarcity Models: Epoch AI (2024). "Will we run out of data? Limits of LLM scaling based on human-generated data."
Model Collapse: Shumailov, I., et al. (2023). "The Curse of Recursion: Training on Generated Data Makes Models Forget."