What Would AI Invent If We Started from Assembly?
Jan 8, 2026
Are we teaching AI our mistakes?
I had this weird thought the other day: what if we're training AI to code all wrong?
Don't get me wrong, AI coding assistants are evolving fast. Genuinely fast, not like areas where AI keeps hitting a ceiling. Language and culture, for instance. Those are fluid, shaped by human subjectivity, historical context, norms that shift every decade. No objective "right" answer exists. Good writing in one context is terrible in another. Cultural references expire. Meanings drift. AI struggles here because there are no fixed rules to optimize against, and it's hard to evaluate what's "good" versus "bad." No clear end-goal, no win condition. How do you measure if a piece of writing is "better" than another? Depends on context, audience, purpose. Even the evaluation is subjective.
Now look at where AI has actually succeeded: chess and Go. Complex games, sure, but with strict rules. Fixed board. Defined moves. Clear win condition. And critically, you can evaluate success objectively: did you win or lose? AI trained through self-play, millions of games against itself, and mastered these games in ways humans never could. Rules gave it a framework for optimization. A win condition gave it a way to know what worked.
That's when it clicked for me: computers also operate within strict rules. Hardware has hard boundaries. Memory, CPU cycles, instruction sets. Not subjective. Physics. A CPU cycle is a CPU cycle whether you're running Python, JavaScript, or assembly.
But every coding assistant today is trained on our languages. Python, JavaScript, Go, whatever. Languages humans designed with all our biases, legacy baggage, and "this seemed like a good idea in 1972" decisions baked right in.
So this is a thought experiment, not a proposal. What if we trained AI to code starting from assembly, the actual metal, and let it invent its own languages from the ground up? What would come out of that black box?
Are We Solving a Problem by Teaching Computers to Think Like Us?
Every programming language carries the DNA of its creators' limitations:
- Verbose syntax that trips us up: Looking at you, Java. Sometimes I write more boilerplate than actual logic.
- Legacy constraints we can't escape: C was designed in the 1970s. We're still dealing with null pointer exceptions because of decisions made when computers had kilobytes of memory.
- Human readability over machine efficiency: We optimize for our brains, not the hardware. That's fine, but what if we didn't have to?
Not bugs. Features of human-designed systems. But what if AI, unburdened by our cognitive limitations, could create something better?
Starting from Assembly
Train AI to code starting from assembly language. Raw instructions that talk directly to the CPU. No abstractions, no human-friendly syntax. Just the fundamental rules of computation.
From there, it would:
- Learn the hardware constraints: Memory management, CPU cycles, I/O operations, the actual physics of computation.
- Build abstractions organically: Create higher-level constructs optimized for performance and clarity, not human comfort.
- Invent its own syntax: Design a language tailored to its own logic and processing capabilities.
Like teaching someone to build a house starting with atoms instead of lumber. Painful? Probably. But you'd understand the fundamentals in a way that's impossible when you start with pre-cut boards.
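The chess comparison only works if there's an objective score to optimize against. Here's a minimal sketch of what that might look like, using a hypothetical toy machine I made up for illustration (four registers, three instructions, a cycle budget) and a chess-style win condition: correct output, fewer cycles.

```python
# A toy "hardware" with strict rules: 4 registers, a handful of
# instructions, and a cycle budget. Hypothetical illustration only.

def run(program, x, max_cycles=32):
    """Execute a list of (op, a, b) instructions on 4 registers.

    Register 0 starts holding the input x. Returns (register 0, cycles used),
    or (None, cycles) if the program breaks a rule (bad op or register).
    """
    regs = [x, 0, 0, 0]
    cycles = 0
    for op, a, b in program:
        if cycles >= max_cycles or not (0 <= a < 4 and 0 <= b < 4):
            return None, cycles
        if op == "ADD":
            regs[a] = regs[a] + regs[b]
        elif op == "MUL":
            regs[a] = regs[a] * regs[b]
        elif op == "MOV":
            regs[a] = regs[b]
        else:
            return None, cycles          # unknown instruction: hard failure
        cycles += 1
    return regs[0], cycles

def fitness(program, target_fn, tests):
    """Objective 'win condition': correct on every test, fewer cycles wins."""
    total_cycles = 0
    for x in tests:
        out, cycles = run(program, x)
        if out != target_fn(x):
            return float("-inf")         # wrong answer: unambiguous loss
        total_cycles += cycles
    return -total_cycles                 # shorter programs score higher

# Two candidate programs that both compute x * 2:
doubler_a = [("MOV", 1, 0), ("ADD", 0, 1)]                 # 2 cycles per run
doubler_b = [("MOV", 1, 0), ("ADD", 0, 1), ("MOV", 2, 2)]  # one wasted cycle

tests = [0, 1, 5, 9]
print(fitness(doubler_a, lambda x: 2 * x, tests))  # -8
print(fitness(doubler_b, lambda x: 2 * x, tests))  # -12
```

The point isn't the toy itself. It's that unlike "is this prose good?", the score here is as unambiguous as a chess result, which is exactly the kind of signal self-play needs.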
The Curiosity
I have no idea what would actually emerge. That's the point. If we gave AI the constraints of hardware and let it explore, what patterns would it discover? What optimizations would it find that we've never considered? We can't predict what comes out of the black box, and that's exactly why I'm curious.
What Might Emerge: Pure Speculation
1. Eliminating Human Bias
Starting from assembly means the AI avoids inheriting the inefficiencies we've baked into every language. It can design constructs purely based on logic and performance metrics, not "this feels intuitive to humans."
Consider how we handle errors. Every language does it differently because humans have different opinions about what "clean" error handling looks like. What if the AI discovered an approach nobody's considered?
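To make that concrete, here's the same failure handled under two common human conventions, sketched in Python. Neither is objectively "right", which is the point: both are opinions, and an optimizer with a fixed objective might land somewhere else entirely.

```python
# Convention 1: exceptions -- failure hijacks control flow.
def parse_exc(s: str) -> int:
    return int(s)  # raises ValueError on bad input

# Convention 2: result values -- failure is ordinary data the caller inspects.
def parse_result(s: str):
    try:
        return int(s), None
    except ValueError as e:
        return None, str(e)

try:
    parse_exc("abc")
except ValueError:
    print("caught")                     # caller must remember to catch

value, err = parse_result("abc")
print(value is None, err is not None)   # True True
```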
2. Optimizing for Machine Logic
AI-designed languages could align perfectly with hardware capabilities. No more fighting against the CPU's natural tendencies. A language that works with the hardware, not against it.
Could mean software that's faster, more efficient, uses fewer resources. When we're running AI models that consume gigawatts of power, that's not trivial.
3. Innovating Beyond Our Paradigms
We've been stuck in the same programming paradigms for decades. Procedural, object-oriented, functional. What if AI discovered entirely new ways to structure code that nobody in 70 years of computer science has thought of?
Maybe it would invent something that makes concurrency trivial. Or a memory model that eliminates entire classes of bugs. We don't know what we don't know.
4. Reducing Errors at the Language Level
Languages designed by AI could minimize ambiguity and redundancy. With deep understanding of hardware constraints, it could create syntax that makes certain bugs impossible.
A language where memory leaks are syntactically impossible. Where race conditions can't exist because the language's execution model prevents them. Sounds far-fetched, but Rust already proved you can eliminate entire bug categories at the language level.
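Rust does it with ownership and borrowing. As a loose sketch of the same idea in Python (an analogy, not Rust's actual mechanism): if the design routes every mutation through a single owner thread, the data race isn't caught, it's structurally impossible.

```python
import threading
from queue import Queue

# Classic data-race recipe: many threads mutating a shared counter.
# Design that makes the race impossible: only one thread ever owns
# the counter; everyone else sends it messages.

def owner(updates: Queue, result: Queue, n_workers: int):
    total, done = 0, 0
    while done < n_workers:
        msg = updates.get()
        if msg is None:
            done += 1          # a worker signaled it's finished
        else:
            total += msg       # only this thread ever touches `total`
    result.put(total)

def worker(updates: Queue, k: int):
    for _ in range(k):
        updates.put(1)
    updates.put(None)          # done signal

updates, result = Queue(), Queue()
threads = [threading.Thread(target=worker, args=(updates, 1000)) for _ in range(4)]
owner_t = threading.Thread(target=owner, args=(updates, result, 4))
owner_t.start()
for t in threads:
    t.start()
for t in threads:
    t.join()
owner_t.join()

total = result.get()
print(total)  # 4000, deterministically -- no lock, no lost update
```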
Why This Is Hard
Okay, sounds great in theory. Real problems exist though:
Understanding Context
AI would need to understand why we program in the first place: to solve real-world problems. It can't just create a language that's theoretically perfect but useless in practice. Optimization has to be balanced against practicality.
Human Usability
Even if the AI invents the perfect language, humans need to be able to use it. Without constraints, the AI might create something so abstract that it's impossible for us to work with.
Tension is real: optimize for the machine, or optimize for the human? Answer is probably "both, somehow."
Training Data
Where do you even get training data for this? You'd need examples of hardware interaction, operating system internals, software design principles. But you'd also need to avoid contaminating it with human-designed language patterns.
Chicken-and-egg problem. Need good training data to create a good language, but need a good language to generate good training data.
Adoption
Say the AI invents the perfect language. Great. Now what? Developers need to learn it, tooling needs to be built, communities need to form around it.
History is littered with "better" technologies that failed because adoption was too hard. Being technically superior isn't enough. Never has been.
Why Developers Might Hate This
I get it. Programming is an art form. We take pride in the elegance of human-designed languages. AI inventing languages from scratch feels like it's diminishing our craft.
But this isn't about replacing developers. It's about freeing us from fighting against inefficient abstractions so we can focus on solving actual problems.
If AI handles the low-level inefficiencies, we focus on higher-level problem-solving. Not obsolescence. Amplification.
Change is scary. Always has been. People hated garbage collection when it was introduced. People hated high-level languages when assembly was the norm. We adapted. We were better for it.
The Real Question
What would emerge if we let AI explore the space of possible languages, constrained only by hardware physics? We can't know until we try. That's what makes this thought experiment interesting to me.
Where This Idea Came From
What if AI could use hardware constraints the same way it used chess rules? Self-play, optimizing within the rigid framework of computational logic?
That was the spark. Start from assembly, let AI build up. What patterns would it discover? What would emerge? I genuinely don't know, and that's what makes it worth thinking about.
We're Building on a Flawed Foundation
Another angle on this: we're training AI on our own mistakes.
Every AI coding assistant learns from human-written code. Code that's full of inefficiencies, redundancies, and biases because humans wrote it. We're teaching AI to replicate our flaws.
Verbose syntax? AI learns it. Legacy constraints? Inherited. Human readability over efficiency? AI optimizes for that too, because that's all it's ever seen.
Starting from assembly and letting AI invent its own syntax could bypass all of that. Languages optimized for performance, clarity, and hardware alignment, free from the baggage of 70 years of human design decisions.
What This Could Mean (If Anything)
Maybe nothing. Maybe it would produce something completely unusable. Maybe it would discover patterns we've never seen. Honestly don't know.
But if something interesting did emerge, it might look like:
- Hardware-specific languages: Languages tailored to specific chips, maximizing efficiency for that exact architecture.
- Novel paradigms: Ways of structuring code that we've never considered because we're stuck in human-designed patterns.
- Unexpected optimizations: Solutions to problems we didn't even know we had.
Or a complete disaster. That's the thing about thought experiments: you don't know until you try.
Just Curiosity
This is a thought experiment. I'm curious what would emerge if we gave AI hardware constraints and let it explore.
Will it work? Will it produce anything useful? Will it discover something nobody's seen before? No idea. That's the whole point.
Maybe it's a terrible idea. Maybe the black box spits out something incomprehensible. Maybe it finds patterns that change how we think about code. Only way to know is to try it.