Vibe coding and the future of AI-generated software (maybe Rust?)
I've been thinking a lot about the state of the software industry. I'm using AI heavily in my day-to-day work, and it's hard not to notice how much faster things move than they used to. Andrej Karpathy recently coined the term "vibe coding" to describe this shift: you just ask for changes, accept all the AI's suggestions, and move on. It resonated to the extent that Merriam-Webster made it the "slang of the week." I think we can all see that the cost per line of code is approaching zero, and that drastically changes the way we think about software development.
But as much as AI speeds things up, it isn't perfect. It has this annoying habit of forgetting the big picture. I'm constantly trying to throw more and more context at it so it understands how things link together. It loves to rewrite logic that already exists, sometimes overcomplicating simple tasks or missing the broader intent of a codebase. Bugs aren’t always fixed so much as worked around and reimplemented in completely different areas. Codebases get bloated, patterns become inconsistent, and debugging turns into a weird game of "ask the AI for random changes until it works." And because everything is happening so fast, you don’t always have the time (or the patience) to untangle how it all fits together.
The thing is, AI is already better than us at writing code, at least in the sense of quickly solving well-defined problems. OpenAI recently showed that LLMs are now elite competitive programmers, beating human coders on Codeforces and placing in the gold medal range at the International Olympiad in Informatics. That's wild.
But I know that when software engineers hear that, their eyes roll a little. Competitive programming is not the same as software engineering. Writing the most efficient quicksort implementation isn't the same thing as maintaining a million-line enterprise codebase that has dependencies older than some of its engineers. Software engineering is less about writing code and more about taming an evolving, many-headed beast. The job is about refactoring old systems, integrating new features, and fixing bugs that only appear in production under just the right circumstances. AI can generate a million lines of code overnight, but that doesn't mean it can maintain a cohesive, structured system over time.
So how do we bridge that gap? How do we take these AI systems that are already objectively better at coding than most humans and actually make them useful at the scale of an entire software stack?
AI as an Autonomous Software Engineering Team
Right now, AI is treated like a junior engineer—it writes functions when asked, but it doesn’t really understand the architecture of a system. But what if we stopped treating AI like an assistant and started thinking about how to build software in a way that AI can fully own?
For an AI-driven software engineering team to actually work, I think that a few things need to happen:
- AI needs to understand and enforce architecture – Right now, AI coding models don't have a system-wide view of a codebase. They generate code in response to isolated prompts, which makes them bad at long-term maintenance. A fully agentic AI system would need strict architectural constraints to keep things coherent.
- AI needs structured, incremental changes – If you ask an AI to refactor something today, it might completely miss existing abstractions and rewrite everything from scratch. We need a way to define code as modular, composable units, so that AI is extending the system rather than generating isolated fragments of logic.
- AI needs to handle debugging and patching autonomously – The current AI workflow is "generate code, run it, see if it works." But real software engineering involves diagnosing subtle bugs, applying patches, and ensuring changes don't break the system. Right now, humans still do the heavy lifting there.
Rust as a Foundation for AI-Native Software Development
I think there's a compelling argument that an AI-driven code ecosystem could be built on the Rust language, and I want to explain why. Rust's strict coding guardrails could be exactly what AI needs to thrive. The compiler catches mistakes early, preventing AI from generating broken logic that only fails at runtime. Strong typing forces structure. And if AI is going to be generating a lot of code, Rust's performance means we're not drowning in the overhead that "vibe coding" might incur in a dynamically typed language. It's a language that naturally enforces boundaries and contracts, which is exactly what AI tends to struggle with when left unchecked.
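To make "the compiler catches mistakes early" concrete, one of the simplest guardrails Rust offers is the newtype pattern: two values with the same underlying representation get distinct, incompatible types. This is a minimal sketch, and the `UserId`/`OrderId` names are hypothetical, not drawn from any real codebase:

```rust
// Newtype wrappers give two plain u64s distinct, incompatible types.
struct UserId(u64);
struct OrderId(u64);

fn lookup_user(id: UserId) -> String {
    format!("user-{}", id.0)
}

fn main() {
    let uid = UserId(42);
    let _order = OrderId(42);
    // lookup_user(OrderId(42)); // rejected at compile time: mismatched types
    println!("{}", lookup_user(uid));
}
```

An AI that swaps the two IDs gets a compile error instead of a production bug, which is exactly the kind of early, cheap feedback a fast-moving generation loop needs.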
What if the key to making AI coding actually work is defining atomic, composable coding primitives—small, well-scoped building blocks that AI assembles rather than reinvents? If AI is limited to writing modular functions with clear constraints, suddenly it’s not just free-styling code, but building within a structured system. A Rust-based toolchain could be an interesting way to enforce that kind of discipline at the compiler level, keeping AI-generated software sane and maintainable.
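As one sketch of what those atomic, composable primitives might look like in Rust: each unit declares its input and output types, and composition only type-checks when the contracts line up. The `Primitive` trait here is a hypothetical encoding of the idea, not an existing library:

```rust
// A hypothetical contract every AI-generated unit must satisfy:
// declared input type, declared output type, one well-scoped job.
trait Primitive {
    type In;
    type Out;
    fn run(&self, input: Self::In) -> Self::Out;
}

// Unit 1: parse a comma-separated string into integers.
struct Parse;
impl Primitive for Parse {
    type In = String;
    type Out = Vec<i64>;
    fn run(&self, input: String) -> Vec<i64> {
        input.split(',').filter_map(|s| s.trim().parse().ok()).collect()
    }
}

// Unit 2: sum a list of integers.
struct Sum;
impl Primitive for Sum {
    type In = Vec<i64>;
    type Out = i64;
    fn run(&self, input: Vec<i64>) -> i64 {
        input.iter().sum()
    }
}

// Composition is only allowed when B's input equals A's output,
// so mismatched pipelines fail to compile rather than at runtime.
fn compose<A, B>(a: A, b: B) -> impl Fn(A::In) -> B::Out
where
    A: Primitive,
    B: Primitive<In = A::Out>,
{
    move |x| b.run(a.run(x))
}

fn main() {
    let pipeline = compose(Parse, Sum);
    println!("{}", pipeline("1, 2, 3".to_string()));
}
```

`compose(Parse, Sum)` compiles because `Parse`'s output type matches `Sum`'s input; wiring them the other way around would be rejected by the compiler instead of discovered in production, which is the kind of structural discipline the paragraph above is gesturing at.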
AI Won’t Just Help Write Software—It Will Be the Entire Engineering Team
AI is already elite at coding. What’s stopping it from owning entire codebases? Right now, the biggest issue isn’t AI’s ability to write code—it’s our failure to adapt software engineering to an AI-native paradigm.
Maybe the real shift isn’t about making AI write better code. Maybe it’s about reimagining software engineering itself—designing systems where AI doesn’t just generate code, but actively maintains, extends, and improves it over time. Instead of writing software, maybe our job becomes defining constraints, relationships, and high-level goals, and letting AI handle the implementation details.
At some point, the role of software engineers might shift from writing code to defining the contracts and architecture that guide AI development. If that’s the direction things are headed, then the real question isn’t how AI will fit into existing engineering workflows—it’s whether our current approach to software development is even the right fit for AI at all.
And if we’re going to rebuild software engineering with AI-first principles, it’s probably worth thinking about what kind of foundation actually lets that system scale. Rust might just be one of the best places to start.