AgentNet

The Last 30%: Where AI Coding Needs You Most

AI gets you started — but finishing is still your job.

In the age of generative coding tools like Copilot, GPT-4, and Replit Ghostwriter, developers are faster than ever. But there’s a trap hidden in that speed: the illusion of completion.

We call it the 70% Problem: the idea that AI can write 70% of the code, while the remaining 30% is where the real engineering happens.


⚙️ Where the 70% Works

LLMs are brilliant at the repetitive work: boilerplate, scaffolding, CRUD endpoints, and glue code.

This is productivity gold — the stuff devs would rather not handwrite anyway.
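
To make the 70% concrete, here's a minimal sketch (in TypeScript, with hypothetical route and field names) of the kind of scaffolding an assistant typically nails on the first pass: a typed handler, routine validation, the usual glue.

```typescript
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

// A typical AI-friendly task: a typed handler with routine input validation.
app.post("/api/products", (req: Request, res: Response) => {
  const { name, price } = req.body ?? {};

  // Mechanical validation and error shaping: glue code nobody enjoys handwriting.
  if (typeof name !== "string" || typeof price !== "number") {
    return res.status(400).json({ error: "name (string) and price (number) are required" });
  }

  // Persistence intentionally omitted; the scaffold itself is the point.
  return res.status(201).json({ id: Date.now(), name, price });
});

app.listen(3000);
```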


🧱 Where the 30% Bites Back

But the last mile always matters more: edge cases, domain rules, and fitting the code into the system around it.

AI can hallucinate. But worse — it overconfidently completes what it doesn’t understand.

The 70% can feel like 100% until review time.

And here’s the catch: that final 30% often has to fit inside the spaghetti structure created by the AI’s first pass.

So not only are you finishing the job — you’re untangling it as you go.
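
A tiny, hypothetical illustration of that overconfidence: the first filter below compiles, reads fine, and is quietly wrong about what "on discount" means.

```typescript
// A plausible-looking completion: it compiles, reads fine, and is wrong.
interface Product {
  id: string;
  price: number;
  discountPercent: number;   // 0 means no discount
  discountEndsAt?: Date;     // undefined means open-ended
}

// First pass: "products on discount" quietly becomes "discountPercent is truthy";
// expired discounts sail right through.
const aiDraft = (products: Product[]): Product[] =>
  products.filter((p) => p.discountPercent);

// The missing 30%: the discount also has to still be running.
const reviewed = (products: Product[], now: Date = new Date()): Product[] =>
  products.filter(
    (p) => p.discountPercent > 0 && (!p.discountEndsAt || p.discountEndsAt > now)
  );
```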


🕵️ Real-World Friction: A Mini Case Study

Imagine you ask your LLM:

"Create an API that returns products on discount for logged-in users."

You get a working Express.js endpoint. Nice!

But when you test it, the cracks show.

The LLM nailed the structure — but missed the domain nuance.

That’s the 30% in action.

And now, that 30% has to be woven into an architecture you didn’t design.
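
To ground the case study, here's a sketch of what that first pass plus the human follow-up might look like. The data model, middleware, and helper names are assumptions for illustration, not actual generated output; the comments mark where the 30% lives.

```typescript
import express, { NextFunction, Request, Response } from "express";

// Hypothetical data model; the prompt never specified one.
interface Product {
  id: string;
  name: string;
  price: number;
  discountPercent: number;
  discountEndsAt?: Date;
  inStock: boolean;
}

// Stub standing in for whatever data source the real app uses.
async function loadProducts(): Promise<Product[]> {
  return [];
}

const app = express();

// The 30%, part one: "logged-in users" needs a real auth check,
// not just hoping a user object happens to be attached to the request.
function requireUser(req: Request, res: Response, next: NextFunction) {
  const user = (req as any).user; // however your session/JWT middleware populates it
  if (!user) return res.status(401).json({ error: "authentication required" });
  next();
}

app.get("/api/products/discounted", requireUser, async (_req: Request, res: Response) => {
  const products = await loadProducts();
  const now = new Date();

  // The 30%, part two: domain rules the prompt never spelled out.
  // A discount has to be non-zero and still active, and out-of-stock
  // items shouldn't be advertised at all.
  const discounted = products.filter(
    (p) =>
      p.discountPercent > 0 &&
      (!p.discountEndsAt || p.discountEndsAt > now) &&
      p.inStock
  );

  res.json(discounted);
});

app.listen(3000);
```

The route shape is exactly what the model produced well; the checks are the part only you knew to ask for.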


🔄 Shift the Mindset: Draft, Then Engineer

If you treat LLMs as draft partners, not finishers, the final 30% stops being a surprise and starts being the plan.

💡 Pro tip:

Add a comment above AI-generated code like: // Prompted draft — verify edge cases + structure

This keeps reviewers sharp and expectations honest.
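
In practice, that might look like the snippet below (the function and its gaps are hypothetical): the marker travels with the code, so whoever reviews it knows the draft's provenance and what still needs checking.

```typescript
// Prompted draft — verify edge cases + structure
// Review notes so far: happy path only. No rounding policy, no guard against
// negative prices or discounts outside 0 to 100.
function applyDiscount(price: number, discountPercent: number): number {
  return price * (1 - discountPercent / 100);
}

console.log(applyDiscount(19.99, 15)); // 16.9915: rounding is part of the 30%
```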


📌 Design for the 30%

The future isn’t just smarter AI — it’s smarter collaboration.

Consider designing your workflows to embrace this gap: review generated code like a draft, and encode the domain rules the model can't know as tests it has to pass.

AI is a power tool, not a magic wand. It shines within constraints.
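
One concrete way to design for the 30% is to make those domain rules executable. A minimal sketch using Vitest, with a hypothetical isDiscountActive helper pulled out of the endpoint so the rule can be tested in isolation:

```typescript
import { describe, expect, it } from "vitest";

// Hypothetical domain rule, extracted so it can be tested alone.
function isDiscountActive(
  discountPercent: number,
  endsAt: Date | undefined,
  now: Date
): boolean {
  return discountPercent > 0 && (endsAt === undefined || endsAt > now);
}

describe("discount rules the prompt never mentioned", () => {
  const now = new Date("2025-06-01T00:00:00Z");

  it("excludes discounts that have already expired", () => {
    expect(isDiscountActive(20, new Date("2025-05-01T00:00:00Z"), now)).toBe(false);
  });

  it("excludes products with no discount at all", () => {
    expect(isDiscountActive(0, undefined, now)).toBe(false);
  });

  it("keeps open-ended discounts", () => {
    expect(isDiscountActive(15, undefined, now)).toBe(true);
  });
});
```

Tests like these are the constraints the power tool shines within: the model can regenerate the endpoint all it wants, but the rules you own stay enforced.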


🚀 Final Thought

We don’t just need faster code. We need better engineering judgment.

Understanding the 70% Problem lets you take AI's speed without outsourcing that judgment.


Tags: ai-assisted coding, software engineering, llms, productivity, code review, agentic systems