
Agentic AI: How Claude Code Changed the Way I Develop

Real-world experience integrating agentic AI into a professional development workflow. Concrete cases, limitations, and best practices.

AI · Claude Code · Agentic · Productivity · Developer Experience

AI is no longer a copilot — it's an agent

Eighteen months ago, I used AI as a fancy autocomplete. Today, I launch agents that audit a project, create GitHub issues, implement fixes, and deploy — in a single session.

This isn't science fiction. It's been my daily workflow with Claude Code since January 2026.

What actually changed

Before: the classic cycle

1. Identify a problem
2. Search for a solution (docs, Stack Overflow, trial and error)
3. Implement
4. Test
5. Debug
6. Commit

Average time per feature: 2-4 hours.

Now: the agentic cycle

1. Describe the problem and context
2. The agent explores the code, understands the architecture
3. It proposes a plan, I validate or adjust
4. It implements, tests, and commits
5. I review the diff

Average time for the same feature: 30-60 minutes.

The gain isn't just speed; it's reduced cognitive load. I no longer carry the context of 15 files in my head. The agent does.

Concrete case: GEO audit in one session

On my portfolio (riggi.tech), I ran a complete GEO (Generative Engine Optimization) audit with Claude Code:

  1. Exploration — the agent scanned the entire codebase, identified existing JSON-LD schemas, analyzed robots.txt, verified canonical URLs
  2. Diagnosis — 8 issues automatically created on GitHub, prioritized P0 to P3
  3. Implementation — AI crawler rules added to robots.txt, canonical URLs on all pages, and Service, FAQ, CollectionPage, and Speakable schemas (robots sketch below)
  4. Critical review — a second Opus agent audited the first agent's code and found a shallow merge bug on the hreflang tags (minimal reproduction below)
  5. Fix + deployment — correction applied, build verified, deployed to production

Total: 8 issues, 12 files modified, 3 commits. In one session.
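
Step 3 is easier to picture with code. Here's a minimal sketch of what the robots.txt change can look like, assuming a Next.js App Router site (the framework and file path are my assumptions, not stated above):

```ts
// app/robots.ts (illustrative sketch; assumes Next.js metadata routes)
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      // Default rule: every crawler may index the site
      { userAgent: '*', allow: '/' },
      // Explicitly allow the main AI crawlers so generative engines can cite the pages
      {
        userAgent: ['GPTBot', 'ClaudeBot', 'PerplexityBot', 'Google-Extended'],
        allow: '/',
      },
    ],
    sitemap: 'https://riggi.tech/sitemap.xml',
  }
}
```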

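And here's the shape of the bug the reviewer agent caught in step 4, reconstructed with illustrative names and URLs. A shallow spread replaces the nested `languages` object wholesale, silently dropping hreflang alternates:

```ts
// Illustrative reconstruction of the bug class, not the actual portfolio code
type Alternates = {
  canonical: string
  languages: Record<string, string> // hreflang code -> URL
}

const defaults: Alternates = {
  canonical: 'https://riggi.tech/services',
  languages: {
    fr: 'https://riggi.tech/fr/services',
    en: 'https://riggi.tech/en/services',
  },
}

// Shallow merge: the new `languages` object replaces the old one entirely,
// so the `en` alternate silently disappears from the page's hreflang tags.
const broken: Alternates = {
  ...defaults,
  languages: { fr: 'https://riggi.tech/fr/conseil' },
}

// Fix: merge one level deeper so existing alternates are preserved.
const fixed: Alternates = {
  ...defaults,
  languages: { ...defaults.languages, fr: 'https://riggi.tech/fr/conseil' },
}
```
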
The limitations — let's be honest

Agentic AI isn't magic. Here's what doesn't work well yet:

Visual output

The agent can't "see" the rendered result. It can write perfectly valid CSS that looks broken on screen. The visual feedback loop still requires a human.

Design choices

The agent proposes technically correct but sometimes over-engineered solutions. You need to frame it: "simple, no unnecessary abstractions, no features nobody asked for."
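
One way to make that framing stick is to put it in the project itself: Claude Code reads a CLAUDE.md file at the repo root on every session. The rules below are illustrative, not my actual file:

```md
# CLAUDE.md (illustrative)

## Code style
- Simple first: no abstraction until there is a second use case
- No new dependencies without asking
- Match the structure of neighboring components before inventing a new pattern
- No features nobody asked for
```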

Editorial tone

When I asked the agent to rewrite my site's copy, the result was too marketing-heavy, too American. My feedback: "I'm someone simple, humble, and effective." AI tends to overcompensate.

Best practices

After 60+ sessions and 229 commits with Claude Code:

  1. Give context, not instructions — Instead of "add a Button component with primary and secondary variants," I say "we need a CTA button on service pages — look at how other components are structured and follow the pattern."
  2. Plan before executing — Having the agent validate its approach before implementation prevents 80% of false starts.
  3. Use critical agents — Launching an Opus agent to review another agent's code is like automated code review. It catches bugs, inconsistencies, and edge cases (see the sketch after this list).
  4. Don't accept everything — I reject about 20% of suggestions. Not because they're technically wrong, but because they don't match the project's style or philosophy.
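
Point 3 can even be scripted. Here's a rough sketch of the reviewer pattern using the Claude Code TypeScript SDK; the package and option names are from memory and may differ across versions:

```ts
// Sketch of the "critical reviewer" pattern; treat package and options as assumptions
import { query } from '@anthropic-ai/claude-code'

async function reviewLastCommits() {
  // A fresh agent with no stake in the code it's judging
  for await (const message of query({
    prompt:
      'Act as a critical reviewer. Audit the diff of the last 3 commits: ' +
      'look for shallow merges, missing edge cases, and violations of the ' +
      'existing architecture. Report issues only; do not fix anything.',
    // 'opus' is the CLI alias; some versions may want a full model name
    options: { model: 'opus', maxTurns: 15 },
  })) {
    if (message.type === 'result' && message.subtype === 'success') {
      console.log(message.result)
    }
  }
}

reviewLastCommits()
```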

The future

Agentic AI is to software engineering what IDEs were to Notepad: a paradigm shift, not a replacement. The developer doesn't disappear — they move up one level of abstraction.

My 2026 workflow: I'm the architect and reviewer. The agent is the developer. And it works.