Context Engineering: How I've Been Using Claude Code in My Development Workflow

September 13, 2025

Context engineering with AI assistants is evolving rapidly, and everyone seems to have their own approach. This article shares a workflow I’ve been experimenting with over a couple of months using Claude Code: a practical method for maintaining context across sessions, validating understanding, and turning AI collaboration into well-documented, testable code.

There are heaps of excellent resources on context engineering out there: Claude Code Best Practices from Anthropic, Simon Willison’s exploration of Claude Code, and Ethan Mollick’s deep dive into working with AI tools, to name a few. Every day brings new articles with tips and tricks for using AI assistants effectively; it can be overwhelming, even repetitive at times. Some are genuinely insightful, others feel like variations on the same theme. What I want to share here is something different: my personal journey over the last few months. Not another prescriptive guide, but the workflow that’s actually stuck and made a tangible difference to how I write code.

I won’t give you prescriptive prompts or a CLAUDE.md file to copy and paste. Instead, I want to explain the workflow and reasoning behind it, so you can adapt it to your own context.

Building Context Through Dialogue

Working with large codebases presents a unique challenge: features rarely live in isolation. They span multiple files, interact with various systems, and carry implicit knowledge that’s often scattered across team members’ heads and outdated documentation.

What worked well for me was starting from a simple premise: I already understand the code I’m working with, but I want to validate that understanding and create a shared context with Claude Code.

Here’s an example of how this works. I start by describing what I know about how a feature works in my own words. I’m not asking Claude to explain the code to me; I’m explaining it myself and asking for validation. I’ll typically mention an entry point where the logic begins, something like: “The search feature starts in search/handler.go and I believe it parses the query, builds the database filters, executes the search, and returns paginated results.”
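To make that entry-point description concrete, here is a minimal Go sketch of the shape I’m describing to Claude. This is not the author’s actual code: the `Filter` type and the `parseQuery` and `buildFilters` functions are hypothetical names standing in for whatever the real handler does.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// Filter is a hypothetical database filter derived from a query term.
type Filter struct{ Field, Value string }

// parseQuery pulls the search terms out of a raw query string.
func parseQuery(rawQuery string) []string {
	v, err := url.ParseQuery(rawQuery)
	if err != nil {
		return nil
	}
	return strings.Fields(v.Get("q"))
}

// buildFilters maps each term onto a filter (stubbed here as a title match).
func buildFilters(terms []string) []Filter {
	filters := make([]Filter, 0, len(terms))
	for _, t := range terms {
		filters = append(filters, Filter{Field: "title", Value: t})
	}
	return filters
}

func main() {
	// The flow I describe to Claude: parse the query, then build the
	// filters; executing the search and paginating the results would
	// follow in the real handler.
	terms := parseQuery("q=safety+checklist")
	fmt.Println(buildFilters(terms))
}
```

The point isn’t the code itself; it’s that stating the pipeline in this much detail gives Claude something specific to confirm or correct.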

Claude Code then gives me its understanding based on the actual code. This is where the magic happens: it’s not about getting an explanation, but about calibrating our shared understanding. If there’s a mismatch, I’ll follow up with corrections or clarifications until we’re aligned.

Once I’m happy with our shared understanding, I ask Claude Code to write this down in a markdown file; let’s call it SEARCH_FEATURE.md. Now I have a structured, written form of how this particular feature works. Remember, I’m documenting a specific feature, not the entire codebase. This focused approach keeps things manageable and relevant.

From Understanding to Planning

With our context documented, I start fresh with a /clear command. This clean slate is intentional: it forces me to be explicit about what context is needed for the next phase.

Now I can say: “Based on @SEARCH_FEATURE.md, I want to add pagination to our search results that currently return everything.”

But here’s where I add my own thinking to the mix. I’ll usually have ideas about implementation approaches, so I’ll include them: “We could use offset-based pagination which is simple, or cursor-based pagination which handles data changes better. Offset is easier to implement and gives page numbers, but cursor pagination performs better with large datasets. What’s the right choice here?”

I do this in plan mode (cycle modes with Shift+Tab in Claude Code), which helps maintain focus on planning rather than jumping straight to implementation. The back-and-forth here is crucial: I challenge Claude Code with questions, explore edge cases, and refine until I’m confident we have not only a direction I’m happy with but also documented reasoning about the alternatives considered.
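The offset-versus-cursor tradeoff from that prompt can be sketched in a few lines of Go. This is an illustration over an in-memory slice, not a real database query; `Item`, `offsetPage`, and `cursorPage` are names I’ve made up for the example.

```go
package main

import "fmt"

// Item is a minimal record with a monotonically increasing ID.
type Item struct{ ID int }

// offsetPage returns page n (0-based) of the given size. Simple and gives
// page numbers, but a database still has to scan past the skipped rows,
// and inserts/deletes between requests shift the pages.
func offsetPage(items []Item, n, size int) []Item {
	start := n * size
	if start >= len(items) {
		return nil
	}
	end := start + size
	if end > len(items) {
		end = len(items)
	}
	return items[start:end]
}

// cursorPage returns up to size items with ID greater than the cursor.
// Stable under concurrent inserts and cheap on large datasets, but there
// are no page numbers: the client passes back the last ID it saw.
func cursorPage(items []Item, after, size int) []Item {
	var page []Item
	for _, it := range items {
		if it.ID > after {
			page = append(page, it)
			if len(page) == size {
				break
			}
		}
	}
	return page
}

func main() {
	items := []Item{{1}, {2}, {3}, {4}, {5}}
	fmt.Println(offsetPage(items, 1, 2)) // second page of two
	fmt.Println(cursorPage(items, 2, 2)) // two items after ID 2
}
```

Laying the tradeoff out this explicitly, in the prompt or in the plan document, is exactly the kind of reasoning I want captured before any implementation starts.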

Iterative Planning and Documentation

Claude Code tends to create plans with multiple phases, and I often tweak these. Sometimes it over-engineers things, so I’ll consolidate or remove phases. This granular control helps break down complex problems into manageable chunks, and I can git commit after each successful phase implementation.

Depending on the complexity, I’ll either append to the existing markdown file or create a new one for the plan. The key is maintaining that written record of decisions and approach.

Using Tests as Validation Checkpoints

Here’s a critical addition to my workflow that’s saved me countless hours: I ask Claude Code to write tests first, then use them as validation checkpoints. Instead of me manually checking if the implementation works, Claude runs the tests to verify correctness.

This TDD approach does consume more tokens due to iterative test-fail-implement-pass cycles and context accumulation. But here’s the thing: I’d write tests anyway, with or without AI. Tests are living documentation that verify the code works as intended. While code remains the source of truth, tests ensure that truth aligns with expectations. The token investment essentially automates what I’d do manually, with the added benefit of reduced debugging cycles and higher confidence in the implementation.

This creates a powerful feedback loop. The tests become the specification, and Claude Code iterates on the implementation until they pass.
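As a sketch of what “tests as the specification” can look like, here is a hypothetical `paginate` function with its checkpoint cases spelled out. The cases are what get written first; Claude then iterates on the implementation until every one of them holds. All names here are illustrative, not from a real codebase.

```go
package main

import "fmt"

// paginate returns the results for a 1-based page of the given size;
// a hypothetical function standing in for the feature under test.
func paginate(results []string, page, size int) []string {
	start := (page - 1) * size
	if start < 0 || start >= len(results) {
		return nil
	}
	end := start + size
	if end > len(results) {
		end = len(results)
	}
	return results[start:end]
}

func main() {
	all := []string{"a", "b", "c", "d", "e"}

	// The checkpoints, written before the implementation: each case is a
	// behaviour the plan agreed on, re-run after every change.
	cases := []struct {
		page, size int
		want       []string
	}{
		{1, 2, []string{"a", "b"}},
		{3, 2, []string{"e"}}, // final short page
		{4, 2, nil},           // past the end
	}
	for _, c := range cases {
		got := paginate(all, c.page, c.size)
		fmt.Printf("page %d: got %v, want %v\n", c.page, got, c.want)
	}
}
```

In a real project these cases would live in a `_test.go` file and run under `go test`, which is what gives Claude an unambiguous pass/fail signal to iterate against.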

Implementation with Precision

Time for another /clear, starting fresh for the implementation phase. This separation between planning and implementation has been crucial for keeping the context manageable.

My editor of choice is Neovim with the claudecode.nvim plugin, which has been particularly helpful for maintaining focus. Instead of saying “implement phase 2”, I can select specific lines from the plan and send them directly with references like @SEARCH_FEATURE.md#L7-17.

This approach has helped me explore code more effectively. When I need to understand a particular section, I can select it and ask targeted questions about its behaviour or edge cases. Being able to provide precise context has made these conversations much more productive.

Most of the time, I prefer to handle the git workflow myself, staging files as chunks of work are completed so I know exactly what’s being added. But I let Claude Code write the commit messages. It’s genuinely excellent at understanding what was done and writing comprehensive commit messages that capture both the what and the why.

This also helps with continuity. When I return to a feature hours later, instead of providing the entire plan again, I can just give Claude Code a couple of recent commits to read and it picks up right where we left off.

Conclusion

My development workflow is evolving, and I’m still figuring it out. The approach I’ve described here is what’s been working for me lately, but it’s constantly changing as I try to keep up with how quickly these tools are improving.

The biggest shift for me has been becoming more pragmatic. I still care deeply about clean APIs, good abstractions, and code that expresses clear intent; sometimes I find myself arguing with Claude about these things. But now that AI generates code for me, I’m learning to pick my battles. The craft still matters, but I’m less precious about being the one who types it all out.

Building context through documentation has become central to how I work, though I’m not committing these context files to git yet. These artifacts are still personal tools: they help me maintain continuity between sessions and occasionally help when I’m explaining my thinking to teammates. But there’s a journey ahead in figuring out how this fits with how everyone else on the team works.

This way of working is still evolving. The tools keep changing, my approach keeps adapting, and I’m still discovering what works and what doesn’t. But for now, this structured context-building has shifted development from wrestling with complexity alone to having a conversation about solving problems, even if sometimes that conversation involves me shouting at Claude about proper error handling.


Written by Alabê Duarte, Software Engineer at SafetyCulture, formerly ThoughtWorks.

Opinions are mine.