
Following our successful HULA framework workshops, we evolved the concept at Founders & Coders to explore a different challenge: how do development teams coordinate when each developer has their own AI assistant? Rather than teaching individuals to work with AI, this workshop paired developers to collaborate on projects whilst each developer maintained a separate Claude Code instance.
The results exceeded expectations: 90% of participants found the content relevant, 65% felt very well supported, and every team completed a functional Kanban application with persistent storage. More importantly, the structured team coordination revealed a powerful innovation that points toward scalable solutions for larger development teams.
Participants formed pairs (with one group of three), and the day followed a deliberate progression designed to surface coordination challenges.
The task itself was straightforward: build a collaborative Kanban board application. The process, however, explored how teams coordinate when everyone has an AI pair-programming partner.
Our established IQRE methodology (Iterate, Question, Review/Create, Explain) proved valuable for maintaining discipline in AI interactions whilst enabling team coordination. The “Explain” phase became particularly critical through structured pull request templates:
```markdown
## How I Worked with Claude
- Initial prompts/ideas I shared
- How I refined Claude's suggestions
- Questions I asked to fill knowledge gaps
- My own contributions beyond Claude's suggestions

## Code Understanding
Demonstrate your understanding by explaining what each key section does

## What I Learned
- New concepts/techniques discovered
- Challenges faced and solutions found
```
Working with multiple AI instances exposed the critical importance of shared specifications; the content of some of these specs is discussed in more detail in my previous AI in the loop post, linked above. Teams established comprehensive CLAUDE.md, FUNCTIONAL.md, and ARCHITECTURE.md files during conception, creating common standards before splitting into parallel development.
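To make this concrete, here is a hedged sketch of the kind of rules a shared CLAUDE.md might contain; the headings, paths, and conventions below are illustrative assumptions, not any team's actual file:

```markdown
# CLAUDE.md — shared standards (illustrative sketch)

## Before writing any code
- Read FUNCTIONAL.md and ARCHITECTURE.md in full
- Check the ticket board for the current status of dependencies

## Conventions (assumed examples)
- Svelte stores live in src/lib/stores/, one store per file
- All persistence goes through the storage module, never direct localStorage calls
```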
Standardised prompting becomes crucial when everyone has an AI assistant. Small variations in how developers prompt their Claude instances can lead to dramatically different code styles, architectural assumptions, and implementation approaches. Teams that consistently referenced shared specifications—always directing Claude to review standards before coding—maintained architectural coherence across multiple development streams.
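In practice, a standardised opening prompt might look something like the following (an assumed example, not a transcript from the workshop):

```text
Before implementing ticket DATA-2B, read CLAUDE.md, FUNCTIONAL.md and
ARCHITECTURE.md. Confirm which conventions apply to this ticket, then
propose an implementation plan before writing any code.
```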
One participant emphasised:
“Excellent experience of collaborative development with AI supporting. I found the documentation and audit trails of work to be really helpful.”
The workshop highlighted Claude's context window limitations, a challenge that affects both individual and team AI usage. Our initial approach involved HISTORY.md files: summaries each developer maintained to preserve context across sessions when Claude conversations exceeded limits or needed to be reset.
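For illustration, a HISTORY.md entry might have been as simple as the sketch below (an assumed example, not a participant's actual file):

```markdown
# HISTORY.md — session summary (illustrative sketch)

- Completed DATA-2A (Basic Tasks Store); tests passing
- Open question: should derived stores recompute on every update?
- Next session: share this summary with Claude before starting DATA-2B
```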
However, participants questioned this approach. If the context window is precious, why include potentially irrelevant previous interactions? Some teams found shared dependency documents more valuable than individual history files, especially when these are treated as live documents, leading to an unexpected innovation.
Rather than maintaining individual HISTORY.md files as we’d suggested, one three-developer team created a living TICKETS.md document that evolved throughout development:
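The sketch below reconstructs the shape such a document might take, using the ticket names the team described; the statuses, assignments, and the DATA-1 and UI-1 tickets are illustrative assumptions:

```markdown
## TICKETS.md — live dependency board (illustrative reconstruction)

| Ticket  | Description        | Status      | Blocked by |
|---------|--------------------|-------------|------------|
| DATA-1  | Define task schema | done        | –          |
| DATA-2A | Basic Tasks Store  | in progress | DATA-1     |
| DATA-2B | UI State Store     | in progress | DATA-1     |
| DATA-2C | Derived Stores     | locked      | DATA-2A    |
| UI-1    | Board columns      | in progress | DATA-2B    |
```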
This wasn’t a static dependency chart. The team continuously updated ticket statuses, subdivided bottleneck tasks, and “locked” dependencies in real-time. When DATA-2: Create Svelte Stores became a blocking issue, they immediately split it into atomic components (DATA-2A: Basic Tasks Store, DATA-2B: UI State Store, DATA-2C: Derived Stores) to enable parallel progress.
This innovation solved a critical coordination problem: how do you orchestrate multiple AI assistants working on interdependent features? The visual dependency tracking became a shared brain for the team, emerging from the pressure of coordinating three developers rather than the pairs the workshop had been planned around.
The TICKETS.md approach revealed why shared living documentation becomes more valuable than individual context preservation when coordinating multiple AI instances: a single live document gives every developer, and every Claude instance, the same current picture of statuses, dependencies, and blockers. Individual history files, by contrast, may consume precious context window space unnecessarily. The live documentation approach optimised for team coordination rather than individual AI sessions.
Several participants requested deeper workplace integration, citing "AI being used effectively in real-life workplaces" as their primary interest. The TICKETS.md innovation points toward broader applications in professional development environments, all enabled through Model Context Protocol (MCP) connections.
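As a hedged illustration of what such a connection could look like, the JSON below registers two of the published MCP reference servers in a project-level configuration; which servers a real team would wire up is an assumption:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
    }
  }
}
```

With connections like these in place, an assistant could read TICKETS.md or issue state directly rather than relying on pasted context.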
The coordination challenges we observed point toward emerging solutions in AI orchestration. Our upcoming workshop series explores two complementary approaches:
Human-AI pair programming on legacy codebases: Moving beyond greenfield projects to examine how teams integrate AI assistance with existing systems—addressing the professional scenario that “most engineers are going to be working on existing projects”.
Multi-agent system development using LangGraph: Teaching developers to build coordinated AI systems that handle complex, multi-step workflows whilst maintaining human oversight. Think supervisor agents that coordinate specialist AI workers through structured dependency graphs.
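As a minimal, hedged sketch of that supervisor pattern, the Python below builds a LangGraph graph in which a supervisor routes work between two stubbed specialists; a real system would make LLM calls inside each node, and the node names and routing logic here are illustrative, not the workshop's actual material:

```python
# Illustrative supervisor/worker graph using LangGraph (Python).
# Worker nodes are stubs; a real system would call an LLM in each one.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class TeamState(TypedDict):
    task: str
    next_worker: str
    results: list[str]

def supervisor(state: TeamState) -> dict:
    # Decide which specialist acts next; in practice an LLM decision.
    if len(state["results"]) >= 2:
        return {"next_worker": "finish"}
    return {"next_worker": "reviewer" if state["results"] else "coder"}

def coder(state: TeamState) -> dict:
    return {"results": state["results"] + ["code written"]}

def reviewer(state: TeamState) -> dict:
    return {"results": state["results"] + ["code reviewed"]}

graph = StateGraph(TeamState)
graph.add_node("supervisor", supervisor)
graph.add_node("coder", coder)
graph.add_node("reviewer", reviewer)

graph.add_edge(START, "supervisor")
# Specialists always report back to the supervisor.
graph.add_edge("coder", "supervisor")
graph.add_edge("reviewer", "supervisor")
# The supervisor routes to a specialist or ends the run.
graph.add_conditional_edges(
    "supervisor",
    lambda state: state["next_worker"],
    {"coder": "coder", "reviewer": "reviewer", "finish": END},
)

app = graph.compile()
final = app.invoke({"task": "add persistence", "next_worker": "", "results": []})
print(final["results"])  # ['code written', 'code reviewed']
```

The structural idea is that workers always return control to the supervisor, much as the three-developer team funnelled every status change through one shared TICKETS.md.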
Our workshop material includes six progressive patterns, from simple sequential workflows to production-ready multi-agent systems with error handling and human-in-the-loop approval gates. These patterns could automate the coordination work our teams performed manually, encoding team coordination logic into AI systems.
The workshop demonstrated that effective AI team coordination requires fundamentally different approaches from individual AI assistance. Success patterns included shared specifications agreed before parallel work began, standardised prompting against those specifications, and live dependency tracking in place of individual context files.
The feedback was overwhelmingly positive:
“Continually surprised by just how powerful and effective using AI to code can be from each workshop. Can’t wait for the next one.”
The TICKETS.md innovation reveals how effective team AI coordination can emerge organically from well-structured constraints. Rather than complex orchestration frameworks, simple shared documentation patterns enabled effective parallel development.
This approach scales through emerging technologies. Model Context Protocol provides standardised interfaces for AI tools to access project data, whilst LangGraph enables teams to codify coordination logic into automated workflows. Together, they point toward development environments where AI assistants coordinate seamlessly whilst preserving human architectural control.
As one participant noted: “Working in pairs was perfect; larger teams would have been extremely messy”. However, this raises interesting questions about optimal team sizes and whether additional coordination mechanisms might enable larger teams to work effectively.
The future of team development lies not in replacing human coordination with AI, but in developing orchestration frameworks that leverage AI capabilities whilst preserving human architectural control. The teams that master these practices early will have significant advantages as AI assistance becomes ubiquitous in software development.