Your AI initiatives are producing prototypes. Your competitors are shipping production systems.

I design and deliver production-grade AI systems for enterprise teams: from use-case discovery and PoC shaping through to architecture and implementation, with governance built in from day one, not retrofitted when the auditor arrives. That covers the full stack: model selection and data pipeline design, EU AI Act considerations factored in from the start, and the decisions your team will live with after I'm gone. Then I make sure the team can run without me.

I plan my exit from the first conversation. With AI systems this matters more than in conventional architecture: models drift, regulations change, and teams who don't own their AI infrastructure become permanently dependent on the person who built it. Success means the team can maintain, retrain, and extend the system without me.

Faster delivery at Deutsche Telekom. Fewer production bugs at Mercedes-Benz. Shorter build cycle times at Siemens.

The teams that call me in have usually reached the same point: an AI initiative that started well is stalling because the decisions required to take it to production cut across team boundaries, regulatory obligations, and budget cycles at the same time. I cut through that. The result is a production AI system your team owns outright, with the governance your compliance team can sign off on and no ongoing dependency on me. Most engagements deliver that in months, not years.

What I Do

I design production-grade AI systems and embed with enterprise and Mittelstand teams to build them. The work starts earlier than most engagements: use-case discovery, PoC shaping, deciding what is worth building before a line of code is written. From there: selecting the right models, designing the data pipelines, establishing the governance layer your compliance team will sign off on, and making the architectural decisions that hold when volume triples or regulatory questions arrive. Platform architecture and technical leadership for complex non-AI systems are part of the same practice. With 15+ years building at Siemens, Deutsche Telekom, and Mercedes-Benz, I can tell you which decisions will save you months and which ones will cost you years.

The word 'product' in AI Product Architect is intentional. I have designed and shipped products across the full stack: UX research, product design, frontend engineering, cloud architecture, and backend systems. I understand how an AI feature fits into the product your users experience, not just the infrastructure your team maintains. That is what separates AI that ships from AI that stalls in staging.

This is not ML research, data platform engineering, or AI strategy without delivery. It is architecture-led AI product work with a defined method: use-case discovery, PoC shaping, model selection and pipeline design, governance mapping, implementation, production rollout, and team handoff. That sequence is what moves enterprise AI systems out of staging and into production.

Diagnostic Clarity

The first week produces a written picture of where you are, whether you're starting from scratch or untangling decisions already in production. For AI engagements: I audit your data assets and their quality, map existing model dependencies, flag where EU AI Act requirements are likely to apply, and identify where AI creates genuine leverage versus where it's adding cost without return. Root causes are usually different from initial symptoms. In AI work, that gap is wider than anywhere else.

Architectural Decisions

AI architecture decisions are the ones hardest to reverse: which model to use and how to update it, how to structure your data pipeline, where the human-in-the-loop checkpoints go, how to log and audit decisions for regulatory purposes. Enterprise teams stall because these decisions cross team boundaries and it's rarely clear who should make the final call. I bring opinionated judgment based on what works at scale. You get a clear path forward with trade-offs understood. Not a menu of options and an invoice.

Planned Exit

AI systems need owners who understand them, not just developers who can run the code. Before I leave, I pair with the engineers who'll maintain the models, document every decision that will matter at 3am when something breaks, and make sure the team knows how to detect drift, retrain responsibly, and handle a compliance query without calling me. The test is the same as it always is: can the team continue at the same quality without me?

What You Get in Week 1

The first week isn't just observation. It produces tangible output:

  • A written assessment of where you are, what's broken, and what needs to happen next
  • A clear breakdown of your upcoming decisions and what each choice will cost you
  • An honest recommendation on whether I'm the right fit for the engagement (including when I'm not)
  • If we move forward: a roadmap showing what gets built, in what order, and why

Week 1 ends with a clear picture of what's happening and a specific plan for what to do about it.

Every engagement follows the same arc

1. Diagnose

One week. I assess the codebase, the team, and the delivery pipeline. The diagnosis identifies root causes, which often differ from the presenting symptoms.

2. Embed

Weeks to months. I join the team, make architectural decisions, write code, review PRs, integrate AI where it creates real leverage, and establish the patterns that scale. Most time goes to code, decisions, and pairing, not status meetings.

3. Exit

Planned from day one. When the foundation is solid and the team is self-sustaining, I leave. That's the goal.

Let's talk about what's blocking your team.

Tell me what's going on