Why I Had to Get Hands-On with AI Development [A CTA's Wake-Up Call]
I built an app to close the gap between advising on AI and actually using it. Here's what I learned.
The skills required to be effective as an architect have changed. Not gradually. Fundamentally. For the past several years, I’ve been selling complex work: enterprise strategies, multi-cloud architectures, transformation programs. High-level stuff. Important stuff. But recently I realized something uncomfortable: I couldn’t credibly lead teams using AI-assisted development without getting hands-on myself.
Watching demos isn’t enough. Reading documentation isn’t enough. You have to feel how fast things move now, where the friction remains, and what’s genuinely changed versus what’s just marketing. So I built something. A small app, from a real problem, using the AI tools I’d been advising clients about.
One week. That’s how long it took. And that timeline is the evidence: when you actually learn the tools, what used to take months compresses into days.
This is the third piece in a series about AI-assisted development. The first argued that we’re now supervising systems that write code. The second explored how AI changes the out-of-the-box vs. custom decision. This one is the proof that I’ve done what I’ve been writing about.
Why This Problem, Why Now
I needed a real project. Not a tutorial. Not a sandbox experiment. Something that would ship.
The problem I chose was one I’d encountered several times: team-based record sharing beyond standard objects. Salesforce has standard team functionality, but it covers only a handful of objects: Accounts, Opportunities, and Cases. On projects, we’d faced the need to build something similar for custom objects. The solution was already in my head from previous implementations. Perfect candidate for a real AI development test.
The Actual Problem
Standard team functionality in Salesforce exists only for a few objects: Account Teams, Opportunity Teams, and Case Teams. If you need team-based sharing for custom objects, or anything else, you’re on your own.
The workarounds I’ve seen over the years: dozens of public groups mirroring team structures, Apex triggers copying Account team members to Opportunity teams on insert, spreadsheets tracking who should have access to what. None of these are good.
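The trigger workaround usually looks something like this. A minimal sketch, not production code; it assumes Opportunity Teams are enabled in the org and skips duplicate checks and error handling:

```apex
// Hypothetical workaround: copy Account team members onto the
// Opportunity team whenever Opportunities are inserted.
trigger CopyAccountTeamToOpportunity on Opportunity (after insert) {
    // Collect parent Account IDs for the whole batch.
    Set<Id> accountIds = new Set<Id>();
    for (Opportunity opp : Trigger.new) {
        if (opp.AccountId != null) {
            accountIds.add(opp.AccountId);
        }
    }
    if (accountIds.isEmpty()) {
        return;
    }

    // One query for the batch, to stay inside governor limits.
    Map<Id, List<AccountTeamMember>> membersByAccount =
        new Map<Id, List<AccountTeamMember>>();
    for (AccountTeamMember atm : [
        SELECT AccountId, UserId, TeamMemberRole
        FROM AccountTeamMember
        WHERE AccountId IN :accountIds
    ]) {
        if (!membersByAccount.containsKey(atm.AccountId)) {
            membersByAccount.put(atm.AccountId, new List<AccountTeamMember>());
        }
        membersByAccount.get(atm.AccountId).add(atm);
    }

    // Mirror each Account team member onto the new Opportunity's team.
    List<OpportunityTeamMember> toInsert = new List<OpportunityTeamMember>();
    for (Opportunity opp : Trigger.new) {
        if (!membersByAccount.containsKey(opp.AccountId)) {
            continue;
        }
        for (AccountTeamMember atm : membersByAccount.get(opp.AccountId)) {
            toInsert.add(new OpportunityTeamMember(
                OpportunityId = opp.Id,
                UserId = atm.UserId,
                TeamMemberRole = atm.TeamMemberRole
            ));
        }
    }
    insert toInsert;
}
```

It works, but it only covers insert: team changes on the Account after that point don’t propagate, which is exactly why these workarounds decay.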
The Learning I Needed
I’d been testing AI development tools on smaller challenges. Permission management scripts. Security audit automation. Legacy refactoring. Enough to know the tools had crossed a threshold, but not enough to feel confident leading teams through AI-assisted builds.
That’s the gap I wanted to close. Not just “can AI help me code faster?” but “how does this change how I run a project? How do I estimate work? What can I promise clients? What skills does my team need?”
You can’t answer those questions from documentation. You have to do the work.
I connected with Michał Bajdek around the same time. We’d been working independently on similar problems, seeing the same patterns from different angles. We decided to build together through Tucario.
The team sharing problem was the right test case. Known solution. Clear enough that I could focus on learning the AI workflow rather than figuring out what to build.
Here’s the thing: Flexible Teams Share, the app I’m describing here, turned out to be just one small piece of what came from this learning. Through this process, I realized something more important needed to be built. More on that in upcoming posts.
What I Actually Learned
Here’s what this hands-on experience taught me.
The workflow is conversational but structured. I describe intent, AI generates implementation, I review and correct. The architecture came from years of solving this problem. AI handled the translation to code. This sounds simple. Living it is different. You develop an intuition for what instructions produce clean output versus what leads to rework.
The tedious parts compress dramatically. LWC components, Apex controllers, test classes. Structure that takes time to type but not time to think through. “Add a field to track last modification date. Update the component to display it. Add a test for the Apex method.” Done in minutes. Edge cases too: “What happens if a team member is deactivated? Write tests for that scenario.”
Your brain is still the bottleneck for solution design. How to structure the data model. What the user experience should be. Which edge cases matter. AI doesn’t know (at least not yet) which patterns actually work in real implementations. That knowledge comes from having been burned before.
My own focus and context-switching changed. I built this alongside other work, including personal websites. Normally, jumping between different codebases kills productivity. With AI assistance, I could pick up where I left off without the mental overhead of reloading everything into my head. The AI remembered the project state. I just had to remember what I wanted to build next.
The estimation models I used before are wrong. Not “need adjustment” wrong. Fundamentally wrong. I couldn’t have given a client accurate expectations for AI-assisted development without doing this. Now I can.
The Product
What we built: Flexible Teams Share. The pattern I kept implementing, now productized.
Teams as first-class objects. Create teams, assign members, define access levels. Business users manage this themselves without admin intervention. Sharing propagates automatically: associate a team with a record, team members get access. Works across object types.
Admins stay in control through permission sets and audit logs. Flexibility is governed, not chaotic. Built with governor limits in mind from day one because I’ve been burned by that before.
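The propagation pattern looks roughly like this. A hedged sketch, not the app’s actual code: the object and field names (Team_Assignment__c, Team_Member__c, Project__c) are illustrative, and a Project__Share object only exists when the custom object’s org-wide default is more restrictive than Public Read/Write:

```apex
// Hypothetical sketch: when a team is associated with a record,
// grant each team member access through the object's Share table.
public with sharing class TeamSharingService {

    public static void shareRecords(List<Team_Assignment__c> assignments) {
        // Bulk-load members for every team in the batch with one query.
        Set<Id> teamIds = new Set<Id>();
        for (Team_Assignment__c ta : assignments) {
            teamIds.add(ta.Team__c);
        }
        List<Team_Member__c> members = [
            SELECT Team__c, User__c, Access_Level__c
            FROM Team_Member__c
            WHERE Team__c IN :teamIds
        ];

        // Build one share row per (record, member) pair.
        List<Project__Share> shares = new List<Project__Share>();
        for (Team_Assignment__c ta : assignments) {
            for (Team_Member__c m : members) {
                if (m.Team__c == ta.Team__c) {
                    shares.add(new Project__Share(
                        ParentId      = ta.Project__c,
                        UserOrGroupId = m.User__c,
                        AccessLevel   = m.Access_Level__c, // 'Read' or 'Edit'
                        RowCause      = Schema.Project__Share.RowCause.Manual
                    ));
                }
            }
        }

        // Partial-success insert so one bad row doesn't roll back the batch.
        Database.insert(shares, false);
    }
}
```

The real implementation also has to handle the reverse paths (member removed, team unlinked, member deactivated) and keep the DML bulkified, which is where the governor-limit discipline mentioned above comes in.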
The app is going through AppExchange security review. That’s a real product, from a real problem, built in a real timeline. The proof that I’ve done what I’m writing about.
What This Means for Technical Roles
Here’s the uncomfortable truth I had to face.
For years, I could stay credible by understanding platforms deeply, seeing patterns across implementations, and making sound architectural decisions. Hands-on coding was optional. Plenty of CTAs, enterprise architects, and technical leads operate that way.
That’s no longer enough.
AI-assisted work doesn’t just speed up coding. It changes what’s feasible and what clients can reasonably expect. Problems that would have required a team and months are now addressable by someone with clear intent and a few days. If you’re advising on build vs. buy, estimating timelines, leading delivery teams, or even just trying to stay relevant in technical conversations, you need to understand this viscerally. Not theoretically.
This isn’t only about coding either. AI is changing how we write documentation, how we analyze requirements, how we review solutions, how we communicate with clients. The whole workflow is shifting. Coding is maybe 70-80% of where the impact hits, but it’s not the only thing.
Each of these topics deserves its own deep dive. This is a broad area, and I’ll be writing more about specific aspects in upcoming posts.
The expertise still matters. AI didn’t know Salesforce sharing model nuances or which patterns actually work in real implementations. That knowledge came from years of experience. But the gap between knowing how to solve something and having it built has collapsed. That changes the economics of every decision you make.
If you’re an architect, developer, or technical lead who hasn’t gotten hands-on with AI tools: do it. Not because it’s interesting (though it is). Because your credibility depends on understanding what’s actually possible now.
What I’d Tell Others in Technical Roles
Get hands-on. Not eventually. Now. The skills landscape has shifted and you need to feel it, not just hear about it. Pick a real problem and build something.
Your domain expertise still matters. AI compressed my execution time, but the solution came from years of solving similar problems. Deep platform knowledge isn’t obsolete. It’s what lets you use AI effectively instead of generating plausible-looking garbage.
But your old estimation models are broken. I couldn’t have given accurate expectations for AI-assisted work without doing this. The gap between “sounds reasonable” and “I’ve done this” matters more than ever when timelines have changed this dramatically. And credibility requires currency. You can’t advise on what you haven’t done. The tools are moving fast. Watching from the sidelines means your advice gets stale.
This is Tucario’s first app. More patterns are coming. But the bigger point isn’t the product. It’s that staying relevant in technical roles now requires staying hands-on in ways it didn’t before.