February 11, 2026

The renter problem: why cloud LLMs feel inevitable (until they don't)
If you work with AI in any serious capacity, you're probably sending requests to an API. Claude, GPT, Gemini. You paste in…

February 3, 2026

The Per-User Product: How LLMs Are Forcing a New SaaS Architecture
When Every User Can Get a Different Product
I've been thinking about where software architecture is headed in the context of LLMs,…

January 18, 2026

When code stops being the source of truth
A paradigm shift is emerging in software engineering: Requirements, not Code, are becoming the Source of Truth. For decades, engineers have treated code as the…

January 18, 2026

Technical TL;DR (for busy engineers)
Static weights are the bottleneck. Most LLMs can infer in-session, but they don't durably update from experience unless you retrain or fine-tune. Context windows, RAG, and "memory" features help,…

January 17, 2026

TL;DR
Time invested: ~4 weeks of focused preparation
Resources used: Frank Kane's Udemy course, Stephane Maarek's AI Practitioner tests, Tutorials Dojo practice exams, AWS documentation, hands-on Bedrock projects
Difficulty level: Hardest AWS exam…

January 1, 2026

I’ve been playing with a bunch of “AI + web” setups lately, and I keep running into the same vibe: the model is smart, but the search layer feels… constrained. You ask for…

December 30, 2025

1. The enterprise AI bet: what AWS is actually optimizing for
Here’s the uncomfortable truth about AWS in AI: they’re not trying to “win the model leaderboard.” They’re trying to win regulated, enterprise…

December 28, 2025

Getting Started: “I’ll just use BMAD to move faster”
Over the last couple of weeks I’ve been working with the BMAD framework on a personal project, and I wanted to write this up…