The renter problem: why cloud LLMs feel inevitable (until they don't)

If you work with AI in any serious capacity, you're probably sending requests to an API. Claude, GPT, Gemini. You paste in
The Per-User Product: How LLMs Are Forcing a New SaaS Architecture

When Every User Can Get a Different Product

I've been thinking about where software architecture is headed in the context of LLMs,
When code stops being the source of truth

A paradigm shift is emerging in software engineering: Requirements, not Code, are becoming the Source of Truth. For decades, engineers have treated code as the
Technical TL;DR (for busy engineers)

Static weights are the bottleneck. Most LLMs can infer in-session, but they don't durably update from experience unless you retrain or fine-tune. Context windows, RAG, and "memory" features help,
TL;DR

Time invested: ~4 weeks of focused preparation
Resources used: Frank Kane's Udemy course, Stephane Maarek's AI Practitioner tests, Tutorials Dojo practice exams, AWS documentation, hands-on Bedrock projects
Difficulty level: Hardest AWS exam
I’ve been playing with a bunch of “AI + web” setups lately, and I keep running into the same vibe: the model is smart, but the search layer feels… constrained. You ask for
1. The enterprise AI bet: what AWS is actually optimizing for

Here’s the uncomfortable truth about AWS in AI: they’re not trying to “win the model leaderboard.” They’re trying to win regulated, enterprise
Getting Started: “I’ll just use BMAD to move faster”

Over the last couple of weeks I’ve been working with the BMAD framework on a personal project, and I wanted to write this up