Tokenmaxxing
I burned through 250M tokens in a day. Tokenmaxxing is the deliberate practice of maximizing AI token consumption through parallelization.
What happens when AI stops getting cheaper & starts getting more expensive? The implications for software companies are profound.
Intercom & Chroma shipped custom AI models today. Would you choose one software over another because it has a proprietary model with better performance?
The SaaS era rewarded unbundling & specialization. AI companies are rebundling into platforms because rapid model changes create cognitive burden for buyers assembling best-of-breed stacks.
When a $50 billion American company builds its flagship model on Chinese open-source weights, that's a sign of the criticality of open source to innovation.
Agent pricing reveals whether you're solving scarcity or creating disruption - and the market rewards these very differently.
Stripe launched Tempo today. I raced Claude against a local model on my laptop - and the simpler model won.
For every dollar hyperscalers earn from AI today, they're spending twelve dollars to build more capacity. That's the 12x bet embedded in $575 billion of capex this year.
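The 12x ratio implies a back-of-envelope AI revenue figure. A minimal sketch, using the $575 billion capex total and the 12:1 spend-to-revenue ratio from the post (both figures illustrative):

```python
# Back-of-envelope: if hyperscalers spend $12 of capex for every $1
# of AI revenue, $575B of capex implies roughly $48B of revenue.
capex_billions = 575
spend_per_revenue_dollar = 12  # the "12x bet"

implied_ai_revenue = capex_billions / spend_per_revenue_dollar
print(round(implied_ai_revenue, 1))  # ~47.9
```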
Amazon lost $6.3M in orders from AI coding failures. Utah law eliminates the hallucination defense. Companies bear liability for their agents' mistakes.
The AI infrastructure shortage isn't chips anymore — it's power, data centers & memory. What happens when demand exceeds supply through 2028?