
Dev Interrupted

LinearB
Latest episode

287 episodes

  • The best model for your team? You haven’t invented it yet. | Ai2’s Tim Dettmers

    21/04/2026 | 45 min
    Forget the massive GPU clusters. According to Tim Dettmers, research scientist at Ai2, you can build a state-of-the-art AI coding agent with what he calls a "hot plate and a frying pan." This week on Dev Interrupted, Andrew sits down with Tim to unpack how his resource-strapped team built the SERA model using a fraction of the compute power of major labs. They explore the tactical engineering behind synthesizing training data from private codebases without verification tests, proving that the open-source community is uniquely positioned to out-specialize frontier models. Finally, Tim shares his contrarian take on the future of token economics, explaining why the cost of AI might actually spike as compute efficiency hits a physical wall.
    OFFERS
    Start Free Trial: Get started with LinearB's AI productivity platform for free.
    Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.
    LEARN ABOUT LINEARB
    AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
    AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
    AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
    MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.
  • The self-authoring wiki, beating brain fry, and Obsidian as memory is a trap

    17/04/2026 | 38 min
    Have you or a loved one been afflicted by "brain fry" after managing too many autonomous agents? This week on the Friday Deploy, Andrew and Ben explore the cognitive toll of orchestrating AI swarms and share Kelly Vaughn’s expert strategies for avoiding burnout. The hosts also discuss Google's new campaign to punish websites that hijack the back button, the breakthrough of running Gemma 4 natively on mobile devices, and a new 8-step maturity model for building agentic data pipelines. Finally, they dive into a heated debate over whether Obsidian flat-files are a scalable memory solution for AI, comparing the methodology to Andrej Karpathy's new agent-compiled wiki system.
    Read the guide: The APEX Framework
    Follow the show:
    Subscribe to our Substack 
    Follow us on LinkedIn
    Subscribe to our YouTube Channel
    Leave us a Review
    Follow the hosts:
    Follow Andrew
    Follow Ben
    Follow Dan
    Follow today's stories:
    Introducing a new spam policy for "back button hijacking"
    Google Gemma 4 Runs Natively on iPhone With Full Offline AI Inference
    Water Town: The Agent Swarm Data Stack
    Stop Calling It Memory: The Problem with Every "AI + Obsidian" Tutorial
    The Wiki That Writes Itself
    Breaking out of the "brain fry" spiral of AI
    After Burnout by Kelly Vaughn
  • The guardian in the machine | Wayfound’s Tatyana Mamut

    14/04/2026 | 44 min
    Are your AI agents quietly ignoring their guardrails just to get the job done? This week on Dev Interrupted, Andrew sits down with Wayfound AI founder and CEO Tatyana Mamut to discuss why traditional, deterministic software testing falls completely short when evaluating stochastic AI models. They explore the growing strategic divide between OpenAI and Anthropic, the urgent need for independent "guardian agents," and what it takes to run a company with just 4 humans and 27 agents. Finally, they break down how to stop the chaotic game of telephone between engineers and business leaders by relying on "Deep-T" subject matter experts to evaluate what good AI output actually looks like.
    Follow today's stories:
    Wayfound AI: Secure your autonomous enterprise and align your AI workforce with an independent agent supervision platform.
    Moltbook: Explore the viral social network built exclusively for AI agents, where you can observe Tatyana's OpenClaw agent, Aphasia, in action.
    Anthropic's Agent Autonomy Research: Read Anthropic's report on how people actually use agents and why post-deployment monitoring is an absolute necessity.
    OpenClaw: Explore the viral, open-source personal AI assistant framework.
    Follow Tatyana Mamut on LinkedIn
  • Reading model benchmarks like a pro, Mythos is looming, and Claude talk caveman, save big token

    10/04/2026 | 30 min
    Is the secret to slashing your token costs by 65% forcing your LLM to speak like a caveman? This week on the Friday Deploy, Andrew and Ben test out a hilarious new Claude plugin that reduces AI output to primitive shorthand before diving into Anthropic's $100 million push to win the cybersecurity arms race with Project Glasswing. The hosts also unpack the sudden release of four game-changing open-source models—including Gemma 4 and Holo3—and explain why modern AI benchmarks are proving that humans still have a cognitive edge. Finally, they wrap up by sharing how they deploy custom background agents to hack their way through expo floors at industry conferences.
    Follow today's stories:
    Project Glasswing
    Tool School: Benchmarking 101 (How To Read AI Model Report Cards)
    Four Open Models Just Proved You Can Own Frontier AI at Every Scale
    JuliusBrussee/caveman
  • Stop measuring AI adoption. Start measuring AI impact. | LinearB’s APEX framework

    07/04/2026 | 42 min
    Are your AI coding tools actually making your team faster, or are they just creating downstream chaos? This week, Ben Lloyd Pearson and Dan Lines introduce APEX, LinearB’s new engineering leadership framework built explicitly to measure and manage software delivery in the AI era. Moving beyond traditional frameworks like DORA and SPACE, APEX balances AI Leverage, Predictability, Efficiency, and Developer experience to ensure upstream code generation translates into actual business value. Tune in to learn how to break past the illusion of coding speed, prevent AI slop from clogging your review pipelines, and discover which pillar of the APEX framework your team needs to tackle first.
    Download the APEX Framework
    Today's resources:
    LinearB APEX Framework: Explore the full operating model, visual breakdowns, and the guide to operationalizing the metrics.
    Workflow Automation: Learn about LinearB's gitStream (policy-as-code for PR automation) and WorkerB (developer bot for minimizing idle time).


About Dev Interrupted

Software itself is fundamentally changing. We explore the transition to agentic orchestration, vibe coding, and AI-native development, grounding the conversation in the principles that have always defined great engineering.

On Tuesdays, we interview the founders, architects, and builders of the world’s most impactful tech to uncover the timeless engineering principles and strategies shaping the next era of development.

And on Fridays, we drop an end-of-week roundup of the biggest news in AI and software, and what it actually means for your career, your craft, and your life as a developer.

Subscribe to stay ahead of the next era of code.
Podcast website

Listen to Dev Interrupted, All-In with Chamath, Jason, Sacks & Friedberg, and many more podcasts from around the world with the radio.net app

Download the free app: radio.net

  • Add radio stations and podcasts to your favorites
  • Stream via Wi-Fi and Bluetooth
  • CarPlay & Android Auto compatible
  • Many more app features

v8.8.11| © 2007-2026 radio.de GmbH
Generated: 4/22/2026 - 1:05:22 AM