
AIandBlockchain


Available episodes

5 of 206
  • Arxiv. Small Batches, Big Shift in LLM Training
    What if everything you thought you knew about training large language models turned out to be… not quite right? 🤯 In this episode, we dive deep into a topic that could completely change the way we think about LLM training. We're talking about batch size — yes, it sounds dry and technical, but new research shows that tiny batches, even as small as one, don't just work — they can actually bring major advantages.
    🔍 In this episode you'll learn:
    • Why the dogma of "huge batches for stability" came about in the first place.
    • How LLM training is fundamentally different from classical optimization — and why "smaller" can actually beat "bigger."
    • The setting researchers had overlooked for years: scaling Adam's β2 with a constant "token half-life."
    • Why plain old SGD is suddenly back in the game — and how it can make large-scale training more accessible.
    • Why gradient accumulation may actually hurt memory efficiency instead of helping, and what to do instead.
    💡 Why it matters for you: If you're working with LLMs — whether it's research, fine-tuning, or making the most of limited GPUs — this episode can save you weeks of trial and error, countless headaches, and lots of resources. Small batches are not a compromise; they're a path to robustness, efficiency, and democratized access to cutting-edge AI.
    ❓ Question for you: which other "sacred cows" of machine learning deserve a second look? Share your thoughts — your insight might spark the next breakthrough.
    👉 Subscribe now so you don't miss future episodes. Next time, we'll explore how different optimization strategies impact scaling and inference speed.
    Key Takeaways:
    • Small batches (even size 1) can be stable and efficient.
    • The key is scaling Adam's β2 correctly using the token half-life.
    • SGD and Adafactor with small batches unlock new memory and efficiency gains.
    • Gradient accumulation often backfires in this setup.
    • This shift makes LLM training more accessible beyond supercomputers.
    SEO Tags:
    Niche: #LLMtraining, #batchsize, #AdamOptimization, #SGD
    Popular: #ArtificialIntelligence, #MachineLearning, #NeuralNetworks, #GPT, #DeepLearning
    Long-tail: #SmallBatchLLMTraining, #EfficientLanguageModelTraining, #OptimizerScaling
    Trending: #AIresearch, #GenerativeAI, #openAI
    Read more: https://arxiv.org/abs/2507.07101
    --------  
    16:52
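The β2 rule mentioned in these show notes can be made concrete. Below is a minimal Python sketch of one reading of that rule: pick Adam's β2 so that the second-moment average forgets half of its weight after a fixed number of tokens, whatever the batch size. The reference numbers (batch 512, sequence length 2048, β2 = 0.95) are illustrative assumptions, not values taken from the paper.

```python
import math

def beta2_for_token_half_life(batch_size: int, seq_len: int,
                              token_half_life: float) -> float:
    """Pick Adam's beta2 so the second-moment EMA forgets half of its
    weight after `token_half_life` tokens, regardless of batch size.

    Per step, old statistics are multiplied by beta2, so after t steps
    their weight is beta2**t. Setting beta2**(H / tokens_per_step) = 0.5
    and solving for beta2 gives the formula below.
    """
    tokens_per_step = batch_size * seq_len
    return 0.5 ** (tokens_per_step / token_half_life)

# Illustrative reference config: batch 512 x 2048 tokens with the usual
# beta2 = 0.95 implies a token half-life of roughly 14M tokens.
ref_half_life = math.log(0.5) / math.log(0.95) * 512 * 2048
for bs in (512, 32, 1):
    print(bs, round(beta2_for_token_half_life(bs, 2048, ref_half_life), 6))
```

Under these illustrative numbers, batch size 1 needs β2 of roughly 0.9999 to keep the same effective averaging window that batch 512 gets at β2 = 0.95.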
  • DeepSeek. Secrets of Smart LLMs: How Small Models Beat Giants
    Imagine this: a 27B language model outperforming giants with 340B and even 671B parameters. Sounds impossible? But that's exactly what happened thanks to breakthrough research in generative reward modeling. In this episode, we unpack one of the most exciting advances in recent years — Self-Principled Critique Tuning (SPCT) and the new DeepSeek GRM architecture that's changing how we think about training and using LLMs.
    We start with the core challenge: how do you get models not just to output text, but to truly understand what's useful for humans? Why is generating honest, high-quality reward signals the bottleneck for all of Reinforcement Learning? You'll learn why traditional approaches — scalar and pairwise reward models — fail in the messy real world, and what makes SPCT different.
    Here's the twist: DeepSeek GRM doesn't rely on fixed rules. It generates evaluation principles on the fly, writes detailed critiques, and… learns to be flexible. But the real magic comes next: instead of just making the model bigger, researchers introduced inference-time scaling. The model generates multiple sets of critiques, votes for the best, and then a "Meta RM" filters out the noise, keeping only the most reliable judgments.
    The result? A system that's not only more accurate and fair but can outperform much larger models. And the best part — it does so efficiently. This isn't just about numbers on a benchmark chart. It's a glimpse of a future where powerful AI isn't locked away in corporate data centers but becomes accessible to researchers, startups, and maybe even all of us.
    In this episode, we answer:
    • How does SPCT work, and why are "principles" the key to smart self-critique?
    • What is inference-time scaling, and how does it turn medium-sized models into champions?
    • Can a smaller but "smarter" AI really rival the giants with hundreds of billions of parameters?
    • Most importantly: what does this mean for the future of AI, democratization of technology, and ethical model use?
    We leave you with this thought: if AI can not only think but also judge itself using principles, maybe we're standing at the edge of a new era of self-learning and fairer systems.
    👉 Follow the show so you don't miss new episodes, and share your thoughts in the comments: do you believe "smart scaling" will beat the race for sheer size?
    Key Takeaways:
    • SPCT teaches models to generate their own evaluation principles and adaptive critiques.
    • Inference-time scaling makes smaller models competitive with massive ones.
    • Meta RM filters weak judgments, boosting the quality of final reward signals.
    SEO Tags:
    Niche: #ReinforcementLearning, #RewardModeling, #LLMResearch, #DeepSeekGRM
    Popular: #AI, #MachineLearning, #ArtificialIntelligence, #ChatGPT, #NeuralNetworks
    Long-tail: #inference_time_scaling, #self_principled_critique_tuning, #generative_reward_models
    Trending: #AIethics, #AIfuture, #DemocratizingAI
    Read more: https://arxiv.org/pdf/2504.02495
    --------  
    18:33
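For a feel of the inference-time scaling loop described in these notes, here is a rough Python sketch. The two model calls (grm_judge, meta_rm_score) are hypothetical stand-ins for the generative reward model and the Meta RM, and the aggregation shown (averaging the most reliable judgments) is one plausible reading of the voting-plus-filtering idea, not the paper's exact procedure.

```python
import random
from statistics import mean

# Hypothetical stand-ins for the two models in the episode: a generative
# reward model that writes a critique ending in a numeric score, and a
# "Meta RM" that rates how trustworthy a critique is. Both are mocked
# here purely for illustration.
def grm_judge(prompt: str, response: str) -> tuple[str, float]:
    critique = f"Critique of response to: {prompt!r}"
    return critique, random.uniform(0, 10)      # (critique text, score)

def meta_rm_score(critique: str) -> float:
    return random.random()                      # reliability in [0, 1]

def inference_time_scaled_reward(prompt: str, response: str,
                                 k: int = 8, keep: int = 4) -> float:
    """Sample k independent judgments, keep the `keep` most reliable
    ones according to the meta reward model, and average their scores."""
    judgments = [grm_judge(prompt, response) for _ in range(k)]
    ranked = sorted(judgments, key=lambda j: meta_rm_score(j[0]), reverse=True)
    return mean(score for _, score in ranked[:keep])

print(inference_time_scaled_reward("Is 17 prime?", "Yes, 17 is prime."))
```

The point of the sketch: quality comes from sampling more judgments at inference time and filtering them, rather than from making the reward model itself larger.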
  • Arxiv. The Grain of Truth: How Reflective Oracles Change the Game
    What if there were a way to cut through the endless loop of mutual reasoning — "I think that he thinks that I think"? In this episode, we explore one of the most elegant and surprising breakthroughs in game theory and AI. Our guide is a recent paper by Cole Wyeth, Marcus Hutter, Jan Leike, and Jessica Taylor, which shows how to use reflective oracles to finally crack a decades-old puzzle — the grain of truth problem.
    🔍 In this deep dive, you'll discover:
    • Why classical approaches to rationality in infinite games kept hitting dead ends.
    • How reflective oracles let an agent predict its own behavior without logical paradoxes.
    • What the Zeta strategy is, and why it guarantees a "grain of truth" even in unknown games.
    • How rational players, equipped with this framework, naturally converge to Nash equilibria — even if the game is infinite and its rules aren't known in advance.
    • Why this opens the door to AI that can learn, adapt, and coordinate in truly novel environments.
    💡 Why it matters for you: This episode isn't just about math and abstractions. It's about a fundamental shift in how we understand rationality and learning. If you're curious about AI, strategic thinking, or how humans manage to cooperate in complex systems, you'll gain a new perspective on why Nash equilibria appear not as artificial assumptions, but as natural results of rational behavior.
    We also touch on human cognition: could our social norms and cultural "unwritten rules" function like implicit oracles, helping us avoid infinite regress and coordinate effectively?
    🎧 At the end, we leave you with a provocative question: could your own mind be running on implicit "oracles," allowing you to act rationally even when information is overwhelming or contradictory?
    👉 If this topic excites you, hit subscribe so you don't miss upcoming deep dives. And in the comments, share: where in your own life have you felt stuck in that "infinite regress" of overthinking?
    Key Takeaways:
    • Reflective oracles resolve the paradox of infinite reasoning.
    • The Zeta strategy ensures a grain of truth across all strategies.
    • Players converge to ε-Nash equilibria even in unknown games.
    • The framework applies to building self-learning AI agents.
    • Possible parallels with human cognition and culture.
    SEO Tags:
    Niche: #GameTheory, #ArtificialIntelligence, #GrainOfTruth, #ReflectiveOracles
    Popular: #AI, #MachineLearning, #NeuralNetworks, #NashEquilibrium, #DecisionMaking
    Long-tail: #GrainOfTruthProblem, #ReflectiveOracleAI, #BayesianPlayers, #UnknownGamesAI
    Trending: #AGI, #AIethics, #SelfPredictiveAI
    Read more: https://arxiv.org/pdf/2508.16245
    --------  
    18:46
  • Arxiv. Seed 1.5 Thinking: The AI That Learns to Reason
    What if artificial intelligence stopped just guessing answers — and started to actually think? 🚀 In this episode, we dive into one of the most talked-about breakthroughs in AI — Seed 1.5 Thinking from ByteDance. This model, as its creators claim, makes a real leap toward genuine reasoning — the ability to deliberate, verify its own logic, and plan before responding.
    Here's what we cover:
    • How the "think before respond" principle works — and why it changes everything.
    • Why the "mixture of experts" architecture makes the model both powerful and efficient (activating just 20B of 200B parameters).
    • Record-breaking performance on the toughest benchmarks — from math olympiads to competitive coding.
    • The new training methods: chain-of-thought data, reasoning verifiers, RL algorithms like VAPO and DPO, and an infrastructure that speeds up training by 3×.
    • And most surprisingly — how rigorous math training helps Seed 1.5 Thinking write more creative texts and generate nuanced dialogues.
    Why does this matter for you? This episode isn't just about AI solving equations. It's about how AI is learning to reason, to check its own steps, and even to create. That changes how we think of AI — from a simple tool into a true partner for tackling complex problems and generating fresh ideas.
    Now imagine: an AI that can spot flaws in its own reasoning, propose alternative solutions, and still write a compelling story. What does that mean for science, engineering, business, and creativity? Where do we now draw the line between human and machine intelligence?
    👉 Tune in, share your thoughts in the comments, and don't forget to subscribe — in the next episode we'll explore how new models are beginning to collaborate with humans in real time.
    Key Takeaways:
    • Seed 1.5 Thinking uses internal reasoning to improve responses.
    • On math and coding benchmarks, it scores at the level of top students and programmers.
    • A new training approach with chain-of-thought data and verifiers teaches the model "how to think."
    • Its creative tasks show that structured planning makes for more convincing writing.
    • The big shift: AI as a partner in reasoning, not just an answer generator.
    SEO Tags:
    Niche: #ArtificialIntelligence, #ReasoningAI, #Seed15Thinking, #ByteDanceAI
    Popular: #AI, #MachineLearning, #FutureOfAI, #NeuralNetworks, #GPT
    Long-tail: #AIforMath, #AIforCoding, #HowAIThinks, #AIinCreativity
    Trending: #AIReasoning, #NextGenAI, #AIvsHuman
    Read more: https://arxiv.org/abs/2504.13914
    --------  
    18:03
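The "20B of 200B parameters" point in these notes comes from mixture-of-experts routing: a small router picks a few experts per token, so only a fraction of the weights run on any forward pass. The toy NumPy sketch below illustrates that mechanism with made-up sizes; it is not Seed 1.5 Thinking's actual architecture or configuration.

```python
import numpy as np

# Toy mixture-of-experts layer: only the top-k experts chosen by the
# router run for each token, so most parameters stay inactive on any
# given forward pass. Sizes are tiny illustrations only.
rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, D = 10, 1, 16   # 1 of 10 experts active, mirroring the "20B of 200B" ratio

router_w = rng.normal(size=(D, N_EXPERTS))
experts = [rng.normal(size=(D, D)) for _ in range(N_EXPERTS)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts."""
    logits = x @ router_w
    probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()
    top = np.argsort(probs)[-TOP_K:]            # indices of selected experts
    weights = probs[top] / probs[top].sum()     # renormalized router weights
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=D)
print(moe_forward(token).shape)                 # (16,)
```

The design trade-off the episode highlights: total capacity grows with the number of experts, while per-token compute grows only with the few experts the router activates.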
  • Why Even the Best AIs Still Fail at Math
    What do you do when AI stops making mistakes? Today's episode takes you to the cutting edge of artificial intelligence — where success itself has become a problem. Imagine a model that solves almost every math competition problem. It doesn't stumble. It doesn't fail. It just wins. Again and again.
    But if AI is now the perfect student... what's left for the teacher to teach? That's the crisis researchers are facing: most existing math benchmarks no longer pose a real challenge to today's top LLMs — models like GPT-5, Grok, and Gemini Pro. The solution? Math Arena Apex — a brand-new, ultra-difficult benchmark designed to finally test the limits of AI in mathematical reasoning.
    In this episode, you'll learn:
    • Why being "too good" is actually a research problem.
    • How Apex was built: 12 of the hardest problems, curated from hundreds of elite competitions.
    • Two radically different ways to define what it means for an AI to "solve" a math problem.
    • What repeated failure patterns reveal about the weaknesses of even the most advanced models.
    • How LLMs like GPT-5 and Grok often give confident but wrong answers — complete with convincing pseudo-proofs.
    • Why visualization, doubt, and stepping back — key traits of human intuition — remain out of reach for current AI.
    This episode is packed with real examples, like:
    • The problem that every model failed — but any human could solve in seconds with a quick sketch.
    • The trap that fooled all LLMs into giving the exact same wrong answer.
    • How a small nudge like "this problem isn't as easy as it looks" sometimes unlocks better answers from models.
    🔍 We're not just asking what these models can't do — we're asking why. You'll get a front-row seat to the current frontier of AI limitations, where language models fall short not due to lack of power, but due to the absence of something deeper: real mathematical intuition.
    🎓 If you're into AI, math, competitions, or the future of technology — this episode is full of insights you won't want to miss.
    👇 A question for you: do you think AI will ever develop that uniquely human intuition — the ability to feel when an answer is too simple, or spot a trap in the obvious approach? Or will we always need to design new traps to expose its limits?
    🎧 Stick around to the end — we're not just exploring failure, but also asking: what comes after Apex?
    Key Takeaways:
    • Even frontier AIs have hit a ceiling on traditional math tasks, prompting the need for a new level of difficulty.
    • Apex reveals fundamental weaknesses in current LLMs: lack of visual reasoning, inability to self-correct, and misplaced confidence.
    • Model mistakes are often systematic — a red flag pointing toward deeper limitations in architecture and training methods.
    SEO Tags:
    Niche: #AIinMath, #MathArenaApex, #LLMlimitations, #mathreasoning
    Popular: #ArtificialIntelligence, #GPT5, #MachineLearning, #TechTrends, #FutureOfAI
    Long-tail: #AIerrorsinmathematics, #LimitsofLLMs, #mathintuitioninAI
    Trending: #AI2025, #GPTvsMath, #ApexBenchmark
    Read more: https://matharena.ai/apex/
    --------  
    19:03
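The episode mentions two different definitions of "solving" a problem without spelling them out. The Python sketch below shows two generic criteria of that kind, any-sample-correct versus consensus-correct, purely as an illustration; these are not the actual Math Arena Apex rules.

```python
from collections import Counter

# Two generic ways to call a problem "solved" from repeated samples of a
# model's final answer. Illustrative criteria only, not Apex's definitions.
def solved_any(attempts: list[str], truth: str) -> bool:
    """Lenient: at least one sampled answer is correct (pass@n style)."""
    return truth in attempts

def solved_consensus(attempts: list[str], truth: str) -> bool:
    """Strict: the most common answer must be correct, so a
    confident-but-wrong majority counts as a failure."""
    answer, _ = Counter(attempts).most_common(1)[0]
    return answer == truth

attempts = ["42", "41", "42", "42"]      # confidently repeating a wrong answer
print(solved_any(attempts, "41"))        # True:  one sample got it
print(solved_consensus(attempts, "41"))  # False: the consensus is wrong
```

The gap between the two criteria is exactly the failure mode the episode describes: models that occasionally land on the right answer while confidently converging on the wrong one.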


About AIandBlockchain

Cryptocurrencies, blockchain, and artificial intelligence (AI) are powerful tools that are changing the game. Learn how they are transforming the world today and what opportunities lie hidden in the future.
