
AI Today

AI Today podcast
AI Today Tech Talk
Welcome to AI Today TechTalk – where we geek out about the coolest, craziest, and most mind-blowing stuff happening in the world of Artificial Intelligence! 🚀 ...

Available episodes

5 of 30
  • SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training | #ai #2025 #genai #google
    Paper: https://arxiv.org/pdf/2501.17161 This research paper compares supervised fine-tuning (SFT) and reinforcement learning (RL) for post-training foundation models. Using novel and existing tasks involving arithmetic and spatial reasoning, the study finds that RL promotes better generalization to unseen data, whereas SFT tends to memorize the training data. Further analysis reveals that RL enhances visual recognition capabilities in multimodal models, while SFT aids in stabilizing RL training by improving output formatting. The paper also explores the impact of increased inference-time computation on generalization. (A toy sketch contrasting the two training signals follows this entry.)
    --------  
    16:23
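    Below is a minimal PyTorch sketch (not the paper's actual setup) contrasting the two training signals this episode compares: SFT does cross-entropy against fixed demonstrations, while RL (plain REINFORCE here) reinforces the model's own samples according to a reward. The linear "policy", the parity reward, and all sizes are hypothetical stand-ins.

    import torch
    import torch.nn.functional as F

    vocab, dim = 100, 32
    policy = torch.nn.Linear(dim, vocab)          # stand-in for an LLM output head
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    state = torch.randn(8, dim)                   # toy "prompt" features
    demo = torch.randint(0, vocab, (8,))          # fixed SFT demonstration targets

    # SFT step: maximize likelihood of fixed demonstrations (memorization risk).
    sft_loss = F.cross_entropy(policy(state), demo)
    opt.zero_grad(); sft_loss.backward(); opt.step()

    # RL step: sample the model's own outputs, score them, reinforce the good ones.
    logp = F.log_softmax(policy(state), dim=-1)
    actions = torch.multinomial(logp.detach().exp(), 1).squeeze(-1)
    reward = (actions % 2 == 0).float()           # hypothetical verifiable reward
    rl_loss = -(reward * logp.gather(1, actions[:, None]).squeeze(-1)).mean()
    opt.zero_grad(); rl_loss.backward(); opt.step()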
  • Deepseek Janus-Pro: Unified Multimodal Understanding and Generation | #ai #2025 #genai #deepseek
    Paper: https://github.com/deepseek-ai/Janus/blob/main/janus_pro_tech_report.pdf Github: https://github.com/deepseek-ai/Janus/tree/main?tab=readme-ov-file The paper introduces Janus-Pro, an improved multimodal model building on its predecessor, Janus. Janus-Pro delivers enhanced performance in both multimodal understanding and text-to-image generation thanks to optimized training strategies, expanded datasets (including synthetic aesthetic data), and larger model sizes (1B and 7B parameter versions). The architecture uses decoupled visual encoding for improved efficiency and performance across various benchmarks. Results show significant gains over previous state-of-the-art models, although limitations remain in resolution and fine detail. The code and models are publicly available. (A toy sketch of the decoupled-encoding idea follows this entry.)
    --------  
    16:58
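    Below is a minimal PyTorch sketch of the decoupled-visual-encoding idea described above: one encoder pathway for understanding, a separate discrete codebook pathway for generation, and a shared autoregressive core. The module choices (a conv patch encoder, an embedding-table "codebook") and all sizes are illustrative stand-ins, not Janus-Pro's actual components.

    import torch
    import torch.nn as nn

    class DecoupledMultimodalLM(nn.Module):
        def __init__(self, dim=256, txt_vocab=1000, img_codes=512):
            super().__init__()
            # Understanding pathway: semantic patch encoder (stand-in).
            self.und_encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)
            # Generation pathway: discrete visual codebook (stand-in for a VQ tokenizer).
            self.gen_codebook = nn.Embedding(img_codes, dim)
            self.core = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
            self.txt_head = nn.Linear(dim, txt_vocab)   # answers about an image
            self.img_head = nn.Linear(dim, img_codes)   # next visual code to draw

        def understand(self, image, text_emb):
            patches = self.und_encoder(image).flatten(2).transpose(1, 2)
            h = self.core(torch.cat([patches, text_emb], dim=1))
            return self.txt_head(h[:, -1])

        def generate_image_step(self, text_emb, prev_codes):
            h = self.core(torch.cat([text_emb, self.gen_codebook(prev_codes)], dim=1))
            return self.img_head(h[:, -1])

    model = DecoupledMultimodalLM()
    image, text = torch.randn(2, 3, 64, 64), torch.randn(2, 5, 256)
    print(model.understand(image, text).shape)                               # (2, 1000)
    print(model.generate_image_step(text, torch.zeros(2, 3).long()).shape)   # (2, 512)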
  • Memory Layers at Scale | #ai #2024 #genai #meta
    Paper: https://arxiv.org/pdf/2412.09764 This research paper explores the effectiveness of memory layers in significantly enhancing large language models (LLMs). By incorporating a trainable key-value lookup mechanism, memory layers add parameters without increasing computational cost, improving factual accuracy and overall performance on various tasks. The researchers demonstrate substantial gains, especially on factual tasks, even surpassing models with much larger computational budgets and outperforming mixture-of-experts models. They detail improvements in the memory-layer implementation, achieving scalability with up to 128 billion memory parameters, and discuss various architectural optimizations. The findings strongly advocate for integrating memory layers into future AI architectures. (A toy sketch of the key-value lookup follows this entry.)
    --------  
    14:59
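    Below is a minimal PyTorch sketch of the trainable key-value lookup described above: a query picks its top-k learned keys and returns a softmax-weighted sum of the matching learned values, so only k value rows are touched per token and parameter count grows without a matching growth in compute. Sizes are illustrative, and unlike the paper's scaled version this naive sketch still scores every key.

    import torch
    import torch.nn.functional as F

    class MemoryLayer(torch.nn.Module):
        def __init__(self, dim=64, n_slots=4096, k=8):
            super().__init__()
            self.keys = torch.nn.Parameter(torch.randn(n_slots, dim) * 0.02)
            self.values = torch.nn.Parameter(torch.randn(n_slots, dim) * 0.02)
            self.query = torch.nn.Linear(dim, dim)
            self.k = k

        def forward(self, x):                        # x: (batch, dim)
            q = self.query(x)
            scores = q @ self.keys.T                 # (batch, n_slots)
            top, idx = scores.topk(self.k, dim=-1)   # sparse: only k slots selected
            w = F.softmax(top, dim=-1)               # (batch, k) lookup weights
            return (w.unsqueeze(-1) * self.values[idx]).sum(dim=1)  # (batch, dim)

    mem = MemoryLayer()
    print(mem(torch.randn(4, 64)).shape)             # torch.Size([4, 64])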
  • Large Concept Models: Language Modeling in a Sentence Representation Space | #ai #2024 #genai
    Paper: https://scontent-dfw5-1.xx.fbcdn.net/... This research paper introduces Large Concept Models (LCMs), a novel approach to language modeling that operates on sentence embeddings instead of individual tokens. LCMs aim to mimic human-like abstract reasoning by processing information at a higher semantic level, enabling improved handling of long-form text generation and zero-shot multilingual capabilities. The authors explore several LCM architectures, including MSE regression, diffusion-based generation, and quantized models, evaluating them on summarization, summary expansion, and cross-lingual tasks. The study shows that diffusion-based LCMs outperform the other methods, exhibiting impressive zero-shot generalization across multiple languages. Finally, the authors propose extending the LCM framework with a high-level planning model to further improve coherence in long-form generation. (A toy sketch of next-sentence-embedding prediction follows this entry.)
    --------  
    29:20
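    Below is a minimal PyTorch sketch of the MSE-regression LCM variant mentioned above: an autoregressive model predicts the next sentence embedding rather than the next token. The random embedding table stands in for a real sentence encoder, and all sizes are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    dim = 128
    sentence_encoder = nn.Embedding(1000, dim)   # stand-in: sentence id -> embedding

    lcm = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
    head = nn.Linear(dim, dim)                   # regress the next concept vector

    sent_ids = torch.randint(0, 1000, (2, 6))    # a toy document of six "sentences"
    concepts = sentence_encoder(sent_ids)        # (2, 6, dim) sentence embeddings

    T = concepts.size(1) - 1
    causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
    pred = head(lcm(concepts[:, :-1], mask=causal))  # predict embedding t+1 from <=t
    loss = F.mse_loss(pred, concepts[:, 1:])     # next-sentence-embedding regression
    loss.backward()
    print(float(loss))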
  • DeepSeek v3 | #ai #2024 #genai
    Technical Report: https://arxiv.org/pdf/2412.19437 Github: https://github.com/deepseek-ai/DeepSe... This research paper introduces DeepSeek-V3, a 671-billion-parameter Mixture-of-Experts (MoE) large language model. The paper details DeepSeek-V3's architecture, including its innovative auxiliary-loss-free load-balancing strategy and Multi-Token Prediction objective, and its efficient training framework utilizing FP8 precision. Extensive evaluations demonstrate DeepSeek-V3's superior performance across various benchmarks compared to other open-source and some closed-source models, particularly on code and math tasks. The paper also discusses post-training methods such as supervised fine-tuning and reinforcement learning, along with deployment strategies and hardware design suggestions. Finally, it acknowledges limitations and suggests future research directions. (A toy sketch of the bias-based load-balancing idea follows this entry.)
    --------  
    28:35
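    Below is a minimal PyTorch sketch of the auxiliary-loss-free load-balancing idea mentioned above: instead of adding a balance loss, each expert carries a bias that steers the top-k routing choice (but not the gating weights) and is nudged up when the expert is under-loaded and down when over-loaded. The update rule details and all sizes are illustrative, not DeepSeek-V3's exact recipe.

    import torch

    n_tokens, dim, n_experts, top_k, gamma = 64, 32, 8, 2, 0.01
    router = torch.nn.Linear(dim, n_experts)
    bias = torch.zeros(n_experts)            # routing-only bias, not trained by SGD

    x = torch.randn(n_tokens, dim)
    scores = torch.sigmoid(router(x))        # token-to-expert affinities
    _, chosen = (scores + bias).topk(top_k, dim=-1)  # bias steers the selection...
    gate = torch.gather(scores, 1, chosen)   # ...but gates use the unbiased scores
    gate = gate / gate.sum(dim=-1, keepdim=True)

    # After the step, push under-loaded experts up and over-loaded experts down.
    load = torch.bincount(chosen.flatten(), minlength=n_experts).float()
    target = n_tokens * top_k / n_experts
    bias += gamma * torch.sign(target - load)
    print(load.tolist(), bias.tolist())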

More Technology podcasts

About AI Today

Welcome to AI Today TechTalk – where we geek out about the coolest, craziest, and most mind-blowing stuff happening in the world of Artificial Intelligence! 🚀 This is your AI crash course in snackable, podcast-sized servings. Think of it as your weekly dose of cutting-edge research, jaw-dropping breakthroughs, and “Wait, AI can do THAT?!” moments. We take the techy, brain-bending papers and news, break them down, and serve them up with a side of humor and a whole lot of fun. Whether you’re an AI superfan, a tech wizard, or just someone who loves knowing what’s next in the tech world, this channel has something for you.
Podcast website

Listen to AI Today, System Crash, and many more podcasts from around the world with the radio.net app

Download the free app: radio.net

  • Add stations and podcasts to favorites
  • Streaming via Wi-Fi and Bluetooth
  • CarPlay & Android Auto compatible
  • Many other app features
Apps
Social media
v7.7.0 | © 2007-2025 radio.de GmbH