AI Latest Research & Developments - With Digitalent & Mike Nedelko
Join us monthly as we explore the cutting-edge world of artificial intelligence. Mike distills the most significant trends, groundbreaking research, and pivotal developments in AI, offering you a concise yet comprehensive update on this rapidly evolving field.
Whether you're an industry professional or simply AI-curious, this series is designed to be your essential guide. If you could only choose one source to stay informed about AI, make it Mike Nedelko's monthly briefing. Stay ahead of the curve and gain insights that matter in just one session per month.
Latest Artificial Intelligence R&D Session with Digitalent & Mike Nedelko - Episode (011)
Sora Model and AI Video
OpenAI’s Sora model demonstrates how AI video has become nearly indistinguishable from real footage, reinforcing that AI progress continues to accelerate.
Hallucinations in LLMs
Mike Nedelko discussed an OpenAI paper that reframes hallucinations as the predictable result of training flaws and evaluation incentives rather than mysterious behaviour. LLMs are trained in two phases: unsupervised pre-training (predicting the next word) and post-training (fine-tuning through human feedback and reinforcement learning).
Sources of Hallucinations
Hallucinations arise from two sources: errors on "singleton" facts that appear only once in the training data, and intrinsic limitations where models rely on statistical patterns over tokens rather than reasoning, as illustrated by the "strawberry problem" of miscounting letters in a word.
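The strawberry problem can be made concrete with a few lines of Python. Counting letters is a trivial character-level computation, but an LLM never sees characters, only subword tokens (the segmentation below is illustrative, not a real tokenizer's output):

```python
# Exact character-level computation: trivial for ordinary code.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # 3

# An LLM instead sees a subword segmentation, something like:
tokens = ["str", "aw", "berry"]  # hypothetical split for illustration
# It must infer character counts from statistical associations over
# tokens like these, which is why models famously answer "2".
```

The point is not that the task is hard, but that the model's input representation hides the very units the question asks about.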
Flawed Evaluation Systems
Current evaluation systems reward correct guesses but give no credit for expressing uncertainty, so confident falsehoods outscore honest abstentions. OpenAI proposes new benchmarks that reward calibrated honesty, though implementation remains challenging.
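The incentive problem is easy to see as an expected-value calculation. The sketch below is my own illustration, not OpenAI's actual benchmark design: under accuracy-only scoring, a model that is only 30% sure still does better by guessing, whereas a scheme that penalises wrong answers makes abstaining the rational choice.

```python
# Expected score of guessing vs. abstaining under a given scoring rule.
def expected_score(p_correct, reward_right, penalty_wrong, abstain_score, guess):
    if not guess:
        return abstain_score
    return p_correct * reward_right + (1 - p_correct) * penalty_wrong

p = 0.3  # model is only 30% confident in its answer

# Accuracy-only benchmark: right = 1, wrong = 0, abstaining = 0.
guess_acc   = expected_score(p, 1, 0, 0, guess=True)   # ~0.3
abstain_acc = expected_score(p, 1, 0, 0, guess=False)  # 0 -> guessing always wins

# Calibrated scheme: a wrong answer costs more than saying "I don't know".
guess_cal   = expected_score(p, 1, -1, 0, guess=True)   # ~-0.4
abstain_cal = expected_score(p, 1, -1, 0, guess=False)  # 0 -> honesty wins
```

Training and evaluating against the first rule systematically selects for confident falsehoods; the paper's proposed fix amounts to moving benchmarks toward the second.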
Complex Reasoning and Scale-Free Networks
LLMs struggle with complex reasoning compared to the brain, whose scale-free network of densely interconnected hubs enables adaptability and self-organisation.
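The hub structure of a scale-free network comes from a simple "rich get richer" growth rule. The sketch below (preferential attachment in the style of the Barabási-Albert model, my illustration rather than anything from the session) shows how a few hubs accumulate many links while most nodes stay sparsely connected:

```python
import random

# Preferential attachment: each new node links to an existing node
# chosen with probability proportional to that node's current degree.
def preferential_attachment(n_nodes, seed=0):
    random.seed(seed)
    # Each node appears in `endpoints` once per edge it touches, so a
    # uniform draw from the list is degree-proportional sampling.
    endpoints = [0, 1]            # start with one edge between nodes 0 and 1
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        target = random.choice(endpoints)  # hubs are likelier to be picked
        degree[new] = 1
        degree[target] += 1
        endpoints += [new, target]
    return degree

deg = preferential_attachment(2000)
# A handful of hubs end up with high degree; most nodes keep degree 1.
print(max(deg.values()), min(deg.values()))
```

The resulting heavy-tailed degree distribution is the structural property the session contrasts with the more uniform connectivity of standard transformer layers.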
BDH (Dragon Hatchling) Architecture
The new BDH (Baby Dragon Hatchling) architecture mimics this biological design, matching GPT-2-level performance with greater efficiency. Framed as a step toward Axiomatic AI, it aims for models whose behaviour scales predictably and stably.
Emergent Attention and Interpretability
In BDH, attention emerges naturally from local neuron interactions, producing interpretable, brain-like behaviour with sparse, composable structures that could power future modular AI systems.