AI Latest Research & Developments - With Digitalent & Mike Nedelko
Join us monthly as we explore the cutting-edge world of artificial intelligence. Mike distills the most significant trends, groundbreaking research, and pivotal developments in AI, offering you a concise yet comprehensive update on this rapidly evolving field.
Whether you're an industry professional or simply AI-curious, this series is designed to be your essential guide. If you could only choose one source to stay informed about AI, make it Mike Nedelko's monthly briefing. Stay ahead of the curve and gain insights that matter in just one session per month.
AI Latest Research & Developments - With Digitalent & Mike Nedelko
Latest Artificial Intelligence R&D Session - with Digitalent & Mike Nedelko - Episode 014 - April 2026
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
Key topics discussed:
LLM Functional Emotions
LLMs use 171 measurable “emotion vectors” that directly influence outputs and behaviour. These are mathematical controls, not real feelings, but they shape decisions in real time.
Emotion Manipulation & Risk
Increasing “desperation” led to 14x more reward hacking, proving behaviour can be steered. This creates both a powerful control lever and a serious safety risk.
AI Safety via Emotional Control
Tuning emotional states (e.g. calm vs desperation) can stabilise or destabilise models. Safer systems likely operate in low-arousal, tightly constrained states.
4. Shift to Agentic AI
AI is moving from static models to agents that act, learn, and adapt in real environments. Effectiveness now depends on reasoning, memory, and real-world interaction.
Metaclaw & Self-Evolving Agents
Agents can learn from failures, create new skills, and improve without human intervention. This shifts learning from prompts into permanent model behaviour.
Continuous Learning Systems
Agents store failures, retrain during idle time, and turn fixes into long-term “instincts.” This enables ongoing improvement without downtime or redeployment.
Death of the Singularity Narrative
Future AI won’t be one superintelligence but a network of interacting agents.
Intelligence will emerge from systems, not a single model.
“Society of Thought” Reasoning
Models naturally improve by debating themselves—generating and critiquing ideas internally. Strong reasoning comes from this adversarial, multi-perspective process.
9. Institutional AI Safety
Safety will come from systems with competing goals keeping each other in check. Like human institutions, not single aligned models, at scale.
10. Human Role Shift
Humans move from doing tasks to orchestrating AI systems and setting rules. Key skills shift toward strategy, systems thinking, and decision-making.