Explore the concept of Vibe Coding as a learning machine and its potential application to scientific discovery, examining the CodeScientist preprint and discussing whether AI can facilitate "Vibe Science."
Discover Mistral AI's new Magistral models through live causal reasoning tests, comparing performance between open-source and enterprise versions with real-time coding challenges.
Discover why vision-language models struggle with continual learning and explore Google DeepMind's innovative aligned model merging solution to overcome catastrophic forgetting.
Explore AI reasoning capabilities through detailed performance testing of OpenAI o3, DeepSeek R1, and Claude Opus 4 to determine whether AI can truly think or merely mimics reasoning patterns.
Explore LiNeS, a novel post-training layer scaling technique that prevents catastrophic forgetting in large language models while enhancing multi-task performance and model merging capabilities.
Explore how Vision-Language Models create task vectors: internal representations that enable cross-modal performance across text and images, revolutionizing AI's ability to understand and execute diverse tasks.
Delve into the technical comparison between LoRA and full fine-tuning methods for language models, exploring their structural differences, spectral properties, and impact on model performance.
Explore the boundaries of AI self-learning through TTRL methodology, examining the limits of self-rewarding and self-referencing reinforcement learning in language models.
Discover how off-policy reinforcement learning compares to SFT for AI reasoning, exploring the LUFFY approach that integrates on-policy and off-policy zero RL for effective knowledge transfer without traditional RL methods.
Dive into the debate between Supervised Fine-Tuning and Reinforcement Learning for AI reasoning, exploring research findings on which approach yields better results for vision-language models.
Explore the reasoning capabilities of DeepSeek R1 0528 models through a comparative logic puzzle test between the 8B distilled version and the full 671B model, revealing performance differences.
Explore ALITA, a self-evolving AI agent that uses RAG-MCP for unscripted evolution, enabling scalable agentic reasoning with minimal predefinition and maximum adaptability.
Discover how In-Context Learning (ICL) can optimize action planning for LLMs, enhancing AI systems' ability to determine effective action sequences for reaching objectives in virtual and real environments.
Discover how Google's Uncertainty AI framework in MedAI creates a blueprint for human-aligned AI by implementing iterative, transparent reasoning processes that mirror clinical decision-making in high-stakes domains.
Discover a smarter approach to fine-tuning Large Language Models that enhances their reasoning capabilities and generalization from in-context learning, based on research from Google DeepMind.