Delve into the technical details of OpenCoder's development, from data preprocessing and deduplication to training methodology and evaluation metrics for building effective code LLMs.
Explore RAGAS framework's key evaluation criteria for Retrieval Augmented Generation (RAG) systems, covering faithfulness, answer relevance, and context relevance metrics for improved AI performance.
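To make the faithfulness criterion concrete, here is a toy, from-scratch illustration of the idea (this is not the real ragas library, and the word-overlap check is a deliberate oversimplification of the LLM-based claim verification RAGAS actually uses): faithfulness is the fraction of claims in the answer that are supported by the retrieved context.

```python
# Toy RAGAS-style faithfulness metric: fraction of answer claims supported
# by the retrieved context. Support is approximated here by naive word
# overlap, purely for illustration; RAGAS itself uses an LLM judge.

def toy_faithfulness(answer_claims, context):
    """Return the fraction of claims whose words all appear in the context."""
    ctx_words = set(context.lower().split())
    supported = sum(
        1 for claim in answer_claims
        if set(claim.lower().split()) <= ctx_words
    )
    return supported / len(answer_claims) if answer_claims else 0.0

context = "paris is the capital of france and has about two million residents"
claims = ["paris is the capital of france", "paris has ten million residents"]
print(toy_faithfulness(claims, context))  # 0.5: one of two claims supported
```

Answer relevance and context relevance follow the same pattern: score how well the answer addresses the question, and how much of the retrieved context is actually needed, rather than how grounded the answer is.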
Dive into the technical architecture and development process of DeepSeek's R1 model, exploring GRPO implementation, reasoning capabilities, and model optimization techniques through detailed examples.
Explore systematic prompting techniques through an in-depth analysis of template structures, zero-shot methods, emotion prompting, and thought generation approaches for enhanced AI interactions.
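As a minimal sketch of the template idea (the template strings and names below are my own illustrations, not taken from the course): a zero-shot template asks the question directly, while an emotion-prompting variant appends an emotional stimulus sentence to the same template.

```python
# Illustrative prompt templates. ZERO_SHOT asks the question directly;
# EMOTION appends an emotional-stimulus sentence in the style of the
# EmotionPrompt line of work. Template wording is a hypothetical example.

ZERO_SHOT = "Q: {question}\nA:"
EMOTION = "Q: {question}\nThis is very important to my career.\nA:"

def render(template, question):
    """Fill a prompt template with a concrete question."""
    return template.format(question=question)

print(render(ZERO_SHOT, "What is 17 * 3?"))
print(render(EMOTION, "What is 17 * 3?"))
```

Thought-generation approaches extend the same mechanism: the template additionally instructs the model to produce intermediate reasoning before the final answer.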
Dive into the technical architecture and methodology behind Flux, exploring rectified flow transformers, latent diffusion models, and the innovative approaches that led to superior image generation results.
Dive into Meta's groundbreaking Llama 3 architecture, exploring pre-training techniques, model capabilities, synthetic data quality, and implementation strategies for advanced AI development.
Dive into the mechanics of Samba, a hybrid state space model built on Mamba that enables efficient unlimited context language modeling for advanced AI applications.
Dive into efficient text-to-image generation using PixArt-α, exploring fine-tuning techniques, design principles, and practical implementation steps for diffusion transformers.
Dive into implementing text generation with discrete diffusion modeling, from basic concepts to training scripts for building models competitive with GPT-2.
Dive into discrete diffusion modeling for text generation, exploring probability distributions, score-based modeling, and practical applications in generative AI that rival GPT-2's capabilities.
Dive into the revolutionary concept of 1-bit Large Language Models, exploring how weights can be represented with ternary values (-1, 0, 1) instead of full-precision floating-point numbers.
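A minimal sketch of the quantization step behind that idea, assuming a BitNet b1.58-style absmean scheme: scale each weight by the mean absolute weight, then round and clamp to {-1, 0, 1}. Real 1-bit LLMs apply this inside the forward pass during training; this toy version just quantizes a list of floats.

```python
# Ternary ("1.58-bit") weight quantization sketch: scale by the mean
# absolute weight (absmean), then round each weight to -1, 0, or +1.
# Illustrative only; actual 1-bit LLM training quantizes on the fly.

def ternary_quantize(weights):
    """Return (quantized weights in {-1, 0, 1}, absmean scale)."""
    scale = sum(abs(w) for w in weights) / len(weights)
    quant = [max(-1, min(1, round(w / scale))) for w in weights]
    return quant, scale

w = [0.8, -0.05, -1.2, 0.3]
q, s = ternary_quantize(w)
print(q)  # every entry is -1, 0, or 1
```

The scale is kept alongside the ternary weights so matrix products can be rescaled back to the original magnitude, which is what makes the extreme compression workable.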
Dive into Meta AI's I-JEPA framework for self-supervised image learning, exploring its architecture, methodology, and advantages over traditional approaches in computer vision and neural networks.
Dive into implementing Meta's Self-Rewarding Language Model with Mistral 7B, covering fine-tuning techniques, data preparation, prompt generation, and practical demonstrations.
Dive into the technical foundations behind OpenAI's Sora, exploring diffusion transformers, U-Nets, and latent diffusion models that power this groundbreaking video generation technology.
Dive into the mechanics of Medusa, a framework that accelerates LLM inference through parallel token prediction and tree-based attention, enhancing AI model performance and efficiency.
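The core speculative-decoding loop Medusa builds on can be sketched in a few lines. This is a deliberately simplified toy (a fixed target string stands in for the base LLM, and verification is a Python loop rather than Medusa's tree-based attention, which checks several candidate continuations in a single forward pass): draft heads guess a few future tokens, the base model verifies them, and the longest matching prefix is accepted.

```python
# Toy Medusa-style speculative decoding. Extra "heads" draft several future
# tokens at once; the base model verifies them and keeps the longest
# matching prefix. A fixed token sequence stands in for a real LLM.

TARGET = "the cat sat on the mat".split()

def base_next(prefix):
    """Stand-in for the base LLM's greedy next-token prediction."""
    return TARGET[len(prefix)] if len(prefix) < len(TARGET) else None

def heads_draft(prefix, k=3):
    """Stand-in for Medusa heads: draft k tokens, with a planted mistake
    later in generation to show partial acceptance of a draft."""
    draft = TARGET[len(prefix):len(prefix) + k]
    if len(prefix) >= 3 and draft:
        draft[-1] = "dog"  # a wrong guess the verifier will reject
    return draft

def speculative_decode(max_len=10):
    out = []
    while len(out) < max_len and base_next(out) is not None:
        draft = heads_draft(out)
        accepted = []
        for tok in draft:  # accept draft tokens while they match the base model
            if base_next(out + accepted) == tok:
                accepted.append(tok)
            else:
                break
        # If nothing matched, fall back to one ordinary base-model step.
        out.extend(accepted if accepted else [base_next(out)])
    return out

print(" ".join(speculative_decode()))  # the cat sat on the mat
```

The speedup comes from the accepted-prefix length: when the heads guess well, several tokens are committed per base-model verification step instead of one.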