Detailed explanation of Meta's LLaMA language models, covering training data, architecture, implementation, and performance. Explores how smaller models can outperform larger ones with optimized training.
Explores a novel watermarking technique for diffusion models that embeds invisible, robust fingerprints in generated images, discussing its implementation, effects, and potential applications beyond copyright protection.
Detailed analysis of Sam Altman's surprise removal as CEO of OpenAI, exploring the possible reasons, reactions, and consequences for the company and the AI industry.
Explore Q-Learning fundamentals, from Markov Decision Processes to Deep Q-Networks, in this comprehensive overview of reinforcement learning concepts and applications.
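As a rough companion to the concepts named above, here is a minimal tabular Q-learning sketch on a hypothetical five-state corridor environment; the environment, reward scheme, and hyperparameters are illustrative assumptions, not taken from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-state corridor: action 0 = left, 1 = right; reaching
# state 4 yields reward +1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    reward = 1.0 if s_next == GOAL else 0.0
    return s_next, reward, s_next == GOAL

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.3   # learning rate, discount, exploration

for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning update: nudge Q(s, a) toward the bootstrapped target
        target = r + gamma * np.max(Q[s_next]) * (not done)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print(np.argmax(Q[:GOAL], axis=1))   # greedy policy before the goal: all 1s ("right")
```

A Deep Q-Network keeps this same update but replaces the table with a neural network that predicts Q-values, together with stabilizers such as experience replay and a target network.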
Explore techniques to extract training data from language models, revealing vulnerabilities in data privacy and model security. Learn about new attacks and their implications for AI alignment and memorization.
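To make the flavor of such attacks concrete, below is a rough sketch of the generate-then-rank recipe described in the training-data-extraction literature: sample freely from a model, then rank the samples by a memorization score such as perplexity divided by zlib-compressed length. The choice of "gpt2" as a stand-in model and the specific sampling settings are illustrative assumptions.

```python
# pip install transformers torch
import zlib
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")       # stand-in model only
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss        # mean token cross-entropy
    return torch.exp(loss).item()

def zlib_entropy(text: str) -> int:
    return len(zlib.compress(text.encode("utf-8")))

# Step 1: sample freely from the model (unconditional generation).
start = torch.tensor([[tok.eos_token_id]])
with torch.no_grad():
    samples = model.generate(start, do_sample=True, top_k=40,
                             max_new_tokens=64, num_return_sequences=20,
                             pad_token_id=tok.eos_token_id)
texts = [t for t in (tok.decode(s, skip_special_tokens=True) for s in samples)
         if t.strip()]

# Step 2: rank candidates; unusually low perplexity relative to the
# zlib-compressed size suggests memorized rather than merely fluent text.
scored = sorted(texts, key=lambda t: perplexity(t) / max(zlib_entropy(t), 1))
for t in scored[:3]:
    print(repr(t[:80]))
```

In the published attacks, the top-ranked samples are then checked against the (public) training corpus to confirm verbatim memorization.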
Explore how text embeddings can potentially reconstruct original text, discussing methods, implications, and prevention strategies for this emerging privacy concern in natural language processing.
Analysis of Google's Gemini marketing video, examining its portrayal of AI capabilities and raising questions about authenticity and potential misrepresentation of the model's true abilities.
Overview of NeurIPS 2023 poster presentations covering graph neural networks, collaborative filtering, semi-supervised learning, and privacy assessment in image reconstruction, with insights on cutting-edge machine learning research.
Explore cutting-edge AI research from NeurIPS 2023, covering topics like RNN training, image editing, language models, transformers, geo-localization, and hardware resilience in image classifiers.
Explore Mamba, a novel linear-time sequence modeling architecture using selective state spaces. Learn about its advantages over Transformers and its applications in language, audio, and genomics.
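For intuition, here is a toy NumPy sketch of a selective state-space recurrence: the diagonal transition is discretized at every step, and B, C, and the step size are computed from the current input, which is what makes the scan "selective". The shapes, random initializations, and the omission of Mamba's gating and hardware-aware parallel scan are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_ssm(x, N=8):
    """Toy selective state-space scan over an input sequence x of shape (T, D).

    Each of the D channels keeps an N-dimensional state. B_t, C_t, and the
    step size delta_t are functions of the current input, so the recurrence
    can decide what to write into or read out of its state at every step.
    """
    T, D = x.shape
    A = -np.exp(rng.standard_normal((D, N)))   # stable diagonal transition
    W_B = rng.standard_normal((D, N))          # x_t -> B_t
    W_C = rng.standard_normal((D, N))          # x_t -> C_t
    w_d = rng.standard_normal(D)               # x_t -> delta_t

    h = np.zeros((D, N))
    y = np.zeros((T, D))
    for t in range(T):
        delta = np.log1p(np.exp(x[t] @ w_d))          # softplus keeps the step > 0
        B_t = x[t] @ W_B                              # input-dependent write vector
        C_t = x[t] @ W_C                              # input-dependent read vector
        A_bar = np.exp(delta * A)                     # discretized transition
        h = A_bar * h + delta * np.outer(x[t], B_t)   # write step
        y[t] = h @ C_t                                # read step
    return y

print(selective_ssm(rng.standard_normal((32, 4))).shape)   # (32, 4)
```

The loop runs in time linear in the sequence length, which is the property the architecture trades against the quadratic cost of Transformer attention.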
Explore cutting-edge machine learning research in temporal action segmentation, test-time adaptation, recurrent neural networks, dictionary learning, equivariant adaptation, cross-task generalization, and adversarial learning.
Critical analysis of a study on GPT-4 solving MIT exams, revealing methodological problems and discussing the implications for evaluating AI models in education.
Explores Reinforced Self-Training, a novel approach for improving language models using offline reinforcement learning, offering efficiency advantages over traditional online methods.
Explore building a CPU using GPT for every instruction, compiling C code to LLVM-IR. Learn about compilers, virtual machines, and innovative applications of language models in computing.
Explores Retentive Network as an alternative to Transformer, offering training parallelism and efficient inference for large language models. Discusses retention mechanism, computation paradigms, and experimental results.
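As a rough illustration, the sketch below implements a single retention head in both its recurrent and parallel forms and checks that they agree; it omits the rotary/xPos rotation, per-head decay schedule, and normalization used in the actual paper, so treat it as a simplified model of the mechanism rather than the reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def retention_recurrent(Q, K, V, gamma):
    """Recurrent form: one d_k x d_v state updated per token (O(1) inference memory)."""
    T, d_k = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d_k, d_v))
    O = np.zeros((T, d_v))
    for n in range(T):
        S = gamma * S + np.outer(K[n], V[n])   # decayed state update
        O[n] = Q[n] @ S                        # read the state with the query
    return O

def retention_parallel(Q, K, V, gamma):
    """Parallel form: decay-masked attention-like product, convenient for training."""
    T = Q.shape[0]
    n, m = np.arange(T)[:, None], np.arange(T)[None, :]
    D = np.where(n >= m, gamma ** (n - m), 0.0)   # causal decay mask
    return (Q @ K.T * D) @ V

T, d = 16, 8
Q, K, V = (rng.standard_normal((T, d)) for _ in range(3))
print(np.allclose(retention_recurrent(Q, K, V, 0.9),
                  retention_parallel(Q, K, V, 0.9)))   # True
```

The equivalence of the two forms is the point: the parallel form gives training parallelism like attention, while the recurrent form gives cheap autoregressive inference.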