Surprising Performance of Small Qwen3-A3B MoE - Testing on Extreme Logic Test
Discover AI via YouTube
Overview
This video demonstrates performance testing of the new MoE Qwen3-30B-A3B model on an extreme logic test, comparing results with "thinking mode" enabled and disabled. It explores how a Mixture of Experts (MoE) model with only 3B active parameters handles highly complex logical reasoning challenges in this live recording from Discover AI. The 32-minute demonstration covers testing both a local Ollama implementation and the online version of the model, with notable shifts in behavior when the model is told it is incorrect and when thinking mode is activated. For comparison, a link to a test of the larger Qwen3-235B-A22B model on the same task is provided. The video is organized into chapters covering the test introduction, local and online implementations, error correction, thinking mode activation, and final verification.
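The "3B active parameters" figure comes from the MoE design: a router scores all experts for each token, but only the top-k experts actually run, so only a small slice of the 30B total weights is exercised per forward pass. The sketch below is a minimal, illustrative top-k routing layer; the expert count, dimensions, and top-k value are toy assumptions, not Qwen3's real configuration.

```python
# Minimal, illustrative top-k expert routing: a router scores all experts
# for a token, but only the k best-scoring experts actually run, which is
# why an MoE model's "active" parameters are a small slice of its total.
# All sizes here are toy assumptions, not Qwen3's real configuration.
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16            # assumed toy sizes
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector through its top-k experts only."""
    scores = x @ router                        # one score per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the best experts
    gate = np.exp(scores[top])
    gate /= gate.sum()                         # softmax over the chosen experts
    # Only TOP_K of NUM_EXPERTS weight matrices execute for this token.
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

print(moe_layer(rng.standard_normal(DIM)).shape)   # -> (16,)
```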
Syllabus
00:00 NEW Qwen3 Model Test
01:17 Local Qwen3 Ollama
02:10 Qwen3 Online
06:55 Qwen3 You are incorrect
11:04 Activate THINKING mode
25:06 Final Qwen3 verification
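For anyone retracing the "Local Qwen3 Ollama" and "Activate THINKING mode" chapters, below is a hedged sketch of one plausible local setup using the ollama Python client and Qwen3's documented /think and /no_think soft switches. It assumes the client is installed and the model has been pulled; the model tag "qwen3:30b-a3b" and the prompt are illustrative, so adjust them to whatever `ollama list` reports locally.

```python
# A hedged sketch of the local setup shown in the video: querying Qwen3
# through the ollama Python client and toggling thinking mode with Qwen3's
# /think and /no_think soft switches. The model tag "qwen3:30b-a3b" and the
# prompt are assumptions, not confirmed details from the video.
import ollama

PROMPT = "If Alice is taller than Bob and Bob is taller than Carol, who is shortest?"

# Thinking mode off: /no_think asks Qwen3 to answer directly.
fast = ollama.chat(
    model="qwen3:30b-a3b",
    messages=[{"role": "user", "content": PROMPT + " /no_think"}],
)
print("no-think:", fast["message"]["content"])

# Thinking mode on: /think makes Qwen3 emit <think>...</think> reasoning first.
slow = ollama.chat(
    model="qwen3:30b-a3b",
    messages=[{"role": "user", "content": PROMPT + " /think"}],
)
print("thinking:", slow["message"]["content"])
```

The comparison in the video comes down to this toggle: the same weights answer with and without an explicit reasoning phase before the final response.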
Taught by
Discover AI