
Stanford University

Stanford Seminar - Enabling NLP, Machine Learning, and Few-Shot Learning Using Associative Processing

Stanford University via YouTube

Overview

This presentation details a fully programmable, associative, content-based, compute-in-memory architecture that changes the concept of computing from serial data processing, where data is moved back and forth between the processor and memory, to massively parallel data processing, compute, and search performed directly in place.
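The core idea above, content-based (associative) lookup, can be sketched in a few lines. This is a toy software model, not GSI's implementation: every stored row is compared against a search key simultaneously, with a mask supplying ternary "don't care" bits as in a TCAM. The simulation uses one vectorized NumPy comparison to stand in for the in-memory parallel compare; all names here are illustrative.

```python
import numpy as np

def associative_match(memory: np.ndarray, key: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Return indices of rows whose masked bits equal the key's masked bits."""
    # One "compare" applied to every row at once; in an APU this happens
    # inside the memory array itself, so no data moves to a processor.
    hits = ((memory ^ key) & mask).sum(axis=1) == 0
    return np.flatnonzero(hits)

# Four 4-bit rows of "memory", a key, and a mask ignoring the low two bits.
memory = np.array([[1, 0, 1, 1],
                   [0, 1, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]], dtype=np.uint8)
key  = np.array([1, 0, 1, 1], dtype=np.uint8)
mask = np.array([1, 1, 0, 0], dtype=np.uint8)

print(associative_match(memory, key, mask))  # → [0 3]
```

Because the compare touches all rows in the same step, the lookup cost is independent of the number of stored rows, which is the property the seminar builds on.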

This associative processing unit (APU) can be used in many machine learning applications, including one-shot/few-shot learning, convolutional neural networks, recommender systems, and data-mining tasks such as prediction, classification, and clustering.

Additionally, the architecture is well suited to processing large corpora and can be applied to Question Answering (QA) and other NLP tasks such as language translation. It can embed long documents, compute any type of memory network in place, and answer complex questions in O(1) time.
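The retrieval pattern behind that claim can be sketched as follows, assuming documents have already been embedded as vectors (the embedding model itself is out of scope here). A query embedding is scored against every stored embedding at once, simulated below with a single NumPy matrix-vector product; on an APU both the per-row similarity and the top-k selection run inside the memory array, which is what makes constant-time answering plausible for a fixed-size corpus. Function and variable names are illustrative.

```python
import numpy as np

def top_k(doc_embeddings: np.ndarray, query: np.ndarray, k: int = 1) -> np.ndarray:
    """Indices of the k most similar documents by dot-product similarity."""
    scores = doc_embeddings @ query        # all similarities in one parallel step
    return np.argsort(-scores)[:k]         # APU analogue: in-place K-MINS/K-NN

# Three toy 2-d document embeddings and a query embedding.
docs = np.array([[0.9, 0.1],
                 [0.2, 0.8],
                 [0.7, 0.6]])
query = np.array([1.0, 0.0])

print(top_k(docs, query, k=2))  # → [0 2]
```

This is the same k-NN use case that appears in the syllabus below; the seminar's point is that the scoring step need not scale with corpus size when it runs in-place.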

About the Speaker: Dr. Avidan Akerib is VP of GSI Technology's Associative Computing Business Unit. He has over 30 years of experience in parallel computing and in-place associative computing, and holds over 25 granted patents related to parallel and in-memory associative computing. Dr. Akerib has a PhD in Applied Mathematics and Computer Science from the Weizmann Institute of Science, Israel. His specialties are computational memory, associative processing, parallel algorithms, and machine learning.

For more information about this seminar and its speaker, you can visit http://ee380.stanford.edu/Abstracts/1...

Syllabus

Introduction.
The Challenge In AI Computing (Matrix Multiplication is not enough!!).
Von Neumann Architecture.
Changing the Rules of the Game!!!.
APU-Associative Processing Unit.
How Computers Work Today.
Truth Table Example.
CAM / Associative Search.
TCAM Search By Standard Memory Cells.
Neighborhood Computing.
Search & Count.
CPU vs GPU vs FPGA vs APU.
Communication between Sections.
Section Computing to Improve Performance.
APU Chip Layout.
APU Layout vs GPU Layout.
K-NN Use Case in an APU.
K-MINS: The Algorithm.
Dense (1×N) Vector by Sparse N×M Matrix.
Two N×N Sparse Matrix Multiplication.
Taylor Series.
1M SoftMax Performance.
Examples.
Example of Associative Attention Computing.
GSI Associative Solution for End to End.
Low-Shot: Train the network on distance.
Programming Model.
PCIe Development Boards.
Computing in Non-Volatile Cells.
Solutions for Future Data Centers.
Summary.

Taught by

Stanford Online
