Parallel Programming Concepts

via openHPI

Overview

Since the early days of computer technology, processors have been built with ever-increasing clock frequencies and ever smarter optimizations to make software run faster. Developers and the software industry have grown used to applications becoming faster simply by exchanging the underlying hardware. Since the beginning of this century, however, it has become apparent that this approach no longer works.

Moore's law about the ever-increasing number of transistors per chip still holds, but power consumption, thermal management, and memory latency make serial code acceleration increasingly difficult. Instead, hardware vendors now spend the additional transistors on multiple processing elements (‘cores’) per processor chip and on deeper memory hierarchies. Modern hardware turns any desktop, server, or even mobile system into a kind of parallel computer, which makes parallel programming the new default for application development. Exploiting any additional horsepower of the hardware is now the responsibility of the software.

The openHPI online course “Parallel Programming Concepts” presents the relevant theoretical and practical foundations of parallel programming. We present crucial theoretical concepts such as semaphores and actors, explain the architecture of modern parallel hardware, discuss programming models such as task parallelism, message passing, and functional programming, and introduce several patterns and best practices.
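
To give a flavour of one of these concepts, the following minimal sketch (not taken from the course material; Python and the names slots and worker are chosen here purely for illustration) shows a counting semaphore limiting how many threads may enter a region at the same time:

    import threading

    # Illustrative only: a counting semaphore that admits at most two
    # worker threads into the limited region at any one time.
    slots = threading.Semaphore(2)

    def worker(worker_id):
        with slots:  # acquire on entry, release automatically on exit
            print("worker", worker_id, "is inside the limited region")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Message passing and actors, also treated in the course, avoid such shared-state synchronization altogether by exchanging messages between otherwise isolated workers.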

The course is suitable for anyone interested in a broader overview of parallelism, especially beyond the use of multiple threads. Participants should know at least one programming language; no other skills are required.

Syllabus

  • Week 1: Terminology and fundamental concepts
  • Week 2: Shared memory parallelism – basics
  • Week 3: Shared memory parallelism – programming
  • Week 4: Accelerators
  • Week 5: Distributed memory parallelism
  • Week 6: Patterns, best practices and examples
  • Exam

Taught by

Dr. Peter Tröger

Reviews

4.0 rating, based on 5 Class Central reviews
