With every smartphone and computer now boasting multiple processors, the use of functional ideas to facilitate parallel programming is becoming increasingly widespread. In this course, you'll learn the fundamentals of parallel programming, from task parallelism to data parallelism. In particular, you'll see how many familiar ideas from functional programming map perfectly to the data parallel paradigm. We'll start with the nuts and bolts of how to effectively parallelize familiar collections operations, and we'll build up to parallel collections, a production-ready data parallel collections library available in the Scala standard library. Throughout, we'll apply these concepts through several hands-on examples that analyze real-world data using popular algorithms such as k-means clustering.
Learning Outcomes. By the end of this course you will be able to:
- reason about task and data parallel programs,
- express common algorithms in a functional style and solve them in parallel,
- competently microbenchmark parallel code,
- write programs that effectively use parallel collections to achieve performance.
We motivate parallel programming and introduce the basic constructs for building parallel programs on the JVM in Scala. Examples such as array norm and Monte Carlo computations illustrate these concepts. We show how to estimate the work and depth of parallel programs, as well as how to benchmark the implementations.
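As a flavor of the task-parallel style covered in this first part, here is a minimal sketch of a Monte Carlo pi estimation split across two parallel tasks. It uses standard-library `Future`s rather than the course's own `parallel` construct, and the object and method names (`MonteCarloPi`, `countHits`, `estimatePi`) are illustrative, not from the course materials.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Random

object MonteCarloPi {
  // Count how many of `iters` random points in the unit square
  // land inside the quarter of the unit circle.
  def countHits(iters: Int): Int = {
    val rnd = new Random
    var hits = 0
    for (_ <- 0 until iters) {
      val x = rnd.nextDouble(); val y = rnd.nextDouble()
      if (x * x + y * y <= 1.0) hits += 1
    }
    hits
  }

  // Split the sampling into two tasks that can run on separate cores.
  def estimatePi(total: Int): Double = {
    val half = total / 2
    val t1 = Future(countHits(half))
    val t2 = Future(countHits(total - half))
    val hits = Await.result(t1, Duration.Inf) + Await.result(t2, Duration.Inf)
    4.0 * hits / total
  }

  def main(args: Array[String]): Unit =
    println(f"pi is approximately ${estimatePi(1000000)}%.3f")
}
```

Because the two sampling tasks are independent, the work (total operations) is unchanged while the depth (longest sequential chain) is roughly halved, which is exactly the trade-off the work/depth analysis in this part formalizes.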
Basic Task Parallel Algorithms
We continue with examples of parallel algorithms by presenting a parallel merge sort. We then explain how operations such as map, reduce, and scan can be computed in parallel. We present associativity as the key condition enabling parallel implementation of reduce and scan.
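To see why associativity matters, here is a small sketch (names like `reduceTree` are mine, not the course's) comparing a sequential left-to-right reduction with a tree-shaped reduction of the kind a parallel reduce performs. With an associative operation both orders agree; with a non-associative one they can differ, which is why parallel reduce requires associativity.

```scala
object AssociativityDemo {
  // Sequential reduction: strictly left to right.
  def reduceSeq[A](xs: Vector[A])(op: (A, A) => A): A = xs.reduceLeft(op)

  // Tree-shaped reduction: splits the input in half and combines the
  // results, mimicking how a parallel reduce schedules its work.
  def reduceTree[A](xs: Vector[A])(op: (A, A) => A): A =
    if (xs.length <= 1) xs.head
    else {
      val (l, r) = xs.splitAt(xs.length / 2)
      op(reduceTree(l)(op), reduceTree(r)(op)) // halves could run in parallel
    }

  def main(args: Array[String]): Unit = {
    val xs = Vector(1, 2, 3, 4, 5)
    // Addition is associative: both reduction orders give 15.
    println(reduceSeq(xs)(_ + _) == reduceTree(xs)(_ + _))
    // Subtraction is not: -13 sequentially vs. -5 in tree order.
    println(reduceSeq(xs)(_ - _) == reduceTree(xs)(_ - _))
  }
}
```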
We show how data parallel operations enable the development of elegant data-parallel code in Scala. We give an overview of the parallel collections hierarchy, including the traits of splitters and combiners that complement iterators and builders from the sequential case.
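The splitter idea can be sketched in miniature. The trait and class names below (`MiniSplitter`, `ArraySplitter`) are simplified stand-ins for illustration; the real abstractions live in `scala.collection.parallel` and carry more methods. The key point is that a splitter is an iterator that can also partition its remaining elements into disjoint parts for other workers.

```scala
// A splitter is an iterator that can divide its remaining elements.
trait MiniSplitter[A] extends Iterator[A] {
  def remaining: Int
  def split: Seq[MiniSplitter[A]] // disjoint parts covering the rest
}

// A splitter over a slice [from, until) of an array.
class ArraySplitter[A](xs: Array[A], var from: Int, until: Int)
    extends MiniSplitter[A] {
  def hasNext: Boolean = from < until
  def next(): A = { val x = xs(from); from += 1; x }
  def remaining: Int = until - from
  def split: Seq[MiniSplitter[A]] = {
    val mid = from + remaining / 2
    Seq(new ArraySplitter(xs, from, mid), new ArraySplitter(xs, mid, until))
  }
}
```

Splitting is cheap (just index arithmetic), so a parallel operation can recursively split until each part is small enough to process sequentially; combiners play the dual role on the output side, merging partial results into the final collection.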
Data Structures for Parallel Computing
We give a glimpse of the internals of data structures for parallel computing, which helps us understand what is happening under the hood of parallel collections.
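One such internal trick can be sketched as follows (the class name `ChunkedCombiner` is hypothetical, not from the library): a combiner avoids copying elements on every merge by keeping a list of chunks and linking chunk lists together, deferring the single O(n) copy to the final `result()` call.

```scala
import scala.collection.mutable.ArrayBuffer

// Sketch: a combiner that merges cheaply by linking chunk lists
// instead of copying elements on each combine.
class ChunkedCombiner[A] {
  private val chunks = ArrayBuffer(ArrayBuffer.empty[A])

  def +=(elem: A): this.type = { chunks.last += elem; this }

  // Proportional to the number of chunks, not the number of elements.
  def combine(that: ChunkedCombiner[A]): this.type = {
    chunks ++= that.chunks
    this
  }

  // The one full O(n) copy happens here, at the very end.
  def result(): List[A] = chunks.iterator.flatten.toList
}
```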
Instructors: Prof. Viktor Kuncak, Dr. Aleksandar Prokopec, and Heather Miller.
Learners who completed this course reported spending 5 hours a week on it and found the course difficulty to be medium.
This course disappointed me a bit after the first two courses of the "Programming in Scala" specialization: it turns out the good old for loop with mutable vars is much, much faster than for expressions, and that we threw the style checker out the door and our code is much less pretty after all. Well, I guess that was a good lesson: there's functional programming, there are times when we can use it, and there are times when we need all the performance we can get out of our hardware.
In terms of lectures and assignments, they were average in my opinion; I would actually rate the lectures higher than in the previous courses and the assignments lower. I especially liked the Lego analogies used to explain how different types combine together, and the material on parallel collections and combiners (though this last lecture was a bit hard to understand, and the assignment was lengthy and didn't reinforce the material enough, so by the time I finished the assignment I had already forgotten much of what was taught in the lecture).
Overall, my engagement was average, but I appreciate the ability to use the autograder for the assignments without purchasing the course.
The first week gives an okay introduction to the subject, even with half the lessons being about calculating limits of parallelism. I have nothing against those, but the instructors never use that again in the course. Instead, they just try different numbers of threads to see which performs best.
The second week has the worst lessons. It takes about one hour to explain, very slowly, the concept of associativity. Again, I have nothing against taking time to explain something carefully. However, all that "good care" and attention to detail is thrown in the garbage in the last week, which has little more than a half hour of lessons, and an assignment that is not well constructed or explained and, worst of all, has absolutely NOTHING to do with parallelism.
Taking this course, the third one in the Scala specialization, made me want my money back.