Distributed Computing (held 2009-2015)

WHAT. The primary objective of the course is to introduce students to the concepts of parallel and distributed computing.

WHY. Ever heard of cloud computing, or map-reduce? Have you ever wondered how things really work under the hood? That's why.

HOW. Distributed and parallel algorithms are harder to design, analyze, and implement than their serial counterparts. Let's be honest: there is no silver bullet. Not everything can be a map and a reduce, and not everything can (or should) be parallelized. In this course we will learn how to think about parallelization, examining its limits and pitfalls and working through hands-on solutions.

WHO. These algorithms play a central role in modern computing (e.g., cloud computing) and are applied in many fields, from bioinformatics to web services, from structural engineering to entertainment. Needless to say: almost every aspect of modern computing will be distributed.

Topics

Lectures will introduce the following topics.

  • Principles of Parallel and Distributed Computing
  • Synchronization and Time
  • Distributed Security
  • Vectors, Matrices, and their Operations
  • Distributed Graphs and Algorithms
  • Distributed File Systems
  • Modern and safe parallel programming

The course will introduce the following programming technologies (a small MPI example follows the list):

  • MPI: Message Passing Interface
  • OpenMP: Open Multi-Processing
  • CUDA: NVIDIA's parallel computing platform
  • CBE: The Cell Broadband Engine (PlayStation 3)
  • OpenCL: Open Computing Language
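
To give a taste of what this looks like in practice, below is a minimal MPI "hello world" sketch in C: each process reports its rank. It is only a sketch, assuming an MPI implementation such as Open MPI is installed; the file name hello.c and the process count 4 are just for illustration (compile with mpicc hello.c -o hello, run with mpirun -np 4 ./hello).

    /* Minimal MPI "hello world": every process prints its rank.            */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime        */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process           */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes    */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut the MPI runtime down    */
        return 0;
    }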

Textbooks

  • D. P. Bertsekas and J. N. Tsitsiklis, "Parallel and Distributed Computation: Numerical Methods", Athena Scientific, 1997. (free download)
  • A. Grama, A. Gupta, G. Karypis, and V. Kumar, "Introduction to Parallel Computing", Addison Wesley (2nd edition), 2003.