Step 57: Message Passing Leads to Better Scalability in Parallel Systems ~ Russel Winder

Birat Rai
2 min read · Jan 11, 2019

--

This is the 57th Step in the Programming Enlightenment series. If you haven't read the 56th Step yet, read it first.

Concurrency makes life easier for programmers and harder at the same time. And parallelism, a special subset of concurrency, is hard: even the best programmers can only hope to get it right.

Shared memory is at the root of the usual problems of implementing concurrency: race conditions, deadlock, livelock, and so on.

Either forgo concurrency or eschew shared memory!

Forgoing concurrency isn't an option: computers now ship with ever more cores, and harnessing that parallelism matters because single-processor speeds are no longer improving much.

So, can we eschew shared memory? Definitely.

Instead of using threads and shared memory, we can use processes and message passing. A process here means a protected, independent state with executing code, not necessarily an operating system process.
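To make this concrete, here is a minimal sketch (an illustration of the idea, not code from the article) using Python's standard `multiprocessing` module. The worker's state lives entirely inside its own process, and all communication happens through message queues, so there is no shared mutable memory to lock or corrupt:

```python
from multiprocessing import Process, Queue

def square_worker(inbox: Queue, outbox: Queue) -> None:
    # All of this worker's state is private to its own process.
    while True:
        n = inbox.get()          # block until a message arrives
        if n is None:            # sentinel message: no more work
            break
        outbox.put(n * n)        # reply with a message, not a shared variable

def run(numbers):
    inbox, outbox = Queue(), Queue()
    worker = Process(target=square_worker, args=(inbox, outbox))
    worker.start()
    for n in numbers:
        inbox.put(n)
    inbox.put(None)              # tell the worker to stop
    results = [outbox.get() for _ in numbers]
    worker.join()
    return results

if __name__ == "__main__":
    print(run([1, 2, 3, 4]))     # [1, 4, 9, 16]
```

Because the only interaction is `put` and `get` on queues, there is nothing to race on: each message is owned by exactly one process at a time.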

We can even use dataflow systems as a way of computing, where evaluation is controlled by the readiness of data within the system. By construction, such systems have no synchronization problems.
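A toy single-threaded sketch of the dataflow idea (my own illustration, with a hypothetical `dataflow_run` helper): each node fires only once all of its inputs are ready, so the order of execution is driven by data availability rather than by explicit synchronization:

```python
def dataflow_run(nodes, inputs):
    """nodes: name -> (function, list of input names); inputs: name -> value."""
    ready = dict(inputs)          # values that are available so far
    pending = dict(nodes)         # nodes still waiting on their inputs
    while pending:
        fired = False
        for name, (fn, deps) in list(pending.items()):
            if all(d in ready for d in deps):      # data-readiness check
                ready[name] = fn(*(ready[d] for d in deps))
                del pending[name]
                fired = True
        if not fired:
            raise RuntimeError("cycle or missing input in dataflow graph")
    return ready

# "prod" depends on "sum"; the order of definition doesn't matter,
# because firing is driven purely by which values are ready.
graph = {
    "prod": (lambda s, c: s * c, ["sum", "c"]),
    "sum":  (lambda a, b: a + b, ["a", "b"]),
}
result = dataflow_run(graph, {"a": 1, "b": 2, "c": 10})
print(result["prod"])   # (1 + 2) * 10 = 30
```

A real dataflow runtime would fire ready nodes in parallel, but the same property holds: a node can only see values that are already complete, so there is nothing to synchronize.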

All of the principal languages in use rely on shared-memory, multi-threaded concurrency. So what can be done to eschew shared memory? The answer is to use (or, if they don't exist, create) libraries and frameworks that provide process models and message passing, avoiding all use of shared mutable memory.
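Such a library layer often takes the shape of actors. As a hedged sketch (a minimal stdlib-only illustration, not any particular framework's API), here is an actor whose state is private to its own loop; other code can only send it messages, never touch its counter directly:

```python
import queue
import threading

class CounterActor:
    """A tiny actor: private state plus a mailbox of messages."""

    def __init__(self):
        self._inbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        count = 0                        # state nothing else can reach
        while True:
            msg, reply = self._inbox.get()
            if msg == "incr":
                count += 1
            elif msg == "get":
                reply.put(count)         # answer via a private reply queue
            elif msg == "stop":
                break

    def send(self, msg):
        self._inbox.put((msg, None))

    def ask(self, msg):
        reply = queue.Queue()
        self._inbox.put((msg, reply))
        return reply.get()

actor = CounterActor()
for _ in range(3):
    actor.send("incr")
print(actor.ask("get"))   # 3
actor.send("stop")
```

The mailbox serializes all access to the counter, so even though the host language offers shared memory, the design never uses it mutably. Production systems get this model from languages and frameworks built around it, such as Erlang processes or Akka actors.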
