Step 57: Message Passing Leads to Better Scalability in Parallel Systems~ Russel Winder
This is the 57th Step in the Programming Enlightenment series. If you haven't read the 56th Step yet, start there.
Concurrency makes programmers' lives easier and harder at the same time. And parallelism, a special subset of concurrency, is hard: even the best programmers can only hope to get it right.
Shared memory is the root of the classic problems of implementing concurrency: race conditions, deadlock, livelock, and so on.
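To make the race-condition problem concrete, here is a minimal Python sketch (an illustration of my own, not from the original text). A barrier forces the worst-case interleaving deterministically: both threads read the shared counter before either writes it back, so one update is silently lost.

```python
import threading

counter = 0
barrier = threading.Barrier(2)

def racy_increment():
    """A non-atomic read-modify-write on shared state."""
    global counter
    local = counter        # step 1: read the shared value (both threads read 0)
    barrier.wait()         # force both threads past the read before any write
    counter = local + 1    # step 2: write back; both write 1, one increment is lost

threads = [threading.Thread(target=racy_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 1, not the expected 2
```

In real code the interleaving is nondeterministic rather than forced, which makes such bugs far harder to reproduce and fix.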
Either forgo concurrency or eschew shared memory!
Forgoing concurrency isn't an option: computers now have ever more cores, and harnessing parallelism matters more and more because processor clock speeds are no longer improving much.
So, can we eschew shared memory? Definitely.
Instead of using threads and shared memory, we can use processes and message passing. Process here means a protected, independent state with executing code, not necessarily an operating system process.
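A minimal sketch of this style in Python (my illustration, using only the standard library): a "process" in the above sense is a thread that owns its state privately and communicates solely through queues. No other thread can touch `total`, so no locks are needed.

```python
import queue
import threading

def accumulator(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """A 'process': private state plus executing code, reachable only via messages."""
    total = 0  # private state: no other thread can access it
    while True:
        msg = inbox.get()
        if msg is None:        # sentinel message: report the result and stop
            outbox.put(total)
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=accumulator, args=(inbox, outbox))
worker.start()

for n in range(1, 101):       # send the numbers 1..100 as messages
    inbox.put(n)
inbox.put(None)               # ask the worker to finish

worker.join()
result = outbox.get()
print(result)  # 5050
```

The same shape scales to `multiprocessing` queues or to actor libraries; the essential point is that all communication is by message, never by shared mutable memory.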
We can even use dataflow systems as a way of computing, where evaluation is driven by the readiness of data within the system. With no shared mutable state, the usual synchronization problems simply do not arise.
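Dataflow evaluation can be sketched with futures (again an illustrative example, not from the original): each node runs as soon as its inputs are ready, and the dependency structure, not explicit locking, provides the ordering.

```python
from concurrent.futures import ThreadPoolExecutor

# Dataflow sketch: nodes A and B have no inputs and run immediately;
# node C fires once the values it depends on are ready.
with ThreadPoolExecutor() as pool:
    a = pool.submit(lambda: 2 + 3)    # node A
    b = pool.submit(lambda: 4 * 5)    # node B
    # node C: .result() blocks until each input value is ready
    c = pool.submit(lambda: a.result() + b.result())
    result = c.result()

print(result)  # 25
```

Here the "readiness of data" is exactly what `Future.result()` waits for; the programmer never writes a lock.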
All of the principal languages in use today rely on shared-memory, multi-threaded concurrency. So what can be done to eschew shared memory? The answer is to use — or, if they don't exist, create — libraries and frameworks that provide process models and message passing, avoiding all use of shared mutable memory.
TL;DR: Not programming with shared memory, but instead using message passing, is likely to be the most successful way of implementing systems that harness the parallelism that is now endemic in computer hardware. The future seems to lie in using threads to implement processes.
Go to the 56th Step.
Go to the 58th Step.
References:
- 97 things Every Programmer Should Know ~ Git Book
- 97 Things Every Programmer Should Know ~ Paperback
- What is concurrency? ~ Wiki
- What is dataflow? ~ Wiki
- What is shared memory? ~ Wiki