The next big concurrent language
Tags: Compilers, Developer Tools

Tim Bray has been writing his thoughts recently on the topic of the next big language for concurrency.
Let me start by saying I’m completely an arm-chair quarterback here: I’ve never used a functional language for a real project, but I have worked on development tools and multi-threaded applications for many years. I’ve watched a lot of language extensions and libraries come and go over the years claiming to “solve” the problems of parallel coding. I’ve also seen some pretty fancy analysis done by compilers, which seems like it could be better leveraged if languages had better ways to communicate the intent of the code author.
Do we need a new language for concurrency?
My assumption is that the development of OO programming was the result of increasing software complexity, not a response to a particular change in the capabilities of computer hardware (like multi-core is hitting us today).
For many years now, distributed networks of computers have been a very popular platform for writing software. There are plenty of libraries and frameworks for dealing with distributed systems, but languages designed specifically for that need (I’m sure some exist) have never become popular in general. Distributed programs are still written mostly in languages that have no specific features to support distributed programming.
So I don’t think the need for concurrency itself will drive the adoption of a new language. The compelling argument for me is the possibility that the needs of multi-core (and hence multi-threaded) programming may drive software complexity far enough that we need another big leap in programming technology. I’m skeptical that’s the case, but it won’t stop me from theorizing about what the next leap in programming technology might be. 🙂
Eventually we’ll need a new language, what will it be like?
Global state makes multi-threaded programs difficult to write and maintain because it requires synchronization. But the problem is not the synchronization; the problem is the global state. That’s a lesson I take from functional languages. Previous attempts to address concurrent programming tended to focus on encapsulating the synchronization, instead of encapsulating (or abstracting away) the global state. For example, a parallel language extension that lets you specify a matrix operation without naming any iterator variables has abstracted away the global state (in this case, the iterator variable). A language feature (like OpenMP) that tells the compiler where and how to synchronize, but still requires you to code the iteration yourself, is hiding the synchronization but not the global state.
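Here’s a rough sketch of the distinction in C++ (my own illustration, not drawn from any particular parallel extension): the first version hides the scheduling and synchronization behind an OpenMP pragma but still makes me write the iteration, while the second, using C++17’s parallel std::transform, abstracts the iteration away entirely.

```cpp
#include <algorithm>
#include <execution>
#include <vector>

void double_all_openmp(std::vector<double>& v) {
    // The pragma hides the thread creation, scheduling, and any needed
    // synchronization, but I still write the loop and name the index myself.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(v.size()); ++i) {
        v[i] *= 2.0;
    }
}

void double_all_abstracted(std::vector<double>& v) {
    // The parallel algorithm abstracts away the iteration itself: there is
    // no index variable to name or reason about, just the operation.
    std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                   [](double x) { return x * 2.0; });
}
```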
It’s tempting to conclude that we need a language that emphasizes functional programming in its design, but that’s not necessarily the right approach. There are many difficult and complex aspects of writing software, and global state is just one more. What we really want is a language that makes it easy to encapsulate global state, and functional languages don’t necessarily make that easy. I think the correct response is to look for languages that can abstract away the complexity of multi-threaded coding and global state in effective ways.
So here’s an analogy: a long time ago I took a course on the software APIs used to internationalize software. My main take-away was that the best way to internationalize any sort of library is to remove all the messages from the library and return only diagnostic codes. In other words, the best way to internationalize code is not to have to. Similarly, the best way to synchronize code is not to have to; you want to encapsulate the synchronization into a specialized module.
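As a purely illustrative sketch of what I mean by a specialized module (the class and its names are mine): the mutex below lives entirely inside one small class, so the rest of the program never writes a line of synchronization; callers just see an ordinary interface.

```cpp
#include <mutex>

class Counter {
public:
    void add(long delta) {
        std::lock_guard<std::mutex> lock(mutex_);
        value_ += delta;
    }

    long value() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return value_;
    }

private:
    mutable std::mutex mutex_;  // the only lock, owned by the module itself
    long value_ = 0;
};
```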
In a layered application that is concurrent, you want to focus on making the lowest layers of software completely reentrant (that is, side-effect free). It’s generally not a big deal if the very top layer has global state, as long as that layer is sufficiently thin. You want a language that makes it easy to write side-effect-free code, while recognizing that there will still be plenty of code whose side effects can’t be ignored.
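To sketch the layering with another toy example of my own: the lower-layer function below touches nothing but its arguments and its own locals, so any number of threads can call it with no locking at all. The only shared, mutable state would live in a thin top layer, a small self-synchronizing module like the Counter above.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Lower layer: side-effect free. It reads only its arguments and returns a
// fresh value, so it is trivially safe to call from many threads at once.
std::vector<std::size_t> word_lengths(const std::vector<std::string>& words) {
    std::vector<std::size_t> lengths;
    lengths.reserve(words.size());
    for (const auto& w : words) {
        lengths.push_back(w.size());
    }
    return lengths;
}
```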
So it seems to me that the key feature we should be looking for is a significantly increased ability to abstract away implementation details. By the way, any functional language that requires coders to understand how/why/when to write a tail-recursive function loses out big time in the abstraction department.
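For anyone who hasn’t bumped into the tail-recursion issue, here’s a toy illustration (in C++ rather than a functional language, just to stay consistent with the other sketches): both functions compute the same sum, but only the second is written so the recursive call is the last thing that happens, which is what allows a compiler (or, in functional languages, the language itself) to turn the recursion into a loop. Having to know about, and hand-perform, that rewrite is exactly the kind of implementation detail I’d want abstracted away.

```cpp
// Naive recursion: builds up one stack frame per call, so a large n can
// overflow the stack.
long sum_to(long n) {
    return n == 0 ? 0 : n + sum_to(n - 1);
}

// Tail-recursive rewrite: the recursive call is the final action, so it can
// be turned into a loop, but the programmer had to introduce an accumulator
// and restructure the code by hand to make that possible.
long sum_to_acc(long n, long acc = 0) {
    return n == 0 ? acc : sum_to_acc(n - 1, acc + n);
}
```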
I was recently inspired by a paper in IEEE Computer that talked about a research language called SequenceL. I discussed it in a previous blog. The benefits of SequenceL are described as the ability to write executable code that maps as directly as possible into a requirements specification in the jargon of the problem domain. This meshes with the recent discussions of DSLs (domain specific languages) as a good way to encapsulate various concurrent implementations.
Check out my last blog entry about SequenceL, and read the paper; it’s very well written. If you have a direct link for it, please let me know.