The Future of Computing

This is all going to be a bit hand-wavey and straight off the top of my head, so bear with me, but it’s a thought/debate that’s been rattling around my head for a while.

Where do you think the future of computing lies?

That’s a huge question that can be interpreted a million different ways, so let me pose some more specific questions.

Will we see a convergence? Will choices in hardware, languages, paradigms, and so on all converge on clear winners? Or will things become more disparate? Is the trend towards true computer literacy or (for lack of a better term) app literacy?

Will we end up in a situation where developers can build an AI the way we build a blog now, but struggle to build the fundamentals? Will we get to a point where we’ve abstracted so much away that very few (if any) people can work from first principles?

Will open source or proprietary systems become the norm? If it’s proprietary, will we see the companies behind them become more or less altruistic (a proxy debate for an optimistic or dystopian future)? If it’s open source, will the innumerable options condense down to a few bigger, better, more cohesive players, or will they just multiply? If they multiply, what common standards must exist to make it all work together?

I only ask these questions because I don’t see a clear path myself.

Take web frameworks as a small-scale example. Most web languages have a predominant web framework, and they’re all largely similar. Rails works like Phoenix, with similar features to Laravel, comparable to Django and .NET, and so on. We can pick out small differences, and each has language features that might make it better suited in some cases, but by and large they do similar things in similar ways. Isn’t that wasted effort?

Sticking with web development, there are now so many different architectures available (SPA, server-rendered, HTML over WebSockets, static), all with strengths and weaknesses, but again: huge duplicated effort for largely similar results.

Companies like Apple have shown that there are huge advantages to be gained (performance, functionality, optimisation, etc.) from tight, vertical software-hardware integration.

On the flip side, projects like Raspberry Pi have shown that, on a long enough timeframe, raw compute power stops being the limiting factor: tiny single-board computers that could once only do very simple things can now be personal devices, servers, sensors, and everything in between. Developments like RISC-V are likely to continue this trend.

What I think I’m trying to get at is that, at the moment, we developers seem to be finding an ever-increasing number of ways to do the same or similar things. The only common thread seems to be capitalism, which, depending on your outlook, might indicate a grim, dystopian future for technology. Yes, many developers work on open source software “for the greater good”, but when that effort is just used for capital gain, doesn’t it become akin to (voluntary) free labour?

I don’t know. Where do you imagine computing, and developers like us, will be in 50/100/200 years? Where do you hope we’ll be?
