This is all going to be a bit hand-wavey and straight off the top of my head, so bear with me, but it’s a thought/debate that’s been rattling around my head for a while.
Where do you think the future of computing lies?
That’s a huge question that can be interpreted a million different ways, so let me pose some more specific questions.
Will we see a convergence? Will choice in hardware, languages, paradigms, etc all converge on clear winners? Alternatively, will things become more disparate? Is the trend towards true computer-literacy or (for lack of a better term) app-literacy?
Will we end up in a situation where developers can build an AI the way we build a blog now, but struggle to build the fundamentals? Will we get to a point where we’ve abstracted so much away that very few (if any) people can work from first principles?
Will open source or proprietary systems become the norm? If it’s proprietary, will we see the companies behind them become more or less altruistic (a proxy debate for an optimistic vs dystopian future)? If it’s open source, will the innumerable options condense down to a few bigger, better, more cohesive players, or will they just multiply? If they multiply, what common standards must exist to make it all work together?
I only ask these questions because I don’t see a clear path myself.
Take web frameworks for a small scale example. Most web languages have a predominant web framework and all are largely similar. Rails works like Phoenix, with similar features to Laravel, comparable to Django and .NET…etc. We can pick out small differences, and each has language features that might make them better suited in some cases, but they by and large do similar things in similar ways. Isn’t that wasted effort?
Sticking with web development, there are now so many different architectures available (SPA, server-rendered, HTML over WebSockets, static), all with strengths and weaknesses, but again: huge duplicated effort for largely similar results.
Companies like Apple have shown that there are huge advantages to be gained (performance, functionality, optimisation, etc.) from tight, vertical hardware/software integration.
On the flip side, projects like Raspberry Pi have shown that, over a long enough timeframe, current compute power is irrelevant: tiny single-board computers that once could only do very simple things can now be personal devices, servers, sensors and everything in between. Developments like RISC-V are likely to continue this trend.
What I think I’m trying to get at is that, at the moment, we developers seem to be finding an ever-increasing number of ways to do the same or similar things. The only common thread seems to be capitalism, which, depending on your outlook, might indicate a grim, dystopian future for technology. Yes, many developers work on open source software “for the greater good”, but when this effort is just used for capital gain, doesn’t it become akin to (voluntary) free labour?
I don’t know. Where do you imagine computing, and developers like us, to be in 50/100/200 years? Where do you hope we’ll be?
I think we are going to see a few cycles, each iteration starting off relatively small and then growing progressively longer.
Ten, maybe twenty thousand years from now, is it possible we may know everything there is to know? And by then we are likely to have become the next version of humanity too: a self-enhanced species that may be more computer-like ourselves. (I definitely think we are still at a very primitive stage compared to what we could eventually become.) Before we get to that stage, we will probably see self-writing programs/languages appear, where we simply stipulate a spec and all the hard work is done for us (so creating AI tools may become as simple as creating a basic CRUD blog/site is now).
Shorter term (the next 5 to 10 years), I think hardware is going to cause the first big split. Apple are most likely going to lead here, and we can already see evidence of this in the various hardware features iOS and macOS devices have that their counterparts just don’t. To get the best out of these features, it will probably become more important, maybe even necessary, to use Apple’s own languages.
Open source has changed and will probably continue to do so. I like how it’s enabled everyday people to make a living from doing something they are passionate about, even if, as you say, there is a capitalist element to it (I personally think it’s great that people can earn a living doing something they love). In terms of organisations owning open source, I think it depends on the company and their intent. Lots of people have written about the ulterior motives of tech giants, and a lot of it may be true, but when there is a ‘decent’ company behind something (e.g. Ericsson and Erlang) then that’s a big plus point.

Speaking of decency, I actually met an Ericsson CEO many years ago on a flight. He was sitting at the back of the plane (i.e. not first class, even though I could see there were plenty of spaces there), which I thought was nice and humble (or just smart: apparently those sitting right at the back have a better chance of surviving a crash!).
I think we may actually see more and more devices, apps and services at this level, because more and more people want simpler devices that can’t track or spy on them as easily. I have certainly been thinking about this a lot recently, particularly with how pervasive AI/ML is making things: owners of big platforms will be able to create psychological profiles of their users, and this can be (and probably will be, or already is being) abused.
We need better laws here, and a simple fix could be that apps may only do what a reasonable person would expect of them. For example, on a social network or dating app: messages between users should be private (encrypted); users’ data, browsing, or app-usage details should not be sold or shared with outside companies; psychological profiles should not be built about users; and so on.
Margaret actually touched on this in her 2021 tech topics thread:
Where I’d like to see computing go and where it is probably going to go (at least to begin with) are probably quite distant. I’d like to see more ethics, and decent, honest apps and companies, but unfortunately I think for the immediate future we will see more and more of the opposite: companies will continue to abuse their power in order to control and manipulate users. I think we as a species, and especially those of us in tech, have to strive for principles higher than making money. Hopefully after that, everything else will start to fall into place.
In a very tightly regulated profession, where you will be legally responsible for the code you write and may end up in jail.
I work in the security space, and even before that I saw that developers and businesses don’t treat security as a first-class citizen; instead it’s almost always an afterthought.
How much software out there makes security opt-out? I mean software that is released with tight security controls in place, so that you then have to learn how to opt out of them.
Changing the mindset of developers and businesses about security is very hard, and more often than not I get a lot of resistance and downplaying of whatever I try to educate people on. They come back with all sorts of excuses and business/developer rationales about trade-offs and risk assessments, but all these reasons fly out of the window when they have a security incident.
So, the question is not if it will happen, but when: software development will be strongly regulated by law, and you may end up in jail because of some code you wrote.
See this talk from Uncle Bob that touches on the subject:
This is partly what prompted me to write this. You already know that I’ve been toying with Swift lately, just because what’s possible within that ecosystem is too compelling to outright ignore or write off. Making any money as an independent developer in that space still appears to be close to impossible, even for those that have always been “Mac/iOS developers”. I of course need to keep a roof over my head, which is why I struggle to justify the time spent learning and developing for this platform, but the integration between hardware and software opens up opportunities that excite the developer in me and at minimum get the businessman curious.
I also find their “on device” mantra convincing as a user. I like that I’ve got a device powerful enough that my data doesn’t need to be shipped off to a server to be crunched. It strikes me as a sensible pattern, given networks will always be unreliable and those servers are out of your control. Arguably an iPhone is also out of a user’s control, but unless you’re flashing a phone with Linux, what isn’t?
This approach makes things like syncing and coordination harder though. HomeKit is a perfect example: it constantly struggles to work out “who’s in charge” and what a device should be doing at any given time, and it’s hopelessly reliant on the network for voice comprehension. A central server (a Raspberry Pi!) could easily coordinate this, and might even have the power for speech-to-text.
I still find it bananas that I, a self-taught developer who started with zero CS experience (I was a graphic designer), can load up the source code for an operating system, language or framework and learn from it directly. When the docs for something don’t go far enough, I can “pop the hood” and work out how something works, why it works that way and how best to use it. There’s no gatekeeping, and for someone who was always pulling things apart as a child to learn how they worked, that’s an opportunity too good to pass up.
In contrast, I find working on closed source systems like .NET and SwiftUI frustrating. There’s no escape hatch when the docs fall short.
I think the lack of tracking is one aspect but a bigger one is simply that they are “knowable”. You can map and understand the entire purpose and functionality of these boards and use that knowledge to create cool things with practical benefits. Building apps (web or otherwise) is such a large domain these days with so many rabbit holes and conflicting doctrines that it’s overwhelming. For developers that like to tinker, I think these boards are a safe haven where they can learn, explore and create in a low-pressure, low-stakes environment. When so much of our professional work carries so much ceremony and red-tape, simple creation becomes a radical act.
Agreed. It’s inevitable, but I think it’ll take a long time for the law to truly catch up with this industry, if it ever does! What I see as a more likely future is regulation not by law but by boards or councils. I’m thinking specifically of medicine here, as my wife is a doctor so I understand it well. Having an overseeing body such as the GMC to validate that developers have a certain skill level, adhere to a set of ethics, and so on would all be a good thing and probably a sensible route for our industry.
That said, I have to reconcile that if this were the case currently, then I, and I suspect many others, wouldn’t be developers. I wouldn’t want to see the equivalent of medical schools charging a small fortune for access to this profession (and even here in the UK it’s expensive; it’s worse pretty much everywhere else in the world). A compromise might be something like apprenticeships, with a regulatory body for ethics if you want to work in the industry. You could still be taught by anyone, including yourself, but you would need to register to charge for your services. Being “struck off” would prevent you from working again, and for serious incidents legal action could follow too.
Ditto. I’d like to see less siloing of knowledge and cult-like behaviour in the industry. For me, there needs to be less “us and them” and more working together to build something we can be proud of. Unfortunately I fear that capitalism — that chestnut again… — is a significant barrier to this mindset.
I don’t like to point fingers but I feel that DHH’s stance towards Apple after the HEY incident is a good example. Many of his criticisms of Apple the company are valid and shared by many people. But because he perhaps feels personally wronged by them, those criticisms are blended with hyperbole, straw-man arguments and vitriol that actually make his reasonable points less valid. Instead of using his voice in a reasoned manner to enact change, he’s used it to shout and create division and it’s clear that this level of extremism is good for precisely no one.
Potentially. A lot of people I know are still ignorant of what their technology is actually doing behind the scenes, but more and more people are aware that it’s happening, even if they’re not fully clued up on the consequences. There’s mounting pressure against shady tracking and data aggregation, and I don’t think it’ll be long before the tide turns. That’s where capitalism might actually help, as market pressures will force course corrections or weed out stubborn bad actors.
Agreed. How do we balance that with keeping a roof over our heads though? That’s the debate I’m struggling with most at the moment.
Anecdata incoming! I’m currently contracting for a company whose approach to testing, security and even “best practices” is underwhelming, yet they readily recognise that if what we’re building goes wrong and leaks data, then we all end up in court with a very large fine and likely can’t do business again. I struggle to reconcile this approach myself, but it comes down to a business decision and a judgement based on risk vs. reward.
I think the question above about “local data” vs “server data” is interesting. Do you as users and developers have any preference? Do you like the idea of a central server coordinating things and acting as the source of truth, or do you prefer devices being independent, processing data locally and being free of network concerns?
I know we’re all Elixir developers, so I suspect we’ll all say “server”, but as I mentioned I personally quite like the concept of local devices, perhaps operating on local networks, with the cloud removed from the system as much as possible. It makes sense to me both as a user and a developer. Thoughts?
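To make the local-first idea a bit more concrete, here’s a toy sketch (in Python for brevity; every name here is hypothetical, not any real API) of devices that accept writes while offline and merge state with a simple last-write-wins rule whenever a peer happens to be reachable:

```python
import itertools

# Shared logical clock, a stand-in for per-device Lamport clocks.
_clock = itertools.count()

class Device:
    """A device that stays usable offline and syncs opportunistically."""

    def __init__(self, name):
        self.name = name
        self.state = {}  # key -> (value, logical timestamp)

    def set(self, key, value):
        # Writes always succeed locally; no network round-trip needed.
        self.state[key] = (value, next(_clock))

    def get(self, key):
        return self.state[key][0]

    def sync(self, other):
        # Last-write-wins merge in both directions: newest timestamp wins.
        for key in set(self.state) | set(other.state):
            mine = self.state.get(key, (None, -1))
            theirs = other.state.get(key, (None, -1))
            winner = mine if mine[1] >= theirs[1] else theirs
            self.state[key] = winner
            other.state[key] = winner

phone = Device("phone")
hub = Device("hub")
phone.set("lights", "off")
hub.set("lights", "on")   # the later write, so it wins the merge
phone.sync(hub)
print(phone.get("lights"))  # -> on
```

Last-write-wins silently drops one side of a genuinely concurrent edit, which is why real local-first systems reach for CRDTs or vector clocks, but it shows the appeal: every device is fully functional with zero cloud involvement, and the network is an optimisation rather than a dependency.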
Is that what it’s really like now? I haven’t made an iOS app, but I did make a tiny macOS app and sold it on the App Store for 99p: Have you made or worked on a macOS app? - #2 by AstonJ From memory I was making about £50 a month for what I thought was a tiny app (all it did was pick lottery numbers for you). Could that be why you often see lots of apps from a single developer?
I agree, and I think many of us feel that apps we download via the App Store are ‘safe’, i.e. if anything nefarious is going on, such as an app stealing data or accessing your photos without your specific permission (i.e. beyond photos you upload yourself), Apple would boot the app from the App Store or notify users of what’s happened so we can take the appropriate legal action.
I don’t follow Twitter these days but I always thought he had valuable things to say in certain areas. What sort of stuff is he highlighting? About Apple’s fees?
I hope you’re right, but I think we may need government intervention. A lot of the ‘successful’ startups are those with VC backing/funding. This is why I love supporting everyday devs like us, and languages like Ruby and Elixir, because they are powerful enablers for bootstrappers.
In the hands of decent folk it should be exciting… but yeah, very scary in the hands of others!
I’ve been following that and I agree that it’s terrible. Surely the first thing they should have checked was their own systems?
The families can take action against those involved (and I hope they do), but I agree there should be better laws, particularly as they had no choice or say in the matter (of using that software).