Similarities and differences - for senior (age/experience) devs

Hello Fellow developers,

I have been trying to wrap my head around how difficult it would have been to be a dev 20-30 years ago.

I have a few questions aimed at people two generations (30-ish years) above me, say in their 50s and still coding (as a hobby or profession).

  1. What were the pain points of becoming a dev when you were 20? The two most painful.

  2. What disadvantages does the present generation (kids in their 20s) face compared to 20 years ago?

Anecdotes or examples are preferred for better understanding, since they let me put myself in your shoes :stuck_out_tongue:

5 Likes


Hmmm, it’s actually hard for me to think about… I started programming with an assembly add-in cartridge on a Commodore VIC-20; the reference manual was a goldmine of info (something that is sorely lacking for many languages nowadays). Eventually I moved to an IBM 8088 (I still have both machines in my closet) with BASIC and so forth (the old ones, literally ‘basic’ and advanced BASIC, long before things like QuickBASIC), but I quickly got various Borland tools like Turbo Pascal and Turbo C. I mostly did C until I got a Borland C++ compiler, then eventually some ancient Visual Studio for C++, and was just C++ for a long, long time. I dabbled in other languages once the Internet existed for the public (for me, around 2003-2004) and figured out I could learn interesting things from them that helped my C++ work, so I tried to consume and learn as many as I could, eventually arriving at my current tactic of trying to “Get Good” with at least 3 languages a year, which I still do to this day.

So back then the major disadvantages would probably have been a lack of examples (no internet, and the local BBSes weren’t exactly flush with code), although the examples I did have were absolutely top-notch in quality. Another thing would be the lack of a community to discuss it all in.

Too much information of way too low quality is the biggest thing. I am really not a fan of sites like Stack Overflow; all that work should have gone into writing better documentation, not into making a really questionable site full of often really poor-quality code that people tend to copy and paste without gaining understanding.

4 Likes

I have been a victim of this too often.

Surprisingly, I have started reading documentation first (rather than googling) when it came to Elm and Elixir.

As Underjord put it so succinctly!

Also, I felt a depth of understanding after reading the Programming Phoenix LiveView book. Those kinds of books are not encouraged enough in the JS ecosystem.

Thank you for the reply!

2 Likes

Pre-Amazon (circa 1990), it was almost IMPOSSIBLE to find programming books about subjects that were “too niche” for Borders and Barnes & Noble unless you ALREADY knew they existed, or you were lucky enough to live somewhere like Boston or Silicon Valley (where there presumably WAS a bookstore or two that made a point of automatically carrying at least one copy of nearly every programming-related book from a respected publisher). And if you rolled the dice and special-ordered a book “sight unseen” based entirely on its title that you managed to stumble upon, you had a good chance of spending a lot of money, waiting weeks or months, and ultimately ending up disappointed.

By 1990, I was lucky enough to have internet (specifically, Usenet) access (courtesy of my university), but it didn’t really help much. My own university only kept Usenet traffic around for 2-4 weeks, and its mainframe-based client didn’t have any way (at least, that I was aware of) to bring replies to your own posts to your attention, so they were as likely to roll off and be forgotten as you were to ever see them. Thanks to the hard archival work of DejaNews (ultimately scooped up by Google), I actually stumbled over a reply to a long-forgotten post I made on comp.sys.amiga.programming in 1989… approximately 20 years later(!!!). From what I recall, I made the post the week before final exams in December, went home for a month, and by the time I got back in mid-January, it was gone. That was life back in the dark ages.

Pre-Google (and pre-WWW in general), everything was extraordinarily ephemeral. Aside from Usenet, we had Fidonet and BBSes, but with no real ability to search past posts (or even keep them around much longer than a few months at most), knowledge evaporated almost as quickly as it was shared.


As far as the challenges faced by new programmers today, I’d say it’s the sheer volume of knowledge you have to accumulate just to make it to “Hello, world!” in a language like C# or Java.

Back in the mid-80s, a computer like the Commodore 64 came with a ~150-page book that had enough real information in it to write meaningful programs (as opposed to a useless booklet containing nothing besides legal disclaimers, regulatory notices, and a page of diagrams for people who are too stupid to know how a keyboard and mouse are supposed to be connected), and for another $25 or so, you could buy the Programmer’s Reference Manual which wasn’t particularly nice to read, but contained almost everything ELSE you REALLY needed to know to write programs (at least, in BASIC).

If you went completely nuts, you could buy a half-dozen additional books on topics like assembly language and advanced graphics… but the point is, even a HUGE personal library of programming books consisted of MAYBE 6-12 books, with 2,000-3,000 pages total between them.

Compare that to a single book about C++ programming with Visual Studio for Windows, which could easily exceed 2,000 pages and barely scratch the surface.

Contemplate for a moment how many pages it would take to print the complete official javadocs for the Android API… using 1/4" margins, 2 pages per side, double-sided printing, and 8-point type.
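For a rough sense of scale, here is a hedged back-of-envelope estimator. Every input below (number of documented classes, words of javadoc per class, words per printed page) is an invented assumption, not a measured figure; the point is only the order of magnitude:

```python
# Hypothetical back-of-envelope: how many physical sheets would the full
# Android API javadocs need in print? EVERY input below is an invented
# assumption, not a measured figure.

def sheets_needed(total_words, words_per_page, pages_per_side=2, sides=2):
    """Convert a word count into physical sheets, packing
    pages_per_side * sides logical pages onto each sheet."""
    logical_pages = -(-total_words // words_per_page)        # ceiling division
    return -(-logical_pages // (pages_per_side * sides))

# Guesses: ~5,000 documented classes x ~2,000 words of javadoc each;
# 8-point type with 1/4" margins fits maybe ~1,400 words per logical page.
print(sheets_needed(5_000 * 2_000, words_per_page=1_400))    # → 1786 sheets
```

Even with 8 logical pages crammed onto every sheet, these (deliberately conservative) guesses still land in the high hundreds to thousands of sheets.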

I can’t even imagine what it would be like to be a teenager today who has to master object-oriented design, functional programming, and MVVM architecture just to write an Android app that doesn’t completely suck. Or a J2EE web application. Or a Windows app. The bar to entry is staggeringly higher today than it used to be.

The fact is, Stack Overflow and Google are the only things that keep Android and iOS development from collapsing into themselves like black holes from their own sheer volatile mass.

3 Likes

Went right through the heart!!

20 years ago: hard to find a gem (though gems like crisp, short manuals were available).

Today: easy to find garbage docs.

A couple more questions:

  1. What advice would you give a young programmer (me, in my 20s) to stay sane?

  2. How do I find good resources like the ones from your day? (Crisp and well explained.)

2 Likes

I got into programming around 40 years ago, via electronic hardware development. This was a time when using microprocessors to replace hardware was becoming common. I didn’t have any degrees in electronics or software, but I did have a background that included electronic repair. And I had a degree in English, which taught me a lot about expressing myself logically and clearly.

What were the pain points of becoming a dev when you were 20? The two most painful.

The biggest pain point was lack of tools. The ones that were available (such as logic analyzers that could display code mnemonics, and emulators that could let you set breakpoints and examine registers) were scarce and expensive. I ended up creating some of my own tools. Based on a suggestion by a co-worker, I wrote a monitor program that basically let me pretend I had an emulator, though I was using the same processor to execute the code. (see https://idiacomputing.com/pub/An_8031_In-Circuit_Emulator.pdf)

What disadvantages does the present generation (kids in their 20s) face compared to 20 years ago?

Much of the work today depends on understanding frameworks and APIs written by someone else. Many of these are designed rather haphazardly and often are documented poorly from the consumer’s point of view. The irregularities make it much harder for someone to get a good mental model.

  • George
1 Like

Oh, I almost forgot the OTHER things that make learning difficult today:

  1. Volatile development tools and APIs that mutate & become obsolete faster than they can be documented and learned.

  2. The “friction” imposed by ebooks and “online” documentation.

Back in the Commodore 64 era, documentation started out ‘OK’, and only got better over time, because for all intents and purposes it was a nonmoving target. AFAIK, Commodore Basic (shipped with the c64, and an option to use with the c128 in c64 mode) literally never changed from the day the first c64 rolled off Commodore’s assembly line until the day the final c64-compatible computer capable of running in “c64 mode” did.

Sure, the c128 had a better BASIC that wasn’t compatible with c64 BASIC, but AFAIK, even THAT was a nonmoving target… it existed on the c128’s final production day exactly the same way it did on its first.

Over time, the c64/c128 got BETTER development tools, but the old ones continued to work exactly the same way THEY did. Old versions had bugs that got fixed, new features were added, and LATER models of the computer ITSELF might have had problems with really old software, but it was basically UNHEARD OF for a program that worked on your computer, with a set of hardware you had for it, at some point in the past, to suddenly and spontaneously quit working in the future.

The Amiga was volatile compared to the c64/c128 (the jump from 1.3 to 1.4/2.0 broke a LOT of stuff), but it was downright STABLE compared to a platform like Android (where new updates come out at semi-random every few weeks that can and do break things that worked 15 minutes earlier).

The problem is particularly acute with Android. For the past few years, I’ve been wanting to learn how to use Jetpack’s new features, but finding documentation that wasn’t confusingly broken was nearly impossible. Following Android’s changes is hard enough when you ALREADY understand a particular advanced topic… trying to debug a program when you aren’t sure whether a problem is due to a mistake YOU made, a typo in the documentation/tutorial, a change made by Google after the documentation you’re looking at was made, or some combination of all three, is immensely frustrating.

I’m immensely thankful not only that Michael Fazio wrote a great book about it, but also that I got lucky enough to discover and begin reading it within weeks of its first printing, before Google inevitably wrecks it with their next wave of major changes a month or two from now.

Which brings me to point #2… Android is so volatile, eBooks are almost the only form of documentation that can be kept up to date… but for technical documentation, eBooks really, *really* SUCK, for a whole laundry list of reasons:

  • Like software, technical books have “design patterns” of their own. One of the major ones is, “Diagram or overview on one page, explanation and details on the facing page”. Ebooks, especially those made to aggressively re-flow text, completely wreck this pattern. Even when the author/publisher is able to design the eBook to properly put it on one page, or even to make sure it goes on a right-side page with the essential left-side-page content on the page before it, most eBook readers only allow you to view one page at a time, so the whole design pattern goes down the toilet.

  • Most ebook reader hardware is SLOW, and there’s WAY too much reliance on “gestures” to navigate instead of nice, tactile buttons that are themselves engineered to have just enough resistance to avoid unintentional triggering by casual touches.

The big problem with relying upon gestures alone is that they introduce lag and latency. The moment you touch a capacitive touchscreen with your thumb, it has no idea what your intent is… it has to study how the touch changes for at least a few hundred milliseconds to tell the difference between a swipe, a mash, a tap, etc. That latency introduces “cognitive load”. It’s not merely frustrating… it’s actively harmful to your ability to learn from text.
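A toy sketch of that disambiguation problem (the thresholds are illustrative guesses, not real touch-driver values): a classifier simply cannot label a touch until it has watched it for a while, and for a plain tap the answer only arrives at finger-up.

```python
# Toy sketch of why capacitive touch input is inherently laggy: the
# device cannot classify a touch until it has watched it move (or not
# move) for a while. Thresholds are illustrative guesses.

TAP_MAX_MS = 250        # still finger held longer than this -> long-press
SWIPE_MIN_PX = 30       # moved farther than this -> swipe

def classify(samples):
    """samples: list of (t_ms, x, y) points from touch-down to finger-up.
    Returns (gesture, decided_at_ms) -- note how late the decision is."""
    t0, x0, y0 = samples[0]
    for t, x, y in samples[1:]:
        if abs(x - x0) + abs(y - y0) >= SWIPE_MIN_PX:
            return ("swipe", t - t0)          # earliest possible decision
        if t - t0 > TAP_MAX_MS:
            return ("long-press", t - t0)
    return ("tap", samples[-1][0] - t0)       # decided only at finger-up

# A finger that sits still for 120 ms then lifts: only at lift-off do we
# know it was a tap -- that wait is exactly the latency described above.
print(classify([(0, 10, 10), (60, 11, 10), (120, 11, 11)]))  # → ('tap', 120)
```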

When you read something, the memory begins to fade from short-term memory almost immediately unless something reinforces it quickly. If it’s a diagram with explanations on the facing page, your eyes can dart over, and the text and diagram can reinforce each other. If you’re reading an ebook and it takes semi-conscious effort to trigger a page-flip & takes a second or more, you don’t just suffer from the memory fading and becoming corrupted during the 700-1500ms it takes for the new page to appear… there’s also a HUGE cognitive jolt required when you have to take in the new page, establish landmarks, zone in on the explanation you’re looking for, read it, and connect what you’ve just read with the diagram you saw 2-3 seconds ago.

Put another way, it’s not your imagination that it’s harder to learn something completely new and complicated from an ebook than from a traditional book. It IS, and by now there’s an entire body of academic papers that have begun to explore the toll ebooks take on the learning process.

Online documentation takes a bad situation and makes it worse, because most online tutorial sites are set up to maximize ad exposure, which means limiting the amount of “content” you can see at any one time (without being exposed to more ads), and more often than not, introducing an element of time-gating to slow you down and make sure you’re exposed to the advertising long enough for the site’s creator to get paid for the ad exposure. Take everything I wrote about the cognitive load imposed by ebooks, and jack it up by at least an order of magnitude. Then, make it even worse, because at least books don’t contain ads whose literal goal is to forcibly grab your attention and break your train of thought.

(continued in next post)

2 Likes

What we really need is for the major publishers (Pragmatic, O’Reilly, Manning, etc) to push someone like Google to make a new high-end Android tablet that goes a step above to include optimizations SPECIFICALLY to enhance its use for ebooks:

  • A display that’s big enough to be equivalent to a normal-sized “computer book” with 2 facing pages side by side (with a small active margin around the edge for navigation and control widgets), and sufficiently high resolution to achieve 300PPI. Basically, a 3840x2560 display that’s ~8" x 12" (~15" diagonal).

  • The display also needs to be capable of 120fps… and 240fps would be even better. Why? To fully replicate the visual experience of flipping through pages of a book. I don’t have links to the studies available, but basically, if you want to smoothly animate the visual effect of flipping through pages as quickly as you can semi-consciously recognize words on the pages being flipped through, 120fps is the bare minimum, and 240fps will greatly improve legibility. The slower the page-flips, the more you need to introduce blur effects to superficially achieve “the look”. The more blur, the less you’re able to actually RECOGNIZE content on those pages that are within your field of vision for literally 10-20ms apiece. So… it actually needs a pretty HEFTY GPU, with lots of display RAM.

The high resolution is also important for reading comprehension. Most people don’t realize that the gray blur created by anti-aliasing 100-150PPI text has the same harmful impact upon reading comprehension as a diopter or two of uncorrected astigmatism.
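Some quick arithmetic bears these numbers out. The sketch below is purely back-of-envelope: the display specs come from the description above, while the per-page framebuffer size is my own extrapolation, assuming an uncompressed 4-bytes-per-pixel render:

```python
import math

# Sanity-checking the tablet specs above (3840x2560 at 300 PPI, 120/240 fps);
# the per-page framebuffer size is an extrapolation assuming an
# uncompressed 4-bytes-per-pixel render.

W_PX, H_PX, PPI = 3840, 2560, 300

width_in, height_in = W_PX / PPI, H_PX / PPI     # 12.8" x ~8.5"
diag_in = math.hypot(width_in, height_in)        # diagonal in inches

frame_ms_120 = 1000 / 120                        # frame budget at 120 fps
frame_ms_240 = 1000 / 240                        # frame budget at 240 fps

page_mb = W_PX * H_PX * 4 / 2**20                # one rendered page, in MB

print(round(diag_in, 1))       # → 15.4 (matches the ~15" diagonal)
print(round(frame_ms_120, 1))  # → 8.3  (a page in view for 10-20 ms gets 1-2 frames)
print(round(page_mb, 1))       # → 37.5
```

At ~37.5 MB per uncompressed page, 16-32 GB of RAM really would hold hundreds of pre-rendered pages, which is what makes the "render the whole book ahead of time" idea plausible.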

  • A CPU that, if ARM, follows the “Big.Little” paradigm, where you have multiple cores running at different speeds… and “Big” cores that are capable of spinning up from deep sleep to full speed within microseconds so they can properly “run to wait”. It doesn’t need to have its powerful cores running at full speed when you’re staring at a rendered page, but it needs to be powerful enough to render the complete page from source to bitmap within literally a millisecond or two. Or, the device needs a good 16-32 gigabytes of RAM so it can spend a few seconds rendering the entire book (or at least the parts you’re likely to look at within the next few seconds) to RAM, so those bitmaps will already be available.

  • It needs a custom filesystem that’s extraordinarily fast at transferring 50-500MB chunks of sequential data, and almost as fast at jumping to an arbitrary page-point, so pre-rendered pages can be effectively cached AND fetched in realtime. This includes storing chunks of each page in multiple places… say, one where you have the page rendered at the normal resolution, and one that stores just the portions you’d see while rapidly page-flipping (so it can grab them from storage, one after another, in realtime).

  • At the VERY least, it would have nice, firm, tactile buttons in thumb-friendly positions on both sides. Or preferably, two on each side, arranged in a slight arc that follows your thumbs’ natural paths. Yes, that means the device has at least a 1cm bezel on both sides. Apple might weep, and the Design Nazis might howl at something as gauche as bezels, but screw them. There’s a REASON why printed books have margins, and a good part of it is, “because otherwise, you’d have to keep moving your thumbs out of the way to read the text below them”. The goal of this ebook reference design is to try and enable as many good “book patterns” as possible. Being able to comfortably hold it with both hands without obscuring the page is IMPORTANT.

As far as how you might USE those buttons, here are some thoughts I’ve had:

  • Normally, the screen ignores most touches.

  • If you press and release a button, the page flips from that side to the other. Press and release the right button to flip to the next two pages, press and release the left button to flip to the previous two pages.

  • If you press and hold a button, the touchscreen becomes active… with a whole host of multi-finger gestures that can be used to trigger flipping, show the equivalent of “dog ears” that you can jump to immediately, view/create annotations, bring up a search window, etc.

The big bonus of keeping the touchscreen inactive unless a button gets held is that when a touch DOES occur (while the button is held), you don’t have to WAIT to distinguish it between a random graze and an intentional gesture. If the left button is down, you know that a gesture on the right side of the screen is INTENDED (and vice-versa).
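That arming logic is simple enough to sketch. The class below is an invented illustration (the names and API shape are mine, not from any real reader firmware): touches are dropped unless a button is held, and a press-and-release with no gesture in between flips the spread.

```python
# Invented sketch of the "touchscreen only listens while a side button
# is held" scheme described above. Names and API shape are illustrative.

class ReaderInput:
    def __init__(self):
        self.page = 0        # index of the current two-page spread
        self.held = None     # which side button is currently held, if any
        self.gestured = False

    def button_down(self, side):
        self.held = side     # arms the touchscreen for this hold

    def button_up(self, side):
        if self.held == side:
            if not self.gestured:
                # Plain press-and-release = flip toward that side.
                self.page += 1 if side == "right" else -1
            self.held = None
            self.gestured = False

    def touch(self, gesture):
        if self.held is None:
            return None          # screen ignores stray touches entirely
        self.gestured = True     # this hold is a gesture session, not a flip
        return gesture           # hand off: dog-ears, annotations, search...

r = ReaderInput()
r.button_down("right"); r.button_up("right")   # flip forward one spread
print(r.page)                                  # → 1
print(r.touch("pinch"))                        # → None (no button held)
```

Because a touch can only arrive while a button is held, it is intentional by construction, which is exactly why the disambiguation delay disappears.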

For all intents and purposes, such an “ebook reader” would be a powerful high-end tablet that goes a step beyond, raising its specs to the level of a mobile workstation laptop while adding things like the bezel and thumb-buttons to improve its ergonomics as an ebook-reading device.

In a pinch, you could use it as a laptop… but that would really just be icing on the cake. The assumption is that anyone who’s willing to throw down a thousand bucks or more for a tablet with a 15" 4k display and specs approaching that of a high-end laptop already HAS a beefy mobile workstation or laptop to use for “real stuff” anyway. Its ability to be an Android tablet for things like web browsing is a “because we can, and might as well” feature to add extra value (since it admittedly wouldn’t be cheap).

I’d argue that such a device should ALSO include the chip(s) necessary to enable it to be used as a “dumb” 4k display for a laptop (via HDMI, Thunderbolt, and/or USB 3). I mean, at this point, it might as well. An external 15" 4k display currently costs about $400-500 from Amazon, and it would REALLY come in handy when traveling. Half the time when I go somewhere for the weekend (at least, by car), I end up taking one of my monitors along anyway. I keep planning to buy a 15-17" USB-powered 4k display someday, but I really don’t take enough weekend trips to justify spending that much on a dedicated travel monitor. If it were a bonus feature of an ebook-optimized ultra high-end tablet, that would be another matter entirely.

Ideally, it would be implemented with a physical switch or button on the side that can be used to toggle it between “ebook reader” and “dumb display” modes… ideally, without the host computer that’s USING it as a dumb display having the slightest idea that anything has changed, as long as you leave the cables connected. Disconnecting and reconnecting displays is disruptive to your desktop environment under Windows and Linux, so I’d just as soon let the computer (and HDCP) THINK the reader I’m using as an external display is still connected and active as long as I leave the cables connected.

Finally, I’d include a bright “IR-blaster” on it, because “why the hell not?” At that point, it would ALREADY be the highest-end Android tablet in history and cost more than a thousand dollars… another ten cents or so for a bright IR LED that would allow it to do double-duty as a home theater remote control isn’t going to kill anyone. A decoder module would be nice (to make it easier to ‘learn’ codes), but honestly, almost every device from the past 20 years has its raw codes (protocol+value) available online if you know where to find them, so a decoder is more “convenient for less-technical users” than “essential”. It seriously pisses me off that every phone & tablet maker eliminated IR-blaster capabilities using the cost of the receiver module as an excuse, instead of just cutting it down to transmit-only (since the bright IR LED itself costs almost nothing).

Here’s a video that was made 9 years ago, but gives you an idea of some ways ebook-reading could be improved with better hardware: [KAIST ITC] Smart E-Book Interface Prototype Demo - YouTube

2 Likes

Have you thought about getting an iPad Pro Jeff? It’s 120Hz and at a size you might like :003:

2 Likes

Too much information is, in the end, just noise.
Too much Stack Overflow is noise too.

The more I program, the less I go to SO etc.

I don’t speak as an aged developer, but googling everything is, IMHO, a mistake.

I think books are still the best today. They provide “real” knowledge rather than the shallow kind that comes from SO, YT videos, etc.

3 Likes

Until I read “Designing Elixir Systems with OTP” and got halfway through Pragmatic Studio’s course on Phoenix LiveView, I did not understand what good material looks like.

For Python, Java, JS, etc., googling Stack Overflow has really made a mess of “the love/understanding of programming - neat and hygienic”.

Edit: I never get tired of Underjord’s blog on the above!

3 Likes