My thoughts on macOS vs Linux

Well, all of that is true, but it won’t happen in an hour, will it? It’s a rhetorical question - I know it won’t. :frowning:

Don’t just assume that people have 6h to throw away on that and that they’ll sing happily while doing it. I might be getting old, but I want my machines functional. The constant grooming they require has become extremely tiring and irksome to me lately.

2 Likes

It would probably take a couple of days (though I can do it in a couple of hours as I’ve done it so many times) but it’s worth it. HOWEVER, things in your dev env will probably break and that’s the worst part of this whole thing - everyday programs are fine, it’s the dev stuff that ends up being a PITA.

A good option for you, since you have an iMac, would be to simply swap out your HD and do the clean install on a new one - if it doesn’t go to plan, put the old disk back in :+1:

3 Likes

I don’t know about Macs, but on Linux you can put the user folder on a different partition from the OS, so I used this approach to keep my user folder across Ubuntu upgrades and not spend too much time getting the system back in shape.
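For anyone who hasn’t set this up before, the usual mechanism is a dedicated partition mounted at /home via /etc/fstab. A minimal sketch - the UUID and filesystem type are placeholders, check your own with blkid:

```
# /etc/fstab - hypothetical entry keeping /home on its own partition
# (replace the UUID and filesystem with your own; `blkid` lists them)
UUID=1234abcd-0000-0000-0000-000000000000  /home  ext4  defaults  0  2
```

With that in place, a reinstall can reformat the root partition while leaving the /home partition untouched.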

Nowadays I just use Docker for pretty much everything, and I keep all my bash scripts that wrap docker commands and other utilities in the same repo, which I then use at work and at home, so a clean install is not a big issue for my setup.
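To give a rough idea of what I mean by a wrapper (this is a simplified sketch, not one of my actual scripts - the image tag and paths are placeholders):

```bash
#!/usr/bin/env bash
# Hypothetical wrapper: run whatever command you pass inside a container,
# with the current project mounted in. Image tag and paths are placeholders.
set -euo pipefail

docker run --rm -it \
  -v "$PWD":/app \
  -w /app \
  elixir:1.12 \
  "$@"
```

So something like `./dockerized mix test` runs the tests in the container instead of on the host, and a fresh machine only needs Docker plus the repo of scripts.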

This setup was a lifesaver 2 years ago. I was typing rm -rf ~/some/folder and my fat finger hit ~ and Enter at the same time, so it ran rm -rf ~ - and on the work computer, no less - but it was quick to recover thanks to my Docker setup :slight_smile:

But I understand that not everyone likes Docker or is fluent with it, or can run their editors from Docker.

3 Likes

I don’t like Docker, but the way things are, its usage is inevitable.

And yep, with clever partitioning Linux is a bit more resilient indeed. I’m even pondering having my home directory on a different physical disk (when I get a Linux workstation in the future). But it might not be necessary. Even on my Mac the home directory is very aggressively backed up - every hour with Time Machine, and occasionally with two other backup programs. Encrypted, compressed snapshots are distributed among 6 cloud storage services.

The Git repo for my scripts is a neat idea. I simply keep mine inside my OneDrive, but I should probably do what you do. It gives more independence.

As for editors inside Docker, I really prefer Emacs with a GUI rather than in the console, but to be fair I’ve never tried to fully replicate everything I like in the CLI alone, so it might work.

3 Likes

I use Sublime Text 3, VSCode and Android Studio from Docker containers.

Also, I have dockerized Firefox for when I want to access my bank accounts.
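For anyone wondering how the GUI gets out of the container on Linux, the usual trick is to share the host’s X11 socket. A rough sketch - the image name and profile path are placeholders, and `xhost +local:` loosens X access control, so treat it as a starting point rather than a hardened setup:

```bash
# Hypothetical: run a containerized browser against the host's X server.
xhost +local:   # allow local (container) clients to talk to the X server

docker run --rm -it \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v "$HOME/.docker-firefox-profile":/data/profile \
  some/firefox-image    # placeholder image name
```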

2 Likes

Yeah, guess I’ll have to kill a weekend sometime to make those scripts. :smiley:

2 Likes

It’s still a work in progress, but here it is:

I still need to add to this repo some code that I have scattered in other repos.

3 Likes

Thanks, I’ll poke through it.

2 Likes

It’s still missing a lot of other repos that need to be merged into it. You can see which ones by the broken symlinks in the root of the repo.

2 Likes

Just my 2 cents.

I also have all my dotfiles in a Git repo (in a somewhat complex structure), and all the dotfiles are written in a “composable” way.
You can have a private repo that includes secrets, but that can be dangerous; it’s much better to keep secrets in some vault (e.g. HashiCorp Vault).
I am then able to simply symlink configs as needed for each system and compose them as I need (see the sketch after the tree below).
That way all my machines have exactly the same configuration…
I also use Ansible, so if I bring up a new server I just run Ansible, which installs everything I need and sets up my config files. An Ansible playbook can also set up my local PC.

This is how the structure looks:

.
├── README.md
├── editors
│   ├── nvim
│   │   ├── init.vim_ex_erl_go_html_js_py_rt
│   │   ├── init.vim_ex_js_erl_html
│   │   ├── init.vim_go_js_py_rb_rt_sc
│   │   └── local_init.vim
│   └── vim
│       ├── vimrc.local
│       ├── vimrc_erl_ex_html_js
│       ├── vimrc_ex_go_html_js_php_pl_py_rb
│       ├── vimrc_ex_go_html_js_py_rb_rt_sc
│       ├── vimrc_ex_go_html_js_rb
│       ├── vimrc_ex_go_js_py_rb
│       ├── vimrc_go
│       ├── vimrc_go_html_js_py
│       ├── vimrc_js_html
│       └── vimrc_js_html_php
├── fish
│   ├── aliases
│   │   ├── desktop
│   │   │   ├── archlinux.fish
│   │   │   └── ubuntu.fish
│   │   ├── mac
│   │   │   └── work_aliases.fish
│   │   ├── server
│   │   │   ├── centos.fish
│   │   │   └── ubuntu.fish
│   │   ├── shared
│   │   │   ├── archlinux.fish
│   │   │   ├── centos.fish
│   │   │   ├── global.fish
│   │   │   ├── mac.fish
│   │   │   └── ubuntu.fish
│   │   └── wsl
│   │       └── home_windows10.fish
│   ├── completion
│   │   └── shared
│   │       ├── docker.fish
│   │       └── exercism.fish
│   ├── config
│   │   ├── desktop
│   │   │   ├── archlinux.fish
│   │   │   └── ubuntu.fish
│   │   ├── mac
│   │   │   └── work_config.fish
│   │   ├── server
│   │   │   ├── centos.fish
│   │   │   └── ubuntu.fish
│   │   ├── shared
│   │   │   ├── archlinux.fish
│   │   │   ├── centos.fish
│   │   │   ├── global.fish
│   │   │   ├── mac.fish
│   │   │   └── ubuntu.fish
│   │   ├── specific_machines
│   │   │   ├── xxx_xxxxx_xx.fish
│   │   │   └── xxxx_xxxxx_xx.fish
│   │   └── wsl
│   │       ├── home_windows10.fish
│   │       └── work_windows10.fish
│   ├── functions
│   │   └── shared
│   │       ├── git
│   │       │   ├── ga.fish
│   │       │   ├── gd.fish
│   │       │   ├── gm.fish
│   │       │   ├── gp.fish
│   │       │   ├── gpom.fish
│   │       │   └── gs.fish
│   │       ├── git.fish
│   │       ├── global
│   │       │   ├── add-key.fish
│   │       │   ├── read_confirm.fish
│   │       │   ├── serve.fish
│   │       │   └── timestamp.fish
│   │       ├── global.fish
│   │       ├── go
│   │       │   ├── goglobpath.fish
│   │       │   ├── gopath.fish
│   │       │   └── listgo.fish
│   │       ├── go.fish
│   │       ├── md
│   │       │   ├── mdless.fish
│   │       │   └── rmd.fish
│   │       ├── md.fish
│   │       ├── mssql
│   │       │   ├── mssql_check.fish
│   │       │   ├── mssql_connect.fish
│   │       │   ├── mssql_create.fish
│   │       │   ├── mssql_destroy.fish
│   │       │   ├── mssql_start.fish
│   │       │   └── mssql_stop.fish
│   │       ├── mssql.fish
│   │       ├── npx
│   │       │   └── npx_aliases.fish
│   │       └── npx.fish
│   └── variables
│       └── global.fish
├── other
│   ├── gemrc
│   ├── gitconfig
│   └── screenrc
├── powershell
│   ├── completions
│   │   ├── deno.ps1
│   │   └── rustup.ps1
│   ├── modules   <tons of my modules, removed for brevity>
│   │   └── SimpleDockerApps
│   ├── profile.ps1
│   ├── profile_mac.ps1
│   └── profile_parallels_windows.ps1
└── scripts   <tons of my scripts, removed for brevity>
    └── rl.rb
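To make the “compose by symlinking” idea concrete, the effect is roughly the following (the target filenames come from the tree above, but the selection logic here is simplified and made up - in practice Ansible picks the files per machine):

```bash
# Hypothetical sketch: link the shared + per-distro fish aliases for one machine.
# fish auto-sources ~/.config/fish/conf.d/*.fish, so the links compose naturally.
DOTFILES="$HOME/dotfiles"
DISTRO="ubuntu"   # in reality chosen per machine (desktop/server/wsl, distro, etc.)

mkdir -p ~/.config/fish/conf.d
ln -sf "$DOTFILES/fish/aliases/shared/global.fish"   ~/.config/fish/conf.d/00_global_aliases.fish
ln -sf "$DOTFILES/fish/aliases/shared/$DISTRO.fish"  ~/.config/fish/conf.d/10_shared_$DISTRO.fish
ln -sf "$DOTFILES/fish/aliases/desktop/$DISTRO.fish" ~/.config/fish/conf.d/20_desktop_$DISTRO.fish
```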
4 Likes

That’s pretty awesome. What’s stopping me is the initial ramp up and big effort to make such a setup happen.

2 Likes

Definitely thinking the same: if I do switch to the Mac I would need a separate remote workstation. Probably a Docker container for each project, used with VSCode remote - similar to Codespaces, but hopefully without paying per-hour pricing while barely using the CPU. :grinning_face_with_smiling_eyes:

2 Likes

Link to your dotfiles?

I ended up moving to yadm and I’ve been very happy. I pretty much just maintain the bootstrap script and I version all my other dotfiles accordingly.

I haven’t really needed alternate files or encryption but I’m glad they’re in there already.
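For anyone who hasn’t seen yadm: day to day it behaves like git with $HOME as the work tree, so the dotfiles stay in place and nothing needs symlinking. A rough sketch of the flow (the repo URL is a placeholder):

```bash
# yadm mirrors git's CLI but uses $HOME as the work tree.
yadm clone https://github.com/<you>/dotfiles.git   # placeholder URL
yadm status
yadm add ~/.gitconfig ~/.config/fish/config.fish
yadm commit -m "track fish config"
yadm push

# On a fresh machine, after cloning:
yadm bootstrap   # runs the executable at ~/.config/yadm/bootstrap
```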

2 Likes

Unfortunately my repo is private

I have a similar setup, but as my bootstrap I use Ansible, which creates an SSH key, adds it to the dotfiles repo, then clones the Git repo and links files based on some logic (OS, version, etc.).

That way, even if my server somehow gets hacked, the attacker can only reach that specific repo and not my whole GitHub :slight_smile:

Secrets are also always created directly on the server; my dotfiles only check whether the files containing them exist, and if they do, they export the environment variables.
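A minimal sketch of that check (the path and variable names here are invented for illustration, and it’s written in bash even though my actual dotfiles are fish):

```bash
# Hypothetical: only export secrets when the machine-local file exists.
SECRETS_FILE="$HOME/.config/secrets/env"   # invented path, created directly on each server

if [ -f "$SECRETS_FILE" ]; then
  # expects lines like: export SOME_API_TOKEN=...
  # shellcheck disable=SC1090
  . "$SECRETS_FILE"
fi
```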

Never heard of yadm; I might check it out.

UPDATE: I quickly checked your repo, and a lot of the things you do in your bootstrap I actually do in Ansible (installing apps, configuration, symlinking, etc.).

2 Likes

I’ve completed a reformat and I’m pleased to report things are, thankfully, MUCH better :smiley:

This looks like it was an issue with a gem - so not Apple’s fault…

This has been fixed too :+1:

Fixed as well :nerd_face:

So it looks like this, in conjunction with their fixes in the latest release, sorted it (doing the upgrade alone did not sort out the issues - it needed to be a clean install - which is why I still maintain that we should do this every major release, though maybe not until the third point release!).

It still doesn’t feel as it did when it was brand new, mind, so there may well (probably?) be some element of deliberate throttling going on. They still need to do a little better imo - there’s no reason why this machine shouldn’t be as fast as it was on day one for simple tasks like opening apps.

This is still the case for the Affinity series of apps - but it appears this could be partly Affinity’s fault, as I have seen others complain about the same thing on multiple platforms (it appears it started after a specific release).

Wouldn’t surprise me however that the Affinity devs are themselves a little stumped due to some weird ‘update’ Apple has made.

Overall though I am MUCH happier, but still feel they need to keep improving the performance side of Big Sur - things have definitely improved significantly (which isn’t difficult considering how bad things were!) but they still have some way to go imo.

3 Likes

I do this for Windows every year. Mostly because I end up lapsing and installing some software I really need at the moment and then start wondering what kind of keyloggers it contained. :sweat_smile: But hey, it actually keeps me motivated to maintain my dotfiles.

I’m actually super interested in how Apple seems to be tackling application permissions on desktop. :sweat_smile: I don’t believe Windows or Linux have anything similar planned.

2 Likes

ArchLinux FTW!!

2 Likes

I tested out remote development with VSCode, but development containers on a remote server aren’t yet supported (despite the extension name “Visual Studio Code Remote - Containers” :sweat_smile:): Support dev containers through a Remote-SSH session · Issue #2994 · microsoft/vscode-remote-release · GitHub

You can develop remotely using the Visual Studio Remote - SSH extension to open the project folder from your remote server. It seems quite nice to use and even automatically handles port forwarding to your localhost. It’s probably not as comfortable as a development container or Codespaces though: I managed to leave an npm watch command running after starting it in VSCode and opening a different project, and had to ps aux to find and kill it.
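In case it saves someone a minute, the cleanup was roughly this (“npm run watch” here just stands in for whatever the stray watcher actually is):

```bash
# Find the stray watcher the Remote - SSH session left behind, then kill it.
ps aux | grep '[n]pm run watch'   # the [n] keeps grep itself out of the output
pkill -f 'npm run watch'          # assumes the pattern is unique enough on the box
```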

No Mac for me yet, unless Microsoft comes up with a new “Visual Studio Code Remote - Containers via SSH” extension, or unveils some ridiculously low pricing for Codespaces. :grinning:

2 Likes

Eh, it’s been quite the opposite in my experience. Wayland is missing substantial features (including features I use on an hourly basis); it’s still extremely incomplete. Its security is a whole lot better, however: it’s now impossible for a program to, for example, start recording the image of another program - instead the user is prompted to allow it. I do like these aspects.

Eh, not just me; I’ve walked many people in real life through running Kubuntu for years now, and they have no issues, including rather heavy Windows gaming on Linux and all. Compared to when they ran Windows and I was always being asked to help fix things, their Linux systems just don’t break, and they do upgrade to the latest versions of Kubuntu as they are released (which, unlike Windows or macOS, actually get faster with each release instead of slower, and it’s already the fastest out there).

The Mac interface is incredibly unconfigurable, similar to Windows, and like Windows (which I’d argue is even more configurable than Macs by a fairly wide margin) it would drive me utterly crazy… ^.^;

Plus they’ve done some really horrible things, like whitelisting their own network connections to bypass firewalls, not giving you detailed access into your own system, making you set up certificates for practically everything you build, etc… That sounds like a slice of hell.

I’ve not heard of security issues with Wayland? As for X11, those security issues are constrained to your user - it’s not like they can give anything root access - and there are ways to sandbox those things as well, though that is a bit more irritating to do.

+1

Capitalism is a pox on humanity, nothing but a modern day monarchic system to keep control centralized in the few, but that’s a topic for another thread. ^.^;

My Linux desktop has been running the same Linux install since 2006 (yes, really), upgraded every year on top of the existing version, never cleaned out, etc… etc… etc…, and I can open just about anything instantly. It has never had any slowdown in 15 years, and has noticeably gotten even faster over time (even excluding that I bought an SSD for it about 9 years ago, which I hooked into the LVM system to act as a boot and program accelerator by mirroring the main directories to it). ^.^

Really though, my 15-year-old desktop, which had a CPU change 11+ years ago, still runs absolute circles around my comparatively very new work Dell computer running Win10 (which I use as nothing but a thin client to connect to Linux systems). Even just logging in to a session that’s already logged in takes 10+ seconds, loading calc.exe of all things takes like 12 seconds, notepad takes like 6 seconds - it’s all utterly crazy, like every single operation on the thing feels incredibly lagged, where my home desktop does everything instantly: logging in is as fast as I can hit Enter after typing my password, and the screen does a tenth-of-a-second flicker to reinitialize the 3D context. And all of my system pales in comparison to my wife’s (which I often use concurrently with her via more sessions, as her hardware is far newer and supports Vulkan for me to program with, which it handles with bliss - and which, of course, you can’t do on Windows or Mac at all). Heck, even the boot time on my home desktop is about 8 seconds to go through the BIOS (yes, it’s that old), another 4 seconds to realize the hardware RAID is disabled, then about 2 seconds to boot Linux to the login screen, then another <1s to load my profile from scratch once I log in - and all that on my exceptionally old home desktop, which, again, my wife’s computer blows away.

It’s trivial to see what took long in bootup as well: I can see precisely how long each driver, service, etc… took to load, in milliseconds, and what depended on what, causing serialization of the loading - and this is recorded on every boot. I’ve yet to figure out how to get that information out of the Win10 system at work (which takes almost 2 minutes to boot up - it just wows me every time…), and I haven’t heard one way or the other how you’d get that info on Macs.
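For anyone wanting to poke at this on their own machine, the tooling I’m describing is on the systemd side (assuming a systemd-based distro, which current Kubuntu is):

```bash
systemd-analyze                 # total time spent in firmware, loader, kernel, userspace
systemd-analyze blame           # per-unit startup times, slowest first
systemd-analyze critical-chain  # the dependency chain that actually gated the boot
```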

Isn’t that an ancient Windows-ism? Why would Macs need to do that?! Linux definitely doesn’t…


For note, I run Kubuntu, and have other local people run it, because of its solid releases: it doesn’t have the tracking of Ubuntu, has the far superior KDE, and is not an often-breaking rolling release like Arch (breakages there are usually easily fixed if you know what you’re doing, but I’m not going to put that work on non-computer people), etc…

4 Likes

And I was wondering what had happened to @OvermindDL1 :thinking: but it seems he’s back :slight_smile: and in shape :muscle:

3 Likes