switching back to linux mint 21

Four days ago I switched from Linux Mint 21 to Ubuntu 22.10, the interim release of Ubuntu. I stated a number of reasons why I switched, the biggest being the updated kernel and critical tools. Yesterday I swapped the SSD with Linux Mint still on it back into my machine in place of the SSD that I’d installed Ubuntu on.

The primary reason I switched back was the lack of certain types of development support. For example I had to install everything necessary to support a full build-out of Python 3.11. There were other missing library and development support packages. When I had to install the Fuse file system library support to run some AppImage applications, that’s when I realized I’d made a mistake for my daily driver usage and that I needed to switch back to Linux Mint. Fortunately for me I’d not made any changes that I hadn’t already saved elsewhere, such that switching back was completely painless.

Now that I’ve had this experience I’ve come to realize that Linux Mint can be considered the more supportive developer distribution compared to Ubuntu. That is, I can do more development without having to hunt down and install various libraries to make it all work. For example I run several AppImage applications. Before they would even execute I had to install the libfuse2 library. I did check with my Ubuntu 22.04 Parallels virtual machine, and that support is there. But it was missing in Ubuntu 22.10. Installation is all of 60 seconds, but the fact it was missing makes me suspicious, especially because Ubuntu is pushing (hard) Snap packages over all others. Was support for AppImage deliberately dropped in 22.10? As they say, things that make you go “hmmm.”

Ubuntu 22.10 will appeal to fans of Ubuntu and fans of the latest Gnome, and they’ll install it and use it without a second thought. But if you’re a developer, especially an embedded/IoT developer, make sure nothing breaks if you decide to install Ubuntu 22.10 and make it your daily driver. You may be unpleasantly surprised like I was.

ai won’t end humanity, humanity will end humanity

Once again another paper has been published about the extinction of humanity at the “hands” of artificial intelligence. This was surfaced to my attention in an article written on the Vice website ( https://www.vice.com/en/article/93aqep/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanity ). Follow the link in that article to the paper that has spawned this latest fear mongering and you’ll come across a paper that’s quite dense to read ( https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064 ). I’m certainly no AI “expert”, and I strongly suspect neither is the Vice article’s author. But I do believe there is a fundamental flaw with the argument about AI causing our demise. That flaw is the idea our human-written software, which underpins all of AI, is somehow flawless. It’s certainly not.

All software up to this point is riddled with innumerable and undiscovered defects. I know this to be true in my heart because of my experiences as a both a developer and an end-user since I was introduced to computers and programming as a highschool junior in 1971. The computer at that time was an IBM 360 mainframe, and the language was IBM’s implementation of APL. The compiler wasn’t perfect, and I learned how easy it was to write software that had bugs. Since that time I’ve written assembly software for innumerable processors as well as using high level languages, and in turn using operating systems and applications that ran within those operating systems.

Based on my experiences software flaws come in two broad categories; structural and algorithmic.

Structural flaws are defects such as divide by zero, use after free (a memory area or object), buffer overflow, bad conditional logic, etc. These are the types of defects that malware authors look for in existing code, as abuse of these defects usually leads to privilege escalation and then complete control of the application or operating system it appears in to the detriment of the user(s) of said software. Sometimes a structural defect is a free (i.e. no work involved) gift to malware authors. An example of that kind of gift was Apple’s iOS 7 SSL/TLS bug back in 2014. Here’s the bug before the fix:

static OSStatusSSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen){OSStatuserr;...if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)goto fail;if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)goto fail;goto fail;if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)goto fail;...fail:SSLFreeBuffer(&signedHashes);SSLFreeBuffer(&hashCtx);return err;}

The problem is the highlighted line, which has a second goto statement that will always execute, short-circuiting the logic and never performing the final test. Furthermore, if err is zero, indicating complete success, the calling code will never be the wiser. For a more complete breakdown of this error see https://www.imperialviolet.org/2014/02/22/applebug.html . And if I do say so myself the code has a number of code smells that would demand a rewrite all the way through. But I digress a bit…

This is an example of a coding error that introduces a hard-to-test, hard-to-detect bug buried deep within the overall millions of lines of code of an OS. What test would you write for this?

The second category is algorithmic. An algorithmic defect is where the algorithm is either implemented incorrectly, implemented incompletely, or a bit of both. There are so may examples of this defect to choose from, so I’ll just pick the latest. On Thursday, 15 September, some enterprising Twitter users discovered how to perform a “prompt injection attack” against OpenAI’s GPT-3 (Generative Pre-trained Transformer 3, https://en.wikipedia.org/wiki/GPT-3 ), using an automated tweet bot front end operated by Remoteli.io ( see https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack/ ). Needless to say, great hilarity ensued. And needless to say, the issue isn’t with the bot front end, but with the GPT-3 back end, which OpenAI has spent enormous time hyping. I should point out that OpenAI is part of Elon Musk’s collection of companies, the same Musk who owns Tesla and constantly hypes Autopilot, and has been pushing Full Self Driving, or FSD, for years now. FSD isn’t there, and may never be there, at least as it’s currently designed and engineered. While I’m indeed picking on AI products that Musk is involved with because they’re currently hyped to the heavens, I’m also aware of many other so-called AI vision systems that have been shown to be easy to confuse/shut down. And on and on…

If AI does in humanity it will be because we humans allowed defective software to control critical systems such as transportation, energy, food production; in other words all the critical systems we depend on. It won’t be AI coming after us, it’ll be buggy software going haywire. And if it isn’t buggy and it’s still coming after us, then you can be sure there’s a human in the loop directing those AI demons to come after us. All AI will do is to amplify our destructive capabilities, just like all of mankind’s other technical advances through history.