
Once again another paper has been published about the extinction of humanity at the “hands” of artificial intelligence. This was surfaced to my attention in an article written on the Vice website ( https://www.vice.com/en/article/93aqep/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanity ). Follow the link in that article to the paper that has spawned this latest fear mongering and you’ll come across a paper that’s quite dense to read ( https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064 ). I’m certainly no AI “expert”, and I strongly suspect neither is the Vice article’s author. But I do believe there is a fundamental flaw with the argument about AI causing our demise. That flaw is the idea our human-written software, which underpins all of AI, is somehow flawless. It’s certainly not.
All software up to this point is riddled with innumerable and undiscovered defects. I know this to be true in my heart because of my experiences as a both a developer and an end-user since I was introduced to computers and programming as a highschool junior in 1971. The computer at that time was an IBM 360 mainframe, and the language was IBM’s implementation of APL. The compiler wasn’t perfect, and I learned how easy it was to write software that had bugs. Since that time I’ve written assembly software for innumerable processors as well as using high level languages, and in turn using operating systems and applications that ran within those operating systems.
Based on my experiences software flaws come in two broad categories; structural and algorithmic.
Structural flaws are defects such as divide by zero, use after free (a memory area or object), buffer overflow, bad conditional logic, etc. These are the types of defects that malware authors look for in existing code, as abuse of these defects usually leads to privilege escalation and then complete control of the application or operating system it appears in to the detriment of the user(s) of said software. Sometimes a structural defect is a free (i.e. no work involved) gift to malware authors. An example of that kind of gift was Apple’s iOS 7 SSL/TLS bug back in 2014. Here’s the bug before the fix:
static OSStatusSSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen){OSStatuserr;...if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)goto fail;if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)goto fail;goto fail;if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)goto fail;...fail:SSLFreeBuffer(&signedHashes);SSLFreeBuffer(&hashCtx);return err;}
The problem is the highlighted line, which has a second goto
statement that will always execute, short-circuiting the logic and never performing the final test. Furthermore, if err
is zero, indicating complete success, the calling code will never be the wiser. For a more complete breakdown of this error see https://www.imperialviolet.org/2014/02/22/applebug.html . And if I do say so myself the code has a number of code smells that would demand a rewrite all the way through. But I digress a bit…
This is an example of a coding error that introduces a hard-to-test, hard-to-detect bug buried deep within the overall millions of lines of code of an OS. What test would you write for this?
The second category is algorithmic. An algorithmic defect is where the algorithm is either implemented incorrectly, implemented incompletely, or a bit of both. There are so may examples of this defect to choose from, so I’ll just pick the latest. On Thursday, 15 September, some enterprising Twitter users discovered how to perform a “prompt injection attack” against OpenAI’s GPT-3 (Generative Pre-trained Transformer 3, https://en.wikipedia.org/wiki/GPT-3 ), using an automated tweet bot front end operated by Remoteli.io ( see https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack/ ). Needless to say, great hilarity ensued. And needless to say, the issue isn’t with the bot front end, but with the GPT-3 back end, which OpenAI has spent enormous time hyping. I should point out that OpenAI is part of Elon Musk’s collection of companies, the same Musk who owns Tesla and constantly hypes Autopilot, and has been pushing Full Self Driving, or FSD, for years now. FSD isn’t there, and may never be there, at least as it’s currently designed and engineered. While I’m indeed picking on AI products that Musk is involved with because they’re currently hyped to the heavens, I’m also aware of many other so-called AI vision systems that have been shown to be easy to confuse/shut down. And on and on…
If AI does in humanity it will be because we humans allowed defective software to control critical systems such as transportation, energy, food production; in other words all the critical systems we depend on. It won’t be AI coming after us, it’ll be buggy software going haywire. And if it isn’t buggy and it’s still coming after us, then you can be sure there’s a human in the loop directing those AI demons to come after us. All AI will do is to amplify our destructive capabilities, just like all of mankind’s other technical advances through history.
You must be logged in to post a comment.