problems running wireshark on macos 13

Properly working Wireshark

Today was one of those days. I’ve got Wireshark installed on my Macs, and I’ve had Wireshark installed on all my Macs for years (decades, back to when it was called Etherial). Today after upgrading to macOS 13.2 I decided to do a bit of home network research and fired up the latest version of Wireshark, 4.0.3. That’s when I got the error you see below. I did a fair amount of internet research (i.e. “googling”) and finally constructed a solution that works for me. First my solution, then a partial explanation of what might be happening.

My solution consists of creating the following zsh alias:

alias wireshark='sudo /Library/Application\ Support/Wireshark/ChmodBPF/ChmodBPF && Wireshark'

When I open a shell (iTerm in my case) and type in wireshark I’ll get a prompt for my password, then Wireshark will start and execute correctly.

Why is this happening? Somehow, someway, the script ChmodBPF is not executing with proper permissions to create Wireshark pseudo devices, /dev/bp*. The script is supposed to execute every time I log in, and create many of those pseudo devices (256 on my machine) that Wireshark then uses. I don’t have a deep knowledge of Wireshark so I don’t know what Wireshark is doing with all those devices.

Because of ChmodBPF’s failure, only four pseudo devices were created, and they all had root:wheel ownership, not my username:admin. That then caused Wireshark to fail to properly work.

I discovered a lot of half-assed solutions, such as changing permissions directly on the pseudo devices. Running the script as sudo came from reading a thread on Wireshark’s GitLab issue wiki: https://gitlab.com/wireshark/wireshark/-/issues/18734 . It should be pointed out that if you reboot your Mac that the devices are wiped out, and you’re going to have to run the script as sudo again, at least once. I just combined everything into a terminal alias. I live in the terminal so it doesn’t bother me.

ai won’t end humanity, humanity will end humanity

Once again another paper has been published about the extinction of humanity at the “hands” of artificial intelligence. This was surfaced to my attention in an article written on the Vice website ( https://www.vice.com/en/article/93aqep/google-deepmind-researcher-co-authors-paper-saying-ai-will-eliminate-humanity ). Follow the link in that article to the paper that has spawned this latest fear mongering and you’ll come across a paper that’s quite dense to read ( https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064 ). I’m certainly no AI “expert”, and I strongly suspect neither is the Vice article’s author. But I do believe there is a fundamental flaw with the argument about AI causing our demise. That flaw is the idea our human-written software, which underpins all of AI, is somehow flawless. It’s certainly not.

All software up to this point is riddled with innumerable and undiscovered defects. I know this to be true in my heart because of my experiences as a both a developer and an end-user since I was introduced to computers and programming as a highschool junior in 1971. The computer at that time was an IBM 360 mainframe, and the language was IBM’s implementation of APL. The compiler wasn’t perfect, and I learned how easy it was to write software that had bugs. Since that time I’ve written assembly software for innumerable processors as well as using high level languages, and in turn using operating systems and applications that ran within those operating systems.

Based on my experiences software flaws come in two broad categories; structural and algorithmic.

Structural flaws are defects such as divide by zero, use after free (a memory area or object), buffer overflow, bad conditional logic, etc. These are the types of defects that malware authors look for in existing code, as abuse of these defects usually leads to privilege escalation and then complete control of the application or operating system it appears in to the detriment of the user(s) of said software. Sometimes a structural defect is a free (i.e. no work involved) gift to malware authors. An example of that kind of gift was Apple’s iOS 7 SSL/TLS bug back in 2014. Here’s the bug before the fix:

static OSStatusSSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen){OSStatuserr;...if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)goto fail;if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)goto fail;goto fail;if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)goto fail;...fail:SSLFreeBuffer(&signedHashes);SSLFreeBuffer(&hashCtx);return err;}

The problem is the highlighted line, which has a second goto statement that will always execute, short-circuiting the logic and never performing the final test. Furthermore, if err is zero, indicating complete success, the calling code will never be the wiser. For a more complete breakdown of this error see https://www.imperialviolet.org/2014/02/22/applebug.html . And if I do say so myself the code has a number of code smells that would demand a rewrite all the way through. But I digress a bit…

This is an example of a coding error that introduces a hard-to-test, hard-to-detect bug buried deep within the overall millions of lines of code of an OS. What test would you write for this?

The second category is algorithmic. An algorithmic defect is where the algorithm is either implemented incorrectly, implemented incompletely, or a bit of both. There are so may examples of this defect to choose from, so I’ll just pick the latest. On Thursday, 15 September, some enterprising Twitter users discovered how to perform a “prompt injection attack” against OpenAI’s GPT-3 (Generative Pre-trained Transformer 3, https://en.wikipedia.org/wiki/GPT-3 ), using an automated tweet bot front end operated by Remoteli.io ( see https://arstechnica.com/information-technology/2022/09/twitter-pranksters-derail-gpt-3-bot-with-newly-discovered-prompt-injection-hack/ ). Needless to say, great hilarity ensued. And needless to say, the issue isn’t with the bot front end, but with the GPT-3 back end, which OpenAI has spent enormous time hyping. I should point out that OpenAI is part of Elon Musk’s collection of companies, the same Musk who owns Tesla and constantly hypes Autopilot, and has been pushing Full Self Driving, or FSD, for years now. FSD isn’t there, and may never be there, at least as it’s currently designed and engineered. While I’m indeed picking on AI products that Musk is involved with because they’re currently hyped to the heavens, I’m also aware of many other so-called AI vision systems that have been shown to be easy to confuse/shut down. And on and on…

If AI does in humanity it will be because we humans allowed defective software to control critical systems such as transportation, energy, food production; in other words all the critical systems we depend on. It won’t be AI coming after us, it’ll be buggy software going haywire. And if it isn’t buggy and it’s still coming after us, then you can be sure there’s a human in the loop directing those AI demons to come after us. All AI will do is to amplify our destructive capabilities, just like all of mankind’s other technical advances through history.