snapshot of apple device performance metrics

There has been talk for some time about how Apple devices running iOS are contenders for replacing standard Intel architecture computers, such as MacBook Pros. Since I have a number of Apple devices, I thought I’d install Geekbench 4 (version 4.1) and run it across three of them. I’ve put the results in a simple table below, with the benchmark scores in the first three rows.

                 MBP mid-2015    iPhone 7 Plus       iPad Pro 2016
CPU Single-Core  4462            3457                3017
CPU Multi-Core   16005           5872                5082
Compute          38117           12296               14764
Processor        Intel Core i7   Apple A10 Fusion    Apple A9X
Max Frequency    2.8 GHz         2.34 GHz            2.26 GHz
OS               macOS 10.12.5   iOS 10.3.2          iOS 10.3.2

The MBP I own is a 15″ Retina MBP with 16GB of memory and the 2.8GHz quad-core i7. I wasn’t surprised to see the MBP lead across the board, particularly in multi-core scoring. The MBP is certainly the brawniest of the three, with its Intel processor and eight times the memory of either the iPhone or the iPad. Keep in mind that the MBP is also the oldest of the three devices.

What I found rather interesting is the GPU-based Compute score. The iOS version of Geekbench uses Metal, Apple’s graphics framework on iOS. Geekbench on the MBP uses OpenCL, and because I’m too cheap to buy a copy, the free version ran the test on the i7’s built-in Iris Pro instead of the beefier AMD Radeon R9 M370X. So even though I was using the “lesser” graphics processor and the “poorer” graphics framework, the MBP still scored a solid two to three times faster than either iOS device. Of further note is the iPad’s sizable Compute lead over the iPhone, even though the iPhone’s CPU is clocked faster and is a more current Apple SoC.

So, am I ready to trade in the MBP for either iOS device? It all depends on the use case.

For general use involving reading content and typing, I could easily switch to the iPad Pro. I use it with a Logitech keyboard-and-cover in landscape mode, which, when attached to the iPad via the Smart Connector, gives me a decent keyboard with backlit keys. It’s not as efficient and comfortable as the MBP keyboard, but it’s more than serviceable, especially over a period of hours. I can do writing and other types of textual creation, as well as fairly sophisticated graphical content creation and photo/video post-processing. There are, however, limits to the iPad Pro.

For the ultimate web experience I prefer the MBP and my selection of browsers, which includes Chrome, Firefox, and Vivaldi. I am not a fan of Safari on either iOS or macOS, and I don’t think I ever will be. What makes web browsing on iOS truly annoying is Apple’s insistence on forcing every other browser to use the same web engine as iOS Safari; that engine is buggy and performs poorly.

When I need to develop software I much prefer the MBP. For light code editing on the iPad Pro I use Textastic with Working Copy. I also have Terminus on iOS, which lets me ssh into machines around my home running Linux and macOS (nothing like that for Windows, unfortunately). Under ssh I tend to use vim with extensive customizations and colorizations. And I can use scp and git to move around whatever needs moving. So the iPad Pro makes a pretty decent work platform when I don’t want to fire up the MBP, especially when I need to put my work down due to interruptions.

I haven’t even mentioned the iPhone, but it’s decent enough to fill in for the iPad when all I can carry with me is the iPhone. I use a Microsoft folding Bluetooth keyboard to type on, and I have an SDHC-to-Lightning card reader for reading the JPEG and RAW files produced by my Olympus cameras. The same apps I would use on my iPad to post-process work just fine on the iPhone 7 Plus. And when I don’t want to carry, or can’t have, my Olympus camera, the iPhone 7 Plus camera is just fine.

Finally, there’s the truly heavy lifting that the MBP is called upon to do. For example, I have a number of Linux virtual machines I power up to perform testing and development in parallel with work on the MBP. I use Xcode to develop iOS applications, and Android Studio to develop Android applications. If I want to develop using a full JavaScript stack starting with Node.js, the MBP is the only way to go. If I want to develop in Java or Python or Go or Rust, only the MBP allows me to do that.

And the 15″ screen on the MBP is the easiest of all the screens to read, which is important given my poor eyesight (20/700 and nearsighted).

There is no easy answer to the original question, except to say it all depends. As long as I can choose which to use for which task, I will choose all three based on the work at hand that needs to be done.

But I am impressed with what the Apple SoCs can accomplish. While the MBP rules them all, the three devices are much closer together in single-core scoring than they are in multi-core or Compute, as the quick sketch below shows. This bodes well for Apple’s continued evolution of its ARM-based processors, and if I were Intel, I really would be looking over my shoulder at ARM in general and Apple in particular.
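To put “fairly close” into numbers, here’s a minimal Swift sketch that normalizes each device against the MBP, using nothing but the scores from my table:

    import Foundation

    // Normalize each device's Geekbench 4 scores against the MBP's.
    // These are the scores from my table above, nothing generic.
    let scores: [(test: String, mbp: Double, iphone: Double, ipad: Double)] = [
        ("Single-Core", 4462, 3457, 3017),
        ("Multi-Core", 16005, 5872, 5082),
        ("Compute", 38117, 12296, 14764)
    ]

    for s in scores {
        print(String(format: "%@: iPhone at %.0f%%, iPad at %.0f%% of the MBP",
                     s.test, 100 * s.iphone / s.mbp, 100 * s.ipad / s.mbp))
    }
    // Single-Core: iPhone at 77%, iPad at 68% of the MBP
    // Multi-Core: iPhone at 37%, iPad at 32% of the MBP
    // Compute: iPhone at 32%, iPad at 39% of the MBP

Roughly three-quarters of the MBP’s single-core performance from a fanless phone is nothing to sneeze at.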

gear for the revolution

You’re looking at an Olympus E-P2 with a 15mm body cap lens and a 14-42mm EZ pancake lens. That small round item with the hole in the middle is the Olympus LC-37C auto-open lens cap, which screws onto the front of the pancake zoom. The E-P2 was introduced in November 2009, five months after the E-P1, Olympus’ first µ4:3rds camera. I paid full price for the complete kit, which included the original M.Zuiko 14-42mm collapsible kit lens and the VF-2 electronic viewfinder that slid into the hot shoe at the top of the camera. The key advance of the E-P2 over the E-P1 was an expansion port built into the back side of the hot shoe, which allowed additional capabilities, like the VF-2, to be added to the camera. The camera is now so old it qualifies in some corners as vintage. The only up-to-date part of the kit is the pancake zoom, which was introduced in January 2014 along with the OM-D E-M10 Mark 1.

The question is: why go back to something retro? Price and availability. The cost of new interchangeable-lens digital cameras is skyrocketing. While I would love to own a new Olympus OM-D E-M1 Mark II, the eye-watering cost of US$2,000 is more than my budget can bear. To put that cost in perspective, I can either have the E-M1 Mark II or I can remodel one of my bathrooms. Throw in one of the new zooms, such as the M.Zuiko 12-100mm f/4 PRO, and I’ve now got enough money to remodel both bathrooms.

Other reasons for turning back to this camera (and a Sony NEX 5N I also own, picked up when it was heavily discounted a few years back) are that it still works, it’s compact, and it looks like the many point-and-shoot cameras still being used extensively around the world. As used gear it’s dirt cheap. The large 17 x 13mm sensor in the body doesn’t hurt either, since it’s considerably larger than the sensor in every cellphone being made. The final, most important reason for using a camera like this is that it’s capable of documenting this strange dark world we’ve moved into since Trump was elected. Losing it or having it get busted won’t set me back an inordinate amount of money. It’s simple and rugged enough to meet my needs for a set of tools I can use to document who knows what over the next four years (at a minimum).

What can this camera do? With the body cap lens, a 15mm (a 30mm equivalent) fixed at f/8, I can literally point and shoot and get everything in focus from 3 ft/1m out to infinity. Or with the pancake zoom, I can zoom in to the equivalent of 84mm on a 35mm camera for a short telephoto effect when I need to keep back and avoid a confrontation.
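If you’re wondering where the “3 ft/1m out to infinity” claim comes from, it’s ordinary hyperfocal math. Here’s a minimal Swift sketch of it, assuming the 0.015mm circle of confusion commonly quoted for µ4:3rds sensors and the format’s 2x crop factor:

    import Foundation

    // Hyperfocal distance for the 15mm body cap lens at its fixed f/8,
    // assuming a 0.015mm circle of confusion for the u4:3rds sensor.
    let focalLength = 15.0   // mm
    let aperture = 8.0       // f/8
    let coc = 0.015          // mm

    // Focus at the hyperfocal distance and everything from half that
    // distance out to infinity is acceptably sharp.
    let hyperfocal = focalLength * focalLength / (aperture * coc) + focalLength
    print(String(format: "hyperfocal: %.2f m", hyperfocal / 1000)) // ~1.89 m
    print(String(format: "near limit: %.2f m", hyperfocal / 2000)) // ~0.94 m, about 3 ft

    // The 2x crop factor is what turns 15mm into a 30mm equivalent,
    // and the zoom's 42mm long end into the 84mm equivalent above.
    print("15mm is a \(focalLength * 2)mm equivalent")  // 30.0mm
    print("42mm is a \(42.0 * 2)mm equivalent")         // 84.0mm

Focus anywhere near the hyperfocal distance and everything from about 0.94m (right around 3 ft) to infinity falls within acceptable sharpness, which is exactly what makes this a point-and-shoot lens.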


Or pop the 15mm on the body and just document the world around you without drawing undue attention.


Even at f/8, in low light, the E-P2 is capable of grabbing something decent at ISO 1600 (it can go higher) that can be used, especially on the web. And if you want, you can set the 15mm to its close-up setting (0.3m) and get in a bit closer to your subject.

Olympus’ digital Pens aren’t the only game in town. Sony’s older NEX series, especially the 5 series, make excellent little carry-around cameras, especially when matched with an inexpensive prime like the older Sigma lenses.

I picked up the NEX 5N when it was on closeout a few years back, and I happened to pick up a two-lens Sigma set, the 19mm and the 30mm, for $99 each at about the same time. To give you an idea of relative sizes, the Apple SDHC-to-Lightning adapter is in front, with the 5N’s SDHC card plugged into it. Which brings up an interesting point. I use an iPhone for just about everything related to photography now, from taking the photograph to processing it and then pushing it out to various social channels such as SmugMug and Instagram. The Apple adapter allows me to move images off the card and into the phone for post-processing.

What is significant now is that iOS 10.2.1 can recognize when your camera photos were taken in RAW and can actually show you what you have directly on the phone once they’ve been imported into your camera roll. In the past I couldn’t process RAW anything unless I had a personal computer and software, such as Lightroom, that knew how to interpret those RAW files. I discovered today that my iPhone with iOS 10.2.1 can read RAW files from both the Olympus E-P2 and the Sony NEX 5N. How it handles newer cameras I can’t say. But for what I need, I don’t need the latest and greatest, just something from the last eight or so years that still works. Here are two examples from the same RAW file produced by the Sony. The first was post-processed with VSCO, the second with Snapseed.


While both VSCO and Snapseed knew they were dealing with RAW files, it was Snapseed that post-processed the photo as it was shot. I’d set the Sony to shoot a 16:9 aspect ratio. The VSCO app didn’t honor that aspect ratio, choosing to revert to the full 3:2. Furthermore, mirrorless cameras embed metadata in every file that allows post-processing software to correct for lens flaws, such as barrel distortion in the 19mm. If you look closely at the VSCO image you can see that distortion; it’s properly corrected in the Snapseed version. Whether the second is better than the first is entirely up to the viewer. I personally prefer the brighter color from the VSCO processing (which was what I was going after), but if I had to make sure it was “more correct” then I’d probably run it through Snapseed. By the way, I didn’t do anything out of the ordinary with Snapseed; I just accepted its defaults when it first read the file in, and then immediately saved the JPEG back out again.

In the past I put together several mirrorless kits, with multiple bodies and lenses. Today I’ve narrowed that down to a single body and one or two lenses. Furthermore, I’m doing everything on my iPhone, because it’s now powerful enough and the iOS apps are sophisticated enough. For the citizen journalist on a budget who wants a bit more than just the camera on the phone, the latest iOS release coupled with a reasonably up-to-date iPhone (SE through 6 and on up) can form a powerful documentation system, without the need for a notebook or even a tablet, to handle the output from any mirrorless camera made in the last eight or so years.