More Flowcharts

I realised recently that in the changeover to my new workstation, and to new operating system versions, my entire workflow for producing a page art errata fix for Surfing The Deathline was broken.

Worse still, I couldn’t fit the entire thing in my head at once, so there was nothing for it but to start mapping the whole thing out.

The process:

  • Adobe InDesign CS5.
    • Output all pages to individual .pdf files
    • Adobe Photoshop CS5.
      • Convert each page to .png and .jpg files
      • Automator Workflow.
        • Rename files and copy them to the appropriate development folders
        • Chronosync Workflow.
          • Copy a subset of files to be used in the extract versions to the appropriate extract development folders.

A thought that occurs is to put the entire process into the Virtual Machine I’m using to run the Adobe apps, so that they’re sealed off against change.

Solve for A.

This year my old Mac Pro running macOS 10.13 High Sierra shuffled into the grave. I needed a newer computer quickly, and my options were either Apple-Silicon Mac Studios, or secondhand 2019 Mac Pros.

For reasons, I bought the Mac Pro.

This new machine runs macOS 13 Ventura, and that’s a problem, because it has broken my entire photography workflow, which was based around Apple’s Aperture Digital Asset Manager.

Here’s a diagram of how my photo management worked with Aperture, my cameras, and my iOS devices:

The import-to-library-to-sync workflow was pretty simple:

  1. Plug the camera or device into the computer.
  2. Select the images you want to import.
  3. Choose where you want the images copied to on disk (this is populated by use, so would eventually have all the folders shown in the filing structure). I choose to keep them organised by device.
  4. Aperture copies the files to disk, placing them in Year / Month / Day subfolders.
  5. Aperture creates events in the Aperture catalogue, which correspond to the shooting sessions.

 

From there you can:

  • Manage your images in the catalogue.
  • Edit images.
  • Sync images back to your iOS devices.

What Went Wrong?

This process doesn’t work in Ventura. For a start, Aperture won’t run by default under Ventura. There’s Retroactive, which purports to modify older Apple apps to run on the new operating systems, but it isn’t working for me (images won’t display). iTunes doesn’t work either (Retroactive excepted) but that has a replacement in Finder sync. Aperture’s loss is a real pill, however, because in its wake there is no tool that can do all the things it was capable of doing.

One option to keep these older tools working is to use them via virtual machines. Aperture will run in a VM, and all of its import and organisation utility seems to function correctly. One thing it can’t do, however, is display full-size images. This is due to a lack of support for virtualised GPU access in the versions of macOS which support Aperture.

Apple Photos:

Photos was supposed to be a replacement for iPhoto and Aperture; however, there are some significant shortcomings. Namely:

  • Photos cannot import from device to a referenced library structure – in other words, it can’t move files from a device to your choice of storage location.
    • It can import to a referenced library if the files are already in their final storage location.
  • Photos importing to a managed library structure destructively renames files when it stores them in its internal storage location.

So Photos fails at that first requirement – it can’t be the universal ingestion tool to get my images off my devices, unless I want to give up my entire file management structure and accept my files being destructively renamed.

Nope.

There’s also the matter of having been bitten once, and not being willing to be bitten again. After investing in an Apple solution for this whole process, I don’t want to trust the company with a concentration of functionality. You can never know what core features might disappear from the software, because someone in the company has an office politics agenda to change its direction.

There is another ingestion option, and it’s…

Image Capture:

Image Capture is a very old application which can import from any device to any location. This would seem ideal, except for one shortcoming:

  • No subfolders.

Image Capture can only import to a flat folder location – no Year / Month / Day subfolders. This brings a crisitunity, in that it forces me to rethink just how much of my process I invest in any one application, and maybe break the process down, so as to ensure no one application can own the entirety of my photo management process.

The New Workflow:

The glue of the new workflow is Hazel – an automation system I’ve been using for a while, which is effectively a more reliable version of Apple’s Folder Actions. Thus:

This is a much more complicated pipeline at first glance. However, it has a high degree of modularity, and actually allows for flexibility the old system lacked – for example, the integration of manually saved edits. Instead of having to save from an editor and then re-import to Aperture, the edit can happen in any application.
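To give a flavour of the date-filing step Hazel handles, here’s a rough shell equivalent of sorting incoming images into Year / Month / Day subfolders by file date – just a sketch of the logic (Hazel does this with its own date-pattern rules, and the folder paths below are placeholders):

    #!/bin/bash
    # Sketch: file incoming images into Year/Month/Day subfolders based on
    # each file's modification date. The folder paths are placeholders.
    SRC="$HOME/Pictures/Incoming"
    DEST="$HOME/Pictures/Camera"

    for f in "$SRC"/*; do
        [ -f "$f" ] || continue
        d=$(stat -f "%Sm" -t "%Y/%m/%d" "$f")    # e.g. 2025/04/07
        mkdir -p "$DEST/$d"
        mv "$f" "$DEST/$d/"
    done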

This also provides a framework for Digital Asset Managers to be connected in. CYME’s Peakto looks to be an interesting meta-manager, which can look inside other DAM libraries. Photos is also an option, since one of the things Hazel can do is automatically import images to the Photos library – so in that second round of Hazel processing, after the images are in their Y/M/D folders, there could be an “import to Photos” step.

However, I refuse to trust Photos to continue support of referenced libraries, so it’s probably better to not start with it at all.

Zero DAM:

There’s also an interesting alternative to get things working quickly, and that’s not using a DAM at all, but just saving search criteria as smart folders in the filing structure where your images are kept:

A Finder window, with the Preview pane enabled.

In this system, you’d simply never need to use the DAM for a main catalogue – Finder can do most of the tagging etc for you, and then you can use dedicated editing DAMs like Capture One when you want versioned editing on a single file.
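(Under the hood, a smart folder is just a saved Spotlight query, so the same searches can be run from Terminal with mdfind if you’re curious. The tag attribute and value here are examples only, not gospel:)

    # Terminal equivalent of a smart-folder query: find tagged images under
    # the photo storage folder. The attribute and tag name are examples only.
    mdfind -onlyin ~/Pictures "kMDItemContentTypeTree == 'public.image' && kMDItemUserTags == 'Keeper'"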

Fixing Image Capture with PLIST Edit

Image Capture is an application included with macOS which acts as a general image ingester and scanner interface. You plug a device in, and Image Capture looks at all the files available on it, then gives you the option to download them to your chosen location, or application.

The basic UI is this:

Image Capture in macOS High Sierra

Or at least, that’s how it looked.

The most salient point is that option, “Make subfolders per camera”. When it’s checked, whatever folder you choose to copy files into, Image Capture will first make a subfolder within it named after your device. Great if you’re copying images in for the first time, but not something you’d want enabled if you already have an established folder for that device’s images.

What went wrong:

In recent versions of macOS, this checkable menu option is no longer visible, which means you lose the ability to control that aspect of the software, and the default is to create the device subfolder. *eugh*

Anyway, a bit of research online indicated that the setting might be controlled in the .plist file for Image Capture, located at:

~/Library/Preferences/com.apple.Image_Capture.plist

…and sure enough

The nefarious property

Fair enough, I’ll open it in a text editor, and just change <true/> to <false/>

Except… it’s a binary .plist file, and opens as garbage text. Yes, only Apple could turn a plain-text XML preference file system into binary files that require a special developer tool to modify.

So, off to the Mac App Store, and there’s a simple tool, PLIST Edit. $10, done.

Open the plist file in it, change the value to False, save, relaunch Image Capture, and:

Prodigal menu returns

Make subfolders per camera is back. Huzzah.
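As a postscript: if you’d rather not buy an app, the built-in plutil command can do the binary-to-XML round trip from Terminal – roughly like this (quit Image Capture first; the cfprefsd step may or may not be needed):

    # Convert the binary plist to editable XML, change <true/> to <false/> in a
    # text editor, then convert it back to binary.
    plutil -convert xml1 ~/Library/Preferences/com.apple.Image_Capture.plist
    # ...edit the file in your editor of choice...
    plutil -convert binary1 ~/Library/Preferences/com.apple.Image_Capture.plist
    # The preferences daemon caches plist values, so you may also need:
    killall cfprefsd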

HFS+ and APFS Permissions for SMB Filesharing.

There’s a problem I encountered with Mac-based filesharing over SMB where HFS+ and APFS formatted disks would behave differently from each other when mounted remotely.

While HFS+ disks worked as expected, APFS disks would have issues with write permissions – everything would look correct, but creating folders would result in folders that couldn’t be written to, or renamed.

All the disks had the same permissions and settings on the file server – all had:

  • (Machine Admin user): Read & Write
  • staff: Read & Write
  • everyone: read only

And they were set to “Ignore Ownership”.

That ownership issue appears to be the problem – I had to enable ownership for the APFS volumes, and then add a dedicated filesharing user to the file server, add that user with read & write permissions to the APFS drives, and then apply permissions to the enclosed items.

Once that was done, it all worked as expected.
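For reference, the rough Terminal equivalent of those steps is something like the following – a sketch only, with placeholder volume and user names, and with Finder’s “apply to enclosed items” approximated by recursive chown/chmod:

    # Enable ownership enforcement on the APFS volume (it's ignored by default
    # on external disks), then give the dedicated filesharing user ownership,
    # and apply read & write permissions to the enclosed items.
    sudo diskutil enableOwnership "/Volumes/MediaShare"
    sudo chown -R shareuser:staff "/Volumes/MediaShare"
    sudo chmod -R u+rwX,g+rwX,o+rX "/Volumes/MediaShare"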

Fixing a Wacom Intuos 4 under macOS Ventura, with Keyboard Maestro.

Among the various changes in macOS 13 Ventura is a problem with Wacom’s Intuos 4 graphics tablets. Following is a way to use Stairways Software’s Keyboard Maestro to solve the particular glitch thrown up by this hardware / driver / operating system combination.

The Symptom:

Upon waking from sleep, the OLED screens on larger-size Wacom Intuos 4 tablets may be unresponsive. While all the hardware appears to function, and the controls for the screen brightness are accessible, the screens themselves remain inert.

The Cause:

The problem appears to be a result of the driver not working correctly over the sleep / wake cycle.

Troubleshooting:

I contacted Wacom support, and despite their driver notes clearly listing my tablet as compatible:

…the support representative claimed that support for the Intuos 4 XL ended with the previous driver version, which itself does not support macOS Ventura.

To be clear, if it’s “unsupported”, one would question why the driver settings show this:

…that “Tablet Light Brightness” feature? Those OLED screens were removed from Wacom tablets after the Intuos 4. There are no newer tablets with those screens, so if the tablet isn’t supported by the driver, why is that there?

We could also check out the Wacom Centre app, which is used to… well it doesn’t really seem to do anything necessary. It’s effectively a thing that checks for driver update status, and provides shortcuts to the Wacom System Settings pane.


That’s “unsupported”? Really?

So on to…

The Solution.

Fixing this problem is a simple matter of quitting and re-launching the tablet driver. You can use Wacom Tablet Utility to do this manually, or you can use Keyboard Maestro to add a set of events to do this as a menu command, or as something that runs automatically upon wake, thus:

This macro is triggered either from the Keyboard Maestro menulet, or by the Mac waking from sleep. It waits 20 seconds, so that the wake process is out of the way and settled if it was a wake event that triggered it, then it quits the driver, waits, and launches it again. You’ll need to reveal hidden files and folders to navigate to the driver app, in order to populate its location in the macro.
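If you’d rather not use Keyboard Maestro, the same quit-and-relaunch cycle can be sketched as a shell script – though the process name and app path below are assumptions on my part, so check where the driver actually lives on your system:

    # Wait for the wake process to settle, quit the tablet driver, then relaunch it.
    sleep 20
    killall "WacomTabletDriver" 2>/dev/null    # process name is an assumption
    sleep 5
    open "/Library/Application Support/Tablet/WacomTabletDriver.app"    # path is an assumption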

Problem solved.

Time Machine Duplication

To duplicate a Time Machine Drive, and Re-integrate it to the backup process:

  1. Switch off automatic backups.
  2. Copy the source drive using SuperDuper (the only utility that can properly clone a Time Machine volume) with the Backup All Files option.
  3. Wait hours or days for the copy to complete*.
  4. Add the drive in the Time Machine prefpane.
  5. In Terminal (a worked example with hypothetical volume names follows this list):
    1. Inherit the backup (do this by dragging the actual computer-name folder from Finder into Terminal after typing the inheritbackup command – the full path will then be populated):
      sudo tmutil inheritbackup /Volumes/(The Backup Drive)/Backups.backupdb/(The Computer's Name on the Backup Drive)
    2. Associate the Boot Drive (again, drag the boot disk’s entry in the latest backup entry of the duplicated Time Machine volume, from Finder, to the Terminal window, and it will populate the area in brackets – make sure you check the number and spacing of forward slashes):
      sudo tmutil associatedisk -a / /(the path to the last backup of the boot drive on the backup drive)
    3. Associate each backed-up non-boot volume (dragging again from Finder to the Terminal window for both of these):
      sudo tmutil associatedisk -a /Volumes/(Non-Boot Disk) /(The path to the most recent backup of the Non-Boot Disk)
  6. Open a terminal window and start recording the TMUtil log output:
    1. log stream --style syslog --predicate 'senderImagePath contains[cd] "TimeMachine"' --info
  7. Run a Time Machine backup manually and watch the terminal log to make sure each part of the backup is being connected correctly. Look for Inheritance Scans and watch the sizes of the backups, to make sure it’s not doing complete fresh backups.
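For concreteness, with hypothetical names – a backup drive called “TM Clone”, a computer called “Mac Pro”, and a second data volume called “Media” – the completed commands would look something like:

    # (The dated folder is the most recent backup snapshot on the duplicated drive.)
    sudo tmutil inheritbackup "/Volumes/TM Clone/Backups.backupdb/Mac Pro"
    sudo tmutil associatedisk -a / "/Volumes/TM Clone/Backups.backupdb/Mac Pro/2023-05-01-120000/Macintosh HD"
    sudo tmutil associatedisk -a "/Volumes/Media" "/Volumes/TM Clone/Backups.backupdb/Mac Pro/2023-05-01-120000/Media"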

Special Note: holding down the Option key in Terminal allows you to place the cursor insertion point wherever you click in the text.

If this helped you, maybe go buy one of my eBooks.

* When I say days, I mean it can take days. Or, indeed in one case, weeks.

Fixing Capture One, with Keyboard Maestro.

Capture One is a RAW photo developer, editor and Digital Asset Manager app. It’s my current go-to as a long-term replacement for Apple’s long-discontinued Aperture.

In general, it has better image processing than Aperture, but falls down a bit on the DAM side of things. It can’t import directly from iOS devices, and doesn’t have export to iOS device integration through iTunes. It also lacks Aperture’s “Flag” option, which is super helpful for doing a first pass through a shoot, and flagging images as keep, or not, before filtering for flagged, and going on to subsequent passes for assigning star ratings.

The biggest problem from a fast-workflow perspective is in how it handles a multiple-display setup. You have your thumbnail Browser window open on one screen, and the image Viewer window open on another. When you click on a thumbnail, although the image is displayed in the Viewer, the application’s focus remains on the Browser. This means keyboard shortcuts to control the zoom level of the image are captured by the Browser window, and not passed through to the Viewer. As can be seen in this video:

The workaround was to manually click on the Viewer window to bring it into focus, then do the zoom keyboard shortcuts – back and forth for every image.

This really defeats the purpose of shortcuts, which are designed to minimise unnecessary mouse movement.

I spent almost a year holding off committing to Capture One (after purchasing it) over this, before discovering Keyboard Maestro.

What Keyboard Maestro does is sit in the background, capture keystrokes, and use them to trigger various workflows & macros.

In this case, I configured it to listen for the keyboard shortcuts I had previously used in Capture One for the zoom-to-100%, & zoom-to-fit commands. I then configured it to generate two keystrokes in succession, in response to each of those original keyboard shortcuts.

  • The first, is the keyboard shortcut to make the Viewer the active window.
  • The second, is a reassigned shortcut for zoom-to-100% & zoom-to-fit respectively.
Use a Group to limit the macro’s scope to Capture One.
Set up the chain of keystrokes, triggered by the first.

So, the process now is:

Select a thumbnail, then:

  • Press the Key originally used to zoom to 100%.
    • Keyboard Maestro grabs the keystroke, and uses it as a trigger to fire off:
      • Keyboard Shortcut to make the Viewer window active, then
      • Reassigned Keyboard Shortcut to set the Viewer zoom to 100%.

Or:

  • Press the Key originally used to zoom to fit.
    • Keyboard Maestro grabs the keystroke, and uses it as a trigger to fire off:
      • Keyboard Shortcut to make the Viewer window active, then
      • Reassigned Keyboard Shortcut to set the Viewer zoom to fit.

The neat thing is that using the shortcut to make the Viewer active while the Viewer is already active doesn’t seem to cause any problems, so there’s no need for conditional logic to test which window is currently active.

All in all, this is an elegant solution to a problem that seemed hopeless.

If this helped you, maybe go buy one of my eBooks.

#FixedItForYou

If you’re a user of Apple’s macOS, and you’re still using macOS 10.13 High Sierra, 10.12 Sierra, or earlier, you might have noticed that iCloud stopped working around April 7th, depending on your time zone.

The Problem:

The symptoms, apart from sync and iCloud Drive not working for the system or for apps that use iCloud, are that you can’t access the iCloud.com website in Safari, while it works fine in Firefox.

Looking into Safari’s Web Inspector reveals the following:

Going into the iCloud preference pane in System Preferences (which looks like it’s logged in and everything is fine) and attempting to access your Account Details brings up an error connecting to iCloud.

If you then decide to log out of iCloud, which is about the only troubleshooting technique Apple offers, and you decide to remove iCloud data from your Mac so as to completely clean it out, you will find yourself unable to log back in:

This leaves you without any contacts, calendars, or Safari passwords, and probably breaks the ability to use AirDrop and Handoff etc.

So what’s going on?

From the Safari web inspector errors, it looks to me like Apple has broken / made incompatible something in the security certificate used by the iCloud server infrastructure. This was probably in the process of fixing an iCloud outage that had been going on in the days beforehand. Since these versions of macOS aren’t “supported”, one assumes this happened because they weren’t tested.

However, this issue does seem reminiscent of an issue from 2020, when Safari on High Sierra lost the ability to access all of Apple’s web services that ran through idmsa.apple.com (which includes Apple’s discussion forums, iTunes Connect etc). So after a bit of searching, I found the solution as was posted then, and tried it out.

The Solution:

If you go to Apple’s discussion forums, here:

https://discussions.apple.com/thread/251211674?page=3

You’ll see the solution – which involves downloading a new security certificate from Apple, and installing that in your Login keychain.

That fixes the problem.

Instantly.

No rebooting, no nothing. It’s fixed so quickly that if the next thing you do is switch to Safari and hit Reload on iCloud.com, or switch to the iCloud prefpane and hit Account Details, it works immediately.

So, there you are, trillion dollar company, a big problem for a fair chunk of your userbase, just fixed for you, free of charge.

This certificate expires in May. I don’t know what will happen then – if Apple will have fixed things in the meantime, or if you’ll just need to keep replacing these certificates periodically, or if there’s a different certificate you can use that’ll be more permanent. If I find that out, I’ll update this.

EDIT May 21: The certificate expired at 1:45am Australian Eastern Time, and everything broke again, aside from getting Account Details in System Preferences.

Until Apple issues an updated certificate, a temporary workaround is to open Keychain Access, go to the Login keychain, choose View Menu > Show Expired Certificates, right-click on the CA 2 – G1 certificate, go to the Trust section, and set “When Using Certificate” to: Always Trust.

That will fix it instantly.

Edit May 22: It’s broken again, and nothing appears to fix it.

Edit May 23: In Keychain Access, in the System keychain, changing the trust settings for GeoTrust Global CA to “Always Trust” fixes the problem instantly.

Edit May 25: Apple PKI issued a new certificate which solves all the problems, and allows you to reverse the Always Trust changes for the expired certificates.

If this helped you, maybe go buy one of my eBooks.

Adventures in Image Processing

So here’s an interesting mystery / conundrum / process I recently went through in trying to create a new workflow.

For reasons of not wanting to subscribe to software, I’m still using the CS5 versions of Adobe apps for Surfing The Deathline. The original documents are heavily constructed in Photoshop, then all the text, sound effects etc are done in InDesign, which makes them rather non-portable to other solutions.

It had been so long since I’d done a serious update of the books that I’d forgotten parts of my workflow, and so had started some things from scratch.

Surfing The Deathline uses .png format images for its pages. Although they take up a lot more space than .jpg versions, they have an advantage of being colour-accurate. A major problem of .jpg is that for images in black & white, a single pixel of colour will shift the white and black values away from their correct tones. So, where you get two pages butting up against each other at the spine, the greys might not match.

InDesign CS5 has no direct png output option, so the workflow is:

  1. PageExporterUtility script to output the pages as individual .pdf files
  2. Convert the .pdfs to .pngs
  3. Rename the .pngs and move them to the appropriate EPUB document’s image directory.

I had created an Automator action which took in the .pdf files, converted them to .png, and saved them to disk. It took about 3-5 minutes to do all 236 pages.

But there was a problem with the output…

Certain pages seemed to have red & blue fringing on their text. Going through the .pdf files, it became apparent that it was linked to the pages which had a specific masterpage controlling their appearance. Looking at that masterpage, the thing that suggested itself as problematic was the page number – it was frontmost in the layering stack. So, I deleted and recreated the page number object in the masterpage, applied it to the pages, and reran the .pdf to .png workflow.

Problem solved. Almost.

My large sound effects were still showing red/blue colour fringing. After a couple of days research, it became apparent that this was caused by the system applying sub-pixel antialiasing to the .pdf file during the render.

After experimenting with commandline options for disabling it, I found out there was actually a checkbox for it in the System Preferences app. Unfortunately switching it off makes the system’s display look worse, so what I needed was a way to toggle it off, run the image processing steps, then switch it back on.
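For reference, the command-line toggle that kept coming up in that research is a defaults write along these lines – I’m flagging the key name as an assumption on my part, since the exact key and its effect vary between macOS versions:

    # Toggle font smoothing off before the render, and restore it afterwards.
    # NOTE: the AppleFontSmoothing key is an assumption – it varies by macOS version.
    defaults -currentHost write -globalDomain AppleFontSmoothing -int 0
    # ...run the PDF-to-PNG conversion...
    defaults -currentHost delete -globalDomain AppleFontSmoothing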

After some experimenting, and asking on forums, I was able to get an AppleScript that did the job, and add it to my Automator action:

What this does is:

  1. run an AppleScript to open System Preferences, test if the checkbox is ticked, untick it if it is, close System Preferences, then,
  2. run all the pdf files through the Render PDF Pages as Images function to create png files, then,
  3. move the converted versions to a new location. Then,
  4. open System Preferences, test the antialiasing setting, and switch it back on, then close System Preferences.

It was fantastic – I had a wonderful system where, once I’d output the .pdfs from InDesign, I could select all of them and render them all to .png in a single right-click.

But there was a problem…

Images which crossed the spine of the EPUB book weren’t aligning correctly. Clearly, something was wrong with the way the Automator renderer was converting .pdf into .png. It didn’t matter what scale I rendered it at – even at the full native 300dpi, the problem remained.

When I compared it against doing the same process manually in Photoshop, it also became apparent that the math behind Automator’s conversion was out – files were always cropped 1-2 pixels smaller from Automator than they were from Photoshop.

Then I started researching whether there was an alternative commandline image processor in macOS – something I could call from an AppleScript to replace Render PDF Pages as Images. Thankfully, there was: SIPS, the Scriptable Image Processing System.

After a bunch more research, I managed to sort out the appropriate commands, and gave SIPS a go on my pdf files. The results were the same. I tried it manually with Preview; the results were the same, again. It appears SIPS is the core image processor all these built-in macOS tools use, and it’s SIPS that has the bad math function for rendering PDF files as images.
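For anyone wanting to reproduce the test, the sort of invocation involved is roughly this (a sketch – check man sips, and the filename is just an example):

    # Render a PDF page out to PNG with sips – this is where the size/cropping
    # math goes wrong, regardless of the settings used.
    sips -s format png "page_001.pdf" --out "page_001.png"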

SIPS also produced pretty garbage image quality, compared to Photoshop.

So now I was looking for an alternative to SIPS, and I managed to find one – ImageMagick, a cross-platform commandline image processor. It uses Ghostscript, an opensource PostScript/PDF interpreter, to render the .pdf, so everything about it is separate from the SIPS processes. After a couple of days trying to figure out how to install it (hey, opensource projects, try making your basic documentation an educational resource for people who haven’t used your tools previously), I was able to make it work…
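For reference, rendering a page with ImageMagick is roughly this (the density flag sets the Ghostscript render resolution, and depending on the install the tool is invoked as either magick or convert; the filename is just an example):

    # Render the PDF at 300dpi via Ghostscript, then write out a PNG.
    magick -density 300 "page_001.pdf" "page_001.png"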

It delivered fantastic results, but took 30 seconds per image to process the .pdf files. In contrast, Photoshop, which was so slow I was looking for an alternative, takes 8 seconds.

You might question why I don’t use Affinity Photo, which can tear through the entire 236 pages in around 8 seconds total (gotta love that multithreaded action). Well, unfortunately, Affinity Photo’s pdf renderer can’t handle the edge effects of my InDesign speech bubbles.

So I’m back to where I began, using ancient versions of Photoshop and InDesign, and needing to take a 35 minute break so Photoshop’s Image Processor script can do its thing, every time I want to run a set of updates from InDesign to EPUB.

Update 23 April 2021:

In experiments with image sizes for Photoshop’s scaling when it renders the PDF file to TIFF, I’ve hit upon a target size that seems to be in some sort of mathematical sweet spot for Photoshop, because the processing time has gone down to about 1 second per image, from ~8 seconds.

Hard Reality.

In April 2016, HTC released the Vive VR headset. Designed in conjunction with games developer Valve, the Vive represented a significant evolution in consumer Virtual Reality.

Technologically, the Vive’s breakthrough centred around a tracking system that could detect, within a 3x3x3m volume, the position and orientation of the headset, controllers, and any other object that had a tracking puck attached to it. Crucially, this volumetric tracking ability was included as a default part of the basic kit.

The result is that HTC’s hardware has effectively defined the minimum viable product for VR as “room scale” – an experience which lets you get out of the chair, and walk around within a defined area. Not only can you look out in all directions, you can physically circumnavigate a virtual object, as if it were a physical object sharing the room. When combined with Valve’s SteamVR platform and store, this has created an entire turnkey hardware and software ecosystem.

From my recent experience of them, the Vive plus Steam is a product, not a tech experiment. This is a tool, not a toy.


First, some basic terminology for the purposes of this article:

  • XR: Extended / Extensible Reality – A blanket term covering all “reality” versions.
  • VR: Virtual Reality – XR in which the real world is completely blocked out, and the user is immersed in a completely computer generated environment.
  • AR: Augmented Reality – XR in which the real world remains visible, directly or via camera feed, and computer generated elements are added, also known as “mediated reality”.
  • GPU: Graphics Processing Unit – the part of a computer that does the work to generate the immersive environment.
  • eGPU: A GPU in an external case, usually connected via Thunderbolt.

More than a year after the Vive’s release, Apple used their 2017 World Wide Developers Conference to announce they were bringing VR to macOS, in a developer preview form.

For those of us in the creative fields who are primarily Mac-based, and have wondered “when can I get this on my Mac?“, Apple’s announcement would seem to be good news. However, there are fundamental differences between Apple’s product philosophy for the Mac, and the needs of VR users and developers. This raises serious concerns as to the basic compatibility of Apple’s product and business model, with the rapidly evolving first decade of this new platform.

Hardware:

When it comes to Apple and VR, the screaming, clownsuit-wearing elephant in the room is this: Apple has weak graphics.

This is the overwhelming sentiment of everyone I have encountered with an interest in VR.

The most powerful GPU in Apple’s product range, AMD’s Vega 64 – available starting in the AU$8200 configuration of the iMac Pro – is a lowered-performance (but memory-expanded) version of a card which retails for around AU$800, and which is a fourth-tier product in terms of 3D performance within the wider market.

Note: Adding that card to an iMac Pro adds AU$960 to the price of the machine, which already includes the lower-performance Vega 56 in its base price. In contrast, the actual price difference between a retail Vega 56 and 64 is around AU$200. Effectively, you’re paying full price for both cards, even though Apple only supplies you with one of them.

The VR on Mac blog recently posted an article lamenting “Will we ever really see VR on the Mac?”, to which you can only respond “No, not in any meaningful sense, as long as Apple continues on its current product philosophy”.

To paraphrase Bill Clinton “It’s the GPUs, Stupid”.

When you’re looking at VR performance, what you’re effectively looking at, is the ability of the GPU to drive two high-resolution displays (one for each eye), at a high frame rate, with as many objects rendered at as high a quality as possible. Effectively, you’re looking at gaming performance – unsurprising, given a lot of VR is built on game engines.

Apple’s machines’ (discrete) GPUs are woefully underpowered, and regularly a generation out of date when compared to retail graphics cards for desktop computers, or those available in other brands of laptops.

Most of the presenters at Immerse were using MacBooks for their slide decks, but none of the people I met use Apple gear, or seem to have any interest in using Apple gear to do VR, because, as I heard repeatedly, “the Mac has weak graphics”.

How weak is “weak”?

Looking at the GPUs available on the market in terms of their ability to generate a complicated 3D environment, and render all the objects within that environment in high quality at the necessary frame rate, here they are, roughly in order of performance, with a price comparison. This price comparison is important, because it represents not just how much it costs to get into VR if you already have a computer, but how much it costs, roughly on an annual schedule, to stay at the cutting edge of VR.

Note: This is excluding Pro GPUs like the Quadro, or Radeon Pro, since they are generally lower performance in terms of 3D for gaming engines. The “Pro”-named GPUs in Apple’s products are gaming GPUs, and do not include the error-correcting memory that is the primary distinguisher of “Pro” graphics cards.

  • Nvidia Titan V: ~AU$3700. Although not designed as a gaming card, it generally outperforms any gaming card at gaming tasks.
  • Nvidia Titan XP: AU$1950
  • Nvidia 1080ti: ~AU$1100
  • Nvidia 1080 / AMD Vega 64: AU$850 (IF you can get the AMD card in stock)

Realistically, the 1080ti should be considered the entry level for VR. Anything less, and you are not getting an environment of sufficient fidelity that it ceases to be a barrier between yourself, and the work. A 1080 may be a reasonable compromise if you want to do mobile VR in a laptop, but we’re not remotely close to seeing a Vega 64 in a Mac laptop.

So what does this mean?

  • The highest-spec GPU in Apple’s “VR Ready” iMac Pro is a 4th-tier product, and is below the minimum spec any serious content creator should consider for their VR workstation. It’s certainly well below the performance that your potential customers will be able to obtain in a “Gaming PC” that costs a quarter of the price of your “Workstation”.
  • The GPU in the iMac Pro is effectively non-upgradable. The AU$8-20k machine you buy today will fall further behind the leading edge of visual fidelity for VR environments every year. A “Gaming PC” will stay cutting edge for around AU$1200 / year.
  • While Vega 64 is roughly equivalent in performance to Nvidia’s base 1080 (which is significantly lower performance than the 1080ti), in full-fat retail cards, it can require almost double the amount of electricity needed to power the 1080.
  • Apple’s best laptop GPU, the Radeon 560 offers less than half the gaming 3D performance (which again, is effectively VR equivalent) of the mobile 1080, and you can get Windows laptops with dual 1080s in them.
  • Apple is not yet providing support for Nvidia cards in eGPU enclosures, and so far only officially supports a single brand and model of AMD card – the Sapphire Radeon RX580 Pulse, which is not a “VR Capable” GPU by any reasonable definition.

The consequences of this are significant.

We’re not going to see performance gains in GPU hardware, or the performance requirements for VR, plateau any time in the near future. A decade ago, computers were fast enough to do pretty much anything in print production – 300dpi has remained the quality of most print, and paper sizes haven’t changed. That’s not going to happen for VR in the next decade.

GPU progress is not going to hold itself to Apple’s preferred refresh and repurchase cycles for computers. The relationship content producers have with GPUs is, I suspect, going to be similar to the relationship iOS developers have with iPhones & iPads – whatever the top of the range is, they’ll need to have it as soon as it’s released. People aren’t going to turn over a several thousand dollar computer every year, just to get the new GPU.

By Apple’s own admission at WWDC, eGPU is a second-rate option compared to a GPU in a slot on the motherboard. A slotted card on the motherboard has potentially four times the bandwidth of a card in an external enclosure. For a user with an 11-13″ microlight laptop, eGPU is a good option to have VR capability at a desk, but it’s not a good solution for desktop computers, or for portable VR.

While Nvidia’s mobile 1080 has been an option in PC laptops for some time now, and offers performance comparable to its full-fat desktop version, AMD (and by extension Apple) seems to have nothing comparable (a mobile Vega 64) on the horizon for Macbooks.

There are, therefore, some really serious questions that need to be asked about Apple’s priorities in using AMD for graphics hardware. Overall, AMD tends to be marginally better for computational GPUs – in other words, GPUs that are used for non-display purposes. For realtime 3D environments, Nvidia is significantly ahead, and in mobile, represents having the capability to do VR at all.

If the balance of computation vs 3D gaming performance means computation is faster, but VR isn’t possible, then it really starts to feel like back in the days when the iMac went with DVD-ROM while everyone else was building around CD burners.

Software:

Apart from operating system changes relating to driving the actual VR hardware, Apple’s “embrace of VR” was more or less devoid of content on Apple’s part, in terms of tools for users.

Apple’s biggest announcement regarded adding “VR support” to Final Cut Pro X. As far as I can see, this is about 360 video, not VR. This needs to be emphasised – 360 Video is not VR. It shares some superficial similarities, but these are overwhelmed by the fundamental differences:

  • 360 Video is usually not 3D. It’s effectively just video filling your field of vision.
  • 360 Video is a passive medium. While you can look around, you can’t interact with the environment, or move your viewpoint from a fixed location.

In contrast, VR is:

  • a place you go to,
  • a place you move about in, and
  • a place where you do things.

VR is an activity environment, 360 Video is television in which you can only see one third of what is happening, at any one time.

The power of VR is what you can do in it, not what you can see with it.

For example Tvori:

And for a more nuts & bolts vision of actually working in VR:

This is using a 3D VR workspace to create content that will be played on a 2D screen.

This is important – the future of content creation when it comes to VR is NOT going to be based upon using flat screens to create content that can then be viewed on VR goggles. It’s the other way around – we’re going to be using VR toolsets to make content that will be deployed back to 2D platforms.

All of the current development and deployment environments are inherently cross-platform. It’s unlikely that anyone is going to be making macOS-specific VR apps any time in the near future. That’s a self-evident reality – the install base & market for VR-capable Macs is simply too small, and the install base & market for VR-capable PCs too large, to justify not using an application platform that allows for cross-platform applications. VR does not have the problem of a cross-platform app feeling like a second-rate, uncanny-valley facsimile of a native application. In VR, the operating system conveys no “native” UI paradigms – it’s just a launcher; less, in fact, given that Steam and Viveport handle launching and management of apps. It’s a glorified BIOS.

This is not going to be a replay of iOS, where Apple’s mobile products were undeniably more powerful, and more capable than the majority of the vastly larger market of Android and Windows Mobile devices, and were therefore able to sustain businesses that could ignore other platforms. VR-capable Macs are smaller in market, less-capable as devices due to weak graphics, higher in price to buy, and radically higher in price to maintain relative performance, than VR-capable PCs. As long as this is the case, the Mac will be begging for scraps at a VR table, where Windows (and eventually Linux & SteamOS) will occupy the seats.

The inherent cross-operating-system metaplatform nature of Steam reflects a growing trend within the Pro software market – formerly Mac-only developers are moving their products to be cross-platform, in effect, making their own technologies the platform, and relegating macOS or Windows to little more than a dumb pipe for commoditised hardware management.

One of the recent darlings of the Apple world, Serif, has taken their Affinity suite of design, imaging and publishing apps across to Windows, as have Macphun, who’ve renamed themselves Skylum, and shifted their photography software cross-platform. In the past, developers had marketed their products, based on the degree to which they had embraced Apple’s in-house technologies as the basis of their apps – how “native” their apps were. These days, more and more are emphasising independence from Apple’s technology stack. The presence of the cross-platform lifeboat is becoming more important to customers of Pro apps, than any advantage brought by being “more native”. The pro creative market, by and large, is uncoupling its financial future from Apple’s product strategy. In effect, it’s betting against that strategy.

What does Apple, a company whose core purpose is in creating tasteful, consistent user interface (however debatable that might be these days), have to offer in a world where user environments are the sole domain of the apps themselves, and the operating system is invisible to the user?

Thought exercise, Apple & Gaming:

Video and cinema has always been considered a core market in which Apple had to invest. Gaming (on macOS) has always been a market that Apple fans have been fine with Apple ignoring. The argument has always been about the economics and relative scale of each. It’s worth bearing in mind however, that the size of the games market and industry dwarfs the cinema industry.

Why is it ok amongst Apple fans, Apple-centric media, and shareholders, for Apple to devote resources to making tools for moviemakers / watchers rather than directing it at game developers / players?

When Apple cuts a product, or restricts the versatility of a product under the guise of “focus”, there’s no end of people who’ll argue that Apple is focussing on where the profits are. Mac sales are relatively stagnant year over year. Gaming PCs – or, as they’d be called if Apple sold them, “VR Workstations” – have been consistently growing in sales at around 25% year upon year for a while now.

Windows’ gaming focus and games ecosystem is co-evolutionary with VR. It is the relentless drive to make Windows as good as possible as a gaming platform that makes it the better VR platform. No amount of optimisation Apple can do with Metal, their 3D infrastructure, can make up for the fact that they’re shipping sub-standard GPUs in their devices.

”High spirits are just no substitute for 800 rounds a minute!”


Apple’s WWDC VR announcements seem to have had very little impact on people who are using, and making with, VR now. No one I spoke to at Immerse seemed particularly excited about the prospect of Apple getting into the space, or seemed to think Apple had anything in particular to offer. If you look at what Apple did to professional photographers by neglecting, and then dumping, their Aperture pro photo management solution without providing a replacement (and no, Photos is not that), that wariness is well-justified.

What Immerse really opened my eyes to is that VR is very probably a black swan for Apple, who have spent the last 5 years eliminating the very things that are central to powering VR – motherboard PCI slots, the associated retail-upgradable GPUs, and an entire culture of 3D performance focus – from their product philosophy.

VR is an iceberg, and Apple, no matter how titanic, will not move it. The question is whether the current design, engineering and marketing leadership, who have produced generation upon generation of computers that sacrifice utility and customer-upgradability in the pursuit of smallness, are culturally capable of accepting that fact.


Hey, if you liked reading this, maybe you’d like to donate?