2019 started with high hopes for diving back into EPUB development. I put a huge amount of time into redesigning my EPUB books so that their text sits on a separate layer, allowing different translations to be done, and so the artwork can be user-selected between finished, sketch, and thumbnail versions.
Weeks were spent trying to get the image output from InDesign to work correctly, and I had more or less cracked it, and knew theoretically how it would all be achieved.
Unfortunately, updates to Apple’s Books platform broke the core functionality I relied upon, and all my books on the Apple Books store are now broken.
My uncle, Travor Ashton, a wonderfully generous man, sadly passed away.
I entered a major outdoor sculpture festival, and put a bunch of work into applying for a grant to cover training costs to refresh my welding skills.
Unfortunately, my grant application was unsuccessful, and the sculpture, once repaired, displayed some significant structural weaknesses when exposed to driving wind. That meant I would be unable to install it at the event, so I had to pull out.
I had a new residency project, Noosa Mnemonic – a VR recreation of Noosa, based on people recreating, purely from memory, places they love in Noosa. The goal was to combine all the separate places into a single VR environment. I created a really interesting new VR location, and arranged for other artists to contribute locations.
Unfortunately, by the end of the year it seemed to have become moribund, lacking funding, and with a reduced scope that makes the vision more or less moot. After breaking myself on the previous year’s major project for the drone racing course, I had resolved to be less emotionally invested in this project, so c’est la vie.
I did some more VR outreach projects for the Library Makerspace – I really wonder if I’ve missed my true calling, because I love doing public outreach events.
My bike was serviced, and I was able to get back into riding periodically. It’s good for my mental health. The bike works better now than it ever has since new. I bought a helmet-mounted action cam and took to recording all my rides, so as to protect myself from incompetent drivers.
We sold my late father’s house, and dissolved his estate. It was finally over. I find myself on occasion missing him – missing the ability to have someone to talk to – though he was largely removed from my life, and towards the end, there wasn’t much left to say. His ideas and opinions had been so poisoned by the right-wing internet that, in the end, I was left with very little in the way of happy memories of him.
I started investigating drop-shipping my photo prints as I dipped a toe into Instagram, but then Instagram changed the ability to see metrics of user interaction with posts, and it became yet another platform that one has to question if it’s worth the effort.
More upgrades and updates to my 10-year-old Mac Pro tower. It’s such a dependable tank of a machine, Frankensteined to hell as it is.
Toward the last third of the year, gearing up for international travel to Japan gained a sense of urgency, and I designed and had fabricated a set of camera mounting plates to let me better mount my camera on a backpack while remaining connected to my sling-strap. The design was a pretty amazing success.
Then, I was in Japan, and three of the happiest weeks of my life. Magical country, and I wish I could live in a country mountain town, with the constant sound of running water. The cities weren’t so much my thing, but I could spend months travelling around on local trains, seeing the little farming communities.
Once I came back, there was a lot of lost time – administrative things, medical things. I saw The Sisters Of Mercy play live – a lifelong ambition – and they were “meh”.
The year ended sitting on inflatable pool chairs with friends, drinking margaritas in the pool, watching Return of The Living Dead on a projector I’d rigged up to screen onto the side of the building.
It was a year of highs and lows, but then aren’t they all?
My latest residency project with Noosa Library Service.
The overall goal was to get multiple artists, starting with me, to recreate their favourite places in Noosa, purely from memory. Each artist creates their location within Virtual Reality, using Tilt Brush, an application from Google.
Once all the places are created, the goal is to incorporate them into a single read-only VR environment, whose default state will be a 3D topographical map of Noosa.
A year in retrospect. It’s been all over the place – a year in which there was no silver lining that couldn’t tarnish.
The big events of the year, the project for Rent, my VR residency with the library, and the death of my father, all in quick succession in the first half of the year, cast a pall over the second half.
In technology, this was the year in which everything broke.
I came into the year with a Mac that could post directly to Facebook from its share sheet, and with third-party Twitter clients that made Twitter a truly powerful platform for connectivity, and with Twitter able to post directly to my Facebook feed.
By the end of the year, my Mac couldn’t post directly to Facebook, and neither could my Twitter account. My third-party Twitter apps were smoking ruins of their former selves, and the best outlet I had to connect to the outside world from my regional town, had been effectively ruined.
I started on a TIG welding course, and got a few weeks in before it became apparent that being the one TIG student in a MIG class, run without any semblance of professionalism, meant spending half of a three-hour class waiting on my non-working machine (which I had told the tutor about at the beginning of the class). It ended in a shouting match with the tutor, who tried to blame me for not spending hours chasing him around an OH&S-unsafe workshop to get him to come and fix my machine.
I built some nested table workbenches for my room, with the hope they’d be something I could show my father, that he might get some joy from seeing me making things. But he died from his cancer while I was in the process, and so I don’t know if he ever really saw them.
I built a huge sculpture that went on stage at QPAC. It was a project from hell, marked primarily by a department lead who wanted me for my talent, but didn’t credit me with any actual expertise to know what I was doing when it came to the technical side of things, and so didn’t listen to my specific technical demands. She alternated between being angry that I wasn’t keeping her in the loop, and complaining that I was overwhelming her with pointless information when I actually tried to tell her the specific information that informed my technical requirements – what I needed the production to provide, and what the consequences of the choices I needed her to make would be.
I had to do a huge cleanout at my father’s house in Victoria, to get it on the market to sell. What a headfuck of a shitshow.
I did my first VR residency, learning about sculpting in VR, and 3D printing. The tools demonstrate great potential, but are also not fully baked.
My public outreach roles for the library continued – it’s a fun pastime.
I worked on a new project for the library, constructing racing gates and obstacles for their drone racing programme. It was fun, but the work was so intense, the deadlines so tight, the stress so high, that I caused my health to fuck up again, and gave myself some (so far) permanent neurological damage in my spine, with phantom sensation as a result.
One of the problems when carrying a DSLR is that occasionally you might want to go somewhere with your camera where you want to take the minimum amount of bag to protect it, but not have a second strap around your neck / shoulders when you take your camera out to shoot.
Another problem is that you might go somewhere that doesn’t allow you to carry even a small bag into a venue (some art galleries, for example), but you still want your camera on a strap.
Here’s a solution that adds a Blackrapid connector on a slider that runs along the small bag’s shoulder strap, and which, when you detach the strap from the bag and join its ends together, effectively turns it into a standalone Blackrapid camera strap.
The donor equipment
In this case, the bag is a Lowepro Toploader 70AW. This is a bag that can take a full size pro body, with a medium sized lens like a 24-70 still attached.
This bag has two key features. First, the zipper that closes it is one continuous length, even though there are two zips and a buckle in the middle – that’ll be important later. Second, the shoulder strap clips on at both ends, so it can be removed.
For the dropper to connect the Blackrapid connector, I’m using a Blackrapid Backpack Strap as the donor for all the parts.
This has the advantage that it comes with all the bits you need – the BR connector, the safety catch to cover the thumbscrew (not shown in this pic), and importantly, the webbing has a loop sewn on the end.
The Backpack strap has two plastic carabiners on it (the updated version only has one). One is connected to the full length of the strap, the other, to a short loop (left and right images above, respectively).
The other parts you’ll need are a nice smooth stainless steel D-shackle – wide enough to fit the bag’s strap, but narrow enough that it won’t be able to slide over the clips and off the end – and a safety split ring.
First step: you need to cut through the eyelet of the plastic carabiner that’s connected to the long section of the strap. The sewn loop on the end of that strap is something we need to preserve.
You need to cut the pull tag (next to the Blackrapid logo, left pic) off the end of the strap, so you can unthread the full length of the strap from the cleat (right pic).
You need to cut the short strap and carabiner free of the cleat, so you have it as a separate piece.
Now, you’re ready to reassemble.
You’ll want to seal the cut end of the webbing with a bit of fire (I’ll leave it to you as to how to create that). The difficult part is getting it back through the cleat, which isn’t strictly designed for a double layer of webbing. It’s doable, just difficult. When you’re heat-sealing it, try to squash it flat, so it’ll feed through more easily. You could then double it over with glue / stitching so that it can’t possibly go back through.
Don’t forget to thread on the Blackrapid clip in the process.
The split ring is optional, but what it does is prevent the pin in the shackle from turning, so that it can’t come loose.
Going back to the advantage of the Lowepro bag having a single zipper – once you connect your camera to the Blackrapid connector, you can put it in the bag while leaving it connected, and then close the bag behind it.
Not shown – the safety tether I use with any connection system. In this case, I attach it to the shackle, since the split ring means the shackle can’t come undone.
If this article was of use, a donation would help support my projects.
If there was one thing I wish I could do with this review, it’s to show you what this gig looked like. From the back of the venue, a view over the silhouetted throng of fans, the band ripping it up in full flight amidst the colour and smoke, the giant spray-stencil banner in the background like an industrial-music altarpiece, and the repeated perpendicular structural ribs of the drum-vaulted corrugated iron roof, that formed a perfect semicircle over the crowd, catching and outlined by the light, creating such a precise repetition in linear-perspective from stage to circle, that Kubrick himself could have set the scene.
I’d LIKE to show you that.
However: after buying tickets to the gig, driving two hours down to Brisbane, paying for a hotel room and parking for the night, phoning the venue and leaving a message asking them to let me know if there were any problems with cameras, and packing seven grand worth of pro DSLR and some of the finest wide-angle optics ever made into a compact “body & lens only” camera bag so that I could show you this near-religious vision of industrial music performance, I discovered at the door that, despite their website’s FAQ having no mention of this rule, The Triffid is yet another venue that has fallen victim to the idiotic policy of banning “professional” cameras from entry.
So, I can’t show you that.
Half the audience can block people’s view by holding up a cellphone, to take mediocre pictures that offer greater potential pixel detail than a “pro” camera could achieve 10 years ago. They can shoot video that would have required a steadycam-harnessed cine-camera that cost more than a luxury car 5 years ago, but apparently a DSLR, which will only block the user’s own view, is such a big problem, it requires blanket bans.
Sorry Ashley, but we missed Caligula (and the beginning of Jim Bob) walking back to the hotel to leave the gear – because again, when a venue has an unadvertised “no cameras” policy, you’d think they’d have enough of a clue as to have a proper security check-in situation with lockers, not “leave your camera here at the ticket booth” – an idea from which they retreated, when I told them what it cost.
Anyway, on to the music.
Jim Bob. Hmm, how to put this… Carter USM is consistently one of my favourite bands. They hold a deep sentimental spot for me because they were a high rotation band when I first got into the goth scene, and were on a couple of the played-to-death mix tapes I had back then. They’re also one of those bands that through poor timing, I never managed to see live. What Carter did, along with other contemporaries like the Poppies, even The KLF in their stadium house monsterworks, is construct huge, rich sounds, from so many dissonant sources, that you could just be overwhelmed by the music.
Jim Bob on his own with an acoustic guitar is not that. I don’t know what I was expecting – maybe the JB doing Carter tracks with a backing band, maybe with the Poppies actually doing the backing band stuff, I’m not sure. Even Carter’s acoustic tracks, like “The Man Who Bought The World”, have more in them. He joked several times about people being disappointed by the “is that it?” of it all, so I suppose he’s heard that reaction before.
In the end, it was an interesting performance, and thinking about it from the perspective of a soloist, doing acoustic protest songs, I’d have enjoyed it more if I was better prepared for that reality. As a positive, Jim Bob’s voice is still in great form. His anecdotes and chatter had the audience, myself included, laughing, but for someone hoping to see the indoor-nuclear-detonation opening of Surfin’ USM… maybe next time?
On to PWEI, or “PWEI Mk 2.5” as Mary Byker described them.
Holy freaking hell, they’ve so got it. Epic – there’s no other way to describe them. A big band, six musicians on stage – two vocalists, live drums, everyone looking like proper rock stars… except Graham, who in his grey, short-sleeved, button-up collared shirt looks like someone’s dad who got lost in the wings and ended up on stage. It’s adorable, and he looks like he’s really enjoying performing, so mad props, because nothing could detract from just how goddamn good, and how real, crunchy and live the band sounds.
It’s hard to say much more about them – how many superlatives can you come up with? Poppies fans in Melbourne, Sydney and Perth, you’re in for a freaking treat.
I’ve finally succeeded in getting my Urban Exploration / Urban Landscape photography kit together, so I thought I’d document it here.
The goal was to have a single backpack that I could travel with, which didn’t scream “tactical gear bag”, and which could handle a versatile photographic load.
Here’s the loadout.
Peak Design Everyday Backpack 20L, with:
Nikon d8XX with the 14-24 2.8, with a modified 3 Legged Thing QR-11 L-Plate. Umbrella in side pocket. Headphones and small medikit with hand sanitiser, paracetamol etc. Godox V860II & X1-n. Blackrapid Sport Breathe. GSI low-profile water bottle in side pocket. 3 Legged Thing Leo with Airhed Switch.
It all packs in very snug, and there are some modifications to the dividers to scavenge every last millimetre in width across the bag (not such a big commitment now they sell them separately). There are also lens tissues, lens covers, a remote release cable, a camping knife-spork and a couple of cable adapters in the interior side pockets. Wallet and a protein bar in the top compartment, and still space for an iPad in the laptop sleeve.
Inside, there’s one vertical divider at the bottom to separate the camera onto the left, and tripod on the right, then one horizontal divider across the top of that.
The divider modifications, in detail:
The horizontal divider is folded up on the right, to make one tall space for the tripod.
One layer of the folded-up part of the horizontal divider is removed, to give 5-10mm more room in the top-left compartment.
The vertical divider has a layer removed from its folding section as well, to give more room to the L-Plate on the camera, so as to stop the grip from poking out through the side.
That vertical divider also has an extra row of velcro sewn onto it, so the whole side adheres to the inner surface of the bag, rather than just the stock tab.
The Blackrapid bag packs in behind the tripod, in the space created where the carbon fibre of the legs is exposed.
The trimmed parts removed from the folding sections of the dividers are velcro-ed into the bottom of the bag with adhesive-backed velcro strips, to provide a bit of padding for the lens and the bottom of the tripod.
If this article was of use, a donation would help support my projects.
Here’s a gear hack to combine two products that should play well together, but don’t. The Blackrapid FR-T1 connector, and 3 Legged Thing QR-11 L-Bracket.
Technically, the QR-11 does work with Blackrapid straps – there’s a 1/4″ mounting hole in the short arm to screw in a connector. However, this interferes with the ability of the short arm’s rail to mount in the tripod’s Arca clamp. Also, the ergonomics don’t work as well when the camera is hanging on the strap.
As a bonus, here’s a modification of the short arm on the L-Plate, to get it as close as possible against the side of the camera.
Material needs to be removed to clear the rubber gasket covers for the ports on the front of the camera, as well as the thumbnail catch for the port door on the side.
If this article was of use, a donation would help support my projects.
In April 2016, HTC released the Vive VR headset. Designed in conjunction with games developer Valve, the Vive represented a significant evolution in consumer Virtual Reality.
Technologically, the Vive’s breakthrough centred around a tracking system that could detect, within a 3x3x3m cubic volume, the position and orientation of the headset, controllers, and any other object that had a tracking puck attached to it. Crucially, this volumetric tracking ability was included as a default part of the basic kit.
The result, is that HTC’s hardware has effectively defined the minimum viable product for VR as “room scale” – an experience which lets you get out of the chair, and walk around within a defined area. Not only can you look out in all directions, you can physically circumnavigate a virtual object, as if it were a physical object sharing the room. When combined with Valve’s SteamVR platform and store, this has created an entire turnkey hardware and software ecosystem.
From my recent experience of them, the Vive plus Steam is a product, not a tech experiment. This is a tool, not a toy.
First, some basic terminology for the purposes of this article:
XR: Extended / Extensible Reality – A blanket term covering all “reality” versions.
VR: Virtual Reality – XR in which the real world is completely blocked out, and the user is immersed in a completely computer generated environment.
AR: Augmented Reality – XR in which the real world remains visible, directly or via camera feed, and computer generated elements are added; also known as “mediated reality”.
GPU: Graphics Processing Unit – the part of a computer that does the work to generate the immersive environment.
eGPU: A GPU in an external case, usually connected via Thunderbolt.
More than a year after the Vive’s release, Apple used their 2017 World Wide Developers Conference to announce they were bringing VR to macOS, in a developer preview form.
For those of us in the creative fields who are primarily Mac-based, and have wondered “when can I get this on my Mac?“, Apple’s announcement would seem to be good news. However, there are fundamental differences between Apple’s product philosophy for the Mac, and the needs of VR users and developers. This raises serious concerns as to the basic compatibility of Apple’s product and business model, with the rapidly evolving first decade of this new platform.
When it comes to Apple and VR, the screaming, clownsuit-wearing elephant in the room is this: Apple has weak graphics.
This is the overwhelming sentiment of everyone I have encountered with an interest in VR.
The most powerful GPU in Apple’s product range, AMD’s Vega 64 (available starting with the AU$8,200 configuration of the iMac Pro), is a lowered-performance (but memory-expanded) version of a card which retails for around AU$800, and which is a fourth-tier product in terms of 3D performance within the wider market.
Note: Adding that card to an iMac Pro adds AU$960 to the price of the machine, which already includes the lower-performance Vega 56. In contrast, the actual retail price difference between a Vega 56 and a Vega 64 is around AU$200. Effectively, you’re paying full price for both cards, even though Apple only supplies you with one of them.
The VR on Mac blog recently posted an article lamenting “Will we ever really see VR on the Mac?”, to which you can only respond “No, not in any meaningful sense, as long as Apple continues on its current product philosophy”.
To paraphrase Bill Clinton: “It’s the GPUs, stupid”.
When you’re looking at VR performance, what you’re effectively looking at is the ability of the GPU to drive two high-resolution displays (one for each eye) at a high frame rate, with as many objects rendered at as high a quality as possible. Effectively, you’re looking at gaming performance – unsurprising, given a lot of VR is built on game engines.
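To put rough numbers on that, here’s a back-of-envelope sketch of the raw pixel throughput involved. It assumes the first-generation Vive’s published panel figures (1080×1200 per eye at 90 Hz); real engines also render at a higher internal resolution to compensate for lens distortion, so treat the result as a floor, not a ceiling:

```python
# Rough pixel-throughput comparison: first-gen HTC Vive vs. a 1080p/60 monitor.
# The Vive panel figures (1080x1200 per eye, 90 Hz) are the launch specs;
# distortion-correcting supersampling pushes the real workload higher still.

def pixels_per_second(width, height, hz, displays=1):
    """Pixels the GPU must shade and present each second."""
    return width * height * hz * displays

vive = pixels_per_second(1080, 1200, 90, displays=2)  # one panel per eye
monitor = pixels_per_second(1920, 1080, 60)

print(f"Vive:    {vive:,} px/s")
print(f"Monitor: {monitor:,} px/s")
print(f"Ratio:   {vive / monitor:.1f}x")
```

And unlike a game on a monitor, VR can’t gracefully drop frames – missed frames mean visible judder and, for many users, nausea.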
Apple’s machines’ (discrete) GPUs are woefully underpowered, and regularly a generation out of date when compared to retail graphics cards for desktop computers, or those available in other brands of laptops.
Most of the presenters at Immerse were using MacBooks for their slide decks, but none of the people I met use Apple gear, or seem to have any interest in using Apple gear to do VR, because, as I heard repeatedly, “the Mac has weak graphics”.
How weak is “weak”?
Looking at the GPUs available on the market – in terms of their ability to generate a complicated 3D environment and render all the objects within it in high quality, at the necessary frame rate – here they are, roughly in order of performance, with a price comparison. The price comparison is important, because it represents not just how much it costs to get into VR if you already have a computer, but how much it costs, roughly on an annual schedule, to stay at the cutting edge of VR.
Note: This excludes Pro GPUs like the Quadro or Radeon Pro, since they are generally lower performance in terms of 3D for gaming engines. The “Pro”-named GPUs in Apple’s products are gaming GPUs, and do not include the error-correcting memory that is the primary distinguishing feature of “Pro” graphics cards.
Nvidia Titan V: ~AU$3700. Although not designed as a gaming card, it generally outperforms any gaming card at gaming tasks.
Nvidia Titan XP: AU$1950
Nvidia 1080ti: ~AU$1100
Nvidia 1080 / AMD Vega 64: ~AU$850 (if you can get the AMD card in stock)
Realistically, the 1080ti should be considered the entry level for VR. Anything less, and you are not getting an environment of sufficient fidelity that it ceases to be a barrier between yourself, and the work. A 1080 may be a reasonable compromise if you want to do mobile VR in a laptop, but we’re not remotely close to seeing a Vega 64 in a Mac laptop.
So what does this mean?
The highest-spec GPU in Apple’s “VR Ready” iMac Pro is a 4th-tier product, and is below the minimum spec any serious content creator should consider for their VR workstation. It’s certainly well below the performance that your potential customers will be able to obtain in a “Gaming PC” that costs a quarter of the price of your “Workstation”.
The GPU in the iMac Pro is effectively non-upgradable. The AU$8-20k machine you buy today will fall further behind the leading edge of visual fidelity for VR environments every year. A “Gaming PC” will stay cutting edge for around AU$1200 / year.
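As a rough sketch of what that means over a machine’s life, here’s the five-year arithmetic. It uses the article’s own figures where they exist; the AU$2,000 base price for the PC tower is my assumption, purely for illustration:

```python
# Back-of-envelope 5-year cost comparison (AU$), using the article's rough
# estimates. The PC base-box price is an assumed figure for illustration.

YEARS = 5

# iMac Pro: one up-front purchase; the GPU is effectively non-upgradable,
# so the machine simply falls further behind each year.
imac_pro_cost = 8200

# Gaming PC: a base tower (assumed), plus a top-tier GPU swap each year.
pc_base = 2000        # assumed: case, CPU, RAM, storage, PSU
gpu_per_year = 1200   # article's estimate for staying cutting edge

pc_cost = pc_base + gpu_per_year * YEARS

print(f"iMac Pro (fixed GPU):   AU${imac_pro_cost:,}")
print(f"Gaming PC (yearly GPU): AU${pc_cost:,}")
```

Under these assumptions the outlay is roughly the same over five years, but only one of the two machines ends the period running a current-generation GPU.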
While the Vega 64 is roughly equivalent in performance to Nvidia’s base 1080 (which itself is significantly slower than the 1080ti), in full-fat retail cards it can require almost double the electricity needed to power the 1080.
Apple’s best laptop GPU, the Radeon 560, offers less than half the gaming 3D performance (which, again, is effectively VR performance) of the mobile 1080 – and you can get Windows laptops with dual 1080s in them.
Apple is not yet providing support for Nvidia cards in eGPU enclosures, and so far officially supports only a single brand and model of AMD card – the Sapphire Radeon RX580 Pulse, which is not a “VR Capable” GPU by any reasonable definition.
The consequences of this are significant.
GPU hardware performance, and the performance requirements of VR, are not going to plateau any time in the near future. A decade ago, computers became fast enough to do pretty much anything in print production – 300dpi has remained the quality of most print, and paper sizes haven’t changed. That’s not going to happen for VR in the next decade.
GPU progress is not going to hold itself to Apple’s preferred refresh and repurchase cycles for computers. The relationship content producers have with GPUs is, I suspect, going to be similar to the relationship iOS developers have with iPhones & iPads – whatever the top of the range is, they’ll need to have it as soon as it’s released. People aren’t going to turn over a several thousand dollar computer every year, just to get the new GPU.
By Apple’s own admission at WWDC, eGPU is a second-rate option, as compared to a GPU in a slot on the motherboard. A slotted card on the motherboard has potentially four times the bandwidth of a card in an external enclosure. For a user with an 11-13″ microlight laptop, eGPU is a good option to have VR capability at a desk, but it’s not a good solution for desktop computers, or for portable VR.
While Nvidia’s mobile 1080 has been an option in PC laptops for some time now, and offers performance comparable to its full-fat desktop version, AMD (and by extension Apple) seems to have nothing comparable (a mobile Vega 64) on the horizon for Macbooks.
There are, therefore, some really serious questions that need to be asked about Apple’s priorities in using AMD for graphics hardware. Overall, AMD tends to be marginally better for computational GPUs – in other words, GPUs that are used for non-display purposes. For realtime 3D environments, Nvidia is significantly ahead, and in mobile, represents the capability to do VR at all.
If the balance of computation vs 3D “gaming” performance means computation is faster, but VR isn’t possible, then it really starts to feel like back in the days when the iMac went with DVD-ROM while everyone else was building around CD burners.
Apart from operating system changes relating to driving the actual VR hardware, Apple’s “embrace of VR” was more or less devoid of content on Apple’s part, in terms of tools for users.
Apple’s biggest announcement regarded adding “VR support” to Final Cut Pro X. As far as I can see, this is about 360 video, not VR. This needs to be emphasised – 360 Video is not VR. It shares some superficial similarities, but these are overwhelmed by the fundamental differences:
360 Video is usually not 3D. It’s effectively just video filling your field of vision.
360 Video is a passive medium. While you can look around, you can’t interact with the environment, or move your viewpoint from a fixed location.
In contrast, VR is:
a place you go to,
a place you move about in, and
a place where you do things.
VR is an activity environment, 360 Video is television in which you can only see one third of what is happening, at any one time.
The power of VR is what you can do in it, not what you can see with it.
For example Tvori:
And for a more nuts & bolts vision of actually working in VR:
This is using a 3D VR workspace to create content that will be played on a 2D screen.
This is important – the future of content creation when it comes to VR is NOT going to be based upon using flat screens to create content that can then be viewed on VR goggles. It’s the other way around – we’re going to be using VR toolsets to make content that will be deployed back to 2D platforms.
All of the current development and deployment environments are inherently cross-platform. It’s unlikely that anyone is going to be making macOS-specific VR apps any time in the near future. That’s a self-evident reality – the install base and market for VR-capable Macs is simply too small, and the install base and market for VR-capable PCs too large, to justify not using an application platform that allows for cross-platform applications. VR does not have the problem of a cross-platform app feeling like a second-rate, uncanny-valley facsimile of a native application. In VR, the operating system conveys no “native” UI paradigms; it’s just a launcher – less than that, in fact, given that Steam and Viveport handle launching and managing apps. It’s a glorified BIOS.
This is not going to be a replay of iOS, where Apple’s mobile products were undeniably more powerful, and more capable than the majority of the vastly larger market of Android and Windows Mobile devices, and were therefore able to sustain businesses that could ignore other platforms. VR-capable Macs are smaller in market, less-capable as devices due to weak graphics, higher in price to buy, and radically higher in price to maintain relative performance, than VR-capable PCs. As long as this is the case, the Mac will be begging for scraps at a VR table, where Windows (and eventually Linux & SteamOS) will occupy the seats.
The inherent cross-operating-system metaplatform nature of Steam reflects a growing trend within the Pro software market – formerly Mac-only developers are moving their products to be cross-platform, in effect, making their own technologies the platform, and relegating macOS or Windows to little more than a dumb pipe for commoditised hardware management.
One of the recent darlings of the Apple world, Serif, has taken their Affinity suite of design, imaging and publishing apps across to Windows, as have Macphun, who’ve renamed themselves Skylum, and shifted their photography software cross-platform. In the past, developers had marketed their products, based on the degree to which they had embraced Apple’s in-house technologies as the basis of their apps – how “native” their apps were. These days, more and more are emphasising independence from Apple’s technology stack. The presence of the cross-platform lifeboat is becoming more important to customers of Pro apps, than any advantage brought by being “more native”. The pro creative market, by and large, is uncoupling its financial future from Apple’s product strategy. In effect, it’s betting against that strategy.
What does Apple, a company whose core purpose is in creating tasteful, consistent user interface (however debatable that might be these days), have to offer in a world where user environments are the sole domain of the apps themselves, and the operating system is invisible to the user?
Thought exercise, Apple & Gaming:
Video and cinema have always been considered a core market in which Apple had to invest. Gaming (on macOS) has always been a market that Apple fans have been fine with Apple ignoring. The argument has always been about the economics and relative scale of each. It’s worth bearing in mind, however, that the size of the games market and industry dwarfs the cinema industry.
Why is it ok amongst Apple fans, Apple-centric media, and shareholders, for Apple to devote resources to making tools for moviemakers / watchers rather than directing it at game developers / players?
When Apple cuts a product, or restricts the versatility of a product under the guise of “focus”, there’s no end of people who’ll argue that Apple is focussing on where the profits are. Mac sales are relatively stagnant year over year. Gaming PCs – or, as they’d be called if Apple sold them, “VR Workstations” – have been consistently growing in sales at around 25% year over year for a while now.
Windows’ gaming focus and games ecosystem are co-evolutionary with VR. It is the relentless drive to make Windows as good as possible as a gaming platform that makes it the better VR platform. No amount of optimisation Apple can do with Metal, their 3D infrastructure, can make up for the fact that they’re shipping sub-standard GPUs in their devices.
“High spirits are just no substitute for 800 rounds a minute!”
Apple’s WWDC VR announcements seem to have had very little impact on people who are using, and making with, VR now. No one I spoke to at Immerse seemed particularly excited about the prospect of Apple getting into the space, or seemed to think Apple had anything in particular to offer. If you look at what Apple did to professional photographers by neglecting, and then dumping, their Aperture pro photo management solution without providing a replacement (and no, Photos is not that), that wariness is well-justified.
What Immerse really opened my eyes to is that VR is very probably a black swan for Apple, who have spent the last 5 years eliminating from their product philosophy the very things that are central to powering VR – motherboard PCI slots, the associated retail-upgradable GPUs, and the entire culture of 3D performance focus.
VR is an iceberg, and Apple, no matter how titanic, will not move it. The question is whether the current design, engineering and marketing leadership, who have produced generation upon generation of computers that sacrifice utility and customer-upgradability in the pursuit of smallness, are culturally capable of accepting that fact.
Hey, if you liked reading this, maybe you’d like to donate?
So, 2017. It’s been a year of ups and downs. More downs than ups, but that gives 2018 a lot more room to improve.
My knee had a setback this year, but is slowly on the mend after a visit to a surgeon who suggested a new exercise regime.
I had a change in an immuno-modulating medication, from requiring an injection every second day for the past 12 years, to one every 2 weeks. So that’s a pretty significant improvement.
Friends & Family
I was able to catch up with two of my dear friends in Melbourne this year. As nice as it was to see them, I was down to visit my father who’s battling cancer, so the trips were tinged with melancholy.
I still miss all my peeps in Sydney (and the amazing gözleme at Marrickville Markets). As nice as Noosa is, it’s somewhat isolating if your thing isn’t surfing. I’ve been going to a few meetups in the Sunshine Coast area for people interested in VR & video game development. I might just have to get used to travelling to neighbouring towns a bit more, which I did a fair bit of this year, taking day-trips out to various small towns in the area.
I balance that against the sheer natural beauty of this area. Earlier in the year, after driving 5 minutes from home, I was able to see humpback whales – an adult and a calf that had been overnighting in the bay. I didn’t have to go out in a boat, just walked a few steps from where we parked the car. I was able to see all kinds of marine life walking to the river at the end of my street. On Christmas day we had ducks from the river wandering about in our driveway.
We were also hit by the tail end of a cyclone, which was interesting. You get a real glimpse into the heavy-weather future here.
Art & Culture
I saw a bunch of interesting performances this year: standup comedy by Jimeoin, an amateur musical version of Jurassic Park, a live performance of the British podcast My Dad Wrote a Porno, Damian Cowell’s Disco Machine, and, while I was in Melbourne, a trip to the NGV to see a big Hokusai retrospective.
There’s a little rant building there, because this trip to Melbourne made me see a side of that city I’ve never felt before – an unjustified self-importance that manifests in a reflexive need to tell tourists from other parts of Australia that they’re finally going to be able to get some (cultural item), now they’re in Melbourne. The simple truth is that there’s nothing in Melbourne – not culture, not food, not interesting little bars – that can’t be found somewhere else with better weather. The NGV in particular has a stupid “no professional cameras” policy, which means they try to stop you taking a DSLR into exhibitions. If a publicly funded gallery isn’t supposed to be a place for artists to investigate and document works of art, just what is its function?
I spent more time in Brisbane this year, and have developed a real affection for it as a city. It’s not crowded in the way that Sydney and Melbourne are, and the multitude of radically different bridges over the river gives it a quirkiness I dig.
A big event this year was an attempt to get one of my sculptures installed on the grounds of the local Men’s Shed. This was a significant professional undertaking, involving coordinating with the Men’s Shed leadership and the local water utility who own the land. I photographed the site, created a pitch document showing how an unused scrap of land would be made into a feature for the entrance to the site, and laid out how it would all be done at no cost to either organisation. Everyone seemed quite keen; then the water utility told me that someone in the Men’s Shed leadership, who had been away when I was conducting initial meetings, had told them that the Men’s Shed wouldn’t be supportive of the proposal if it came to a vote amongst their leadership. That’s left a bit of a bad taste in my mouth.
I continued to shoot photos throughout the year, my camera being one of the few creative instruments available in the absence of studio space. Updates for Surfing The Deathline came out, to a point at which all the niggling little issues I had with some of the older artwork appear to have been fixed. I also signed up for a TIG welding course, which will begin in February 2018, and should allow me to get my metal sculpture mojo back. It’s much neater than the MIG process I’ve used previously, and can potentially be done at home.
The most affecting artistic endeavour this year, however, goes to my first experience of freestanding VR.
Tools & Tech
VR is my new religion; it’s where I want to spend my computing time. Drawing and painting in a three dimensional environment is so profound, it left me almost in tears. Hand in hand with this is a profound loss of faith in the ability of Apple to keep providing products I want to use. I’ve written about what a bad fit Apple’s hardware model is for VR, which requires regular user-upgrades of graphical hardware, but I’ll add to that that nothing from Apple has improved my enjoyment of using their devices in the last couple of years – quite the opposite. Every update they make makes the products less reliable, less pleasant to use, and breaks compatibility with other products, forcing you onto a never-ending merry-go-round of upgrades, so that you never have a stable set of systems where everything works together. I’m left asking myself “what do I actually get from paying a premium for Apple gear, if it’s not any more stable, or pleasant to use, than the alternative?”. More than the implementation, I find myself increasingly dissatisfied with the philosophy behind Apple’s products – more and more, these are products which reflect the decision-making people who create them: inhumanly wealthy, able-bodied people with unlimited bandwidth. That’s not me, and increasingly, Apple’s products are losing the ability to serve anyone who doesn’t want to sign up for a world of sealed, non-upgradable appliances, where the software is always a semi-functional work in progress.
It may be that my future is in Windows or Linux workstations, especially since all my professional software is cross platform these days.
The really big tech thing this year has been the arrival of the NBN where I live. We’ve gone from the fastest possible connection we could get – 7/1 (down/up), which would flake out and fail whenever we had sustained rain – to 107/44, which has been rock solid through the worst of weather. We also ditched our previous provider, Telstra, for a small company who, for the same price, don’t offer any “unlimited” plans, so peak-hour congestion is largely unnoticeable. Next to my change of medication, knowing I’ll never have to speak to one of Telstra’s “support” people ever again is one of the happiest changes this year has brought.
In photography, I finally bought a speedlight – something I’ve wanted for a while now, so that I could have lighting while I’m out and about. It’s an interesting piece of ahead-of-the-curve technology, using a lithium-ion battery rather than AAs, and having the ability to be driven wirelessly.
Closing out the year, I finally bought a travel tripod from a company I’ve been following for a number of years. They’re another small outfit, who try to engineer their way into punching above their weight. It hasn’t been delivered yet, but hopefully it’ll get here soon, and I’ll be free to do a bit more photography-oriented hiking.
Some terminology for the purposes of this article:
XR: Extended/Extensible Reality, or possibly just (x)R – an umbrella term covering all forms of simulated and mediated reality. (note: let’s agree to pronounce the “x” as a “z” like xylophone, so XR sounds like the bad guy from The Last Starfighter)
VR: Virtual Reality – a form of XR characterised by blocking out of the “real” world, providing a total immersion in a wholly simulated environment.
AR: Augmented Reality – XR in which the real world remains visible (either directly, or via a camera feed), and computer generated elements are added to mediate reality. (and sounds like a pirate noise)
GPU: Graphics Processing Unit – the card / components that drive the visuals of the VR experience. Usually a dedicated card in desktop computers, but built into the motherboard on many laptops.
eGPU: External GPU – A GPU in an external case, usually connected by Thunderbolt to the main system.
A few weeks ago, here in the sticks of regional Australia, we had a little conference day (immerseconf), with internationally practicing artists from all over the country (including the head of HTC Vive in Australia), demoing how various forms of Extended Reality are being used by artists to create content.
Interestingly, while there was a “serious games” (training & education simulations) discussion, traditional entertainment videogames weren’t covered – this was a conference targeted at makers, and the toolsets available to them for creating. This shouldn’t be taken as indicating the experience was dull – delight and joy are inherent to the experience of doing work in VR.
I’ve been reading about and waiting for this tech since the 1980s. Last time I tried it, a couple of years ago, the head-mounted display (goggles) was an Oculus devkit, and interaction was via a PlayStation controller.
I was ill within a minute.
A theory of why this happens, is that it’s a result of lag between moving your head, and seeing the corresponding movement of the virtual world through the goggles. With the Vive, that problem is solved – the viewpoint is stuck fast to your proprioceptive experience of movement. Lag is gone, you are there.
For an artist, the experience of VR marks a division between everything you have done, learned or experienced in art-making prior, and what you are to do afterwards. It is as redefining an experience as postulated in Crosley Bendix’s discussion of the “discovery” of the new primary colour “Squant”.
In my life, I have been literally moved to tears once by a work I saw in an exhibition – Picasso’s “Head of a Woman”. Why? I had studied this work, part of the canon of historically important constructed sculpture, for years at art school. I’d written essays concerning, and answered slide tests about it. However, every photo I had seen was in black and white. I finally saw it in the flesh at an exhibition, and out of nowhere found myself weeping at the fact that I had never known what colour it was painted. Nothing I had read, or studied, prepared me for the overwhelming emotional impact of meeting it, face to face, and realising that I had not known something as fundamental as its colour.
Of all the great leaps in art making that Picasso was personally involved with, it was his collaboration with Julio González that more or less invented welded steel sculpture. He did this primarily out of a desire to be able to “take a line for a walk” in three dimensions – to draw with thin metal rod, the only material whose structural strength could span distance without thickness or sagging.
In VR, free-standing, able to walk about with multi-function hand controllers in an entirely simulated, blank environment, I was once again almost in tears at how profound the experience of this tech is for artmaking. One can literally take a line for a walk, twist it, loop it around itself, trace out the topology of knots, zoom out, zoom inside, and see that three dimensional drawing as a physical object, hanging in the air.
The tools I played with were from Google – Blocks, a simple 3d modelling program, and Tilt Brush, a drawing and painting program (which is also a 3d modeller – it just models paint strokes, and so produces flat ribbons of paint, that follow the 3D orientation of the controller when you make them). They’re reasonably primitive compared to traditional 2D painting and modelling apps, but there’s clearly a commercial space for selling tools for VR.
Just watch this. That’s the actual experience of creating and working in Tilt Brush.
Why would you want to use a screen-based 3d modeller?
Speculation, based on Observation:
The authoring environment for VR content, is VR.
After 3D modelling, or drawing in VR, you’ll never want to model or paint on a screen again. The idea of not having a direct 3D experience while creating just becomes nonsensical. As for Tilt Brush, there’s no 2D equivalent; I’m not sure there’s even a way to think about how Tilt Brush would work in 2D.
Don’t think about VR as a way to preview things you make on screen – making things in VR is so compelling, you will want to change the way you work, or change the sort of work you do, to get as much as possible into the immersion.
360 Video is probably going to end up being a niche or gimmick, like 3d television.
The very clear sense I have after this encounter, is that 360 Video (which I first saw demonstrated 16 years ago at the QTVR Forum at Macworld New York) is an attempt by an old, established artform (and players within that artform), to annex a new format for itself, regardless of whether it is appropriate for that new format. If all you have is a hammer, you treat everything as if it were a nail.
Outside of video art (time-lapses of locations, or documentary), 360 video may be a way to make video skyboxes and motion backgrounds – at least until software can make them more effectively than a film crew can shoot a real location, which, if you look at any modern film, it can already do.
Video’s monopoly on “real” will not survive the growth in quality of simulation, which carries with it true interactivity. Why experience a 360 video version of surfing, when you can have a photoreal simulated surfing experience, in which you do more than control the direction you’re looking, and can have it on that water planet in Christopher Nolan’s Interstellar?
If 360 video fundamentally changes its nature, becoming something in which the narrative progression is reactive to the directed attention of the viewer, perhaps there’s a possibility there, but isn’t that just a video game with the skill tests removed?
Otherwise, how do you get a jumpscare to work, if the viewer is never looking in the direction of the monster? Interactivity and moving around within a place is VR’s point. 360 video is about being a fixed point. Think of it as similar to the way focus-pulling and depth of field are fundamentally incompatible with 3d cinema – viewers can struggle against the director’s chosen point of focus, trying to see unfocussed objects they physiologically understand they should be able to “grip” with their eyes and pull into focus.
There are also issues with the physics of optics, revolving around how panoramic images are captured, that make stereo separation with 360 video fundamentally problematic.
Using VR headsets to screen non-interactive, immersive stereoscopic 3D video (in other words, you only see what the single direction paired cameras are pointed at) would certainly seem to have a future, given the pornography industry has adopted it for the Point Of View genre.
VR is a platform, not a peripheral.
This is huge – Mac, Windows, Linux – all of these are irrelevant; you’re simply not going to interact with the host OS to any meaningful degree. The operating system of the computer merely serves as the loader for the VR environment. You’ll have no more cause to interact with macOS or Windows than you do to interact with your computer’s firmware. Tilt Brush will look like Tilt Brush, regardless of what operating system it is running on. Look at Adobe’s clear strategy to nullify the operating system as a differentiator, and get their users to think of their computers as “Creative Suite Workstations” rather than “Macs or PCs running Creative Suite”. VR will be even more so.
Everything is up for grabs as new paradigms for fundamental control schemes are solidified. Think how revolutionary the first pull-down menu was; that’s the sort of world VR is in. In Blocks and Tilt Brush you can already see UI paradigms that are perhaps overly literal. Multi-sided, rotatable physical palettes wrapped around the controllers are in vogue, but why? Why not have the equivalent of a 30″ monitor, offset 45 degrees, full of palettes, that appears in response to a button press, then goes away again? Or why not a literal wheeled toolbox that follows behind you? The physicality of creating in VR is a very different working experience to sitting at a desk.
The GPU is everything.
VR computers are just a host system for the GPU (graphics card). A non-upgradable GPU, or a system that can’t be traded up for the market retail cost of a GPU, is a laughable idea, truly laughable. Once you use one of these systems and see how good it is – but more importantly, how much better VR is going to get in the near future in terms of graphical fidelity, with the retina-scale upgrade to headset display densities soon to arrive – the thought of having to replace a whole computer just to stay cutting edge is unthinkably stupid.
To put that in perspective, no matter what manufacturers claim, Nvidia’s 1080ti is the minimum graphics card you need to create a simple virtual environment of sufficient fidelity that you’d want to spend all day working within. That is the standard you have to show people, so they can think “this is here and I want to use it”.
The 1080ti is the second-fastest GPU Nvidia offers in terms of 3D gaming performance, which is the effective measure of how well the immersive environment will perform.
The 1080ti is around 30% faster than the fastest performing GPU AMD makes (Vega 64), for a significantly lower power draw and heat output.
Numerous developers, including HTC themselves, were demoing on laptops with Nvidia graphics – none of which required eGPUs. HTC’s laptop was subtly lower fidelity than the desktop machines, but not by a lot.
AMD graphics cards were nowhere to be seen. Every tower machine (which were bigger than my cheesegrater Classic Mac Pro, and mostly full of empty space) was team green (Nvidia).
VR has a huge future in healthcare.
Hospitals here are permanently installing Vive trackers in the children’s wards, so bedridden kids can go participate in networked virtual environments with other kids, and not be bored / confronted with the reality of being in hospital.
VR is being used for rehabilitation – gamifying physiotherapy rehab exercises, for example, to ensure they’re done correctly, and to relieve the monotony of repetition-based therapy.
Food for Thought, AR vs VR:
There is a school of opinion which holds that AR is the “good” version of XR, that VR is a niche for games, and that the goggles etc. required for immersion make VR inherently not a thing for the everyperson.
I have a different take on that. I think that AR would seem to be the “good” version of XR, vs full immersion VR, if you’re the sort of person whose socioeconomic status means your life is the sort of life from which you would never want to seek an escape. AR is lovely, if you’re able-bodied, rich, have a nice house, and a job with sufficient seniority that you have your own office and can shut out distraction.
In other words, if you’re employed with any sort of decision-making authority at a large tech company.
If you live in a tiny apartment or room in a sharehouse, or have a disability whose profundity stops you going out to access experiences, or work in a place where you can’t tune out visual distraction, in other words, if your life isn’t already the sort of 1%er fantasy that most people would like to escape to, then perhaps AR isn’t that compelling in comparison.
From that perspective, AR that does not have a “shut out the real world” function isn’t a complete solution – it’s not the whole story.
By the way, as for saying the goggles are inconvenient – go speak to anyone who does any sort of manual trade work. VR goggles are no more inconvenient than having to wear safety glasses, gloves, steel-capped boots, ear muffs, a respirator, or a welding helmet. Just because it’s less convenient than what an office worker is used to doesn’t mean a lot – if I can sketch in 3D before I go out into the welding bay, that’s a huge convenience factor.
My encounter with Vive leaves me with mixed emotions. I am absolutely going to be gearing up for VR. You simply can’t try this tech, and then not move to make art with it. VR is here, and it is now. It is a complete, usable product with both entertainment, and work tools, not an early-access developer preview.
A lot of the coverage I’ve seen of VR, from people who perhaps don’t understand the sheer amount of heavy lifting necessary to drive the experience, centres around ideas like “wait until the PC isn’t required“. That isn’t going to happen, or rather, that’s going to be a sub-standard experience – a better packaging of current smartphone-based VR. The PC to drive VR isn’t going to go away, because the progress to be made in the medium, the complexity and graphical fidelity has so much room for growth that enthusiasts will keep asking for more, and content creators will have to keep up in order to feed that cycle.
Local Australian pricing has the Vive setup for about a thousand dollars, an Nvidia 1080ti for about another thousand, but what to do for a computer to run that rig?
Does Apple have a solution that lets me stay on the Mac, or do I jump to Windows, and begin the inevitable migration of all my Pro software (which is niche enough that it HAS to be cross platform) and production processes across to Windows versions?