Speak of the Devil VR — PART 3: Post Production Workflow Madness

Matthew Celia
Mar 6, 2018

Last week, in Part 2, I detailed some of the amazing work that went into the production of Speak of the Devil. Production is funny because the final shot feels like you’ve reached the end of something, but in reality, it’s just a chapter and there is still so much more work to do. Post production is where everything starts to come together.

If you are just catching up with us, I highly recommend checking out the first two parts about the planning and filming of Speak of the Devil.

And then, be sure to watch our trailer!

The 2D trailer for Speak of the Devil; you can find the 360 version here.

We had an incredibly ambitious goal of having something out by Halloween. Let me be the first to admit that the timeline in our heads was ridiculous and naive. We frankly had no idea how much work we had ahead of us. Obviously we missed that deadline by several months; I'll cover the reasons why in a future article.

This part gets a little technical, because creating VR content IS really technical. The tools are very new or non-existent. Workflows are barely established, and we don't have 100 years of history to help guide decisions. In short, asking the internet a question rarely turns up the correct answer, because few, if any, teams are attempting a project on this scale. To my knowledge, Speak of the Devil is the first truly large scale mesh narrative in VR. If you know of others, please share!

We had to change a lot of our workflow and constantly adapt to changing deadlines and technical hurdles.

This was our first whiteboard, which was mostly right, but lacking some details. It also didn’t take into account the complexities of having multiple scenes in various stages of completion.

A WHOLE LOT OF DATA

For Speak of the Devil, we were lucky enough to be in the Google Jump Start program, which gave us access not only to the camera but also to Google's Jump Cloud stitching. There are some very smart people at Google who have designed one of the best automatic stitchers out there.

We had shot approximately 6TB of footage that needed to be uploaded to the cloud for stitching. That's a ton of data. Thankfully, we were able to head to YouTube Space LA and utilize their gigabit upload pipe (thanks Libor!). It took about 24 hours to send everything up to the cloud. Amazingly, we started receiving notifications that the stitches were completed and ready for download between 12 and 36 hours after those shots were uploaded. Every single shot from the over 7.5 hours of material was ready in two days. That's incredible, and a huge reason why the Google Jump camera is so popular. Of course, it took us another 24 hours to download it all!
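For a rough sanity check on those numbers, here is the back-of-the-envelope math (assuming a fully saturated gigabit link, which real-world uploads never quite hit):

```python
# Back-of-the-envelope upload time for ~6TB over a gigabit connection.
footage_bits = 6e12 * 8              # ~6TB of footage expressed in bits
link_bps = 1e9                       # 1 Gb/s, the theoretical ceiling
ideal_hours = footage_bits / link_bps / 3600

print(f"Ideal upload time: {ideal_hours:.1f} hours")   # ~13.3 hours
# Protocol overhead, per-file handshakes, and throttling roughly double
# that in practice, which lines up with the ~24 hours it actually took.
```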

From the cloud, you receive an H.264 over-under file with dimensions of 5760x5760, running at 59.94fps with 4:2:2 chroma subsampling. Adobe Premiere Pro does not like these files… at all.

We needed to transcode all of our footage right away into something that Premiere Pro would have a much easier time working with. We elected to create our own proxy files as DNxHR LQ 8-bit files at a resolution of 2048x2048. Upon reflection, I would have generated higher quality proxies, since we weren't able to see the small details in the forest, which meant we didn't catch some errant crew members until very late in the workflow. Talking with the Jump engineering team, we also figured out later that you can have Jump create the proxies and use Adobe Premiere's proxy feature to link up to them.
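If you're curious what that transcode step looks like in practice, here's a minimal sketch driving ffmpeg from Python. The paths are placeholders and the exact DNxHR profile flag is an assumption (this isn't how we actually generated ours), but the general shape is the same: scale the 5760x5760 over-under master down to 2048x2048 and wrap it in an edit-friendly codec.

```python
import subprocess
from pathlib import Path

SOURCE_DIR = Path("/Volumes/jellyfish/sotd/jump_stitches")  # hypothetical paths
PROXY_DIR = Path("/Volumes/jellyfish/sotd/proxies")

def make_proxy(src: Path) -> None:
    """Scale a 5760x5760 over-under stitch down to a 2048x2048 editorial proxy."""
    dst = PROXY_DIR / (src.stem + "_proxy.mov")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-vf", "scale=2048:2048",     # proxy resolution for Premiere Pro
        "-c:v", "dnxhd",
        "-profile:v", "dnxhr_lb",     # low-bandwidth DNxHR profile, 8-bit 4:2:2
        "-pix_fmt", "yuv422p",
        "-c:a", "copy",               # keep the camera audio for sync
        str(dst),
    ], check=True)

for clip in sorted(SOURCE_DIR.glob("*.mp4")):
    make_proxy(clip)
```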

Aside from the editorial proxies, we also needed all used footage transcoded into high quality frame sequences for work in Nuke, where we would do our top and bottom stereo compositing. We had planned on using DPX files, since our Mistika monoscopic VR pipeline uses them and we find it rock solid. However, a quick calculation of our running media, the number of versions and intermediates we would need, and some headroom for mistakes in our workflow (which we knew we would make at some point) led us to believe we were going to need over 100TB of storage. Holy hell!
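The math behind that number is simple but sobering. Here is roughly how the estimate shakes out; the frame size is real, but the minutes-of-footage and version multiplier below are illustrative assumptions rather than our exact tallies:

```python
# Rough storage estimate for DPX frame sequences at our working resolution.
width, height = 5760, 5760        # over-under stereo frame (both eyes stacked)
bytes_per_pixel = 4               # 10-bit RGB DPX packs three channels into 32 bits
fps = 60

frame_mb = width * height * bytes_per_pixel / 1e6     # ~133 MB per frame
minute_tb = frame_mb * fps * 60 / 1e6                 # ~0.48 TB per minute

used_minutes = 120                # illustrative amount of "used" footage
version_multiplier = 1.5          # intermediates, versions, and headroom

total_tb = minute_tb * used_minutes * version_multiplier
print(f"~{frame_mb:.0f} MB/frame, ~{minute_tb:.2f} TB/min, ~{total_tb:.0f} TB total")
```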

Enter Lumaforge to save the day. We already use their 30TB Jellyfish Mobile as our main shared storage in our office. We needed them to help us with something much bigger. What I love about Lumaforge is that it is a company founded by indie filmmakers just like us. They understand the grit and hard work it takes to create something that pushes the envelope. They also know how to make awesome tools that made our lives easier.

This giant shared storage server from Lumaforge is what made Speak of the Devil even remotely possible to finish in 5.7K at 60fps!

We hooked up all three of our workstations to a 125TB tower they brought over. This meant we could be editing on one machine, organizing on another, and exporting on a third. Or, at the end of the day, we could queue up renders on all three machines and have a chance of being finished. The time savings were enormous, as was being able to work from storage running upwards of 800 MB/s read and write.

I don’t think our machines stopped working 24/7 the entire time we were in post on Speak of the Devil.

However, we quickly discovered that not only would we have needed a tremendous amount of space, but we would also have needed an even faster pipe than 10GbE could offer. After a few scenes, we decided that DPX just wasn’t going to offer any advantages and switched to Zip16 OpenEXR. There were so many formats to consider, but this is a good standard, offering much smaller file sizes and rock solid workflows with Nuke, which would be a major part of our pipeline. Even with the switch to a more compressed format, our final project ended up taking about 110TB of space.
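In Nuke terms, the switch was just a different Write node setup. Here's a sketch of what that looks like via Nuke's Python API; the output path is a placeholder, not our actual folder structure:

```python
# Run from Nuke's Script Editor: a Write node configured for Zip16 OpenEXR.
import nuke

write = nuke.nodes.Write(name="Write_EXR_Zip16")
write["file"].setValue("/Volumes/jellyfish/sotd/renders/shot_010/shot_010.%06d.exr")
write["file_type"].setValue("exr")
write["datatype"].setValue("16 bit half")            # half float keeps files small
write["compression"].setValue("Zip (16 scanlines)")  # the "Zip16" in question
write["channels"].setValue("rgba")
```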

With our house in order, it was time to get down to tackling all these scenes.

EDITING

We brought in Chris Willett as our lead editor for this project. We’ve collaborated a bunch before and not only is he extremely organized, he’s also always game for creating something different.

Chris, checking his work on the Rift as he puts together the opening nightmare scene.

We started by simply creating all the empty spots in the forest, choosing the best 15 or so seconds to create an endless loop. It became obvious very quickly just how many locations and shots there were going to be and we needed a way to track progress.

I wrote a Medium article about using Kanbanchi, which is a service very similar to Trello, but with deeper Google Drive integration. Go read that here. The TL;DR is that Kanbanchi proved invaluable for tracking shots (and notes) through our whole pipeline, as well as for collaborating with VFX and sound.

With so many shots to track, Chris and I settled on a pipeline to make sure everything went as smoothly as possible. Every location had a folder that contained a sequence with every component (equirectangular, plate shots, etc.) broken out and synchronized. Most of the plate elements were turned off, and the sequence was then nested into an edit sequence where we could add dissolves and color effects for the deaths. We had to make sure that our cutscenes transitioned seamlessly into the loop scenes.

What became extremely obvious was that it was very hard to tell whether the scenes were working for two reasons:

  1. Final audio was going to play an enormous role in successfully creating the mood.
  2. We weren't able to really experience the narrative until we put these scenes into a game engine and worked out the bugs.

This meant that the quality control process we were so used to was turned upside down, and we were going to have to integrate the game engine programming side of things as quickly as possible in order to confirm that everything fit together and played how we envisioned it when writing. Having never done a project like this before, that was hard for us to wrap our heads around.

12 WAYS TO DIE

The death scenes were the most fun and also some of the more challenging creative choices we had to make. With 12 unique deaths, creating enough variation to stay interesting was tough.

Some of the deaths we captured on set were falling flat in the edit bay. Frankly, having the demon yell at you wasn't really enough to incite fear, so we needed to amp it up. Chris and I played around with timing and sound. We even had a good opportunity to crack open our copy of Mettle's Mantra VR.

Mantra VR was used to make the world warp and distort as if the sound waves from the Windego were impacting you. We thought it added a lot to the death.

My favorite deaths are the two scenes where Lindsey and Brian vomit flies into your face. For this effect, we turned to Josiah Reeves from Occlusion VFX to help model the flies in 3D and composite them into the scene.

This is the death that creeps me out the most.

It was by pure coincidence that our 12 deaths plus one way out equalled 13. We thought it a fitting number for our project.

HORROR FILMS ARE ABOUT WHAT YOU HEAR

In a horror film, sound design is one of the most important elements in building tension. If you ever feel like you're getting too scared, don't close your eyes; turn off the sound!

For Speak of the Devil, we reunited with Eric Wegener who worked with us on our first 360 horror project, Paranormal Activity: The Ghost Dimension. I’ll let him explain his creative process:

I needed to constantly balance between the creative, design side and the literal, technical side of the soundscape. The amount of material needed was staggering, with 50+ unique locations in the forest, each one involving multiple ambient looping backgrounds and cutscenes. So the big challenge in that regard was to make each spot unique and interesting, but to make the whole forest feel real, cohesive and connected.

One way we did this was by enabling the player to hear distant sounds from adjacent locations, which not only makes for a creepy and realistic spatial effect (i.e. unseen characters yelling for help nearby), but draws the player toward certain locations in order to advance the story.

Once we established the realistic, literal sounds of the forest, we engineered methods to add more spooky, abstract layers on top, to try and mess with the player’s head and freak them out. These sounds are mostly made up of creepy tonal elements, whispers, breaths, laughing, crying, satanic chants, etc. They are triggered either as “looping beds”, once the player experiences certain events in the story, or as “random one-shots”, which are totally unpredictable and can happen at any time.

This combination proved extremely effective at steadily ramping up the feeling of isolation and dread as you progress, while the added randomization really keeps you on edge from start to finish, and across multiple playthroughs.
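To make the "random one-shot" idea a little more concrete, here's a toy sketch of that kind of trigger logic. This is purely illustrative Python with made-up sound names, not code from the actual experience (the Unity side of things is covered in Part 4):

```python
import random
import time

ONE_SHOTS = ["whisper_01", "breath_02", "distant_laugh", "chant_fragment"]  # hypothetical names

def random_one_shots(min_gap=20.0, max_gap=90.0, play=print):
    """Fire a random creepy one-shot at unpredictable intervals, forever."""
    while True:
        time.sleep(random.uniform(min_gap, max_gap))  # unpredictable silence
        play(random.choice(ONE_SHOTS))                # any sound, any time

# random_one_shots()  # in the experience, this runs alongside the looping beds
```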

Panning around using the Facebook Spatial Audio Workstation.

We loved working with Eric throughout this process, and the immersive sound is one of my favorite parts of the whole narrative.

FINISHING THE WORKFLOW

We love shooting on the GoPro Odyssey, but one thing we don't like is that it doesn't capture the top and bottom of the sphere. Robert and I were going for maximum immersion, and looking down or up and seeing a blank area just wasn't going to cut it. That meant we needed to take our plate shots of the zenith and nadir and composite them back into 3D space. This is not an easy task, especially for the number of shots we had.

We knew from our experience working with the Odyssey on Immerse that we'd have to use Nuke to get the highest quality results.

We worked with Jeremy Vanneman from Treehouse Studios as our stereoscopic supervisor. We wanted to get started as soon as possible, but since we were still in editorial, it wasn't feasible to send over a giant drive with all the footage, especially given how large a drive we would have needed.

Stepping through all the nodes Jeremy used to composite top and bottom frames on over 110 shots.

Instead, what we did was create cards in Kanbanchi and attach a still frame of the equirectangular along with stills of the top and bottom plates. Jeremy would composite those together using Nuke and then send us back the Nuke script. We would then reconnect to the high resolution frame sequences and render. I think our shortest render was 8 hours and our longest something like 75 hours.
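Reconnecting Jeremy's scripts to the full-resolution media was mostly a matter of repointing the Read nodes. Here's a hedged sketch of the kind of snippet you'd run in Nuke's Script Editor for that; the path convention is a made-up example, not our real folder structure:

```python
# Run from Nuke's Script Editor: repoint every Read node from the still-frame
# previews back to the full-resolution EXR sequences on the shared storage.
import nuke

STILLS_ROOT = "/path/to/still_previews"         # hypothetical
FULLRES_ROOT = "/Volumes/jellyfish/sotd/exr"    # hypothetical

for read in nuke.allNodes("Read"):
    path = read["file"].value()
    if path.startswith(STILLS_ROOT):
        # In practice we also had to swap in the %06d frame pattern and set
        # the correct frame range for each shot before kicking off the render.
        read["file"].setValue(path.replace(STILLS_ROOT, FULLRES_ROOT))
```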

Halfway through the finishing of the VFX, the Google Jump team released a new feature called 'Sweet Spot' that allowed us to send scenes back to the cloud to be restitched if they had weird optical flow artifacts. Since we shot this film in a forest, we had tons of artifacts in our footage. In fact, if you compare the scene in our trailer where the Cultist character offers you the bloody deer skull with the same scene in the final narrative, you'll find the version in the narrative is much improved because we reprocessed it with Sweet Spot. We couldn't get to every scene, but we were able to use this great new feature on several to improve the quality dramatically.

Notice the reduction in optical flow artifacts?

Our renders created separate EXR frame sequences for the left and right eyes, ready for our color grade. When we began this project, we had planned on completing our color grade in DaVinci Resolve, since I already knew the software so well. Speak of the Devil left us with an interesting problem.

You see, to maintain the accuracy of the geography in the narrative, we shot every single setup with camera 9 (the front camera of the Odyssey) facing true north as told by a cheap magnetic compass we strapped to the top of the rig. This meant that there is a lot of action that takes place due south (or directly on the seam line).
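To see why "due south" means "right on the seam," it helps to look at how compass yaw maps to pixels in an equirectangular frame when north sits at the center. A quick illustrative sketch:

```python
def yaw_to_x(yaw_degrees: float, width: int = 5760) -> float:
    """Map a compass yaw (0 = north, centered in frame) to an equirect x coordinate."""
    yaw = (yaw_degrees + 180.0) % 360.0 - 180.0   # normalize to [-180, 180)
    return (yaw + 180.0) / 360.0 * width

print(yaw_to_x(0))     # 2880.0 -> dead center of the frame (north)
print(yaw_to_x(90))    # 4320.0 -> three quarters of the way across (east)
print(yaw_to_x(180))   # 0.0    -> the left edge; due south sits on the wrap line,
                       #           which is the same point in space as x = 5760
```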

DaVinci Resolve doesn't understand 360 video. That means if I were to draw a power window across the seam, the risk of leaving a visible seam line would be very high, and since I couldn't even check the grade in a headset, I wouldn't know until after another multi-hour render. That was time we just didn't have. Again, I was going to have to modify our workflow.

Fortunately, the folks at Assimilate have created a tremendously powerful post production tool that's fully ready for 360 video: SCRATCH VR. Not only does SCRATCH VR give us the ability to use windows and masks over the seam line, but it also has full headset support for preview.

Scratch VR looks intimidating, but once you get the hang of it, it’s very powerful.

I colored Speak of the Devil using a Windows Mixed Reality headset. The ability to flip the headset up, adjust settings, then flip it down and immediately check what I had done was a huge timesaver. It did mean learning an entirely new program, and I made a lot of mistakes that required me to redo and re-render a ton of my work. Still, I'm really happy with the final look.

ARE WE DONE?

We were in post production from the end of September until the middle of January. To us, who usually work in the branded content space on short, five-minute commercials, it felt like an eternity, but several people told us that it was actually really fast for the size and scope of the project we were attempting to pull off. We had to modify our techniques and workflow several times during post, which was frustrating, but it forced us to learn a lot and develop a plan for replicating this style of media across future stories.

Still, we weren’t finished yet. As we were completing all the final VFX, sound, and color, we were also hard at work with Wemersive and The Oberon Group figuring out how to program an entire logic engine and stream it all to your mobile phone.

Pick up the penultimate chapter next week in PART 4: It’s not a movie and it’s not a game, detailing how we used the Unity game engine to create the interactivity and logic engine of the narrative.

Download Speak of the Devil NOW for Google Daydream and Gear VR

Coming soon to Oculus Rift and HTC Vive.

A Light Sail VR original. Speak of the Devil was filmed on the GoPro Odyssey as partners in the Google Jump Start Program. Media storage was sponsored by Lumaforge. Hosting and Unity Development in collaboration with Wemersive and The Oberon Group.
