Post Production 360 Video with the Blackmagic URSA Mini Pro 12K

Matthew Celia
11 min read · Feb 13, 2021
We’ve landed on another world…

This is Part 2 of a two-part series that chronicles how I built a workflow with this camera to capture and produce extremely high quality monoscopic 360 video. Here we'll talk about the post production workflow I've been battle testing over the past few weeks, while Part 1 goes in depth on the production side and what the camera is like to use on set. If you are considering purchasing or renting this system for a project, I strongly recommend reading both and understanding exactly what this camera is and is not capable of before diving in.

To follow along, check out this sample content. It’s quite large. You’re welcome:

Front: https://lightsail-public.s3-us-west-2.amazonaws.com/A001_01211618_C005.braw

Back: https://lightsail-public.s3-us-west-2.amazonaws.com/A001_01211621_C007.braw

This was shot at full 12K resolution at 60fps using Q0. The lens was set to f4.0. There was no IR filter installed, so I expect the colors to be quite red and difficult to nail. I’m looking forward to seeing how you decide to color it.

I cut my teeth in the industry designing post production workflows and learning how to manage jobs on a budget and deadline with a team of people. I am always looking for the fastest and most scalable way to produce content and that was my goal when battle testing these workflows.

Mistika Boutique is still my preferred post production tool for creating 360 media. Yes, it is expensive, but it saves me so much time that it easily pays for itself IMO. However, as of this writing, it can't work with the native BRAW files from this camera, forcing a transcode into another format before you can use it. That may not be as much of a deal breaker as I originally thought, since using BRAW in Resolve with the Fusion tab also seems to bring my workstation to its knees.

The DaVinci Resolve Post Production Workflow

Filming a project on a 360 camera without understanding the post production workflow is a recipe for disaster. I can't tell you how many projects I've come across since I started working in this industry that either failed to finish or couldn't finish the way the creatives envisioned because of it. So I knew that figuring out the post workflow for this camera would be a first priority before filming a commercial project with it.

TL;DR: You need a serious machine to work with the best footage this camera can capture.

  • At least an NVIDIA Titan RTX. You need the 24GB of VRAM; anything less won't do. You might be able to use two cards, but the VRAM is the most important part.
  • 128GB RAM is HIGHLY recommended. Without it, I was getting all sorts of errors while attempting to stitch in Resolve.
  • FAST storage and LOTS of it. These files are HUGE, and you want fast read/write speeds so you can access them quickly. The faster, the better.
  • Ideally, access to a FAST shared storage server such as a Lumaforge Jellyfish and several other computers to help you render. More on this later in this article.

DaVinci Resolve Studio is what I used to work with the media we captured. It ships free with the purchase of the camera, but even without one, it is only $295 for a perpetual license. That also comes with a license to use Fusion Studio standalone with its unlimited render nodes. Having been used to the ridiculous pricing that is Nuke, that's a screaming good deal any way you look at it. Resolve Studio supports the 12K BRAW codec, and its intuitive editing interface means you can complete your entire project in a single app.

Here are the tools you’ll need to complete this workflow:

  • PTGui 10 (not the latest version)
  • DaVinci Resolve Studio 17 — I am using the beta (7)
  • KartaVR (available via Reactor)
  • STMapper (also available via Reactor)

Interestingly, Fusion Studio does not support BRAW. But you can copy/paste the nodes between the two programs, which is pretty cool. For now, I suppose you could transcode to an image sequence format and send to a render farm, if that’s your thing. My experience tells me you might want to explore that, because while the following workflow is good, it is MIGHTY slow.

Ok, time to get to work.

Installing KartaVR

KartaVR is a massive set of FREE plugins and scripts for Fusion by Andrew Hazelden. Seriously, we all owe him a lot for making this workflow even usable. He’s a genius and an incredibly nice guy. Join the KartaVR Facebook group. Buy him a coffee.

You get KartaVR by first installing Reactor, available here. It couldn’t be easier. Drag and drop the .lua file you download into an empty fusion comp and let the magic happen. Once the download manager pops up, make sure you install KartaVR and the KartaVR 3rd Party Libraries.

You are also going to want to grab STMapper, as it's a vastly faster UV map positioning tool that takes advantage of your GPU. You do have a Titan RTX, right? It's available under Tools -> Warp.

After you’ve installed, restart DaVinci Resolve Studio and let’s continue.

The Offline/Online Workflow

Working with this much footage at full resolution is a pain. After trying all sorts of transcodes and proxy workflows, I honestly think it might be best (depending on your project, of course) to do a very rough edit with the unstitched fisheye footage. The reason I say this is that Blackmagic's voodoo of working with the 12K BRAW footage natively is just so good that it allows you to start editing and syncing right away. This way you can roughly edit down to the usable footage and not waste time on frames you don't need. Remember, at 60 frames per second, even the 2 seconds it takes for the 2nd AC to get out of the shot with the slate is 120 extra frames you'll have to stitch and render! Let the cut be a little heavy; you can always trim it later.

To do this, I would create a sequence for each take and sync my front and back cameras. Then I would edit my video using the take sequences as the media. Once I am happy with my rough assembly, I can simply “unpack” the compound clips to get all of my footage with my front camera on V1 and my back camera on V2 in a single sequence. Duplicate this sequence. It’s now time to stitch.

Transcoding

I really wanted to find a workflow that allowed me to work with the BRAW files natively, but to speed up rendering with this workflow, converting to a frame sequence is really the way to go… provided you have enough fast disk space, which can get rather expensive.

When stitching in Resolve from the BRAW files, I would average 0.5 to 1 fps. That's pretty slow. After converting to frame sequences, that number was much higher. DPX files in particular are super fast to play back because the CPU doesn't have to do much work; it's all dependent on disk speed. But having that much disk space at high enough speeds can be hard. My best guess is that this workflow is best suited to working on a few shots at a time, which just means taking extra time to make sure the stitch and VFX are final before moving on.
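To put rough numbers on "it's all dependent on disk speed," here's a quick back-of-the-envelope sketch. The resolutions and the 10-bit RGB DPX assumption (which packs to 4 bytes per pixel) are my own guesses, so plug in whatever you actually export:

```python
# Rough math on why DPX playback lives or dies on disk speed. Assumes 10-bit
# RGB DPX, which packs to 4 bytes per pixel; swap in your real export settings.

def dpx_data_rate(width, height, fps=60, bytes_per_pixel=4):
    """Return (MB per frame, GB/s of sustained read) for an uncompressed sequence."""
    frame_mb = width * height * bytes_per_pixel / 1e6
    rate_gbs = frame_mb * fps / 1e3
    return frame_mb, rate_gbs

for label, w, h in [("12K source", 12288, 6480), ("8K stitched output", 8192, 4096)]:
    mb, gbs = dpx_data_rate(w, h)
    print(f"{label}: ~{mb:.0f} MB per frame, ~{gbs:.1f} GB/s at 60 fps")
```

Even the stitched 8K output wants several gigabytes per second of sustained read, which is why NVMe (or a very fast shared server) matters so much here.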

Before exporting DPX files, make sure the white balance and exposure of your clips are optimal. These can usually be changed in the raw image settings, so you'll want to review everything (ideally with a preview LUT) before exporting to make sure you don't need to adjust anything.

Also, be sure when exporting to check your timeline color space and gamma. If you are exporting a Rec709 timeline, you are going to be throwing away a ton of color information. If sticking with Resolve, I suggest setting up your timeline to use the DaVinci WG/Intermediate color space anyway.

PTGui

In order to use PTGui, we first have to generate still images from our footage. I’ve found that the fastest way to do this is the following:

  1. Starting in DaVinci Resolve, make sure the sequence size is set to the full vertical resolution but a 2:1 aspect ratio. In my case, this would be 12960x6480.
  2. Lay out all my clips in a single sequence with the back camera angles on V1 and the front camera angles on V2 in a checkerboard fashion.
  3. Head over to the Color page. Right click and select “Grab All Stills -> From Middle”. All of your stills should now appear in the gallery. By default they are named in the following format: [track].[shot number on track].[version — which is usually 1]. You can customize this format if you like in the preferences.
  4. Select all the stills in the Gallery and select export. You now have a folder with still images from each setup in your edit.
Checkerboarding is helpful to separate front and back takes
All our stills, ready to export and send to PTGui

The next step is to stitch using PTGui. I’m not going to get into the nitty gritty of how to do that here, but I will give a few pointers that seemed to work for me.

  • The HAL250 lens on this camera has a focal length of 4.3mm, a 250 degree FOV and is a circular fisheye. The URSA Mini Pro sensor is the equivalent of a 1.4x crop.
  • Make sure your crop circles are really accurate. The crop circle should be the same for each image, since it was the same camera and lens!
  • After the first stitch, manually delete any control points on moving objects such as trees, sun shadows, etc. You may need to add some if you find the results not quite good enough.
  • When optimizing, uncheck Optimize lens FOV. I've found that changing Minimize lens distortion to "Heavy + lens shift" improved my results.

When you are done in PTGui, your preview should look perfectly stitched. Save the file.

NOTE: I think it’s perfectly possible to do this workflow on a single shot to generate the UVMaps, but it might not be perfect for every shot if you have changing geometry (room size). I would do a single shot, use the following UVMap workflow on all the shots and then return to PTGui to hand stitch any shots that don’t look good enough.

Stitching in the Fusion Tab

Back in Resolve Studio, I return to my assembly sequence and duplicate it again. I then start converting the pairs of clips into Fusion Clips. It’s time to head into Fusion and stitch!

Before you do, be sure to change the project resolution to your final output size, such as 8192x4096. You can try working at a lower resolution too, but you'll need to add some transform nodes to the graph and a control node to easily switch resolutions, since the UVMaps are a set size and don't seem to get resized by Fusion for some reason.

Nodes are cool. And using them makes you look smart.

Above is my node graph and I’m going to walk you through it. The key script I have to run before I can create this is Andrew’s “Generate UV Maps” script. This will take your PTGui project and create STMaps to unwrap the fisheye footage in Fusion using the STMapper node. You’ll want to use the following settings:

Ensure Compression is set to None!

This will generate two files in PTGui that we'll bring into Resolve as our FrontUV and BackUV.

Front and Back UV passes generated by PTGui
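If you're curious what STMapper actually does with these passes: every pixel in the UV image stores the normalized (x, y) position it should sample from the fisheye source. Here's a minimal Python sketch of that idea outside of Fusion. The file names are hypothetical and the bottom-up V convention is an assumption on my part, so double-check against what the script actually writes out:

```python
import cv2
import numpy as np

# Conceptual sketch of an STMap remap, not the Fusion implementation. Assumes
# the map is a float EXR where R = normalized source X and G = normalized
# source Y, with V measured from the bottom of the frame (the usual STMap
# convention). OpenEXR reading must be enabled in your OpenCV build.
fisheye = cv2.imread("front_fisheye.png", cv2.IMREAD_COLOR)   # hypothetical file names
stmap = cv2.imread("FrontUV.exr", cv2.IMREAD_UNCHANGED)

h_src, w_src = fisheye.shape[:2]
u = stmap[:, :, 2]   # OpenCV loads BGR, so channel 2 is the red (U) channel
v = stmap[:, :, 1]   # green channel holds V

# Convert normalized UVs into absolute pixel coordinates in the fisheye frame.
map_x = (u * (w_src - 1)).astype(np.float32)
map_y = ((1.0 - v) * (h_src - 1)).astype(np.float32)   # flip V if the map is bottom-up

# Pull each output pixel from the fisheye source: this is what STMapper does per frame.
equirect = cv2.remap(fisheye, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("front_unwrapped.png", equirect)
```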

Plug the UVs into the yellow triangle on the STMapper node. Plug your video into the green triangle. Clicking on the merge node (I renamed mine to 'Stitched' using F2) and hitting the '2' key loads it into the 2nd screen. Most likely one of the images is greatly overlapping the other, so I use an AlphaMaskErode node to smooth it out. That works for quick stitches, but for final stitches it's better to manually paint your masks and plug them into the mask channel. Andrew Hazelden has a cool workflow that uses some sort of image analysis to draw the perfect seam line, but I haven't quite gotten there. If anybody has tips, I'm all ears.

This manual paint node tree allows you to easily paint which parts of which image are masked out. Pretty slick and straightforward.

At this point it’s probably looking pretty good, but it’s hard to be sure in the distorted state. Let’s take a look at a preview!

Instant 360 preview. Nice!

You can preview in magic window style by clicking on the three dots -> 360 View -> LatLong. Use Shift + Right click to move the camera around. Or, if you have a VR headset hooked up to your PC (such as my Quest 2 running Oculus Link), hit the 3 key and it'll boot up a preview on the headset. VERY cool. This is a feature that Mistika Boutique doesn't have, and I love it.

You should check the stitch to make sure it's perfect. If it isn't, you can add a GridWarp node after the UV and before the STMapper node, and nudge things into place.

Now it’s time to paint the tripod out. I use the LatLongPatch tool and then Fusion’s excellent paint node. Honestly, it feels very quick and easy. I then merge that back into my footage.

Another approach is to render out a still frame, paint in Photoshop or Affinity Photo, and composite back in the still image. Your call here.

Overall, I found the process pretty straightforward, and it was extremely powerful to accomplish all of this in a single app.

The “Gotcha”

Speed. Simple as that. This is really slow, with render speeds anywhere from 1 frame per second to 6 seconds per frame. At 60fps, a one-minute shot is 3,600 frames, so that's anywhere from one to six hours of rendering. Yikes. If working inside DaVinci Resolve, there isn't really any way to speed that up either.

If you convert to DPX frames as suggested above, you can load your media onto super fast NVMe drives for a huge increase in performance. Each DPX frame is about 400MB (that's megaBYTES), which means ideally you need a PCIe 4.0 M.2 drive to achieve close to realtime speeds. And even then, 8TB will only net you about 6 minutes of footage and cost upwards of $1700 to put together. Perhaps compressed EXR formats would fare better, but I haven't had the time to test.
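If you want to sanity-check that capacity claim, the arithmetic is quick (the ~400MB per frame figure is my rough average, so treat the result as a ballpark):

```python
# Ballpark check: how much footage fits on an 8 TB drive at ~400 MB per frame, 60 fps.
frame_mb = 400
fps = 60
drive_tb = 8

frames = drive_tb * 1e12 / (frame_mb * 1e6)   # ~20,000 frames
minutes = frames / fps / 60

print(f"{frames:,.0f} frames ≈ {minutes:.1f} minutes at {fps} fps")
```

That lands at roughly five and a half minutes, which lines up with the "about 6 minutes" above.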

I also always got better stitching results using Mistika Boutique, and I'm not sure why. There are a couple of workflow improvements in Mistika that make stitching happen in quite a few fewer clicks. Rendering speeds were still on the slower side (0.5 fps) but more consistent. However, I much prefer painting and color grading in Resolve. Plus, the addition of headset preview is really nice.

Conclusion

This is a workflow I'm going to continue to explore. I'm sad Blackmagic doesn't make it easier or provide as many quick tools for working with immersive media as Adobe Premiere does, because the underlying technology of Resolve is pretty impressive. While I complain that it's slow in Resolve, this project wouldn't even be usable in Premiere Pro. I know the number of us doing immersive 360 projects is still small, but fingers crossed it eventually makes it onto Blackmagic Design's dev team list.

What would you like to see? How could you improve upon my workflow? Download the sample footage and drop me a line: matt@lightsailvr.com or follow me on Instagram @immersive_matt

Matthew Celia is the Co-founder and Creative Director at Light Sail VR, a commercial production company specializing in immersive media. You can follow him on Instagram @immersive_matt
