
Hardware Encoding 4K HDR10 Videos?

We are about to venture into a heated and abnormal world. Hardware encoders, designed for real-time encoding, may be reaching the point where they can also be considered for video archival. The three common consumer options we will look at are:

  • AMD’s VCN encoder (using 6900XT / RDNA2 architecture)
  • Nvidia’s NVENC encoder (using 3060 RTX / 7th generation)
  • Intel QSV encoder (using i7-11800H / Version 8)

These tests all use HEVC UHD HDR10 source material and produce valid UHD HDR10 10-bit output as well. This is not a common use case; these tests were done because these encoders are being added to FastFlix. FastFlix is a free and open source GUI for common video encoding software, specifically designed to help with HDR10 videos.

What’s so odd about using a hardware encoder for encoding videos?

When looking to compress an existing video file, one of the main purposes is to lower the bitrate, i.e. save disk space. This can also save on bandwidth usage if the file will be transferred a lot. For example, a single megabyte difference for a popular file on a large site could start costing hundreds of dollars in bandwidth fees.

That means you want to pack as much quality as you can into the smallest package possible. Historically, hardware encoders were designed with the singular purpose of encoding above real-time speeds. That way they could be used for video conferencing like Zoom, or to transcode videos on the fly as you watch them. They always wanted good quality of course, but speed was always more important.

However, as their hardware and software have matured, they are now reaching a point where they can reasonably be considered instead of crazy slow software encoders. I believe a large part of this is due to the introduction of the B-frame.

The almighty B-frame

In HEVC videos there are three types of frames: Intra-coded (I) frames, Predicted (P) frames, and Bidirectionally predicted (B) frames. I-frames are pretty easy to understand: an I-frame is a full picture. It's what everyone thinks of when they assume a video file is a bunch of pictures in a row, like it was back with real film.

However, with modern video codecs like HEVC, there are in-between frames that don't have the full picture; instead they are partially filled with a bunch of math (motion vectors) that say "hey, move that area over this way for this frame." P-frames do just that: they use the data in the frame before them and store the difference as motion vectors.

P-frames

In ideal scenarios, P-frames are about half the size of I-frames. That means if you have one I-frame followed by two P-frames repeating for the entire movie, you've just cut off a third of the bitrate!

Linear-P-Frames Example

B-frames

B-frames are even more efficient: they can be half the size of a P-frame in an ideal world, aka a quarter of an I-frame. Imagine having a single I-frame, then two B-frames, a P-frame, then two more B-frames. You would have knocked off almost 60% of the bitrate! However, the problem is that B-frames are crazy hard to calculate. B is for Bidirectional, which means they not only look at the frame that was encoded before them, but also the frame that will come after them. That's right, you have to first encode the I-frame, and the P-frame (or another I-frame) that comes after it, before calculating the B-frames between them.

B-frame non-linear encoding workflow

Until recently, hardware encoders were only thinking of moving in a forward direction. They never thought to stop and wait to build frames that came behind the current one; who would want that? Well, without the B-frame, you have to compensate by having either more I-frames or larger P-frames. Take the following scenario.

B-Frames example

If the B-Frame was replaced with a P-frame (so it was I P P I) the P-frame with the sun in it would require additional image data stored in that frame. Whereas by using a B-frame, it can be stored as a motion vector, thus saving large amounts of bitrate.
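
If you are curious what mix of frame types one of your own videos ended up with, ffprobe can report each frame's pict_type (the file name here is a placeholder; limiting it to the first 500 frames keeps it quick on a *nix shell):

ffprobe -v error -select_streams v:0 -read_intervals "%+#500" -show_entries frame=pict_type -of csv=p=0 your_video.mkv | sort | uniq -c

A B-frame friendly encode will show far more B lines than I or P lines.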

Thankfully, Nvidia and Intel have both decided that it's time to bring some quality to the hardware world, and do have B-frames in their latest hardware encoders. Sadly, AMD still doesn't have any support for them with HEVC videos.

Hardware Encoding Head to Head

This is probably why you're here: to see how they stack up against each other. We are going to compare two videos across four different encoders. All encodes produced valid HDR10 videos and used the same settings per encoder (except for one hiccup with Intel QSV erroring out with --extbrc on one video).

Methodology

Tests like this are only as good as their documentation of how they were acquired. To that end, I wrote a script to run all these tests so that there were no quarrels about how they were tested (downloadable here). All tests were run on Windows 10 Version 10.0.19042 Build 19042 with the following settings.

The NVENC and QSV encodings were done on a laptop while plugged in on maximum power mode. The AMD VCN and x265 encodes were done on a desktop PC. Because of obvious hardware differences, encoding speed and power will not be considered as part of these tests.

| | AMD VCE / VCN | Nvidia NVENC | Intel QSV | x265 (Software) |
|---|---|---|---|---|
| Options (bold are non-default) | --ref 3 --preset slow --pe --tier High | --bframes 3 --ref 3 --preset quality --tier high --lookahead 16 --aq --aq-strength 0 --multipass 2pass-full --mv-precision Q-pel | --quality best --la-depth 16 --la-quality slow --extbrc (--extbrc not set on Glass Blowing) | -preset slow aq-mode=2 strong-intra-smoothing=1 bframes=4 b-adapt=2 frame-threads=0 hdr10=1 hdr10_opt=1 chromaloc=2 |
| Hardware | 6900 XT | 3060 RTX | i7-11800H | i9-9900K |
| Driver | 21.7.2 | 471.68 Game Ready | 27.20.100.9749 | 10.0.19041.546 (intelppl.sys) |
| Software | VCEEncC (x64) 6.13, AMF Version 1.4.21 | NVEncC (x64) 5.37, NVENC API v11.1, CUDA 10.1 | QSVEncC (x64) 5.06, Hardware API v1.35 | FFmpeg ~4.4, x265 @ commit 82786fc |

The x265 software encoder was also given the benefit of running in dual-pass mode. The slow preset was used, as it was determined to be an ideal choice in previous tests.
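
For reference, a two-pass x265 bitrate run through FFmpeg looks roughly like this sketch (file names are placeholders, and the HDR10 x265-params from the table are trimmed out for brevity; use /dev/null instead of NUL outside of Windows):

ffmpeg -i source.mkv -an -c:v libx265 -b:v 7500k -preset slow -x265-params pass=1 -f null NUL
ffmpeg -i source.mkv -map 0 -c:v libx265 -b:v 7500k -preset slow -x265-params pass=2 output.mkv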

The hardware encoders were set to use the "best" settings for measured quality, not perceived quality. I did not have as much time to test NVENC and QSV as I did VCN, so there may be more to eke out of those two encoders.

Downloadables

I wrote a python script to do all this testing, which can be downloaded: compare.py

If you want to check out the results of the VMAF / PSNR / SSIM yourself, here are the json files.

Wonderland Two – 4K HDR10 – 24fps – 51.4 Mb/s bitrate

The first encoder comparison is with Samsung's Wonderland Two 4K HDR10 video, a time-lapse where large chunks of the video have only small changes, while other parts are rapidly moving blurs. Its original bitrate is around 51,000k. We will be cutting it down to less than 1/20th of its original size at 2,500k, so there will be obvious quality loss. Let's see how the encoders handle it!

VMAF scores - Wonderland Two

Both NVENC and QSV put up a great showing, riding the VMAF 93 mark at 7500K, a mere 15% of the original file size. VCN doesn’t even reach that point at 12500K, so would presumably need around twice as much bitrate to achieve the same quality!

Let's take a deep dive into what these scores really translate to over the course of the movie. These charts show the VMAF score at every 10th frame for each encoder's 10,000k bitrate video.
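
If you want to reproduce this kind of chart yourself, FFmpeg's libvmaf filter can score every 10th frame the same way, assuming your FFmpeg build includes libvmaf (the distorted file goes first, the reference second; file names are placeholders):

ffmpeg -i encoded.mkv -i source.mkv -lavfi libvmaf=log_path=vmaf.json:log_fmt=json:n_subsample=10 -f null -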

In this case the video is compressed to 5 times smaller than its original file size! It went on a diet from 821MB to 160MB, so expect to see some big encoding degradation.

VMAF breakdown chart for all encodings - Wonderland Two

It's really obvious that AMD's VCN encoder struggles with scene changes (the sudden sharp drops). I imagine this is due to the lack of pre-analysis, as talked about in the last post. It also seems that QSV has some trouble with them, but at least NVENC holds its head high in that regard! I suspect VCN is also trailing behind due to its lack of B-frame support.

Dolby's Glass Blowing Demo – 4K HDR10 – 60fps – 15.1 Mb/s bitrate

Second, Dolby's Glass Blowing Demo is a three minute long 4K HDR10 video in constant motion, with lots of changing details: fire, smoke and rain. It is a high fps source video, but has a much lower bitrate to start from. That means the scores should be higher, as it's easier to reach the same quality.

Graph showing VMAF glass blowing demo comparison

AMD’s VCN and Nvidia’s NVENC trade blows the whole way with this video, with x265 taking a clear lead throughout the curve.

Intel QSV starts off rocking both NVENC and VCN, then …something… happens. I honestly thought it was an error with my testing at first, possibly a misaligned video track while calculating VMAF. However, after a quick look at the spread chart, we can see it's more insidious than that.

VMAF breakdown chart for all encodings - glass blowing

There are two huge drops with QSV. I checked the video file and found two sections that suddenly became blocky and laggy, as if it were skipping or duplicating frames wrongly. I have no idea what caused it, and worse, there was no indication of an error! I can only speculate the encoder was designed to keep working even if there was some disruption in either compute capability or access to the video file. That is good for real-time encoding like streaming, but unacceptable for video archival.

This frankly leaves QSV out of the running for any consideration of use for backing up videos. If anyone knows of fixes or prevention of this, please leave a comment!

Samsung x RedBull: See the Unexpected – 4K HDR10 – 60fps – 51.8 Mb/s bitrate

Finally, I tested with a high fps and high bitrate video, Samsung x RedBull: See the Unexpected. I excluded QSV entirely, given how badly it failed in the last run.

VMAF Chart - Samsung x RedBull

Nvidia’s NVENC is hot on x265’s tail, with VCN lagging behind.

VMAF breakdown for all encodings - Samsung x RedBull

AMD's VCN still has clear drops. NVENC struggles more with this video than any of the others before, but still clearly pulls ahead of VCN. x265 sits tall, with a clean and very impressive line for a video that is a fifth of the original size: a minimum VMAF of 84.25, vs NVENC's low of 56.13, with VCN taking rear guard at 48.16 (all had a max of 100).

Conclusions

Has HEVC hardware encoding caught up to the quality of software encoding?

No.

It seems that the three titans of the GPU industry still haven’t figured out how to build encoding hardware pipelines that are both fast and high quality.

Would I use hardware encoders for my own videos?

Yes.

The two Samsung tests were done on the extreme end of compression, down to 1/20th of the original size. The Glass Blowing Demo VMAF deep dive gives a better idea of what to expect of a re-encode going from 15Mb/s to 10Mb/s.

As we have said before, don't needlessly re-encode videos. In my particular case I am happy using any of these hardware encoders for quick encodes, rather than sitting around all day for a slight, and probably unnoticeable, quality difference with x265.

Is there hope for a true consumer hardware encoding competitor to x265 quality?

No.

Why would there be? Everything available is "good enough" and there is no incentive for these companies to spend the phenomenal effort required on this specific task.

I would absolutely love to be proven wrong on this, but I personally don’t see any improvements being invested on HEVC encoding when AV1 is right around the corner.

Disclaimer

These tests were done on my own hardware, purchased myself. No company has asked me to write this, to modify or reword anything, nor to omit anything. All conclusions are my own thoughts and opinions and in no way represent any company.


AMD Hardware Encoding in 2021 (VCE / VCN)

It's 2021 and there still isn't a lot of good info about AMD's VCN hardware encoder for consumers. To that end, I will present my own take on the current "war" between software and hardware encoders, then go into quick details on how to best use AMD GPUs to encode videos for archival with FastFlix.

Note: I will only be comparing HEVC/H.265 10-bit HDR10 videos (both source and output). This use case is not usually covered in benchmarks and tests I have seen, and is of more interest to those who have seen my previous posts on encoding UHD HDR10 videos but may want to hardware accelerate it.

Terms:

  • VCE – Video Coding Engine – AMD’s early name for its built in encoding hardware
  • VCN – Video Core Next – AMD’s new name for GPU hardware encoders (VCE / VCN used interchangeably)
  • AMF – Advanced Media Framework – AMD’s code and tools for developers to work with VCE / VCN
  • HEVC / H.265 – High Efficiency Video Coding – The video codec we will use that supports HDR10
  • HDR10 – A set of metadata presented alongside the video to give the display additional details
  • VMAF – Netflix’s video quality metric used to compare encoded video to source’s quality

Software vs Hardware Encoders

Software encoders are coded to run on any general purpose CPU. It doesn't matter if it's an Intel i7, an AMD 5900X or even your phone's ARM based CPU, which gives them great versatility. In the other corner, hardware encoders rely on specific physical hardware components existing in a system to accelerate their transcoding. In today's case, it takes an AMD GPU with VCN support to use the features we want to test.

Apples and oranges are both fruit, sports cars and pickup trucks are both vehicles, and software and hardware encoders both transcode videos. Just as it’s futile to compare the track capabilities of a supercar to the towing capacity of a pickup truck, we are about to venture into said territory with these encoders.

Please excuse the poor artwork, Clara wasn’t available this week so I had to do it myself!

Use case over metrics

The workhorse of the HEVC software encoding world is x265. There are plenty of other software encoders, like the industry-used ATEME TITAN File for UHD blu-rays, or other open source encoders like the Turing codec or kvazaar, but because of their lack of inclusion in standard tools like FFmpeg, they are overlooked.

So what is this workhorse good for? Flexibility and video archival. By being able to run on almost anything that can compile C code, x265 is a champion of cross platform operations. It is also the standard when looking for pure quality metrics in HEVC videos.

Comparatively, hardware encoding, in this case using AMD’s Video Coding Engine (VCE), is built to be power efficient and fast. Really, really fast. For example, on a 6900XT you can real-time encode a 60fps UHD stream on the slowest setting!

Let’s see what happens when they venture into each other’s bailiwicks.

Drag Race

Here's what everybody loves: a good graph. We're going to compare x265 using its fastest encoding speed vs the slowest setting AMD's VCE currently has, with a 60fps HDR10 4K source video.

Using a 60fps HDR10 UHD source, x265 was compared with its highest speed preset vs VCE's slowest

As expected, it was a slaughter. Hardware encoding ran at 96 fps while x265 could only manage 14.5 fps. AMD’s hardware encoding clearly pummels the fastest setting x265 has to offer, even on an i9-9900k. Even if using an AMD 5950x which may be up to twice as fast, the hardware encoder would still dominate.

Where does this matter

Streaming and real-time transcoding. Hardware encoders were designed with the idea of "accelerated" encoding, which makes them great for powering your Zoom calls or streaming to Twitch.

Encoding Quality Prowess

Now let's venture into x265's house and compare computed quality with VMAF. We'll be using the veryslow setting, darn the total time taken!

In this scenario we will compress a UHD video with a bitrate of 15,000k to four different rates. The goal for a decent encode is to reach at least VMAF 93, so that is the range we will stay above. (VMAF 93+ doesn't mean you won't notice quality loss. It simply means the loss probably WILL be apparent below that.)

This was tested with a 30 second excerpt from Dolby’s Glass Blowing Demo (UHD profile 8.1)

Both encoders do great, keeping within a range that shouldn't be too noticeable. However, x265 has a clear advantage at lower bitrates if all you care about is quality. It also maintains a steady edge throughout the test.

I have noticed while watching the AMD VCE encodes that it doesn’t do a great job with scene changes. I expect that is because VCE doesn’t support pre-analysis for HEVC, only for H.264. AMD VCE also suffers from lack of B-frame support, which I will talk about in the next blog post.

Where does this matter

Video archival. If you have a video that you are planning to discard for a high quality re-encode to save on file size, it’s better to stick with x265. Keep in mind, don’t just re-encode because you want to use a “better” codec, it’s always best to keep the original.

Gas Guzzling

This is a comparison I don't see as often, and I think it is overlooked. Encoding takes a lot of power, which means it costs money. I have been told by many FastFlix users that they let their x265 encodes run overnight, and some of their encodings take days!

This is also a harder metric to measure, as you need both encoders to produce the same quality output, as well as to know their power usage. The entire thing also labors under the assumption that the only purpose of this machine is to encode the video while it is powered on, so please keep all that in mind as we dive into this.

To achieve the same quality of result file, the software encoder costs ten times as much in electricity to get the job done. This may not matter if you're talking about a random encode here or there, but if you have a lot of videos to burn through, you could really start saving cash by switching to hardware encoders.

The Nitty Gritty about the power (Methodology)

Power usage will differ across hardware, so this is for a very specific case that I can attest to (measured with both HWMonitor and a Kill A Watt meter). The 6900XT uses 63 watts over its baseline when encoding, for a total system draw of ~320W. The i9-9900K uses 111 watts over baseline, for a total system draw of ~360W. (Keep in mind there is some extra CPU usage during a hardware encode as well, which is why total power is not a direct difference between the two.)

For the encoder speeds, when using a UHD file I was able to get within a 0.1% VMAF difference between VCE slow (same speed as above) and x265 veryfast (at 10.35fps).

Let's take a genericized use case of a two hour long video running at 24fps: 24fps * 60 seconds in a minute * 60 minutes in an hour * 2 hours = 172,800 frames.

Estimated times and cost:

  • VCE – slow – 6900XT @ 96.47fps – 29.85 minutes
    • 0.16 kWh @ 320 watts
    • $0.019 at 12 cents per kWh
  • x265 – i9-9900K @ 10.35fps – 278.3 minutes (about four and a half hours)
    • 1.67 kWh @ 360 watts
    • $0.20 at 12 cents per kWh
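
If you want to sanity check those numbers, or plug in your own fps, wattage and electric rate, the whole estimate is a few lines of math:

awk 'BEGIN { f = 24*60*60*2   # two hours at 24fps = 172,800 frames
  printf "VCE : %.1f min  %.2f kWh  $%.3f\n", f/96.47/60, f/96.47/3600*0.320, f/96.47/3600*0.320*0.12
  printf "x265: %.1f min  %.2f kWh  $%.3f\n", f/10.35/60, f/10.35/3600*0.360, f/10.35/3600*0.360*0.12 }'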

Where does this matter

The cost difference probably doesn't sway many individuals. But if you're a prolific encoder, this could save you time and money.

Super Technical Head to Head Summary

| | Software (x265) | Hardware (AMD VCE) |
|---|---|---|
| Quality | ⭐ Best possible | Lacks basic HEVC needs (B-frames / pre-analysis) |
| Speed | Slow to Super Slow | ⭐ Crazy Fast |
| Requirements | ⭐ Any old electrified rock | Newer AMD GPU, Windows OS |
| Energy Usage | All the powah! | Sips daintily |

So the winner is… neither. If you're encoding professionally, you'll be working with totally different software (like TITAN File). If you're using it at home, it really just depends on what hardware you already have. If you're wondering which GPU to get for the best encoding, wait for next month's article 😉

Basically, they both do what they were designed for. I would say hardware encoders might have a slight overall edge, as they can be used in all cases, whereas x265 currently can't do UHD HDR10 real-time encoding on consumer hardware.

Encoding HDR10 with AMD GPUs

Already got an AMD GPU and want to start encoding with it? Great, let’s get down to how to do it. First off make sure you are using Windows. If you’re using Linux for this, don’t.* If Linux is all you have, I would still recommend using a passthrough VM with Windows on it.

For Windows users, rigaya has made a beautiful tool called VCEEncC that has HDR10 support built in. It is a command line tool, but good news, FastFlix now supports it!

You will need to download VCEEncC manually as well, and make sure it is on the system path or link it up in File > Settings of FastFlix.

VCE doesn't have a lot of options to worry about like other encoders do, so you can be on your way to re-encoding in no time!

* It is possible on Linux to use VAAPI to encode HEVC, but you would need to apply custom MESA patches to enable HDR10 support. AMF / VCEEncC only supports H.264 on Linux currently.

Best quality possible with VCE

Beauty is in the eye of the beholder, and so is video quality. Some features, like VBAQ (Variance Based Adaptive Quantization), will lower measured metrics like VMAF and SSIM, but are designed to look better to human eyes. Assuming you care about how the video looks, and aren't just trying to impress your boss with numbers, we will stick with those.

| Setting | Value |
|---|---|
| Preset | slow |
| Motion Vector Accuracy | q-pel |
| VBAQ | enabled |
| Pre-Encode | enabled |

Of course the largest determination of quality will be how much bitrate you allow for (or which quantization rate you select). FastFlix has some loose recommendations, but what is truly needed will vary greatly depending upon the source. A GoPro bike ride video will require a lot more bitrate than a mounted security camera with very little movement overall.
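
Put together on the command line, an encode using the settings above might look something like this sketch (flag spellings are my reading of VCEEncC's options and the bitrate is just an example; double check against VCEEncC --help for your build):

VCEEncC64 -i input.mkv --codec hevc --output-depth 10 --preset slow --mv-precision q-pel --vbaq --pe --vbr 20000 -o output.mkv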

Warnings and gotchas

Not all features are available on all cards. Also, some features like B-frame support were promised for RDNA2 but are still not available.

Driver versions can make a difference. Always try the latest first, but if you experience issues using VCE, the driver may not be exposing a new enough AMF version and you may need to downgrade to an older one.

What do I use?

Personally I avoid re-encoding whenever possible. However, now that I have an AMD GPU, I do use it for my quick and dirty encoding needs. Though I would be saying the same about NVENC if I had a new Nvidia GPU (which does have B-frame support). In my opinion it's simply not worth the time and energy investment to encode with software. Either save the original or use a hardware encoder.

What about Nvidia (NVENC) or Intel (QSV)?

I am working to get access to latest generation hardware for both Nvidia’s NVENC and Intel’s QSV in the next month, so hopefully I will be able to create a follow up with some good head to head comparison. Historically NVENC has taken the crown, and by my research VCE hasn’t caught up yet, but who knows where QSV will end up!

Boring Details

  • x265 was used at commit 82786fccce10379be439243b6a776dc2f5918cb4 (2021-05-25) as part of FFmpeg
  • CPU is a i9-9900k
  • VCEEncC 6.13 on 6900xt with AMF Runtime 1.4.21 / SDK 1.4.21 using drivers 21.7.2

Disclaimer

These tests were done on my own hardware purchased myself. All conclusions are my own thoughts and opinions and in no way represent any company.

Stop Re-Encoding Videos!

I know this may sound like a weird statement coming from the author of FastFlix, but from the bottom of my heart, please stop the needless pixel loss! Every time you re-encode (aka transcode) a video it loses information, which lowers the quality and makes it a lot worse if you ever need to do it again. Re-encoding makes you a pixel killer!

Why am I saying re-encoding and not just "encoding" when you may have the original video? Sadly, to even fit onto your computer or device, the video you are working with has already been encoded in a highly compressed manner. Even phones and professional cameras like the RED series use real-time compression, because raw video takes too much bandwidth for standard storage media. For example, imagine recording raw UHD video with a 16-bit sensor at 60fps: a single minute of footage would be near 60GB! That translates into a bitrate of over 7,500,000 kbps. Compare that to a re-encoded YouTube video of HDR 60fps 4K footage at around 30,000 kbps, some 250 times less data!
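
Here is the napkin math behind that, if you want to check it:

# 3840 x 2160 pixels * 16 bits * 60 fps, ignoring chroma subsampling
echo $(( 3840 * 2160 * 16 * 60 ))   # 7962624000 bits per second, i.e. ~7,962,624 kbps, or roughly 60GB per minute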

Why does it matter?

Everyone has seen potato quality images and videos being re-shared again and again. The quality loss is due to needless re-encoding.

Websites generally always re-encode videos for a variety of reasons, such as adding watermarks or forcing certain bitrates. It may be required in some instances, but for most cases it’s overkill. If I had the power I would challenge websites to instead publish what encoding targets are required and allow for direct playback of the original file.

While we don't have control over that, it makes it even more important not to make things worse when we do have control.

In the above example I took a short video and encoded it again and again and again using the same settings each time. The video still looks good while being watched, but if you zoom in you can see the entire thing has essentially become blurred. Don’t do this to your own videos, save the pixels!

Future Proofing

Another thing to consider is not just the result you have now, but how it will look in ten or twenty years. Sure, the single 1080p or 4K re-encode you did may look perfect now. But what about when 16K TVs are standard? What about when your phone or monitor's pixel density is four to eight times higher than it is now? Behind the scenes of good TVs and devices is an "AI" chip that upscales videos to look better on newer screens. With less detail to start with, it won't look as good as if you hadn't re-encoded.

Remember that amazing video you took with your flip phone all those years ago that you can’t even tell what is happening in it anymore? You don’t want that to happen to your videos now.

There are some cases that it cannot be avoided that we will cover later, but even common operations like trimming videos and rotating them can be accomplished without re-encoding.

Trim and Rotate without re-encoding

Two very common reasons people want to re-encode videos are to shorten them to a particular section, or to rotate them the proper direction. Thankfully you can do both of those without modifying the original video stream. I am using the command line tool ffmpeg to accomplish this, which is available for free.

Rotate

To rotate the video, you just have to add some metadata to the container the video is in. This is how phones set videos to portrait or landscape mode without having to change their encoder settings.

ffmpeg -i your_video.mp4 -metadata:s:v rotate=90 -map 0 -c copy rotated_video.mp4

In the above example, replace your_video.mp4 with the video file you need rotated. It will be copied with the new metadata to rotated_video.mp4 and now should be rotated properly.

Trim

To trim down to a section of the video, you simply need to "copy" everything between the two desired points. For example, if you want to keep just the 48 second section between 1:02 and 1:50 (one minute two seconds to one minute fifty seconds), use the following command.

ffmpeg -ss 1:02 -to 1:50 -i your_video.mp4 -map 0 -c copy trimmed_video.mp4

Are there any exceptions?

There are a few cases where you simply cannot avoid re-encoding. If you need to add effects, crop, or make any actual modifications to the video, you will need to re-encode.

However if you are keeping the video as is, there are only three instances you should consider re-encoding for:

  • Limited bandwidth scenarios
  • Device compatibility
  • Cannot provide required storage space

Limited bandwidth scenario

ISPs still like to pretend upload speeds don't matter. Even if an ISP provides 1Gbps down, most still have less than 45Mbps peak upload. Things get even worse for mobile, where the average upload rate is around 10Mbps. On top of that, a general rule of thumb is that your video bitrate should be around half or less of the available bandwidth to ensure there isn't stuttering.

That means if you’re planning to share videos in real time with the world, you simply have to re-encode (transcode) it.

Device compatibility

There are some large companies that have terrible compatibility with commonly accepted formats to be anti-competition. That means if you or a loved one is enslaved to some large fruit company, you may need to tinker with your videos to please the orchard overlords.

Storage space

“Storage is cheap” is a phrase I hear a little too much in the coding world. In the real world without a large quarterly tech budget, you have to count your pennies. If you can re-encode a video so that it’s near visually lossless while saving on storage space for your needs, go for it!

A Raspberry Pi Streaming Camera using MPEG-DASH, HLS or RTSP

This is an improvement on my previous article, Raspberry Pi Hardware Accelerated RTSP Camera, now with the option of using more modern technology, MPEG-DASH and HLS!

First off, if you don’t care about the technicalities and just want a script to do everything for you, here you go! If you’re still interested in how it all works or want to tweak the settings, read on.

This article will walk you through how to either copy or convert video from your webcam or pi camera, set it up as a systemd service, and finally view it on a webpage or access it remotely.

MPEG-DASH vs HLS vs RTSP

So to clear this up first of all, these are "containers" that wrap around the actual video, which is encoded with a particular "codec" (such as h264). DASH and RTSP are fully codec agnostic, meaning they are capable of wrapping around any type of video codec. The big gotcha is what type of video the viewer supports (and in RTSP's case, the middleman server as well).

So if DASH and RTSP can handle everything, why even bother with HLS? Long story short, Apple, who developed HLS, is a bully, so they don’t support the open MPEG-DASH on their devices. Meaning if you are trying to share these video streams with the public or view on an Apple device, you will get the most compatibility with HLS.

So now the difference really comes down to the fact that DASH/HLS are HTTP based protocols that can easily be supported in a browser. This makes it super easy to set up an all-in-one device that can host its own webpage to view the video.

RTSP, on the other hand, requires additional software to view it, such as VLC or a security camera system. The real advantage of RTSP is that it really is nearly "real time" compared to DASH/HLS. Using my Raspberry Pis, DASH/HLS seem to have a 10~20 second delay, compared to about 1 second for RTSP. As I already went over how to set that up, I won't repeat it here and will only go over DASH. But I personally still use RTSP for my own home setup.

Setting up required software

The two programs you will need are a file server (nginx, apache, python -m http.server, etc…) to host the DASH/HLS content and ffmpeg. And you don’t have to hand compile either!

Super short version:

sudo apt install nginx ffmpeg -y

To better understand why we need them and how to test them to make sure they are working properly, read on. Otherwise, skip ahead to “Gather Camera Details”.

File Server

Last time we used RTSP, which required a special service of its own. Now we are using HLS and MPEG-DASH, which produce manifest and accompanying stream files on the local system. For example, MPEG-DASH will create a manifest.mpd file that contains links to *.m4s files in the same directory, which are the chunked up video files.

That means if we make those files accessible remotely, we can use standard HTTP to transport the video, hence the need for a basic file server. I personally use nginx for the final setup, as it's fast, easy to use, and has defaults we can use out of the box. So let's install it!

 sudo apt install nginx -y

Now all you need to do is open up a web browser on another computer on that network and connect to http://raspberrypi (if you changed the hostname, or are having trouble connecting, run hostname -I on the Pi to see its IP address and use http://<ip_address> instead). You should see a simple webpage that says "Welcome to nginx!"

FFmpeg

Since the last article came out, FFmpeg has finally started shipping with hardware acceleration built in! If you still want to compile in some custom libraries or try and optimize it for your needs, check out my Raspberry Pi FFmpeg compile guide. Otherwise, just download it from the distribution repositories.

sudo apt install ffmpeg -y

You can verify it’s part of the package by checking the encoders for h264_omx.

ffmpeg -hide_banner -encoders | grep omx

That should produce V….. h264_omx OpenMAX IL H.264 video encoder (codec h264) or similar. If for some reason it doesn't have that or other libraries you are looking for, such as the popular fdk-aac, look into my article on compiling FFmpeg yourself, or use the helper script with the option --compile-ffmpeg.

Gather camera details

If you have the helper script, simply run it with the option --camera-info and it will print out each device and their formats with their highest resolution for each.

sudo python3 streaming_setup.py --camera-info
# /dev/video0: {'yuyv422': '1280x800', 'mjpeg': '1280x720'}

Under the hood, this is running the following command for every device found.

ffmpeg -hide_banner -f video4linux2 -list_formats all -i /dev/video0

To also see what frame rates are supported per resolution, you will have to run the v4l2-ctl command for that device.

v4l2-ctl -d /dev/video0 --list-formats-ext

Create the FFmpeg command

The FFmpeg command is particular about order when talking about input and output details. Ours will be broken down into the following blocks:

ffmpeg <incoming video details> -i <device> <conversion details> <output>

So let’s say you are using a raspberry pi camera and want to stream 1080p video without re-encoding it. We first have to tell FFmpeg about the camera details it will pull from.

Applying the Camera Details

In longhand, it would look like this:

-input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0

Hopefully each of those parts is pretty self explanatory. We can also shorten -video_size (the incoming resolution) to -s, and -framerate (the fps) to -r.
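
So the shorthand version of that same input block would be:

-input_format h264 -f video4linux2 -s 1920x1080 -r 30 -i /dev/video0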

Network Bandwidth Considerations

Internet streamers, beware: you may not be able to upload the camera's full 1080p at 30fps directly. I did a quick test using vnstat over a wired connection with a Pi Zero, and found my 5MP OV5647 camera was using almost 20Mbit/s. Keep in mind the official Pi Camera with the Sony sensor is 8MP, so it may be even higher than that.

The following tests were done as two minute averages while the stream was being watched. The averages were recorded; generally the peaks were 2x the average.

| Resolution | fps | Average Mbit/s |
|---|---|---|
| 1920×1080 | 30 | 20.12 |
| 1920×1080 | 15 | 5.12 |
| 1280×720 | 60 | 15.81 |
| 1280×720 | 30 | 3.94 |
| 1280×720 | 15 | 2.75 |
| 640×480 | 90 | 5.66 |
| 640×480 | 60 | 4.43 |
| 640×480 | 30 | 0.93 |
| 640×480 | 15 | 0.43 |

Using a Pi Zero with 5MP OV5647 camera over ethernet

Conversion options

Since this is already h264, we don't need anything other than to say copy the incoming stream. So that option is -codec:v copy, or shorthand -c:v copy, which says: set the codec of v (video) tracks to copy, aka don't convert.

-c:v copy

If you instead had a webcam that only supported mjpeg input, or if you needed to add a text overlay to the video, you would have to re-encode the stream. With the Raspberry Pi, you'll want to use the built in hardware encoder, h264_omx. You would then also have to set the bitrate (-b:v) of the outgoing video. That is really camera / network dependent, but my rule of thumb is video width x height x 2. So 1920 x 1080 x 2 = 4,147,200, meaning I would set the bitrate to 4M (aka ~4,000k, or ~4,000,000 bits per second).

-c:v h264_omx -b:v 4M

The Raspberry Pi OpenMAX (omx) hardware encoder has very limited options, and doesn't support constant quality or rate factors like libx264 does, so the only way to adjust quality is with the bitrate. As for general quality, it sits between libx264's ultrafast and superfast presets, which is somewhat disappointing but not surprising for a real-time hardware encoder.

MPEG-DASH and HLS output

Personally I would never recommend HLS to a friend, as MPEG-DASH is all around a more open and powerful muxer, but I understand some legacy systems don't have DASH support yet. Thankfully, FFmpeg's dash muxer gives us HLS for free! (Note that some systems don't even support that, and you may end up having to use only the hls muxer.)

DASH and HLS both create playlist files locally, with chunked up video files beside them. This creates a few problems, the first being the cleanup and management of those files. Thankfully, the DASH muxer has options to delete all those files on exit, as well as the ability to only keep so many video chunks on disk at a time.

The bigger problem is the constant writing to the disk, in this case an SD card, which is a wear item and has higher error rates the more writes it experiences. So to save the SD card, and ourselves future headaches, we are going to write these files to memory instead!

sudo mkdir -p /dev/shm/streaming/

Tada, we now have a folder in shared memory space we can use. The caveat is that it will be removed on restart, so we will have to make sure it's recreated before our FFmpeg service is started. But let's not get ahead of ourselves; we just need to know the rest of our FFmpeg command.

-f dash -window_size 10 -remove_at_exit 1 -hls_playlist 1 /dev/shm/streaming/manifest.mpd

We are using just a few of the options FFmpeg's DASH muxer supports; there are more if you need further customization, but I doubt most cases will.

I am setting the max number of video chunks to keep at 10 via -window_size, and telling FFmpeg to delete them and the manifest file when it stops running with -remove_at_exit 1. Then we enable HLS with -hls_playlist 1, which creates a master.m3u8 file in the same directory as the manifest.mpd. (Feel free to disable HLS if you don't need it.)

Putting it all together

If you have that camera with native h264 encoding, like the Pi Camera, here is your copy and paste code!

# sudo mkdir -p /dev/shm/streaming/
sudo ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -f dash -window_size 10 -remove_at_exit 1 -hls_playlist 1 /dev/shm/streaming/manifest.mpd

You should soon start seeing messages about the manifest and chunks being updated, along with the current frame rate.

[dash @ 0x20bff00] Opening '/dev/shm/streaming/manifest.mpd.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/media_0.m3u8.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/chunk-stream0-00003.m4s.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/manifest.mpd.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/media_0.m3u8.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/chunk-stream0-00004.m4s.tmp' for writing
frame=  631 fps= 30 q=-1.0 size=N/A time=00:00:20.95 bitrate=N/A speed=1.01x

If you are seeing errors like Operation not permitted or Cannot find a proper format, please check your input formats and try lower resolutions. Sometimes cameras list their photo taking resolutions, which are much higher than their streaming resolutions. If you are still receiving the errors even with the right codec selected, turn the Pi off, check the connections to the camera, and turn it back on, as the camera can sometimes get in a bad state or have a loose wire.

To add audio or a text overlay like a timestamp please refer to those linked sections of my previous guide!

Setting up Remote Viewing

First we need to allow nginx to serve up that manifest file. By default nginx serves the /var/www/html directory, so it is easy enough to link our in-memory folder as a sub folder there.

ln -s /dev/shm/streaming /var/www/html/streaming

Then we need either a way to view it via a webpage, or to connect to it with a remote player such as VLC. If you have VLC or a viewer for DASH content, you can point it at http://raspberrypi/streaming/manifest.mpd and you should start seeing the stream! (If you have a custom hostname or want to use the IP, run the hostname -I command and use that in place of raspberrypi.) To create a webpage to view the content, we will have to put it in a folder that won't be deleted on reboot.

I personally chose /var/lib/streaming/index.html, as I will also be putting a script in there that will help us set things up again on each reboot. Make sure to create the directory first:

mkdir -p /var/lib/streaming

So open up your favorite text editor and copy the following html code into /var/lib/streaming/index.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Raspberry Pi Camera</title>
    <style>
        html body .page {
            height: 100%;
            width: 100%;
        }

        video {
            width: 800px;
        }

        .wrapper {
            width: 800px;
            margin: auto;
        }
    </style>
</head>
<body>
<div class="page">
    <div class="wrapper">
        <h1> Raspberry Pi Camera</h1>
        <video data-dashjs-player autoplay controls type="application/dash+xml"></video>
    </div>
</div>
<script src="https://cdn.dashjs.org/latest/dash.all.min.js"></script>
<script>
    var player = init();

    function init() {
        var video = document.querySelector("video");
        player = dashjs.MediaPlayer().create();
        player.initialize(video, "manifest.mpd", true);
        player.updateSettings({
            'streaming': {
                'lowLatencyEnabled': true,
                'liveDelay': 1,
                'liveCatchUpMinDrift': 0.05,
                'liveCatchUpPlaybackRate': 0.5
            }
        });
        return player;
    }
</script>
</body>
</html>

Now let's link it up to the nginx directory.

ln -s /var/lib/streaming/index.html /var/www/html/streaming/index.html

Now you should be able to view your streaming camera webpage at http://raspberrypi/streaming!

We are using the very expansive dash.js open source library, which has a lot of customization options. We use a few basic ones here to make it better at live streaming, but please check out their project to see how best to tweak it for your needs.

Reboot Script

Now that we have a sexy webpage and a working ffmpeg command, we need to save 'em and make sure they can survive reboots. Therefore, we need to create a script that will run on restart to recreate the folder in memory and link the index file over. I put mine right beside the permanent index file, at /var/lib/streaming/setup_streaming.sh, with the following text.

#!/bin/bash
# /var/lib/streaming/setup_streaming.sh
mkdir -p /dev/shm/streaming
if [ ! -e /var/www/html/streaming ]; then
    ln -s /dev/shm/streaming /var/www/html/streaming
fi
if [ ! -e /var/www/html/streaming/index.html ]; then
    ln -s /var/lib/streaming/index.html /var/www/html/streaming/index.html
fi

Don’t forget to make it executable.

chmod +x /var/lib/streaming/setup_streaming.sh

Now to run it on restart, we are going to add this script to /etc/rc.local. Open the /etc/rc.local file and add these lines before the exit 0 at the bottom:

# Streaming Shared Memory Setup
if [ -f /var/lib/streaming/setup_streaming.sh ]; then
    /bin/bash /var/lib/streaming/setup_streaming.sh || true
fi

Notice we are being extra extra careful to not throw errors here, as at the top of the rc.local file it makes it clear that it should never exit without a clean exit code of 0.

Camera Streaming Service

After we have the location in memory set up, we can start the camera. I chose to do so via a systemd service, so it can restart on errors and is easy to manage. In this example it's called stream_camera, but you can change the service file name to suit your fancy.

Add a new file at /etc/systemd/system/stream_camera.service.

# /etc/systemd/system/stream_camera.service
[Unit]
Description=Camera Streaming Service
After=network.target rc-local.service

[Service]
Restart=always
RestartSec=20s
ExecStart=ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -f dash -window_size 10 -remove_at_exit 1 -hls_playlist 1 /dev/shm/streaming/manifest.mpd

[Install]
WantedBy=multi-user.target

Notice we set it to run after rc-local, to make sure the in-memory folder exists and is ready for ffmpeg to write to /dev/shm/streaming.
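
Then reload systemd and enable the service so it starts on boot:

sudo systemctl daemon-reload
sudo systemctl enable --now stream_camera.service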

Encoding UHD 4K HDR10 and HDR10+ Videos

tl;dr: If you don’t want to do the work by hand on the command line, use FastFlix. If you have any issues, reach out on discord or open a github issue.

I talked about this before with my encoding settings for Handbrake post, but there was a fundamental flaw in using Handbrake for HDR 10-bit video… it only had an 8-bit internal pipeline! It and most other GUIs also don't yet support dynamic metadata, such as HDR10+ or Dolby Vision.

2021-02 Update: Handbrake’s latest code has HDR10 static metadata support.

Thankfully, you can avoid some hassle and save HDR10 or HDR10+ by using FastFlix instead, or directly use FFmpeg. (To learn about extracting and converting with HDR10+ or saving Dolby Vision by remuxing, skip ahead.) If you want to do the work yourself, here are the two most basic commands you need to save your juicy HDR10 data. This will use the Dolby Vision (profile 8.1) Glass Blowing demo.

Extract the Mastering Display metadata

First, we need to use FFprobe to extract the Mastering Display and Content Light Level metadata. We are going to tell it to only read the first frame's metadata, -read_intervals "%+#1", for the file GlassBlowingUHD.mp4.

ffprobe -hide_banner -loglevel warning -select_streams v -print_format json -show_frames -read_intervals "%+#1" -show_entries "frame=color_space,color_primaries,color_transfer,side_data_list,pix_fmt" -i GlassBlowingUHD.mp4

A quick breakdown of what we are sending ffprobe:

  • -hide_banner -loglevel warning Don’t display what we don’t need
  • -select_streams v We only want the details for the video (v) stream
  • -print_format json Make it easier to parse
  • -read_intervals "%+#1" Only grab data from the first frame
  • -show_entries ... Pick only the relevant data we want
  • -i GlassBlowingUHD.mp4 input (-i) is our Dolby Vision demo file

That will output something like this:

{ "frames": [
        {
            "pix_fmt": "yuv420p10le",
            "color_space": "bt2020nc",
            "color_primaries": "bt2020",
            "color_transfer": "smpte2084",
            "side_data_list": [
                {
                    "side_data_type": "Mastering display metadata",
                    "red_x": "35400/50000",
                    "red_y": "14600/50000",
                    "green_x": "8500/50000",
                    "green_y": "39850/50000",
                    "blue_x": "6550/50000",
                    "blue_y": "2300/50000",
                    "white_point_x": "15635/50000",
                    "white_point_y": "16450/50000",
                    "min_luminance": "50/10000",
                    "max_luminance": "40000000/10000"
                },
                {
                    "side_data_type": "Content light level metadata",
                    "max_content": 0,
                    "max_average": 0
} ] } ] }

I chose to output it as json via the -print_format json option to make it more machine parsable, but you can omit that if you just want the text.

We are now going to take all that data and break it down into groups of <color abbreviation>(<x>, <y>), leaving off the right side (the denominator) of each fraction in most cases*. So, for example, we combine red_x "35400/50000" and red_y "14600/50000" into R(35400,14600).

G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000, 50)

*If your data for colors is not divided by /50000, or luminance not by /10000, and has been simplified, you will have to expand it back out to the full ratio. For example, if yours lists 'red_x': '17/25', 'red_y': '8/25', you will have to divide 50000 by the current denominator (25) to get the multiplier (2000) and multiply that by the numerators (17 and 8) to get the proper R(34000,16000).
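
That expansion is quick to verify in a shell if you don't trust the mental math:

echo $(( 17 * (50000 / 25) ))   # 34000
echo $(( 8 * (50000 / 25) ))    # 16000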

This data, as well as the Content light level <max_content>,<max_average> of 0,0 will be fed into the encoder command options.

Convert the video

This command converts only the video, keeping the HDR10 intact. We will have to pass these arguments not to ffmpeg, but to the x265 encoder directly via the -x265-params option. (If you’re not familiar with FFmpeg, don’t fret. FastFlix, which I talk about later, will do the work for you!)

ffmpeg  -i GlassBlowingUHD.mp4 -map 0 -c:v libx265 -x265-params hdr-opt=1:repeat-headers=1:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50):max-cll=0,0 -crf 20 -preset veryfast -pix_fmt yuv420p10le GlassBlowingConverted.mkv

Let’s break down what we are throwing into the x265-params:

  • hdr-opt=1 we are telling it yes, we will be using HDR
  • repeat-headers=1 we want these headers on every frame as required
  • colorprim, transfer and colormatrix the same as ffprobe listed
  • master-display this is where we add our color string from above
  • max-cll Content light level data, in our case 0,0

During a conversion like this, when a Dolby Vision layer exists, you will see a lot of messages like [hevc @ 000001f93ece2e00] Skipping NAL unit 62 because there is an entire layer that ffmpeg does not yet know how to decode.

For the quality of the conversion, I set it to -crf 20 with -preset veryfast to convert it quickly without a lot of quality loss. I dig deeper into how FFmpeg handles crf vs preset with regards to quality below.

All the sound and data and everything else will be copied over thanks to the -map 0 option, which is a blanket statement of "copy everything from the first (0 index) input".

That is really all you need to know for the basics of how to encode your video and save the HDR10 data!

FFmpeg conversion settings

I covered this a bit before in the other post, but I wanted to go through the full gauntlet of presets and crfs one might feel inclined to use. I compared each encoding with the original using VMAF and SSIM calculations over an 11 second clip. I created over 100 conversions for this single chart, so it is a little cramped:

First takeaways are that there is no real difference between veryslow and slower, nor between veryfast and faster, as their lines are drawn on top of each other. The same is true for both VMAF and SSIM scores.

Second, no one in their right mind would ever keep a recording stored by using ultrafast. That is purely for real time streaming use.

Now for VMAF scores, 5~6 points away from the source is visually distinguishable when watching; in other words, it will have very noticeable artifacts. Personally I can tell on my screen with just a single digit difference, and some people are even more sensitive, so this is by no means an exact tell all. At minimum, let's zoom in a bit and get rid of anything that will produce video with very noticeable artifacts.

From these charts, it seems clear that there is no reason whatsoever to use anything other than slow, which I personally do for anything I am encoding. However, slow lives up to its namesake.

Encoding Speed and Bitrate

I had to trim off veryslow and slower from the main chart to be able to even see the rest, and slow is still almost three times slower than medium. All the charts contain the same data, just with some of the longer running presets removed from each to better see details of the faster presets.

Please note, the first three crf datapoints are a little dirty, as the system was in use for the first three tests. However, there is enough clean data to see how everything compares down the line.

To see a clearer picture of how long each of the presets takes, I will exclude those first three times and average the remaining data. The data is then compared against the medium (default) preset and the original clip length of eleven seconds.

| Preset | Time (s) | vs "medium" | vs clip length (11s) |
|---|---|---|---|
| ultrafast | 11.204 | 3.370x | 0.982x |
| superfast | 12.175 | 3.101x | 0.903x |
| veryfast | 19.139 | 1.973x | 0.575x |
| faster | 19.169 | 1.970x | 0.574x |
| fast | 22.792 | 1.657x | 0.482x |
| medium | 37.764 | 1.000x | 0.291x |
| slow | 97.755 | 0.386x | 0.112x |
| slower | 315.900 | 0.120x | 0.035x |
| veryslow | 574.580 | 0.066x | 0.019x |

What is a little scary here is that even with the ultrafast preset we are not able to get real-time conversion, and these tests were run on a fairly high powered system wielding an i9-9900K! While it might be clear from the crf graph that slow is the clear winner, unless you have a beefy computer it may be a non-option.

"Use the slowest preset that you have patience for" – FFmpeg encoding guide

Also, unlike VBR encoding, the average bitrate and filesize you get using crf will differ wildly based upon the source material. This next chart just shows the basic curve effect you will see; it cannot be compared directly to what you may expect with your own file.

The two big jumps are between slow and medium as well as veryfast and superfast. That is interesting because while slow and medium are quite far apart on the VMAF comparison, veryfast and superfast are not. I expected a much larger dip from superfast to ultrafast but was wrong.

FastFlix, doing the heavy lifting for you!

I have written a GUI program, FastFlix, around FFmpeg and other tools to convert videos easily to HEVC, AV1 and other formats. While I won't promise it provides everything you are looking for, it will do the work of extracting the HDR10 details of a video and passing them into an FFmpeg command for you. FastFlix can even handle HDR10+ metadata! It also has a panel that shows you exactly the command(s) it is about to run, so you can copy and modify them to your heart's content!

If you have any problems with it please help by raising an issue!

Extracting and encoding with HDR10+ metadata

First off, this is not for the faint of heart. Thankfully the newest FFmpeg builds for Windows now support HDR10+ metadata files by default, so this process has become a lot easier. Here is a quick overview of how to do it. Also a huge shoutout to "Frank" in the comments below for pointing this tool out to me!

You will have to download a copy of hdr10plus_parser from quietvoid's repo.

Check to make sure your video has HDR10+ information it can read.

ffmpeg -loglevel panic -i input.mkv -c:v copy -vbsf hevc_mp4toannexb -f hevc - | hdr10plus_parser --verify -

It should produce a nice message stating there is HDR10+ metadata.

Parsing HEVC file for dynamic metadata…
Dynamic HDR10+ metadata detected.

Once you have confirmed it exists, extract it to a json file.

ffmpeg -i input.mkv -c:v copy -vbsf hevc_mp4toannexb -f hevc - | hdr10plus_parser -o metadata.json -

Option 1: Using the newest FFmpeg

You just need to pass the metadata file via the dhdr10-info option in the x265-params. And don’t forget to add your audio and subtitle settings!

ffmpeg.exe -i input.mkv -c:v libx265 -pix_fmt yuv420p10le -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1016,115:hdr10=1:dhdr10-info=metadata.json" -crf 20 -preset medium "output.mkv"

Option 2: (original article) Using custom compiled x265

Now the painful part you'll have to work on a bit yourself. To use the metadata file, you will need a custom compiled x265 with the cmake option HDR10_PLUS. Otherwise, when you try to convert with it, you'll see a message like "x265 [warning]: --dhdr10-info disabled. Enable HDR10_PLUS in cmake." in the output, but it will still encode, just without HDR10+ support.

Once that is compiled, you will have to use x265 as part of your conversion pipeline. Use x265 to convert your video with the HDR10+ metadata and other details you discovered earlier.

ffmpeg -loglevel panic -y -i input.mkv -to 30.0 -f yuv4mpegpipe -strict -1 - | x265 - --y4m --crf=20 --repeat-headers --hdr10 --colorprim=bt2020 --transfer=smpte2084 --colormatrix=bt2020nc --master-display="G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)" --max-cll="1016,115" -D 10 --dhdr10-info=metadata.json output.hevc

Then you will have to use that output.hevc file with ffmpeg to combine it with your audio conversions and subtitles and so on to repackage it.
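
A minimal sketch of that repackaging step (stream mappings and codec choices will vary per file, and you may need to set -r before the hevc input if the frame rate is misdetected):

ffmpeg -i output.hevc -i input.mkv -map 0:v -map 1:a -c copy remuxed.mkv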

Saving Dolby Vision

Thanks to Jesse Robinson in the comments, it seems it may now be possible to extract and convert a video with Dolby Vision without the original RPU file. Like the original HDR10+ section, you will need the x265 executable directly, as it does not support that option through FFmpeg at all.

Note I say “saving” and not “converting”, because unless you have the original RPU file for the DV to pass to x265, you’re out of luck as of now and cannot convert the video. The x265 encoder is able to take a RPU file and create a Dolby Vision ready movie.

Just like the manual HDR10+ process, first you have to use an extraction tool, in this case quietvoid's dovi_tool, to extract the RPU.

ffmpeg -i GlassBlowing.mp4 -c:v copy -vbsf hevc_mp4toannexb -f hevc - | dovi_tool extract-rpu --rpu-out glass.rpu -

There are a few extra caveats when converting a Dolby Vision file with x265. First, you need to use the appropriate 10 or 12 bit version. Second, change pix_fmt, input-depth and output-depth as required. Finally, you MUST provide a dolby-vision-profile level for it to read the RPU, as well as enable VBV by providing vbv-bufsize and vbv-maxrate (read the x265 command line options to see what settings are best for you).

ffmpeg -i GlassBlowing.mp4 -f yuv4mpegpipe -strict -1 -pix_fmt yuv420p10le - | x265-10b - --input-depth 10 --output-depth 10 --y4m --preset veryfast --crf 22 --master-display "G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50)" --max-cll "0,0" --colormatrix bt2020nc --colorprim bt2020 --transfer smpte2084 --dolby-vision-rpu glass.rpu --dolby-vision-profile 8.1 --vbv-bufsize 20000 --vbv-maxrate 20000 glass_dv.hevc

Once the glass_dv.hevc file is created, you will need to use tsMuxeR (this version is recommended by some, and the newest nightly is linked below) to put it back into a container as well as add the audio back.

Currently, trying to add the glass_dv.hevc back into a container with FFmpeg or MP4Box will break the Dolby Vision, hence the need for the special muxer. Update: FFmpeg 5.0+ supports remuxing with Dolby Vision. You can do this with the "Copy" function in FastFlix, or by hand with ffmpeg on the command line using -c:v copy for the video track. Make sure you're using at minimum 5.0 or it won't work! This will result in video that has BL+RPU Dolby Vision.

It is also possible to only convert the audio and change around the streams with remuxers. For example, tsMuxeR (nightly build, not the default download) is popular for taking mkv files that most TVs won't recognize HDR in, and remuxing them into ts files so they do. If you also have TrueHD sound tracks, you may need to use eac3to first to break them into TrueHD and AC3 core tracks before muxing.

Easily Viewing HDR / Video information

Another helpful program to quickly view what type of HDR a video has is MediaInfo. For example here is the original Dolby Vision Glass Blowing video info (some trimmed):

Video
ID                                       : 1
Format                                   : HEVC
Format/Info                              : High Efficiency Video Coding
Format profile                           : Main 10@L5.1@Main
HDR format                               : Dolby Vision, Version 1.0, dvhe.08.09, BL+RPU, HDR10 compatible / SMPTE ST 2086, HDR10 compatible
Codec ID                                 : hev1
Color space                              : YUV
Chroma subsampling                       : 4:2:0 (Type 2)
Bit depth                                : 10 bits
Color range                              : Limited
Color primaries                          : BT.2020
Transfer characteristics                 : PQ
Matrix coefficients                      : BT.2020 non-constant
Mastering display color primaries        : BT.2020
Mastering display luminance              : min: 0.0050 cd/m2, max: 4000 cd/m2
Codec configuration box                  : hvcC+dvvC

And here it is after conversion:

Video
ID                                       : 1
Format                                   : HEVC
Format/Info                              : High Efficiency Video Coding
Format profile                           : Main 10@L5.1@Main
HDR format                               : SMPTE ST 2086, HDR10 compatible
Codec ID                                 : V_MPEGH/ISO/HEVC
Color space                              : YUV
Chroma subsampling                       : 4:2:0
Bit depth                                : 10 bits
Color range                              : Limited
Color primaries                          : BT.2020
Transfer characteristics                 : PQ
Matrix coefficients                      : BT.2020 non-constant
Mastering display color primaries        : BT.2020
Mastering display luminance              : min: 0.0050 cd/m2, max: 4000 cd/m2

Notice we have lost the Dolby Vision (BL+RPU) information, but at least we retained the HDR10 data, which Handbrake couldn't do!

That’s a wrap!

Hope you found this information useful, and please feel free to leave a comment for feedback, suggestions or questions!

Until next time, stay safe and love each other!