I talked about this before in my Handbrake encoding settings post, but there is a fundamental flaw in using Handbrake for HDR 10-bit video: it only has an 8-bit internal pipeline! So while you still get a 10-bit x265 video, you are losing the HDR10 data.
Thankfully, you can avoid that and save the HDR by using FastFlix instead, or by using FFmpeg directly. (To learn about extracting and converting HDR10+ or saving Dolby Vision by remuxing, skip ahead.) If you want to do the work yourself, here are the two most basic commands you need to save your juicy HDR10 data. This guide will use the Dolby Vision (profile 8.1) Glass Blowing demo.
Extract the Mastering Display metadata
First, we need to use FFprobe to extract the Mastering Display and Content Light Level metadata. We are going to tell it to read only the first frame's metadata (-read_intervals "%+#1") for the file GlassBlowingUHD.mp4:
ffprobe -hide_banner -loglevel warning -select_streams v -print_format json -show_frames -read_intervals "%+#1" -show_entries "frame=color_space,color_primaries,color_transfer,side_data_list,pix_fmt" -i GlassBlowingUHD.mp4
A quick breakdown of what we are sending ffprobe:

-hide_banner -loglevel warning
    Don't display what we don't need
-select_streams v
    We only want the details for the video (v) stream
-print_format json
    Make it easier to parse
-read_intervals "%+#1"
    Only grab data from the first frame
-show_entries ...
    Pick only the relevant data we want
-i GlassBlowingUHD.mp4
    The input (-i) is our Dolby Vision demo file
That will output something like this:
{
  "frames": [
    {
      "pix_fmt": "yuv420p10le",
      "color_space": "bt2020nc",
      "color_primaries": "bt2020",
      "color_transfer": "smpte2084",
      "side_data_list": [
        {
          "side_data_type": "Mastering display metadata",
          "red_x": "35400/50000",
          "red_y": "14600/50000",
          "green_x": "8500/50000",
          "green_y": "39850/50000",
          "blue_x": "6550/50000",
          "blue_y": "2300/50000",
          "white_point_x": "15635/50000",
          "white_point_y": "16450/50000",
          "min_luminance": "50/10000",
          "max_luminance": "40000000/10000"
        },
        {
          "side_data_type": "Content light level metadata",
          "max_content": 0,
          "max_average": 0
        }
      ]
    }
  ]
}
I chose to output it as JSON via the -print_format json option to make it more machine parsable, but you can omit that if you just want plain text.
We are now going to take all that data and break it down into groups of <color abbreviation>(<x>, <y>), dropping the /50000 denominator in most cases*. For example, we combine red_x "35400/50000" and red_y "14600/50000" into R(35400,14600).

G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50)
*If your color data is not divided by /50000, or your luminance not divided by /10000, because the fractions have been simplified, you will have to expand them back out to the full ratio. For example, if yours lists 'red_x': '17/25', 'red_y': '8/25', you divide 50000 by the current denominator (25) to get the scale factor (2000), then multiply that by each numerator (17 and 8) to get the proper R(34000,16000).
This data, as well as the Content light level <max_content>,<max_average> (in our case 0,0), will be fed into the encoder command options.
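If you prefer to script this step, here is a small Python sketch (the helper names are my own invention, not part of FFmpeg or x265) that expands any simplified ratios back to their full denominators and assembles the master-display string in the order x265 expects:

```python
# A minimal sketch of turning ffprobe's side data into the x265
# master-display string. Helper names are illustrative only.
from fractions import Fraction

def expand(ratio: str, denominator: int) -> int:
    """Scale a possibly simplified ratio like '17/25' back to the fixed
    denominator x265 expects (50000 for chromaticities, 10000 for luminance)."""
    return int(Fraction(ratio) * denominator)

def master_display(side_data: dict) -> str:
    """Build the master-display string in G/B/R/WP/L order.
    Note that L() takes max luminance first, then min."""
    def c(key):  # chromaticity coordinate
        return expand(side_data[key], 50000)
    def l(key):  # luminance value
        return expand(side_data[key], 10000)
    return (
        f"G({c('green_x')},{c('green_y')})"
        f"B({c('blue_x')},{c('blue_y')})"
        f"R({c('red_x')},{c('red_y')})"
        f"WP({c('white_point_x')},{c('white_point_y')})"
        f"L({l('max_luminance')},{l('min_luminance')})"
    )

# The values ffprobe reported for the Glass Blowing demo:
side_data = {
    "red_x": "35400/50000", "red_y": "14600/50000",
    "green_x": "8500/50000", "green_y": "39850/50000",
    "blue_x": "6550/50000", "blue_y": "2300/50000",
    "white_point_x": "15635/50000", "white_point_y": "16450/50000",
    "min_luminance": "50/10000", "max_luminance": "40000000/10000",
}
print(master_display(side_data))
# G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50)
```

Using Fraction keeps the arithmetic exact, so simplified ratios like 17/25 expand correctly to 34000 without floating-point rounding.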
Convert the video
This command converts only the video, keeping the HDR10 intact. We have to pass these arguments not to FFmpeg itself, but directly to the x265 encoder via the -x265-params option. (If you're not familiar with FFmpeg, don't fret: FastFlix, which I talk about later, will do the work for you!)
ffmpeg -i GlassBlowingUHD.mp4 -map 0 -c:v libx265 -x265-params hdr-opt=1:repeat-headers=1:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50):max-cll=0,0 -crf 20 -preset veryfast -pix_fmt yuv420p10le GlassBlowingConverted.mkv
Let's break down what we are throwing into the x265-params:

hdr-opt=1
    We are telling it yes, we will be using HDR
repeat-headers=1
    We want these headers on every frame as required
colorprim, transfer and colormatrix
    The same values ffprobe listed
master-display
    This is where we add our color string from above
max-cll
    Content light level data, in our case 0,0
During a conversion like this, when a Dolby Vision layer exists, you will see a lot of messages like [hevc @ 000001f93ece2e00] Skipping NAL unit 62, because there is an entire layer that FFmpeg does not yet know how to decode.
For the quality of the conversion, I set it to -crf 20 with -preset veryfast to convert quickly without a lot of quality loss. I dig deeper into how FFmpeg handles crf vs preset with regard to quality below.
All sound, data, and other streams will be copied over thanks to the -map 0 option, which is a blanket statement of "copy everything from the first (0-indexed) input".
That is really all you need to know for the basics of how to encode your video and save the HDR10 data!
FFmpeg conversion settings
I covered this a bit in the other post, but I wanted to run the full gauntlet of presets and crf values one might feel inclined to use. I compared each encoding with the original using VMAF and SSIM calculations over an 11-second clip. I created over 100 conversions for this single chart, so it is a little cramped:

First takeaways are that there is no real difference between veryslow and slower, nor between veryfast and faster, as their lines are drawn on top of each other. The same is true for both VMAF and SSIM scores.
Second, no one in their right mind would ever keep a recording stored by using ultrafast
. That is purely for real time streaming use.
Now for VMAF scores, being 5~6 points away from the source is visually distinguishable when watching; in other words, it will have very noticeable artifacts. Personally I can tell on my screen with just a single-digit difference, and some people are even more sensitive, so this is by no means an exact tell-all. At minimum, let's zoom in a bit and get rid of anything that will produce video with very noticeable artifacts.

From these charts, it seems clear there is no reason whatsoever to use anything other than slow, which I personally do for anything I am encoding. However, slow lives up to its name.
Encoding Speed and Bitrate
I had to trim veryslow and slower off the main chart to even be able to see the rest, and slow is still almost three times slower than medium. All the charts contain the same data, just with some of the longer-running presets removed from each to better see details of the faster presets.
Please note, the first three crf datapoints are a little dirty, as the system was in use for the first three tests. However, there is enough clean data to see how they compare down the line.
To get a clearer picture of how long each preset takes, I will exclude those first three times and average the remaining data. The data is then compared against the medium (default) preset and the original clip length of eleven seconds.
Preset    | Time (s) | vs "medium" | vs clip length (11s)
----------|----------|-------------|---------------------
ultrafast | 11.204   | 3.370x      | 0.982x
superfast | 12.175   | 3.101x      | 0.903x
veryfast  | 19.139   | 1.973x      | 0.575x
faster    | 19.169   | 1.970x      | 0.574x
fast      | 22.792   | 1.657x      | 0.482x
medium    | 37.764   | 1.000x      | 0.291x
slow      | 97.755   | 0.386x      | 0.112x
slower    | 315.900  | 0.120x      | 0.035x
veryslow  | 574.580  | 0.066x      | 0.019x
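The two "vs" columns are simple ratios of these averaged times. A quick sketch of the arithmetic (values copied from the table; variable names are my own, and last digits may differ slightly since the published ratios were presumably computed from unrounded measurements):

```python
# Reproducing the ratio columns of the preset timing table.
medium_time = 37.764  # averaged encode time for the default preset, seconds
clip_length = 11.0    # duration of the source clip, seconds

preset_times = {
    "ultrafast": 11.204, "superfast": 12.175, "veryfast": 19.139,
    "faster": 19.169, "fast": 22.792, "medium": 37.764,
    "slow": 97.755, "slower": 315.900, "veryslow": 574.580,
}

for name, seconds in preset_times.items():
    vs_medium = medium_time / seconds  # >1x means faster than medium
    vs_clip = clip_length / seconds    # >=1x would mean real-time or better
    print(f"{name:>9}: {vs_medium:.3f}x vs medium, {vs_clip:.3f}x vs clip length")
```

The "vs clip length" column is the interesting one for real-time use: any value below 1.000x means the encode runs slower than playback.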
What is a little scary here is that even with the ultrafast preset we are not able to get realtime conversion, and these tests were run on a fairly high-powered system wielding an i9-9900K! While it might be clear from the crf graph that slow is the winner, unless you have a beefy computer it may be a non-option.
"Use the slowest preset that you have patience for" (FFmpeg encoding guide)
Also, unlike VBR encoding, the average bitrate and file size using crf will differ wildly based on the source material. This next chart just shows off the basic curve effect you will see; it cannot be compared to what you may expect to see with your own file.

The two big jumps are between slow and medium, as well as between veryfast and superfast. That is interesting because while slow and medium are quite far apart on the VMAF comparison, veryfast and superfast are not. I expected a much larger dip from superfast to ultrafast, but was wrong.
FastFlix, doing the heavy lifting for you!
I have written a GUI program, FastFlix, around FFmpeg and other tools to easily convert videos to HEVC, AV1, and other formats. While I won't promise it will provide everything you are looking for, it will do the work of extracting the HDR10 details of a video and passing them into an FFmpeg command for you. FastFlix can even handle HDR10+ metadata! It also has a panel that shows you exactly the command(s) it is about to run, so you can copy and modify them to your heart's content!

If you have any problems with it please help by raising an issue!
Extracting and encoding with HDR10+ metadata
First off, this is not for the faint of heart. Thankfully, the newest FFmpeg builds for Windows now support HDR10+ metadata files by default, so this process has become a lot easier. Here is a quick overview of how to do it. Also, a huge shout out to "Frank" in the comments below for pointing this tool out to me!
You will have to download a copy of hdr10plus_parser from quietvoid's repo.
Check to make sure your video has HDR10+ information it can read.
ffmpeg -loglevel panic -i input.mkv -c:v copy -vbsf hevc_mp4toannexb -f hevc - | hdr10plus_parser --verify -
It should produce a nice message stating there is HDR10+ metadata.
Parsing HEVC file for dynamic metadata… Dynamic HDR10+ metadata detected.
Once you have confirmed it exists, extract it to a JSON file:
ffmpeg -i input.mkv -c:v copy -vbsf hevc_mp4toannexb -f hevc - | hdr10plus_parser -o metadata.json -
Option 1: Using the newest FFmpeg
You just need to pass the metadata file via the dhdr10-info option in the x265-params. And don't forget to add your audio and subtitle settings!
ffmpeg.exe -i input -c:v libx265 -pix_fmt yuv420p10le -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1016,115:hdr10=1:dhdr10-info=metadata.json" -crf 20 -preset medium "output.mkv"
Option 2: (original article) Using custom compiled x265
Now for the painful part you'll have to work on a bit yourself. To use the metadata file, you will need a custom-compiled x265 with the cmake option "HDR10_PLUS". Otherwise, when you try to convert with it, you'll see a message like "x265 [warning]: --dhdr10-info disabled. Enable HDR10_PLUS in cmake." in the output, but it will still encode, just without HDR10+ support.
Once that is compiled, you will have to use x265 as part of your conversion pipeline. Use x265 to convert your video with the HDR10+ metadata and other details you discovered earlier.
ffmpeg -loglevel panic -y -i input.mkv -to 30.0 -f yuv4mpegpipe -strict -1 - | x265 - --y4m --crf=20 --repeat-headers --hdr10 --colorprim=bt2020 --transfer=smpte2084 --colormatrix=bt2020nc --master-display="G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)" --max-cll="1016,115" -D 10 --dhdr10-info=metadata.json output.hevc
Then you will have to use that output.hevc file with ffmpeg to combine it with your audio conversions and subtitles and so on to repackage it.
Saving Dolby Vision (or HDR10+)
Thanks to Jesse Robinson in the comments, it seems it may now be possible to extract and convert a video with Dolby Vision without the original RPU file. Like the original HDR10+ section, you will need the x265 executable directly, as FFmpeg does not support that option at all.

Check out quietvoid's dovi_tool to get started. Note I say "saving" and not "converting": unless you have the original RPU file for the DV to pass to x265, you're out of luck as of now and cannot convert the video. The x265 encoder is able to take an RPU file and create a Dolby Vision ready movie, but we don't have anything to extract that (yet).
It is also possible to convert only the audio and rearrange the streams with remuxers. For example, tsMuxeR (nightly build, not the default download) is popular for taking mkv files that most TVs won't recognize HDR in and remuxing them into ts files so they do. If you also have TrueHD sound tracks, you may need to use eac3to first to break them into TrueHD and AC3 Core tracks before muxing.
Easily Viewing HDR / Video information
Another helpful program to quickly view what type of HDR a video has is MediaInfo. For example here is the original Dolby Vision Glass Blowing video info (some trimmed):
Video
ID : 1
Format : HEVC
Format/Info : High Efficiency Video Coding
Format profile : Main 10@L5.1@Main
HDR format : Dolby Vision, Version 1.0, dvhe.08.09, BL+RPU, HDR10 compatible / SMPTE ST 2086, HDR10 compatible
Codec ID : hev1
Color space : YUV
Chroma subsampling : 4:2:0 (Type 2)
Bit depth : 10 bits
Color range : Limited
Color primaries : BT.2020
Transfer characteristics : PQ
Matrix coefficients : BT.2020 non-constant
Mastering display color primaries : BT.2020
Mastering display luminance : min: 0.0050 cd/m2, max: 4000 cd/m2
Codec configuration box : hvcC+dvvC
And here it is after conversion:
Video
ID : 1
Format : HEVC
Format/Info : High Efficiency Video Coding
Format profile : Main 10@L5.1@Main
HDR format : SMPTE ST 2086, HDR10 compatible
Codec ID : V_MPEGH/ISO/HEVC
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 10 bits
Color range : Limited
Color primaries : BT.2020
Transfer characteristics : PQ
Matrix coefficients : BT.2020 non-constant
Mastering display color primaries : BT.2020
Mastering display luminance : min: 0.0050 cd/m2, max: 4000 cd/m2
Notice we have lost the Dolby Vision BL+RPU information, but at least we retained the HDR10 data, which Handbrake can't do!
That’s a wrap!
Hope you found this information useful, and please feel free to leave a comment for feedback, suggestions or questions!
Until next time, stay safe and love each other!
Your guide is excellent. I felt like the file-size difference between slow and fast was never explained to me, so I didn't understand the quality trade-off. Ever since, I've been doing Slow at RF20-23 for archival purposes and mobile rips with little compromise. I'm doing a project where I'm trying to batch upload some mobile rips, so I'm finding that sweet spot. I find that some frames look a bit poor in dark and specially lit scenes, especially at 720p with RF23 Slow. I was using RF19 Fast before in general, though, and slow is giving me great quality output for the file size. In general the guide is very useful for anything 1080p+. Slow RF20 is a bit too big for a mobile encode but great for archiving anime, as for well-lit scenes I can't tell the difference in an A/B test even at RF23 in most 1080p scenes, 18 inches from my 22″ monitor. Anime is a bit trickier and usually needs a lower RF, so it's very promising overall.
Thanks for the feedback and more info!
I'm honestly surprised at the bad details at RF23 slow with lower resolutions, thanks for that heads up. Maybe someday I'll get around to doing a test like this across more resolutions, though I might recommend using x264 for anything non-HDR, as the encode time isn't really worth it IMO.
You’re also right that a sample size of a single source isn’t super telling for stuff like anime vs action vs documentary. I find myself using a range of around 16~22CRF depending on source. 16~18 for anime, 20 for most things, then 22 for old / film gray heavy movies (looking at you Ron Howard).
Curious to hear your thoughts on StaxRip or any other GUI-based program besides Handbrake. For some of us, the CLI commands involved here are a bit much to tackle. Have you compared StaxRip or tried it for 10-bit encoding? I think it has a 10-bit pipeline and provides a GUI to make things a little easier for those of us that just can't get used to CLI-based tools. Thanks for the guide either way.
I haven’t used staxrip so I can’t verify, but it looks to be a wrapper around FFmpeg (and other tools) like FastFlix does, so if they internally copy the HDR details they may be doing the same thing.
There is a way to extract hdr10+ metadata with hdr10plus_parser.exe by quietvoid from Github.
Then use ffmpeg to extract:
ffmpeg64.exe -i "input.mkv" -c:v copy -vbsf hevc_mp4toannexb -f hevc - | hdr10plus_parser.exe -o metadata.json -
You can reencode with x265 or with special compiled ffmpeg (HDR10_PLUS enabled in cmake)
That could be a game changer, I will have to try that out and update the article, thank you so much!
I used the ffmpeg.exe from StaxRip.
You can find it (and me) on doom9.
Greetz from Germany.
Thanks for the great guide. I’ve been playing with ffmpeg CLI, StaxRip, and FastFlix and am thankful for the simple GUI offerings. I’ve started to test (but am struggling) with VMAF, but still need to work out some of the kinks in compiling windows.
I’ve been encoding my UHD movie library to x265 on an i7-10700 w 64gb of ram, testing RF18 and RF20, fast, medium, and slow presets. To be honest, I don’t see very much difference between them, even on a 110″ screen. The difference in filesize between them is insignificant relative to the total amount of storage needed for 200 UHD movies.
Some of the medium and slow encodes take between 18 and 30 hours. My most recent test, a UHD video, took 5 hours at RF18 Fast (9.8 GB), 5 hours with RF20 Fast (8.3 GB), and 16.5 hours at RF20 Slow preset (9.3 GB). I tested another UHD at RF18 Medium, and it took 10.5 hours, RF18 Fast took 7 hours. RF 18 Slow gave me an ETA of 36 hours, so I aborted that one.
I’m basically stuck in deciding which is the best way to go. RF18 Slow seems to be a bit unreasonable for my purposes. Is there much of a real-world difference between RF18 Fast and RF20 Slow? They seem to give me pretty similar file sizes. Is Medium a happy medium or more of a compromise to both?
Glad to be of any help! Despite the name “Fast” still produces high quality encodings overall. I would always avoid “ultrafast” and “superfast” as they are meant for real-time operations, but anything above that is fair game for storage. Especially when talking about RF20 or above, most videos (*in my own experience) would be hard pressed to see most visual differences. There might be some high motion scenes or particularly in animated videos have a little “blur” effect, but not to the extent you could tell without zooming in.
“Medium” I have found to be a pretty even split between “Fast” and “Slow” in both time and VMAF scores (for a single test video). However, it all comes down to what looks good to you. I doubt most people, including myself, could tell a difference from the couch between RF 18 SLOW and RF 20 FAST. I simply do mine slower for peace of mind more than anything else.
For building FFmpeg for Windows, I personally have done it via a Linux system (not WSL as that has issues), i.e. a virtual machine using https://github.com/rdp/ffmpeg-windows-build-helpers
Good luck with your endeavors, and thanks for commenting!
I have compiled ffmpeg 64 bit with media-autobuild_suite (Win 10) from Github – Zeranoe compatible – and it works now perfectly. HDR10+ and VMAF are enabled.
So you can then convert with the integrated x265 module.
So, interesting result with one movie. I tested with RF20 Fast, RF18 Fast, and RF 18 Medium. The original 4k UHD movie is 75GB.
RF 18 Medium: 27 hours, 95 GB output
RF 18 Fast: 12 hours, 87 GB output
RF 20 Fast: 11.5 hours, 63 GB output.
Other than the RF and preset, all parameters were the same.
"C:\StaxRip\Apps\Encoders\x265\x265.exe" --crf 18 --output-depth 10 --master-display "G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,50)" --hdr10 --colorprim bt2020 --colormatrix bt2020nc --transfer smpte2084 --range limited --max-cll "1179,501" --repeat-headers --hrd --aud --frames 252890 --y4m --output "K:\4k UHD\movie.mkv"
This is the only movie, out of about a dozen I’ve tested, where the output was larger than the input. What happened?
The really terse answer is that it has to do with how the QP is calculated based on the CRF may be too high for that particular case. Keep in mind for x265 the CRFs can actually be a lot lower than x264 for same quality. From FFmpeg guide “The default is 28, and it should visually correspond to libx264 video at CRF 23, but result in about half the file size. CRF works just like in x264, so choose the highest value that provides an acceptable quality.”
So it sounds like for that video you will have to drop down to 22 or 24. Can always calculate VMAF to tell a benchmarked difference, but in the end it’s all up to your eyes.
Just I’d let you know that I used FastFlix to generate the command, but updated it to use the NVidia gpu in my system. It reduced the 56GB video down to 6GB, but the artifacts were horrible. I’m running a few more tests to see if there is some way to improve the output from the GPU or if I really do need to stick with CPU only.
On 1080 material, the output from the GPU has been quite good. This is my first time trying 4K material with it.
The options for the nvidia encoder are different than x265, so will take quite a bit of changes. It also does not support HDR10 metadata from what I understand.
View the options with: “ffmpeg -h encoder=hevc_nvenc”
I have done some light playing with it with settings like “-qp 20 -preset slow -tune hq -profile main10” though it may be better to do “-rc vbr_hq” and specify a bitrate instead. I personally would rather encoder at a faster preset with x265 (anything above “superfast”) than hardware accelerated encoding for storage of videos.
After a bit of experimenting with 1080 encoding, I settled on Nvidia crf 18 slow. I sampled 10 movies, CPU vs. GPU crf 18 preset slow. Playing the samples back on a 4k display, I can’t tell enough of a difference to identify which is which.
4K material looked awful using the same settings (CRF 18, preset slow, and adding profile main10) on the GPU. 4K material looked pretty good with those settings on the CPU. Using the settings produced by FastFlix, the 4K sources looked virtually identical to the original.
Thanks to your research, I now have personal “presets” that work for 1080 and 4K/10Bit/HDR material.
Just did my first successful HDR 10+ conversion, but when I did the first one it only had 8bit depth. I had to use “-D10” to get 10 bit.
Was that with x265 directly, I assume? Was the source 8-bit? For FFmpeg, to convert to a different pixel format, specifically 10-bit, you can use -pix_fmt yuv420p10le
Is there a progress bar or anything I can see that’ll show how long the encode will take?
For FastFlix I have adding an ETA timer in the backlog. For FFmpeg itself, it displays a line like "frame= 275 fps= 61 q=38.1 Lsize= 374kB time=00:00:04.69 bitrate= 653.7kbits/s speed=0.309x" while encoding. You can take the duration of the entire video and divide it by the "speed". For example, if you have a one-hour video (60 minutes) and the speed is 0.309, it would equal 60 / 0.309 == 194 minutes for the encoding.
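That arithmetic can be wrapped in a tiny helper (my own naming, just illustrating the math; ffmpeg's "speed=0.309x" means 0.309 seconds of video are encoded per wall-clock second):

```python
def encode_eta_minutes(video_minutes: float, speed: float) -> float:
    """Estimate total encode time from ffmpeg's reported 'speed=' value:
    total wall-clock time = video duration / encode speed."""
    return video_minutes / speed

# A 60-minute video encoding at speed=0.309x:
print(round(encode_eta_minutes(60, 0.309)))  # 194 (minutes)
```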
Hi Chris, so do I understand you right that for x265, setting the quality to 20 (2160p) isn't necessary for 4K quality and that it can be set lower? I have recently seen an increase in file size of ~100% since going from v2 to v3 of FastFlix, but haven't had a chance to check if this is just coincidence and it's actually the film in question.
It is totally fine to adjust it as needed, yes; those are just rough guidelines. It really is source and preset dependent. I usually encode stuff with the "medium" preset these days, as I don't have time for "slow". Then with older films that contain a lot of film grain, I will usually set it down to 22 before even trying the encode, sometimes down to 24 for extreme cases.
CRF is just a programmatic guess at quality. So if the film has a lot of tiny changes always happening (i.e. film grain or constant action flix) the encoder will think it’s failing at keeping quality and keep pumping more and more bitrate needlessly at it. Whereas for stuff like animation that doesn’t change large portions of the screen, it’s going “I got this!” and may not put enough work into making sure the few lines that move are crisp.
Hello, thank you for all these explanations. Encoding in x265 and HDR are new to me, so I still don't exactly understand what the problem is. If I have a 4K UHD HDR video, why can't I simply re-encode with something like this:
ffmpeg -i my_file.mkv -map 0 -c copy -c:v libx265 -preset slow -crf 18 output.mkv
If I understand right, the HDR information is lost. At the moment, I don't have a device with HDR support, so I can't check if the HDR is still present in the video after re-encoding. So I won't simply lose some metadata, I will also lose some quality in the video. Right?
My second question: if I have Dolby Vision or HDR10+ in the video, is it better to avoid re-encoding? Otherwise, that information will be lost. In the future it may be possible, but for now it's better to keep the video unchanged.
Thank you for your help!
You can use a tool like mediainfo to check HDR details. But you are correct, without manually passing those details along, FFmpeg does not currently copy them when re-encoding, check out FastFlix as it will copy those details for you. You are also correct that anytime you re-encode something you lose quality. The only reasons to re-encode videos is either to save on storage space or to support a specific device’s format limitations.
Right now HDR10+ is actually possible to save! For Dolby Vision, there is little chance of it ever being re-encodable unless you are the original creator of the video with the RPU file. (Due to how it's royalty based, that is not likely to change.)
Thank you
what is the problem?
Error while filtering: Cannot allocate memory
Failed to inject frame into filter network: Cannot allocate memory
Error while processing the decoded data for stream #0:0
x265 [info]: frame I: 1, Avg QP:18.39 kb/s: 94.75
x265 [info]: frame P: 2, Avg QP:19.50 kb/s: 18.32
x265 [info]: frame B: 4, Avg QP:22.31 kb/s: 16.35
x265 [info]: Weighted P-Frames: Y:0.0% UV:0.0%
x265 [info]: consecutive B-frames: 25.0% 0.0% 0.0% 0.0% 75.0%
encoded 7 frames in 23.06s (0.30 fps), 28.11 kb/s, Avg QP:20.95
Conversion failed!
Ran out of memory. I am guessing you have some filters applied? I see that most often with the overlay filter. What is the full command you are running (you can remove file names)?

Thanks so much for the info. I have been having washed-out colors in all the videos that I encoded with FFmpeg, so I was forced to use Handbrake for everything. By copying the color group data into the encode string, like you showed above, I can finally get the correct color. My only problem now is that I would like to use the hevc_nvenc encoder, but it is not accepting the x265-params color data. Is there another way to pass the correct color data to the NVenc encoder?
The only way I have heard of is with another tool that adds the HDR10 data back after the video encoding. It would be a three-step process: 1. Convert just the video to a .h265 / .hevc file. 2. Use NV HDR Patcher to add the master-display / CLL data back in. 3. Combine that file with the audio / subtitles you want to transfer or convert from the original source.
I should add that in most cases it’s just better to do libx265 at faster preset than hardware encoding if you’re storing the video, as HW Acceleration is lower quality / higher bitrate primarily meant for streaming.
Thanks, I think I will. I am ripping my movie collection and storing them on a media server. For a 4K UHD movie encoded at 9MB/s, at slow, libx265 only takes about 3 hrs, compared to Handbrake, which was taking a little over 2 days. But hevc_nvenc was taking about an hour, albeit with washed out colors. I can live with 3 hrs if I am getting better quality for my movie library, especially after watching Handbrake take multiple days for 4K and about 24 hrs for a 7MB/s 1080p.
Actually, I was wrong about the encode time with libx265. I thought that the counter in the DOS box was the elapsed time. It is actually the elapsed time of the movie that has been encoded. If it continues at this pace, it should take around 12 hours to finish the encode.
I was super shocked at the large time difference, so that sounds much better! From some quick googling it looks like NVENC slow is about the same as x265 faster https://www.reddit.com/r/Twitch/comments/c8ec2h/guide_x264_encoding_is_still_the_best_slow_isnt/ So even taking it down to x265 medium should be about 3x faster than slow, and still produce higher quality / lower file size than hardware.
It is difficult to find information about the picture quality of the newer Nvidia cards with the Turing encoder and HEVC, especially NVENC vs. x265. The few comparisons I found suggest that it may be not only (much!) faster but even deliver better quality! I am not talking about the older Pascal or Maxwell encoders. Did you do your tests with a newer card? It would be great to have support for this! The new iPhones can shoot 4K HDR…
I know that Turing is a lot better, and possibly even on par with x265 slow in bitrate mode. However, the two biggest problems I have are that it lacks a CRF mode and doesn't currently support HDR10 through FFmpeg. For anything near real-time I would suggest a hardware encoder; for storage I will stick with software encoders myself.
I do not have that new of a card to test with personally. If anyone feels like providing me hardware I would love to do some analysis between the various H.265 software and hardware options, but I doubt that will ever happen. This site and my projects cost me money to maintain, as I don’t want to profitize them, so I can only really test the software side of things.
thank you for your prompt, kind and detailed reply! Also for your work and superb unique transcoder tool. Yes, all three restrictions seem to be a game-killer for HDR turing hw-transcoding 😦 The best I can offer is to make some test runs with my turing gfx if it helped you and you were interested. Best regards.
I just got my gtx 1650 super and did a test run on the video “crowd run”. maybe interested.
x265 CRF 22 gives a VMAF score of 98.6 @ 22.3 MB size
nvenc HW 2-pass at 0.15 bits/pixel gives a VMAF score of 96.1 @ 20.1 MB size
Great article, thank you! I didn’t suspect that the whole HDR encoding problem is so complex before I tried to reencode HDR10+ movie with SVT-HEVC in ffmpeg (manual patching required). Do you know if it’s technically possible to mux HDR metadata with encoded video stream? I couldn’t find any tools for this task, but if it’s possible in theory then creating such a tool would open a possibility to use any encoder with HDR material.
I do know of https://github.com/SK-Hardwired/nv_hevc_hdr_patcher which should be able to do that to a raw stream. I have not gotten around to testing it myself, so let me know if it works!
Does have the warning: “NOTE: This may not work well with HEVC streams made by x265 (x265 lib) with REPEATING NAL and SEI units! Output stream most probably could be corrupted!”
Have you encountered any issues with 10-bit files and FFprobe not showing the side_data_list information? For 10-bit files (including the linked glass blowing files) it's not displaying for me. I tried a few FFprobe builds without success. It works fine for 8-bit videos though.
I have not come across that issue myself, extra worrying that it isn’t a specific ffprobe version. Are you sure you tried the right demo file (from the middle row, profile 8.1), the two top ones for iOS will not list that info.
Hm you are correct I had the wrong video and the 8.1 video does include the metadata. Is there a reason that the other videos don’t report that information? Are they simply missing that metadata and therefore FFProbe can’t report it?
I’m attempting to look into this issue presented here for a script I develop where someone thought it was a 10 bit issue but this sample file seems to prove otherwise
https://github.com/mdhiggins/sickbeard_mp4_automator/issues/1377
Your article here was incredibly helpful for pulling this data and just wanted to say thank you for putting it together
Glad it has helped!
Dolby Vision profile levels are confusing, as they come with different information. Profile 5 only comes with Dolby Vision, whereas 8.1 comes with DV + HDR10 information. There are also 8.2 and 8.4, which do not: https://dolby.force.com/professionalsupport/s/article/What-is-Dolby-Vision-Profile?language=en_US.
The file in that issue reports a “bt.470bg” color space with 4:4:4 chroma (yuvj444p), so it does not have HDR10 information (which is for bt.2020 + yuv420p10le). 4:4:4 chroma is usually reserved for content developers, so it is odd to see it outside that realm, and I do not personally have experience converting that to its HDR10 equivalent.
4:4:4 is also what graphics cards / PCs output for monitors. So my guess is someone was doing a raw copy from that, thinking they were getting 4K HDR without really understanding video encoding.
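That kind of check can be sketched in a few lines, using the field names from the ffprobe output shown earlier in the article (this is only a heuristic classifier, not an exhaustive one):

```python
# Sketch: classify a frame's metadata (as returned by the ffprobe
# command in the article) as HDR10-capable, HLG, or SDR-like.
def classify(frame: dict) -> str:
    primaries = frame.get("color_primaries")
    transfer = frame.get("color_transfer")
    pix_fmt = frame.get("pix_fmt", "")
    if primaries == "bt2020" and transfer == "smpte2084" and "10le" in pix_fmt:
        return "HDR10-capable (PQ, bt.2020, 10-bit)"
    if transfer == "arib-std-b67":
        return "HLG"
    return "SDR-like (no HDR10 metadata expected)"

# The glass blowing demo's first frame:
print(classify({"color_primaries": "bt2020",
                "color_transfer": "smpte2084",
                "pix_fmt": "yuv420p10le"}))  # HDR10-capable (PQ, bt.2020, 10-bit)

# The file from the linked issue:
print(classify({"color_primaries": "bt470bg",
                "pix_fmt": "yuvj444p"}))  # SDR-like (no HDR10 metadata expected)
```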
Appreciate you taking a look, that’s a very helpful clarification
Also I noticed a small error in the article where you have the initial parameters listed in the “Extract the Mastering Display metadata” section as
G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(50,40000000)
Looks like you switched the min/max luminance here but have it in the correct order later in the article
Great catch, thank you! Updated
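For reference, here is a small sketch of how the fractions from the ffprobe output map onto the x265-style master-display string, with the luminance pair in L(max,min) order. Taking just the numerators works because ffprobe reports chromaticities over 50000 and luminance over 10000, which match x265's 0.00002 and 0.0001 cd/m² units:

```python
# Sketch: build the x265-style master-display string from the
# fractional values ffprobe reports. x265 expects L(max,min).
def master_display(sd: dict) -> str:
    n = lambda frac: int(frac.split("/")[0])  # keep only the numerator
    return ("G({},{})B({},{})R({},{})WP({},{})L({},{})".format(
        n(sd["green_x"]), n(sd["green_y"]),
        n(sd["blue_x"]), n(sd["blue_y"]),
        n(sd["red_x"]), n(sd["red_y"]),
        n(sd["white_point_x"]), n(sd["white_point_y"]),
        n(sd["max_luminance"]), n(sd["min_luminance"])))

side_data = {  # values from the article's ffprobe output
    "red_x": "35400/50000", "red_y": "14600/50000",
    "green_x": "8500/50000", "green_y": "39850/50000",
    "blue_x": "6550/50000", "blue_y": "2300/50000",
    "white_point_x": "15635/50000", "white_point_y": "16450/50000",
    "min_luminance": "50/10000", "max_luminance": "40000000/10000",
}
print(master_display(side_data))
# G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50)
```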
Same error on this other article
https://codecalamity.com/hdr-hdr10-hdr10-hlg-and-dolby-vision/
Great read btw
Good eyes! BTW, thought of a quick test to tell if a video is in bt.2020 color space. FFmpeg and VLC do not automatically downmap colors from 10 -> 8 bit when capturing pictures. So take a screenshot with VLC or FFmpeg (
ffmpeg -ss 60 -i GlassBlowing.mp4 -map 0:v -vframes 1 out.jpg
) and if it looks washed out, that means it’s using the expected bt.2020 color space. That trick does not tell you if it has HDR10 info, but it’s a quick litmus test to avoid issues like the one you just had. You’re also correct that MediaInfo is a great tool that will tell you specifics in a really nice format, love that thing.

I know that this is a free-time project of yours, but still I have a feature request without any expectations 😉 NVENC H.265 10-bit HDR support… Hybrid generates this line in this setup (using Rigaya’s NVEncC) if you would come to the idea to implement it…
NVEnc --y4m -i - --fps 23.976 --codec h265 --profile main10 --level auto --tier high --sar 1:1 --lookahead 16 --output-depth 10 --vbrhq 10000 --max-bitrate 240000 --gop-len 0 --ref 3 --bframes 5 --bref-mode middle --mv-precision Q-pel --preset quality --fullrange --colorprim bt2020 --transfer smpte2084 --colormatrix bt2020nc --max-cll 1838,277 --master-display G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(40000000,50) --cuda-schedule sync --output "C:\xxxxx"
That’s good to know NVEnc itself supports HDR10 static metadata. I am trying to stick to what’s available through ffmpeg at the moment (I have tested and at least the 10-bit color space transfer works) and have not seen a way to pass master display / CLL content through. The tracking issue for adding the nvenc_hevc encoder is here: https://github.com/cdgriffith/FastFlix/issues/109
Great guide. Thanks so much. The only problem I have is that ffprobe gives me an error on -read_intervals “%+#1”
(Win10 .bat file). Couldn’t find out why.
Try retyping the ” quotes; it may have copied them as fancy unicode quotes instead of standard ASCII ones.
No, that’s not it. I get:
Invalid interval start specification '#'
Error parsing read interval #0 '+#'
Failed to set value '+#' for option 'read_intervals': Invalid argument
Hrmm, the only thing I can think of is that the ffprobe version is possibly out of date? You can grab the latest from https://github.com/BtbN/FFmpeg-Builds/releases (ffmpeg-N-*-win64-gpl.zip is a safe bet)
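One other thing worth checking, since the error shows the `%` was eaten: in a Windows .bat file the `%` character is consumed by the interpreter, so `-read_intervals "%+#1"` may reach ffprobe as `"+#1"`; doubling it to `%%+#1` inside the .bat is the usual fix. A sketch of another workaround is to call ffprobe from Python with an argument list, which sidesteps shell parsing entirely (the input filename here is just the demo file from the article):

```python
import shutil
import subprocess

# Build the ffprobe argument list exactly as in the article. Passing a
# list to subprocess means no shell ever re-interprets "%" or quotes.
cmd = [
    "ffprobe", "-hide_banner", "-loglevel", "warning",
    "-select_streams", "v", "-print_format", "json",
    "-show_frames", "-read_intervals", "%+#1",
    "-show_entries",
    "frame=color_space,color_primaries,color_transfer,side_data_list,pix_fmt",
    "-i", "GlassBlowingUHD.mp4",
]

if shutil.which("ffprobe"):  # only run when ffprobe is actually installed
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```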
I’ve been using the tool below to create RPUs
https://github.com/quietvoid/dovi_tool/releases
The StaxRip-encoded file then needs to be run through MakeMKV to restore/fix the DV info.
Thanks for sharing, I will have to try that out and update the article!
Great article, saves me a lot of time researching optimal settings. My free antivirus software ‘Avira’ has a problem with a (probably) false positive (heuristic) with Fastflix. Uploaded it to virustotal to check with all AV solutions and 3 came back: https://www.virustotal.com/gui/file/b800297aa2fe161971c7f6d91f1564658ddf1179af0a61e7f82be2ad593a8bef/detection
Thanks for the heads up; I will probably have to submit my files to them to get them cleared up. I imagine it’s because I use PyInstaller to make them into executables, and because there are some bad actors that use it to create malicious stuff, it’s tripping on how it bundles everything up into an exe.
I don’t write any malicious code, and you can always run it directly from source if you want to stay safe in case of a DNS attack against GitHub itself, but I currently don’t have hashes or a GPG key set up to “prove” a certain download was built cleanly via the CI/CD, because of the enormous hassle.
Avast and F-secure false positives have both gotten fixed as of 4.0.4 https://www.virustotal.com/gui/file/b439003974c1acec63a1faf68ed1702d7a87ec340234cd7b4af6569b4699c61a/detection
Hi Chris,
You did a very good job here! Thanks. And the info about Handbrake’s 8-bit pipeline was helpful too.
I made some tests with HDR10 and HDR10+ material and it seems to work. Why not implement the metadata extraction for HDR10+ in the FastFlix GUI? That would make things a lot easier.
With Dolby Vision I’m making tests right now. BTW: what is meant by FEL, MEL, and RPU, and what do you do with them after extraction?
Had to look up the MEL vs FEL myself from https://avdisco.com/t/demystifying-dolby-vision-profile-levels-dolby-vision-levels-mel-fel/95 “An enhancement layer that carries a residual signal is = 0 (zero), that is, the decoder does not need to process the residual signal, is called minimum enhancement layer (MEL). If the residual signal is > 0, it is called full enhancement layer (FEL).”
RPU is the file generated by the extraction and is like the HDR10+ metadata.json file; you use it to re-encode with.
I also totally agree; a long-term goal of FastFlix is to integrate that tool into the UI as well. Right now it technically only supports FFmpeg/FFprobe fully, so adding other tools is a decent bit of work, but not impossible. It is nicely MIT licensed and already built for Windows, Mac, and Linux, so I might just bundle it in when it comes time.
Every time I try to import a plain HEVC file (a base layer with HDR10 in this case) and not a container with video and audio, FastFlix crashes…
Hi Jan, thanks for the report! Funny enough I had a similar issue when stress testing it with just an Image file. Fix is currently in
advanced-panel
branch and will be in the 4.1.0 release.

So I experimented a little bit with the HLG samples that are out there. Some say “HLG” as the transfer method. I know how to set this (arib-std-b67). But there are also videos where MediaInfo says there are two: “HLG / BT.2020 (10-bit)”. I don’t know what that means (some kind of fallback where both are included?) or what the x265 settings would be for these. Any ideas?
I honestly only came across the first from the samples from https://4kmedia.org/tag/hlg/. If you have a link to where I could find the second, please send it to me at chris@cdgriffith.com and I will take a closer look at it.
I think it’s from here: https://4kmedia.org/lg-cymatic-jazz-hdr-hlg-uhd-4k-demo/ but when I downloaded it, it was .ts, not .mp4 as displayed on that page.
I think it’s the file mentioned here: https://github.com/cdgriffith/FastFlix/issues/135
So maybe Mediainfo detects it wrong?
What’s interesting with that video is that the regular color transfer shows as
arib-std-b67
but if you look at an individual frame it lists
"color_transfer": "bt2020-10"
. Then after a conversion MediaInfo will instead say
Transfer characteristics: BT.2020 (10-bit) transfer_characteristics_Original: HLG
so it may be losing some of its true HLG-ness. (Honestly, that video does not play properly for me before encoding, but does after, so I am not sure what’s going on with it.)

I normally use Kodi for playback of my movies and would like to know how to go forward (step by step) with creating working transcoded DV MKVs. [removed] I’m unable to create transcoded videos with DV.
Demuxing and remuxing of the BL and EL+RPU works in practice, but never on transcoded streams!
The MKVs [sources] contain only a single stream with BL+EL+RPU and profile 7.06. Tests with mp4muxer always create containers with two streams, regardless of the muxing profile used! That looks like the structure of the .m2ts containers on the disc.
The other thing is that right now I only have a projector without HDR capabilities, so I can’t test the result! Tests will be done with Kodi 19 (beta) because of its included HDR capability.
I’m open to your thoughts!
[Edited by Moderator]
By the way, it is no longer true that HandBrake has an 8-bit pipeline. The newest snapshot build now has a 10-bit pipeline, which will be released with HandBrake 1.4.0 in the future. But you need to disable all filters that are limited to 8-bit (I just disabled all filters).
Could you provide a source for that, please? The last I saw was https://github.com/HandBrake/HandBrake/issues/1307 which didn’t give dates, just a rough guess of “Definitely not before 2021, surely not before 2022”.
I just tried their latest release from yesterday, https://github.com/HandBrake/HandBrake/actions/runs/478528491 and even if it does have a 10-bit pipeline, it does not save HDR10 data yet.
Is there any way to analyze whether a video actually uses 10-bit color, or only uses 8 bits of it (in other words, whether the source was 8-bit)?
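One heuristic (not mentioned in the thread, so treat it as a sketch): when an 8-bit source is upconverted to 10-bit, each sample is typically just the 8-bit value shifted left by 2 bits, so every 10-bit sample is a multiple of 4. Dithering or filtering during conversion will break this, so it is a hint rather than proof. Raw 10-bit samples could be dumped with, e.g., ffmpeg's rawvideo output and checked like this:

```python
# Heuristic sketch: if every sample value in a 10-bit stream is a
# multiple of 4, the content was likely upshifted from an 8-bit
# source (8-bit value << 2). Dithered upconversions will defeat this.
def looks_upconverted(samples: list[int]) -> bool:
    return all(v % 4 == 0 for v in samples)

print(looks_upconverted([0, 4, 252, 1020]))  # True  -> probably 8-bit source
print(looks_upconverted([0, 5, 511, 1023]))  # False -> uses real 10-bit values
```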