AMD Hardware Encoding in 2021 (VCE / VCN)

It’s 2021 and there still isn’t a lot of good info about AMD’s VCN hardware encoder for consumers. To that end, I will present my own take on the current “war” between software and hardware encoders, then go into quick details of how to best use AMD GPUs for encoding for video archival with FastFlix.

Note: I will only be comparing HEVC/H.265 10-bit HDR10 videos (both source and output). This use case is not usually covered in benchmarks and tests I have seen, and is more of interest to those who have read my previous posts on encoding UHD HDR10 videos but may want to hardware accelerate the process.


  • VCE – Video Coding Engine – AMD’s early name for its built in encoding hardware
  • VCN – Video Core Next – AMD’s new name for GPU hardware encoders (VCE / VCN used interchangeably)
  • AMF – Advanced Media Framework – AMD’s code and tools for developers to work with VCE / VCN
  • HEVC / H.265 – High Efficiency Video Coding – The video codec we will use, which supports HDR10
  • HDR10 – A set of metadata presented alongside the video to give the display additional details
  • VMAF – Netflix’s video quality metric used to compare encoded video to source’s quality

Software vs Hardware Encoders

Software encoders are coded to run on any general purpose CPU. It doesn’t matter if it’s an Intel i7, an AMD 5900X, or even your phone’s ARM-based CPU, which gives them great versatility. In the other corner, hardware encoders rely on specific physical components existing in a system to accelerate their transcoding. In today’s case, it takes an AMD GPU with VCN support to use the features we want to test.

Apples and oranges are both fruit, sports cars and pickup trucks are both vehicles, and software and hardware encoders both transcode videos. Just as it’s futile to compare the track capabilities of a supercar to the towing capacity of a pickup truck, we are about to venture into said territory with these encoders.

Please excuse the poor artwork, Clara wasn’t available this week so I had to do it myself!

Use case over metrics

The workhorse of the HEVC software encoding world is x265. There are plenty of other software encoders, like the industry-used ATEME TITAN File for UHD blu-rays, or open source encoders like Turing codec and kvazaar, but because they are not included in standard tools like FFmpeg, they are overlooked.

So what is this workhorse good for? Flexibility and video archival. By being able to run on almost anything that can compile C code, x265 is a champion of cross platform operations. It is also the standard when looking for pure quality metrics in HEVC videos.

Comparatively, hardware encoding, in this case using AMD’s Video Coding Engine (VCE), is built to be power efficient and fast. Really, really fast. For example, on a 6900XT you can real-time encode a 60fps UHD stream on the slowest setting!

Let’s see what happens when they venture into each other’s bailiwicks.

Drag Race

Here’s what everybody loves: A good graph. We’re going to compare x265 using its fastest encoding speed vs the slowest setting AMD’s VCE currently has, with a 60fps HDR10 4K source video.

Using a 60fps HDR10 UHD source, x265 with its highest-speed preset was compared against VCE’s slowest

As expected, it was a slaughter. Hardware encoding ran at 96 fps while x265 could only manage 14.5 fps. AMD’s hardware encoding clearly pummels the fastest setting x265 has to offer, even on an i9-9900k. Even if using an AMD 5950x which may be up to twice as fast, the hardware encoder would still dominate.

Where does this matter

Streaming and real-time transcoding. Hardware encoders were designed with the idea of “accelerated” encoding, which makes them great for powering your Zoom calls or streaming to Twitch.

Encoding Quality Prowess

Now let’s venture into x265’s house and compare computed quality with VMAF. We’ll be using the veryslow preset, darn the total time taken!

In this scenario we will compress a UHD video with a bitrate of 15,000k to four different rates. The goal for a decent encode is to reach at least VMAF 93, so we will stay in bitrate ranges above that mark. (VMAF 93+ doesn’t mean you won’t notice quality loss; it simply means quality loss probably WILL be apparent below that score.)

This was tested with a 30 second excerpt from Dolby’s Glass Blowing Demo (UHD profile 8.1)

Both encoders do great, keeping within a range that shouldn’t be too noticeable. However, x265 has a clear advantage at lower bitrates if all you care about is quality, and it maintains a steady edge throughout the test.

I have noticed while watching the AMD VCE encodes that it doesn’t do a great job with scene changes. I expect that is because VCE doesn’t support pre-analysis for HEVC, only for H.264. AMD VCE also suffers from lack of B-frame support, which I will talk about in the next blog post.

Where does this matter

Video archival. If you have a video that you are planning to discard for a high quality re-encode to save on file size, it’s better to stick with x265. Keep in mind, don’t just re-encode because you want to use a “better” codec, it’s always best to keep the original.

Gas Guzzling

This is a comparison I don’t see as often, and I think is overlooked. Encoding takes a lot of power, which means it costs money. I have been told by many FastFlix users that they let their x265 encodes run overnight, and some of their encodings take days!

This is also a harder metric to measure, as you need both encoders to produce the same quality output and to know their power usage. The entire comparison also labors under the assumption that the machine’s only purpose while powered on is to encode the video, so please keep all that in mind as we dive in.

To achieve a result file of the same quality, the software encode costs ten times as much in electricity. This may not matter for a random encode here or there, but if you have a lot of videos to burn through, switching to a hardware encoder could really start saving cash.

The Nitty Gritty about the power (Methodology)

Power usage will differ across hardware, so this is for a very specific case that I can attest for (using both HWMonitor and a KillAWatt monitor). The 6900XT uses 63 watts over its baseline when encoding, for a total system draw of ~320w. The i9-9900k uses 111 watts over baseline for a total system draw of ~360w. (Keep in mind there is some extra CPU usage during hardware encode as well, so total power is not a direct difference between the two.)

For the encoder speed, when using a UHD file I was able to get within a 0.1% VMAF difference using VCE slow (same speed as above) and x265 veryfast (at 10.35fps).

Let’s take a generalized use case of a two hour long video running at 24fps. 24fps * 60 seconds in a minute * 60 minutes in an hour * 2 hours = 172,800 frames.

Estimated times and cost:

  • VCE – slow – 6900XT @ 96.47fps – 29.85 minutes
    • 0.16 kWh @ 320 watts
    • $0.019 @ 12 cents per kWh
  • x265 – i9-9900K @ 10.35fps – 287.3 minutes (nearly five hours)
    • 1.72 kWh @ 360 watts
    • $0.206 @ 12 cents per kWh
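As a sanity check, the arithmetic above can be scripted. This is a quick sketch (encode_cost is a hypothetical helper; the wattages and 12¢/kWh rate are the figures from my setup):

```python
def encode_cost(frames, fps, watts, cents_per_kwh=12.0):
    """Estimate encode time and electricity cost from frame count,
    encoder speed (fps) and total system draw (watts)."""
    seconds = frames / fps
    minutes = seconds / 60
    kwh = watts * seconds / 3_600_000  # watt-seconds -> kWh
    dollars = kwh * cents_per_kwh / 100
    return minutes, kwh, dollars

# Two-hour video at 24fps -> 172,800 frames
frames = 24 * 60 * 60 * 2

vce = encode_cost(frames, fps=96.47, watts=320)   # 6900XT, VCE slow
x265 = encode_cost(frames, fps=10.35, watts=360)  # i9-9900k, x265 veryfast
# vce  -> roughly 29.9 minutes, 0.16 kWh, $0.019
# x265 -> roughly 278 minutes, 1.67 kWh, $0.20 (my measured run was a bit
#         longer at 287.3 minutes, hence the $0.206 above)
```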

Where does this matter

The cost difference probably doesn’t sway many individuals. But if you’re a prolific encoder, this could save you time and money.

Super Technical Head to Head Summary

             | Software (x265)            | Hardware (AMD VCE)
Quality      | ⭐ Best possible            | Lacks basic HEVC needs (B-frames / pre-analysis)
Speed        | Slow to Super Slow         | ⭐ Crazy Fast
Requirements | ⭐ Any old electrified rock | Newer AMD GPU, Windows OS
Energy Usage | All the powah!             | Sips daintily

So the winner is…. neither. If you’re encoding professionally you’ll be working with totally different software (like TITAN File). If you’re using it at home, it really just depends on what hardware you already have. If you’re wondering which GPU to get for the best encoding, wait for next month’s article 😉

Basically they both do what they were designed for. I would say Hardware encoders might have a slight overall edge, as they could be used for all cases. Whereas x265 currently can’t do UHD HDR10 real time encoding on consumer hardware.

Encoding HDR10 with AMD GPUs

Already got an AMD GPU and want to start encoding with it? Great, let’s get down to how to do it. First off make sure you are using Windows. If you’re using Linux for this, don’t.* If Linux is all you have, I would still recommend using a passthrough VM with Windows on it.

For Windows users, rigaya has made a beautiful tool called VCEEncC that has HDR10 support built in. It is a command line tool, but good news, FastFlix now supports it!

You will need to download VCEEncC manually as well, and make sure it is on the system path or link it up in File > Settings of FastFlix.

VCE doesn’t have a lot of options to worry about like other encoders, so you can be on your way to re-encoding in no time!

* It is possible on Linux to use VAAPI to encode HEVC, but you would need to apply custom MESA patches to enable HDR10 support. AMF / VCEEncC only supports H.264 on Linux currently.

Best quality possible with VCE

Beauty is in the eye of the beholder, and so is video quality. Some features, like VBAQ (Variance Based Adaptive Quantization), will lower measured metrics like VMAF and SSIM, but are designed to look better to human eyes. Assuming you care about how the video looks, and aren’t just trying to impress your boss with numbers, we will stick with those.

Motion Vector Accuracy – q-pel

Of course the largest determination of quality will be how much bitrate you will allow for (or which quantization rate you select). FastFlix has some loose recommendations, but what is truly needed will vary greatly dependent upon source. A GoPro bike ride video will require a lot more bitrate than a mounted security camera with very little movement overall.

Warnings and gotchas

Not all features are available on all cards. Some features, like B-frame support, were promised for RDNA2 but are still not available.

Driver versions can make a difference. Always try the latest first, but if you experience issues with VCE, it may be an AMF version mismatch and you may need to downgrade to an older driver.

What do I use?

Personally I avoid re-encoding whenever possible. However, now that I do have an AMD GPU I do use it for any of my quick and dirty encoding needs. Though I would be saying the same about NVENC if I had a new Nvidia GPU (which does have B-frame support). In my opinion it’s simply not worth the time and energy investment for encoding with software. Either save the original or use a hardware encoder.

What about Nvidia (NVENC) or Intel (QSV)?

I am working to get access to latest generation hardware for both Nvidia’s NVENC and Intel’s QSV in the next month, so hopefully I will be able to create a follow up with some good head to head comparison. Historically NVENC has taken the crown, and by my research VCE hasn’t caught up yet, but who knows where QSV will end up!

Boring Details

  • x265 was used at commit 82786fccce10379be439243b6a776dc2f5918cb4 (2021-05-25) as part of FFmpeg
  • CPU is a i9-9900k
  • VCEEncC 6.13 on a 6900XT with AMF Runtime 1.4.21 / SDK 1.4.21 using drivers 21.7.2


These tests were done on my own hardware purchased myself. All conclusions are my own thoughts and opinions and in no way represent any company.

Encoding UHD 4K HDR10 and HDR10+ Videos

I talked about this before in my encoding settings for Handbrake post, but there was a fundamental flaw using Handbrake for HDR 10-bit video…it only had an 8-bit internal pipeline! It and most other GUIs also don’t yet support dynamic metadata, such as HDR10+ or Dolby Vision.

2021-02 Update: Handbrake’s latest code has HDR10 static metadata support.

Thankfully, you can avoid some hassle and save HDR10 or HDR10+ by using FastFlix instead, or directly use FFmpeg. (To learn about extracting and converting with HDR10+ or saving Dolby Vision by remuxing, skip ahead.) If you want to do the work yourself, here are the two most basic commands you need to save your juicy HDR10 data. This will use the Dolby Vision (profile 8.1) Glass Blowing demo.

Extract the Mastering Display metadata

First, we need to use FFprobe to extract the Mastering Display and Content Light Level metadata. We are going to tell it to only read the first frame’s metadata, -read_intervals "%+#1", for the file GlassBlowingUHD.mp4:

ffprobe -hide_banner -loglevel warning -select_streams v -print_format json -show_frames -read_intervals "%+#1" -show_entries "frame=color_space,color_primaries,color_transfer,side_data_list,pix_fmt" -i GlassBlowingUHD.mp4

A quick breakdown of what we are sending ffprobe:

  • -hide_banner -loglevel warning Don’t display what we don’t need
  • -select_streams v We only want the details for the video (v) stream
  • -print_format json Make it easier to parse
  • -read_intervals "%+#1" Only grab data from the first frame
  • -show_entries ... Pick only the relevant data we want
  • -i GlassBlowingUHD.mp4 input (-i) is our Dolby Vision demo file

That will output something like this:

{
    "frames": [
        {
            "pix_fmt": "yuv420p10le",
            "color_space": "bt2020nc",
            "color_primaries": "bt2020",
            "color_transfer": "smpte2084",
            "side_data_list": [
                {
                    "side_data_type": "Mastering display metadata",
                    "red_x": "35400/50000",
                    "red_y": "14600/50000",
                    "green_x": "8500/50000",
                    "green_y": "39850/50000",
                    "blue_x": "6550/50000",
                    "blue_y": "2300/50000",
                    "white_point_x": "15635/50000",
                    "white_point_y": "16450/50000",
                    "min_luminance": "50/10000",
                    "max_luminance": "40000000/10000"
                },
                {
                    "side_data_type": "Content light level metadata",
                    "max_content": 0,
                    "max_average": 0
                }
            ]
        }
    ]
}

I chose to output it with json via the -print_format json option to make it more machine parsable, but you can omit that if you just want the text.

We are now going to take all that data and break it down into groups of <color abbreviation>(<x>, <y>), leaving off the denominators in most cases*. For example, we combine red_x "35400/50000" and red_y "14600/50000" into R(35400,14600).

G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50)

*If your color data is not over /50000 or your luminance not over /10000 because the fractions have been simplified, you will have to expand them back out to the full ratio. For example, if yours lists 'red_x': '17/25', 'red_y': '8/25', you will have to divide 50000 by the current denominator (25) to get the multiplier (2000) and multiply that by each numerator (17 and 8) to get the proper R(34000,16000).

This data, as well as the Content light level <max_content>,<max_average> of 0,0, will be fed into the encoder command options.
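If you would rather script this step than do the math by hand, here is a small sketch that builds the master-display string from the ffprobe values, including the denominator expansion described above (master_display and expand are hypothetical helper names of my own):

```python
from fractions import Fraction

def expand(ratio, denom):
    """Expand a possibly-simplified ffprobe ratio (e.g. '17/25')
    back out to the fixed denominator x265 expects (50000 or 10000)."""
    f = Fraction(ratio)
    return f.numerator * (denom // f.denominator)

def master_display(sd):
    """Build the x265 master-display string from the
    'Mastering display metadata' side data."""
    c = lambda key: expand(sd[key], 50000)   # color points are /50000
    lum = lambda key: expand(sd[key], 10000)  # luminance is /10000
    return (
        f"G({c('green_x')},{c('green_y')})"
        f"B({c('blue_x')},{c('blue_y')})"
        f"R({c('red_x')},{c('red_y')})"
        f"WP({c('white_point_x')},{c('white_point_y')})"
        f"L({lum('max_luminance')},{lum('min_luminance')})"
    )

sd = {
    "red_x": "35400/50000", "red_y": "14600/50000",
    "green_x": "8500/50000", "green_y": "39850/50000",
    "blue_x": "6550/50000", "blue_y": "2300/50000",
    "white_point_x": "15635/50000", "white_point_y": "16450/50000",
    "min_luminance": "50/10000", "max_luminance": "40000000/10000",
}
print(master_display(sd))
# G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50)
```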

Convert the video

This command converts only the video, keeping the HDR10 intact. We will have to pass these arguments not to ffmpeg, but to the x265 encoder directly via the -x265-params option. (If you’re not familiar with FFmpeg, don’t fret. FastFlix, which I talk about later, will do the work for you!)

ffmpeg  -i GlassBlowingUHD.mp4 -map 0 -c:v libx265 -x265-params hdr-opt=1:repeat-headers=1:colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50):max-cll=0,0 -crf 20 -preset veryfast -pix_fmt yuv420p10le GlassBlowingConverted.mkv

Let’s break down what we are throwing into the x265-params:

  • hdr-opt=1 we are telling it yes, we will be using HDR
  • repeat-headers=1 we want these headers on every frame, as required
  • colorprim, transfer and colormatrix the same as ffprobe listed
  • master-display this is where we add our color string from above
  • max-cll Content light level data, in our case 0,0

During a conversion like this, when a Dolby Vision layer exists, you will see a lot of messages like [hevc @ 000001f93ece2e00] Skipping NAL unit 62 because there is an entire layer that ffmpeg does not yet know how to decode.

For the quality of the conversion, I was setting it to -crf 20 with a -preset veryfast to convert it quickly without a lot of quality loss. I dig deeper into how FFmpeg handles crf vs preset with regards to quality below.

All sound, data, and other streams will be copied over thanks to the -map 0 option, which is a blanket statement of “copy everything from the first (0 index) input”.

That is really all you need to know for the basics of how to encode your video and save the HDR10 data!

FFmpeg conversion settings

I covered this a bit before in the other post, but I wanted to go through the full gauntlet of presets and CRFs one might feel inclined to use. I compared each encoding with the original using VMAF and SSIM calculations over an 11-second clip. I created over 100 conversions for this single chart, so it is a little cramped:

First takeaways are that there is no real difference between veryslow and slower, nor between veryfast and faster, as their lines are drawn on top of each other. The same is true for both VMAF and SSIM scores.

Second, no one in their right mind would ever keep a recording stored by using ultrafast. That is purely for real time streaming use.

Now for VMAF scores, 5~6 points away from source is visually distinguishable when watching; in other words, it will have very noticeable artifacts. Personally, I can tell on my screen with just a single-digit difference, and some people are even more sensitive, so this is by no means an exact tell-all. At minimum, let’s zoom in a bit and get rid of anything that will produce video with very noticeable artifacts.

From these charts, there is no reason whatsoever to use anything other than slow, which I personally do for anything I am encoding. However, slow lives up to its namesake.

Encoding Speed and Bitrate

I had to trim off veryslow and slower from the main chart to be able to even see the rest, and slow is still almost three times slower than medium. All the charts contain the same data, just with some of the longer running presets removed from each to better see details of the faster presets.

Please note, the first three crf datapoints are a little dirty, as the system was in use for the first three tests. However, there is enough clean data to see how the rest compares down the line.

To see a clearer picture of how long each of the presets takes, I will exclude those first three times and average the remaining data. The data is then compared against the medium (default) preset and the original clip length of eleven seconds.
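The averaging step described above can be sketched like this (the times used below are made-up placeholders, not my measured data, and preset_summary is a hypothetical helper):

```python
def preset_summary(times_s, medium_avg_s, clip_len_s=11.0, skip=3):
    """Drop the first `skip` dirty datapoints, average the rest,
    and compare against the medium preset and the clip length."""
    clean = times_s[skip:]
    avg = sum(clean) / len(clean)
    return avg, avg / medium_avg_s, avg / clip_len_s

# Illustrative numbers only (NOT my measured results)
slow_times = [99.0, 98.0, 97.0, 50.0, 52.0, 54.0]
avg, vs_medium, vs_clip = preset_summary(slow_times, medium_avg_s=26.0)
```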

Preset | Time | vs “medium” | vs clip length (11s)

What is a little scary here is that even with the “ultrafast” preset we are not able to get realtime conversion, and these tests were run on a fairly high-powered system wielding an i9-9900k! While it might be clear from the crf graph that slow is the clear winner, unless you have a beefy computer, it may be a non-option.

Use the slowest preset that you have patience for

FFmpeg encoding guide

Also, unlike VBR encoding, the average bitrate and file size using crf will wildly differ based upon the source material. This next chart just shows the basic curve effect you will see; it cannot be compared directly to what you may expect with your own file.

The two big jumps are between slow and medium as well as veryfast and superfast. That is interesting because while slow and medium are quite far apart on the VMAF comparison, veryfast and superfast are not. I expected a much larger dip from superfast to ultrafast but was wrong.

FastFlix, doing the heavy lifting for you!

I have written a GUI program, FastFlix, around FFmpeg and other tools to convert videos easily to HEVC, AV1 and other formats. While I won’t promise it will provide everything you are looking for, it will do the work of extracting the HDR10 details of a video and passing them into a FFmpeg command for you. FastFlix can even handle HDR10+ metadata! It also has a panel that shows you exactly the command(s) it is about to run, so you could copy it and modify it to your heart’s content!

FastFlix 3.0

If you have any problems with it please help by raising an issue!

Extracting and encoding with HDR10+ metadata

First off, this is not for the faint of heart. Thankfully the newest FFmpeg builds for Windows now support HDR10+ metadata files by default, so this process has become a lot easier. Here is a quick overview of how to do it. Also, a huge shout-out to “Frank” in the comments below for pointing this tool out to me!

You will have to download a copy of hdr10plus_parser from quietvoid’s repo.

Check to make sure your video has HDR10+ information it can read.

ffmpeg -loglevel panic -i input.mkv -c:v copy -vbsf hevc_mp4toannexb -f hevc - | hdr10plus_parser --verify -

It should produce a nice message stating there is HDR10+ metadata.

Parsing HEVC file for dynamic metadata…
Dynamic HDR10+ metadata detected.

Once you have confirmed it exists, extract it to a json file:

ffmpeg -i input.mkv -c:v copy -vbsf hevc_mp4toannexb -f hevc - | hdr10plus_parser -o metadata.json -

Option 1: Using the newest FFmpeg

You just need to pass the metadata file via the dhdr10-info option in the x265-params. And don’t forget to add your audio and subtitle settings!

ffmpeg.exe -i input.mkv -c:v libx265 -pix_fmt yuv420p10le -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1016,115:hdr10=1:dhdr10-info=metadata.json" -crf 20 -preset medium "output.mkv"

Option 2: (original article) Using custom compiled x265

Now the painful part you’ll have to work on a bit yourself. To use the metadata file, you will need a custom-compiled x265 with the cmake option “HDR10_PLUS”. Otherwise, when you try to convert with it, you’ll see a message like “x265 [warning]: –dhdr10-info disabled. Enable HDR10_PLUS in cmake.” in the output, but it will still encode, just without HDR10+ support.

Once that is compiled, you will have to use x265 as part of your conversion pipeline. Use x265 to convert your video with the HDR10+ metadata and other details you discovered earlier.

ffmpeg -loglevel panic -y -i input.mkv -to 30.0 -f yuv4mpegpipe -strict -1 - | x265 - --y4m --crf=20 --repeat-headers --hdr10 --colorprim=bt2020 --transfer=smpte2084 --colormatrix=bt2020nc --master-display="G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)" --max-cll="1016,115" -D 10 --dhdr10-info=metadata.json output.hevc

Then you will have to use that output.hevc file with ffmpeg to combine it with your audio conversions and subtitles and so on to repackage it.

Saving Dolby Vision

Thanks to Jesse Robinson in the comments, it seems it may now be possible to extract and convert a video with Dolby Vision without the original RPU file. Like the original HDR10+ section, you will need the x265 executable directly, as that option is not supported through FFmpeg at all.

Note I say “saving” and not “converting”, because unless you have the original RPU file for the DV to pass to x265, you’re out of luck as of now and cannot convert the video. The x265 encoder is able to take a RPU file and create a Dolby Vision ready movie.

Just like the manual HDR10+ process, first you have to use an extraction tool, in this case quietvoid’s dovi_tool, to extract the RPU.

ffmpeg -i GlassBlowing.mp4 -c:v copy -vbsf hevc_mp4toannexb -f hevc - | dovi_tool extract-rpu --rpu-out glass.rpu -

There are a few extra caveats when converting a Dolby Vision file with x265. First, you need to use the appropriate 10 or 12-bit version of x265. Second, change pix_fmt, input-depth and output-depth as required. Finally, you MUST provide a dolby-vision-profile level for it to read the RPU, as well as enable VBV by providing vbv-bufsize and vbv-maxrate (read the x265 command line options to see which settings are best for you).

ffmpeg -i GlassBlowing.mp4 -f yuv4mpegpipe -strict -1 -pix_fmt yuv420p10le - | x265-10b - --input-depth 10 --output-depth 10 --y4m --preset veryfast --crf 22 --master-display "G(8500,39850)B(6550,2300)R(35400,14600)WP(15635,16450)L(40000000,50)" --max-cll "0,0" --colormatrix bt2020nc --colorprim bt2020 --transfer smpte2084 --dolby-vision-rpu glass.rpu --dolby-vision-profile 8.1 --vbv-bufsize 20000 --vbv-maxrate 20000 glass_dv.hevc

Once the glass_dv.hevc file is created, you will need to use tsMuxeR (this version is recommended by some, and the newest nightly is linked below) to put it back into a container as well as add the audio back.

Currently trying to add the glass_dv.hevc back into a container with FFmpeg or MP4Box will break the Dolby Vision. Hence the need for the special Muxer. This will result in video that has BL+RPU Dolby Vision.

It is also possible to only convert the audio and change around the streams with remuxers. For example, tsMuxeR (nightly build, not the default download) is popular for taking mkv files that most TVs won’t recognize HDR in, and remuxing them into ts files so they do. If you also have TrueHD sound tracks, you may need to use eac3to first to break them into TrueHD and AC3 core tracks before muxing.

Easily Viewing HDR / Video information

Another helpful program to quickly view what type of HDR a video has is MediaInfo. For example here is the original Dolby Vision Glass Blowing video info (some trimmed):

ID                                       : 1
Format                                   : HEVC
Format/Info                              : High Efficiency Video Coding
Format profile                           : Main 10@L5.1@Main
HDR format                               : Dolby Vision, Version 1.0, dvhe.08.09, BL+RPU, HDR10 compatible / SMPTE ST 2086, HDR10 compatible
Codec ID                                 : hev1
Color space                              : YUV
Chroma subsampling                       : 4:2:0 (Type 2)
Bit depth                                : 10 bits
Color range                              : Limited
Color primaries                          : BT.2020
Transfer characteristics                 : PQ
Matrix coefficients                      : BT.2020 non-constant
Mastering display color primaries        : BT.2020
Mastering display luminance              : min: 0.0050 cd/m2, max: 4000 cd/m2
Codec configuration box                  : hvcC+dvvC

And here it is after conversion:

ID                                       : 1
Format                                   : HEVC
Format/Info                              : High Efficiency Video Coding
Format profile                           : Main 10@L5.1@Main
HDR format                               : SMPTE ST 2086, HDR10 compatible
Codec ID                                 : V_MPEGH/ISO/HEVC
Color space                              : YUV
Chroma subsampling                       : 4:2:0
Bit depth                                : 10 bits
Color range                              : Limited
Color primaries                          : BT.2020
Transfer characteristics                 : PQ
Matrix coefficients                      : BT.2020 non-constant
Mastering display color primaries        : BT.2020
Mastering display luminance              : min: 0.0050 cd/m2, max: 4000 cd/m2

Notice we have lost the Dolby Vision BL+RPU information, but at least we retained the HDR10 data, which Handbrake can’t do!

That’s a wrap!

Hope you found this information useful, and please feel free to leave a comment for feedback, suggestions or questions!

Until next time, stay safe and love each other!

Encoding settings for HDR 4K videos using 10-bit x265

There is currently a serious lack of data on compressing 4K HDR videos out there, so I took it upon myself to get learned in the ways of the x265 encoding world.

First things first, this is NOT a guide for Dolby Vision or HDR10. This is simply for videos using the BT.2020 color primaries. Please read the new article for saving HDR.

I have historically been using the older x264 mp4s for my videos, as they just work on everything. However, most devices finally have some native H.265 decoding. (As a heads up, H.265 is the specification and x265 is an encoder for it. I may mix them up myself in this article; don’t worry about the letter, just the numbers.)

Updated: 6/29/2020 – Please refer to the new guide

Updated: 4/14/2019 – New Preset Setting (tl;dr: use slow)

What are the best settings for me to use when encoding x265 videos?

The honest to god true answer is “it depends”, however I find that answer unsuitable for my own needs. I want a setting that I can use on any incoming 4K HDR video I buy.

I mainly used Handbrake, but now use ffmpeg, because I learned Handbrake only has an 8-bit internal pipeline. In the past, I went straight to Handbrake’s documentation, which states that for 4K videos with x265 they suggest a Constant Rate Factor (CRF) in the range of 22-28 (the larger the number, the lower the quality).

Through some experimentation, I found that I personally can never really see a difference between anything lower than 22 using the slow preset. Therefore I played it safe, bumped it down a notch, and just encode all of my stuff with x265 10-bit at CRF 20 on the slow preset. That way I know I should never be disappointed.

Then I recently read YouTube’s suggested guidelines for bitrates. They claim that a 4K video coming into their site should optimally be 35~45Mbps when encoded with the older x264 codec.

Now I know that x265 can be around 50% more efficient than x264, and that YouTube needs higher quality coming in so that when they re-compress it, it will still look good. But when I looked at the videos I was enjoying just fine at CRF 22, they were mostly coming out at less than a 10Mbps bitrate. So I had to ask myself:

How much better is x265 than x264?

To find out, I would need a lot of comparable data. I started with a 4K HDR example video. The first thing I did was chop out a one-minute segment and promptly remove the HDR, thus comparing the two encoders via their default 8-bit compressors.

I found this code to convert the 10-bit “HDR” yuv420p10le colorspace down to the standard yuv420p 8-bit colorspace from the colourspace blog so props to them for having a handy guide just for this.

ffmpeg -y -ss 07:48 -t 60 -i my_movie.mkv -vf zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p -c:v libx265 -preset ultrafast -x265-params lossless=1 -an -sn -dn -reset_timestamps 1 movie_non_hdr.mkv

Average Overall SSIM

Then I ran multiple two-pass ABR runs using ffmpeg for both x264 and x265 with the same target bitrate. Afterwards, I compared them to the original using the Structural Similarity Index (SSIM). Put simply, the closer the result is to 1, the better: it means there are fewer differences between the original and the compressed version.

Generated via Python and matplotlib

The SSIM result is computed frame by frame, so we average them all together to see which is best overall. On the section of video I chose, x264 needed considerably more bitrate to achieve the same score; the horizontal line shows where x264 needs 14Mbps to match x265’s 9Mbps, a 5000kbps difference! If we go by YouTube’s recommendations for a video file that will be re-encoded again, you would only need a 25Mbps x265 file instead of a 35Mbps x264 video.

Sample commands I used to generate these files:

ffmpeg -i movie.mkv -c:v libx265 -b:v 500k -x265-params pass=1 -an -f mp4 NUL

ffmpeg -i movie.mkv -c:v libx265 -b:v 500k -x265-params pass=2 -an h265\movie_500.mp4

ffmpeg -i my_movie.mkv -i h265\movie_500.mp4 -lavfi  ssim=265_movie_500_ssim.log -f null -
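Once you have a stats log from the ssim filter, the per-frame scores can be averaged (and the lowest 1% of frames pulled out) with a few lines of Python. This is a sketch assuming ffmpeg’s “n:1 Y:… U:… V:… All:… (dB)” log format; the helper names are my own:

```python
import re

def ssim_scores(log_text):
    """Pull the per-frame 'All' SSIM value out of an ffmpeg ssim stats file,
    whose lines look like: n:1 Y:0.98 U:0.99 V:0.99 All:0.98 (17.3)"""
    return [float(m.group(1)) for m in re.finditer(r"All:([0-9.]+)", log_text)]

def summarize(scores):
    """Return (mean SSIM, mean of the lowest 1% of frames)."""
    ordered = sorted(scores)
    worst = ordered[: max(1, len(ordered) // 100)]
    return sum(ordered) / len(ordered), sum(worst) / len(worst)

sample = (
    "n:1 Y:0.99 U:0.99 V:0.99 All:0.990 (20.0)\n"
    "n:2 Y:0.98 U:0.98 V:0.98 All:0.960 (14.0)\n"
)
mean, low = summarize(ssim_scores(sample))
```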

Lowest 1% SSIM

However, the averages don’t tell the whole story, because if every frame were that good, we shouldn’t need more than 6Mbps x265 or 10Mbps x264 for 4K video. So let’s take a step back and look at the lowest 1% of the frames.
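The “lowest 1%” here just means sorting the per-frame SSIM scores and averaging the worst slice. A small stdlib sketch (the sample frame list is made up):

```python
# Mean of the worst 1% of per-frame SSIM scores.
def lowest_percent_mean(scores, percent=1.0):
    ordered = sorted(scores)
    # Keep at least one frame so short clips still produce a value.
    count = max(1, int(len(ordered) * percent / 100))
    worst = ordered[:count]
    return sum(worst) / len(worst)

# Hypothetical clip: 200 frames, two of them noticeably worse.
frames = [0.99] * 198 + [0.90, 0.92]
print(round(lowest_percent_mean(frames), 4))  # averages only the 2 worst frames
```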

Generated via Python and matplotlib

Here we can see x264 has a much harder time at lower bitrates. Also note that the highest marker on this chart is 0.98, compared to the total average chart’s 0.995.

This information alone confirmed for me that I will only be using x265 or newer encodings (maybe AV1 in 2020) for storing videos going forward.

Download the SSIM data as CSV.

How does CRF compare to ABR?

I have always read to use Constant Rate Factor over Average BitRate for stored video files (and especially over Constant Quality). CRF is the best of both worlds: if you have an easily compressible video, it won’t bloat the encoded file to meet some arbitrary bitrate (and bitrate directly correlates to file size), yet it won’t be constrained to that limit if the video requires a lot more information to capture a complex scene.
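Since bitrate maps directly to file size, a quick back-of-the-envelope check shows what those chart numbers mean on disk (the 9 and 14 Mbps figures are the matching x265/x264 rates from the averages chart; the 2-hour runtime is just an example):

```python
# Estimate file size from video bitrate: size = bitrate * duration / 8.
def estimated_size_mb(bitrate_kbps, duration_seconds):
    # kbit/s * s -> kbit, /8 -> kB, /1000 -> MB (decimal megabytes)
    return bitrate_kbps * duration_seconds / 8 / 1000

# A hypothetical 2-hour movie at ~9 Mbps (x265) vs ~14 Mbps (x264):
print(estimated_size_mb(9000, 7200))   # 8100.0 MB, i.e. ~8.1 GB
print(estimated_size_mb(14000, 7200))  # 12600.0 MB, i.e. ~12.6 GB
```

Audio tracks add a little on top, but the video stream dominates at these rates.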

But that is all hypothetical. We have some hard data, so let’s use it. Remember, Handbrake recommends a range of 22-28 CRF, and I personally cannot see any visual loss at CRF 20. So where do these show up on our chart?

Generated via Python and matplotlib

Now this is an apples to oranges comparison. The CRF videos were done via Handbrake using x265 10-bit, whereas everything else was done via ffmpeg using x265 or x264 8-bit. Still, we get a good idea of where these show up. At both CRF 24 and CRF 22, even the lowest frames don’t dip below SSIM 0.95. I personally think the extra 2500kbps for the large jump in minimum quality from CRF 24 to CRF 22 is a must. To some, including myself, it could be worth the extra 4000kbps jump from CRF 22 to CRF 20.

So let’s get a little more apples to apples. In this test, I encoded all videos with ffmpeg using the default presets. I did three CRF encodes first, at 22, 20, and 18, then used their resulting bitrates to create three ABR videos.

Generated via Python and matplotlib

Their overall average SSIM scores were nearly identical. However, CRF shows its true edge in the lowest 1%, easily beating out ABR at every turn.

To 10-bit or not to 10-bit?

Thankfully there is a simple answer. If you are encoding to x264 or x265, encode to 10-bit if your devices support it. Even if your source video doesn’t use the HDR color space, it compresses better.

There is only one time not to use it: when the device you are going to watch it on doesn’t support it.
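With ffmpeg’s libx265 this just means requesting a 10-bit pixel format; the encoder then produces a Main 10 stream (assuming your ffmpeg build includes 10-bit x265). A minimal sketch, with hypothetical filenames:

```python
# Minimal 10-bit x265 ffmpeg invocation, built as an argument list.
# yuv420p10le selects 10-bit 4:2:0 output; filenames are placeholders.
def ten_bit_command(source, output, crf=20):
    return ["ffmpeg", "-i", source, "-c:v", "libx265",
            "-pix_fmt", "yuv420p10le", "-crf", str(crf), output]

print(" ".join(ten_bit_command("movie.mkv", "movie_10bit.mkv")))
```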

Which preset should I use?

The normal wisdom is to use the slowest preset you can stand for the encoding time. Slower = better video quality and compression. However, that does not mean a smaller file size at the same CRF.

Even though others have tackled this issue, I wanted to use the same material I was already testing and make sure it held true with 4K HDR video.

Generated via Python and matplotlib

I used a three-minute 4K HDR clip, using Handbrake to modify only which preset was used. The results were surprising to me, to be honest; I was expecting medium to have a better margin between fast and slow. But based on just the average, slow was the obvious choice, as even bumping the CRF from 18 to 16 didn’t match its quality. Even though the file size was much larger for the CRF 16 Medium encoding than for the CRF 18 Slow one! (We’ll get to that later.)
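A preset sweep like this is easy to script. This sketch builds the ffmpeg equivalents of the Handbrake runs, varying only the preset at a fixed CRF (filenames and the CRF value are placeholders):

```python
# Build ffmpeg commands that encode the same clip at every x265 preset
# with a fixed CRF, so the preset is the only variable.
PRESETS = ["ultrafast", "superfast", "veryfast", "faster", "fast",
           "medium", "slow", "slower", "veryslow"]

def preset_sweep(source, crf=18):
    commands = []
    for preset in PRESETS:
        commands.append([
            "ffmpeg", "-i", source, "-c:v", "libx265",
            "-preset", preset, "-crf", str(crf),
            "-an", f"preset_{preset}_crf{crf}.mkv",
        ])
    return commands

for cmd in preset_sweep("clip_4k_hdr.mkv"):
    print(" ".join(cmd))
```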

Okay, okay, let’s back up a step and look at the bottom 1% again as well.

Generated via Python and matplotlib

Well well wishing well, that is even more definitive. The jump from medium to slow is very significant in multiple ways. Even though it costs double the encoding time of medium, it really delivers in the quality department, easily beating out the lowest 1% of even CRF 16 medium, two entire steps away.

Generated via Excel

The bitrates are as expected: the higher the quality, the more bitrate it needs. What is interesting is that if we put the CRF 16 - Medium encoding’s bitrate on this chart, it would shoot off the top at a staggering 15510kbps! Keep in mind that is while still being lower quality than CRF 18 - Slow.

In this data set, slow is the clear winner in multiple ways, which is very similar to others’ results as well, so I’m personally sticking to it. (And if I had run these tests first, I would have used slow for all the other testing!)


If you want a single go-to setting for encoding, based on my personal testing CRF 20 with the Slow preset looks amazing (but may take too long if you are using older hardware).

Now, if I had a supercomputer and unlimited storage, I might lean towards CRF 18 or maybe even 16, but still wouldn’t feel the need to take it all the way to CRF 14 and veryslow or anything crazy.

I hope you found this information as useful as I did. If you have any thoughts or feedback, please let me know!