It’s 2021 and there still isn’t a lot of good info about AMD’s VCN hardware encoder for consumers. To that end, I will present my own take on the current “war” between software and hardware encoders, then go into quick details of how to best use AMD GPUs for encoding for video archival with FastFlix.
Note: I will only be comparing HEVC/H.265 10-bit HDR10 videos (both source and output). This use case is not usually covered in the benchmarks and tests I have seen, and is of more interest to those who have seen my previous posts on Encoding UHD HDR10 videos but may want to hardware accelerate it.
Terms:
- VCE – Video Coding Engine – AMD’s early name for its built in encoding hardware
- VCN – Video Core Next – AMD’s new name for GPU hardware encoders (VCE / VCN used interchangeably)
- AMF – Advanced Media Framework – AMD’s code and tools for developers to work with VCE / VCN
- HEVC / H.265 – High Efficiency Video Coding – The video codec we will use that supports HDR10
- HDR10 – A set of metadata presented alongside the video to give the display additional details
- VMAF – Netflix’s video quality metric used to compare encoded video to source’s quality
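All the quality comparisons in this post use VMAF. If you want to reproduce the scoring yourself, an FFmpeg build that includes libvmaf can score an encode against its source. A minimal sketch (the file names are placeholders):

```python
def vmaf_command(distorted: str, reference: str) -> list:
    """Build an FFmpeg command that scores an encode against its source
    using the libvmaf filter (requires an FFmpeg build with libvmaf)."""
    return [
        "ffmpeg",
        "-i", distorted,    # the encoded file goes first
        "-i", reference,    # the original source goes second
        "-lavfi", "libvmaf",
        "-f", "null", "-",  # decode and score only, write no output file
    ]

# Example (placeholder file names):
cmd = vmaf_command("encode.mkv", "source.mkv")
```

The VMAF score is printed to FFmpeg's log when the command runs.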
Software vs Hardware Encoders
Software encoders are coded to run on any general purpose CPU. It doesn’t matter if it’s an Intel i7, an AMD 5900x, or even your phone’s ARM based CPU, which gives them great versatility. In the other corner, hardware encoders rely on specific physical components being present in a system to accelerate transcoding. In today’s case, it takes an AMD GPU with VCN support to use the features we want to test.
Apples and oranges are both fruit, sports cars and pickup trucks are both vehicles, and software and hardware encoders both transcode videos. Just as it’s futile to compare the track capabilities of a supercar to the towing capacity of a pickup truck, we are about to venture into said territory with these encoders.
Use case over metrics
The workhorse of the HEVC software encoding world is x265. There are plenty of other software encoders, like the ATEME TITAN File used in industry for UHD Blu-rays, or open source encoders like Turing codec and kvazaar, but they are overlooked because they are not included in standard tools like FFmpeg.
So what is this workhorse good for? Flexibility and video archival. By being able to run on almost anything that can compile C code, x265 is a champion of cross platform operations. It is also the standard when looking for pure quality metrics in HEVC videos.
Comparatively, hardware encoding, in this case using AMD’s Video Coding Engine (VCE), is built to be power efficient and fast. Really, really fast. For example, on a 6900XT you can real-time encode a 60fps UHD stream on the slowest setting!
Let’s see what happens when they venture into each other’s bailiwicks.
Drag Race
Here’s what everybody loves: A good graph. We’re going to compare x265 using its fastest encoding speed vs the slowest setting AMD’s VCE currently has, with a 60fps HDR10 4K source video.
As expected, it was a slaughter. Hardware encoding ran at 96 fps while x265 could only manage 14.5 fps. AMD’s hardware encoding clearly pummels the fastest setting x265 has to offer, even on an i9-9900k. Even if using an AMD 5950x which may be up to twice as fast, the hardware encoder would still dominate.
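To put those rates in perspective, wall-clock time is just frame count divided by encode speed. A quick sketch using the numbers above:

```python
def encode_minutes(duration_s: float, source_fps: float, encode_fps: float) -> float:
    """Wall-clock minutes to transcode a clip at a given encoder speed."""
    frames = duration_s * source_fps
    return frames / encode_fps / 60

# A 10-minute, 60 fps UHD clip is 36,000 frames:
hw = encode_minutes(600, 60, 96)    # VCE at 96 fps: roughly 6 minutes
sw = encode_minutes(600, 60, 14.5)  # x265 ultrafast at 14.5 fps: roughly 41 minutes
```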
Where does this matter
Streaming and real-time transcoding. Hardware encoders were designed with the idea of “accelerated” encoding, which makes them great for powering your Zoom calls or streaming to Twitch.
Encoding Quality Prowess
Now let’s venture into x265’s house and compare computed quality with VMAF. We’ll be using the veryslow setting, darn the total time taken!
In this scenario we will compress a UHD video with a bitrate of 15,000k to four different rates. The goal for a decent encode is to reach at least VMAF 93, which is the bitrate range we will stay above. (VMAF 93+ doesn’t mean you won’t notice quality loss. It simply means that it probably WILL BE apparent if it is less than that.)
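Picking the rate out of a bitrate ladder is simple once the scores are in hand: take the cheapest rate that still clears the VMAF 93 floor. A small sketch (the scores below are illustrative placeholders, not my measured data):

```python
from typing import Optional

def lowest_passing_bitrate(scores: dict, target: float = 93.0) -> Optional[int]:
    """Return the lowest bitrate (in kbps) whose VMAF score meets the
    target, or None if no tested rate passes."""
    passing = [rate for rate, vmaf in sorted(scores.items()) if vmaf >= target]
    return passing[0] if passing else None

# Hypothetical VMAF results for four test rates:
scores = {4000: 91.2, 6000: 93.4, 8000: 94.8, 10000: 95.9}
best = lowest_passing_bitrate(scores)  # 6000
```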
Both encoders do great, keeping within a range that shouldn’t be too noticeable. However, x265 has a clear advantage at lower bitrates if all you care about is quality. It also maintains a steady edge throughout the test.
I have noticed while watching the AMD VCE encodes that it doesn’t do a great job with scene changes. I expect that is because VCE doesn’t support pre-analysis for HEVC, only for H.264. AMD VCE also suffers from lack of B-frame support, which I will talk about in the next blog post.
Where does this matter
Video archival. If you are planning to re-encode a video at high quality to save on file size and then discard the original, it’s better to stick with x265. Keep in mind, don’t re-encode just because you want to use a “better” codec; it’s always best to keep the original.
Gas Guzzling
This is a comparison I don’t see as often, and I think is overlooked. Encoding takes a lot of power, which means it costs money. I have been told by many FastFlix users that they let their x265 encodes run overnight, and some of their encodings take days!
This is also a harder metric to measure, as you need both encoders to produce the same quality output and you need to know their power usage. The comparison also assumes the machine’s only purpose while powered on is to encode the video, so please keep all that in mind as we dive in.
To achieve the same quality in the resulting file, the software encode costs roughly ten times as much in electricity. This may not matter for a random encode here or there, but if you have a lot of videos to burn through, switching to hardware encoding could really start saving cash.
The Nitty Gritty about the power (Methodology)
Power usage will differ across hardware, so this is for a very specific case that I can attest to (measured with both HWMonitor and a Kill A Watt meter). The 6900XT uses 63 watts over its baseline when encoding, for a total system draw of ~320w. The i9-9900k uses 111 watts over baseline for a total system draw of ~360w. (Keep in mind there is some extra CPU usage during hardware encode as well, so that is why total power is not a direct difference between the two.)
For encoder speed parity, using a UHD file I was able to get within a 0.1% VMAF difference between VCE slow (same speed as above) and x265 veryfast (at 10.35fps).
Let’s take a generic use case of a two hour long video running at 24fps: 24fps * 60 seconds in a minute * 60 minutes in an hour * 2 hours = 172,800 frames.
Estimated times and cost:
- VCE – slow – 6900XT @ 96.47fps – 29.85 minutes
  - 0.16 kWh @ 320 watts
  - $0.019 at 12 cents per kWh
- x265 – i9-9900K @ 10.35fps – 278.3 minutes (roughly four and two-thirds hours)
  - 1.67 kWh @ 360 watts
  - $0.20 at 12 cents per kWh
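The estimates above reduce to simple arithmetic. Here is the methodology as a sketch (the wattage and fps figures are from my setup; yours will differ):

```python
def encode_cost(frames: int, fps: float, watts: float, usd_per_kwh: float = 0.12):
    """Return (minutes, kWh, USD) for one encode at a constant system draw."""
    hours = frames / fps / 3600
    kwh = watts * hours / 1000
    return hours * 60, kwh, kwh * usd_per_kwh

FRAMES = 24 * 60 * 60 * 2               # two hours at 24 fps = 172,800 frames
vce = encode_cost(FRAMES, 96.47, 320)   # hardware: ~30 min, ~$0.02
x265 = encode_cost(FRAMES, 10.35, 360)  # software: ~4.6 hours, ~$0.20
```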
Where does this matter
The cost difference probably doesn’t sway many individuals, but if you’re a prolific encoder, this could save you both time and money.
Super Technical Head to Head Summary
|  | Software (x265) | Hardware (AMD VCE) |
| --- | --- | --- |
| Quality | ⭐ Best possible | Lacks basic HEVC needs (B-frames / pre-analysis) |
| Speed | Slow to Super Slow | ⭐ Crazy Fast |
| Requirements | ⭐ Any old electrified rock | Newer AMD GPU, Windows OS |
| Energy Usage | All the powah! | ⭐ Sips daintily |
So the winner is…. neither. If you’re encoding professionally you’ll be working with totally different software (like TITAN File). If you’re encoding at home, it really just depends on what hardware you already have. If you’re wondering which GPU to get for the best encoding, wait for next month’s article 😉
Basically they both do what they were designed for. I would say Hardware encoders might have a slight overall edge, as they could be used for all cases. Whereas x265 currently can’t do UHD HDR10 real time encoding on consumer hardware.
Encoding HDR10 with AMD GPUs
Already got an AMD GPU and want to start encoding with it? Great, let’s get down to how to do it. First off make sure you are using Windows. If you’re using Linux for this, don’t.* If Linux is all you have, I would still recommend using a passthrough VM with Windows on it.
For Windows users, rigaya has made a beautiful tool called VCEEncC that has HDR10 support built in. It is a command line tool, but good news, FastFlix now supports it!
You will need to download VCEEncC manually as well, and make sure it is on the system path or link it up in File > Settings of FastFlix.
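A quick way to check whether VCEEncC is visible on your system path (I'm assuming the 64-bit executable is named `VCEEncC64`; adjust if your download differs):

```python
import shutil

def find_encoder(names=("VCEEncC64", "VCEEncC")):
    """Return the first VCEEncC executable found on the system PATH, or None."""
    for name in names:
        path = shutil.which(name)
        if path:
            return path
    return None

if find_encoder() is None:
    print("VCEEncC not on PATH - point FastFlix at it under File > Settings")
```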
VCE doesn’t have a lot of options to worry about like other encoders do, so you can be on your way to re-encoding in no time!
* It is possible on Linux to encode HEVC using VAAPI, but you would need to apply custom MESA patches to enable HDR10 support. AMF / VCEEncC only supports H.264 on Linux currently.
Best quality possible with VCE
Beauty is in the eye of the beholder, and so is video quality. Some features, like VBAQ (Variance Based Adaptive Quantization), will lower measured metrics like VMAF and SSIM, but are designed to look better to human eyes. Assuming you care about how the video looks, and aren’t just trying to impress your boss with numbers, we will stick with those.
| Setting | Value |
| --- | --- |
| Preset | slow |
| Motion Vector Accuracy | q-pel |
| VBAQ | enabled |
| Pre-Encode | enabled |
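Those settings translate to a VCEEncC invocation roughly like the sketch below. I'm writing the flag names from memory, so treat them as assumptions and verify against `VCEEncC --help` before relying on them (I've left out the motion-vector accuracy and pre-encode flags since I'm least sure of their exact spellings):

```python
def vce_hdr10_args(source: str, output: str, bitrate_kbps: int) -> list:
    """Assemble a VCEEncC argument list for a 10-bit HEVC encode.
    Flag names here are assumptions - confirm against `VCEEncC --help`."""
    return [
        "VCEEncC64",
        "--codec", "hevc",
        "--output-depth", "10",  # 10-bit output, required for HDR10
        "--preset", "slow",
        "--vbaq",                # variance based adaptive quantization
        "--vbr", str(bitrate_kbps),
        "-i", source,
        "-o", output,
    ]

args = vce_hdr10_args("source.mkv", "out.mkv", 15000)
```

In practice FastFlix assembles this command line for you; this is just to show what the settings above look like at the CLI level.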
Of course the largest determination of quality will be how much bitrate you will allow for (or which quantization rate you select). FastFlix has some loose recommendations, but what is truly needed will vary greatly dependent upon source. A GoPro bike ride video will require a lot more bitrate than a mounted security camera with very little movement overall.
Warnings and gotchas
Not all features are available on all cards. Some features, like B-frame support, were promised for RDNA2 but are still not available.
Driver versions can make a difference. Always try the latest first, but if you experience issues using VCE, the driver may be shipping a problematic AMF version and you may need to downgrade to an older driver.
What do I use?
Personally I avoid re-encoding whenever possible. However, now that I do have an AMD GPU I do use it for any of my quick and dirty encoding needs. Though I would be saying the same about NVENC if I had a new Nvidia GPU (which does have B-frame support). In my opinion it’s simply not worth the time and energy investment for encoding with software. Either save the original or use a hardware encoder.
What about Nvidia (NVENC) or Intel (QSV)?
I am working to get access to latest generation hardware for both Nvidia’s NVENC and Intel’s QSV in the next month, so hopefully I will be able to create a follow up with some good head to head comparison. Historically NVENC has taken the crown, and by my research VCE hasn’t caught up yet, but who knows where QSV will end up!
Boring Details
- x265 was used at commit 82786fccce10379be439243b6a776dc2f5918cb4 (2021-05-25) as part of FFmpeg
- CPU is a i9-9900k
- VCEEncC 6.13 on 6900xt with AMF Runtime 1.4.21 / SDK 1.4.21 using drivers 21.7.2
Disclaimer
These tests were done on my own hardware purchased myself. All conclusions are my own thoughts and opinions and in no way represent any company.