Encoding settings for HDR 4K videos using 10-bit x265

There is currently a serious lack of data on compressing 4K HDR videos out there, so I took it upon myself to get learned in the ways of the x265 encoding world.

I have historically been using the older x264 mp4s for my videos, as they just work on everything. However, most devices finally have some native h.265 decoding. (As a heads up, h.265 is the specification, and x265 is an encoder for it. I may mix them up myself in this article; don’t worry about the letters, just the numbers.)

Updated: 4/14/2019 – New Preset Setting (tl;dr: use slow)

What are the best settings for me to use when encoding x265 videos?

The honest to god true answer is “it depends”, however I find that answer unsuitable for my own needs. I want a setting that I can use on any incoming 4K HDR video I buy.

I mainly use Handbrake to encode my videos, so I went straight to their documentation. It suggests that for 4K videos with x265 you use a Constant Rate Factor (CRF) in the range of 22-28 (the larger the number, the lower the quality).

Through some experimentation I found that I personally can never really see a difference between anything lower than 22 using the Slow preset. Therefore I played it safe, bumped it down a notch, and just encode all of my stuff with x265 10-bit at CRF 20 on the Slow preset. That way I know I should never be disappointed.

Then I recently read YouTube’s suggested guidelines for bitrates. They claim that a 4K video coming into their site should optimally be 35~45Mbps when encoded with the older x264 codec.

Now I know that x265 can be around 50% more efficient than x264, and that YouTube needs higher quality coming in so that when they re-compress it, it will still look good. But when I looked at the videos I was enjoying just fine at CRF 22, they were mostly coming out with less than a 10Mbps bitrate. So I had to ask myself:

How much better is x265 than x264?

To find out I would need a lot of comparable data. I started with a 4K HDR example video. The first thing I did was chop out a one-minute segment and promptly remove the HDR, so I could compare the two encoders via their default 8-bit compression.

I found this code to convert the 10-bit “HDR” yuv420p10le colorspace down to the standard yuv420p 8-bit colorspace on the colourspace blog, so props to them for having a handy guide just for this.

ffmpeg -y -ss 07:48 -t 60 -i my_movie.mkv -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" -c:v libx265 -preset ultrafast -x265-params lossless=1 -an -sn -dn -reset_timestamps 1 movie_non_hdr.mkv

Average Overall SSIM

Then I ran multiple two-pass ABR runs using ffmpeg for both x264 and x265 with the same target bitrate. Afterwards, I compared each to the original using the Structural Similarity Index (SSIM). Put simply, the closer the result is to 1, the better: it means there are fewer differences between the original and the compressed version.

Generated via Python and matplotlib

The SSIM result is computed frame by frame, so we have to average them all together to see which is best overall. On the section of video I chose, x264 needed considerably more bitrate to achieve the same score. The horizontal line shows this: x264 needs 14Mbps to match x265’s 9Mbps, a 5000kbps difference! If we wanted to go by YouTube’s recommendations for a video file that will be re-encoded again, you would only need a 25Mbps x265 file instead of a 35Mbps x264 video.

Sample commands I used to generate these files:

ffmpeg -i movie.mkv -c:v libx265 -b:v 500k -x265-params pass=1 -an -f mp4 NUL

ffmpeg -i movie.mkv -c:v libx265 -b:v 500k -x265-params pass=2 -an h265\movie_500.mp4

ffmpeg -i my_movie.mkv -i h265\movie_500.mp4 -lavfi ssim=stats_file=265_movie_500_ssim.log -f null -
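If you want to crunch the resulting log yourself, a minimal Python sketch like the following works (assuming the log filename from the sample command above, and ffmpeg’s ssim filter writing one “All:” score per frame):

import statistics

def ssim_summary(log_file):
    """Average SSIM plus the mean of the lowest 1% of frames."""
    scores = []
    with open(log_file) as f:
        for line in f:
            # Each line looks like: n:1 Y:0.99 U:0.99 V:0.99 All:0.99 (23.0)
            for field in line.split():
                if field.startswith('All:'):
                    scores.append(float(field[4:]))
    scores.sort()
    worst = scores[:max(1, len(scores) // 100)]
    return statistics.mean(scores), statistics.mean(worst)

average, bottom = ssim_summary('265_movie_500_ssim.log')
print(f'Average SSIM: {average:.4f}, lowest 1%: {bottom:.4f}')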

Lowest 1% SSIM

However, the averages don’t tell the whole story, because if every frame were that good, we shouldn’t need more than a 6Mbps x265 or 10Mbps x264 4K video. So let’s take a step back and look at the lowest 1% of the frames.

Generated via Python and matplotlib

Here we can see x264 has a much harder time at lower bitrates. Also note that the highest marker on this chart is 0.98, compared to the total average chart’s 0.995.

This information alone confirmed for me that I will only be using x265 or newer encodings (maybe AV1 in 2020) for storing videos going forward.

Download the SSIM data as CSV.

How does CRF compare to ABR?

I have always read to use Constant Rate Factor over Average BitRate for stored video files (and especially over Constant Quality). CRF is the best of both worlds. If you have an easily compressible video, it won’t bloat the encoded video to meet some arbitrary bitrate (and bitrate directly correlates to file size). Nor will it be constrained to that limit if the video requires a lot more information to capture a complex scene.

But that is all hypothetical. We have some hard data, let’s use it. Remember, Handbrake recommends a range of 22-28 CRF, and I personally cannot see any visual loss at CRF 20. So where does that show up on our chart?

Generated via Python and matplotlib

Now this is an apples-to-oranges comparison. The CRF videos were done via Handbrake using x265 10-bit, whereas everything else was done via ffmpeg using x265 or x264 8-bit. Still, we get a good idea of where these show up. At both CRF 24 and CRF 22, even the lowest frames don’t dip below SSIM 0.95. I personally think the extra 2500kbps for the large jump in minimum quality from CRF 24 to CRF 22 is a must. To some, including myself, the extra 4000kbps jump from CRF 22 to CRF 20 could be worth it as well.

So let’s get a little more apples to apples. In this test, I encoded all videos with ffmpeg using the default presets. I did three CRF videos first, at 22, 20, and 18, then used their resulting bitrates to create three ABR videos.

Generated via Python and matplotlib

Their overall average SSIM scores were nearly identical. However, CRF shows its true edge on the lowest 1%, easily beating out ABR at every turn.

To 10-bit or not to 10-bit?

Thankfully there is a simple answer. If you are encoding with x264 or x265, use 10-bit if your devices support it. Even if your source video doesn’t use the HDR color space, it compresses better.

There is only one time not to use it: when the device you are going to watch it on doesn’t support it.

Which preset should I use?

The normal wisdom is to use the slowest preset you can stand for the encoding time. Slower = better video quality and compression. However, that does not mean a smaller file size at the same CRF.

Even though others have tackled this issue, I wanted to use the same material I was already testing and make sure it held true with 4K HDR video.

Generated via Python and matplotlib

I used a three-minute 4K HDR clip, using Handbrake and only modifying which preset was used. The results were surprising to me, to be honest; I was expecting medium to have a better margin between fast and slow. But based on just the average, slow was the obvious choice, as even dropping the CRF from 18 to 16 on medium didn’t match slow’s quality. Even though the file size was much larger for the CRF 16 Medium encoding than for the CRF 18 Slow! (We’ll get to that later.)

Okay, okay, let’s back up a step and look at the bottom 1% again as well.

Generated via Python and matplotlib

Well well wishing well, that is even more definitive. The jump from medium to slow is significant in multiple ways. Even though slow costs double the encoding time of medium, it really delivers in the quality department, easily beating out the lowest 1% of even CRF 16 medium, two entire steps away.

Generated via Excel

The bitrates are as expected: the higher the quality, the more bitrate it needs. What is interesting is that if we put the CRF 16 - Medium encoding’s bitrate on this chart, it would shoot off the top at a staggering 15510kbps! Keep in mind that is while still being lower quality than CRF 18 - Slow.

In this data set, slow is the clear winner in multiple ways. That is very similar to others’ results as well, so I’m personally sticking to it. (And had I run these tests first, I would have used slow for all the other testing!)

Conclusion

If you want a single go-to setting for encoding, based on my personal testing CRF 20 with the Slow preset looks amazing (but may take too long if you are using older hardware).

Now, if I had a supercomputer and unlimited storage, I might lean towards CRF 18 or maybe even 16, but I still wouldn’t feel the need to go the whole way to CRF 14 and veryslow or anything crazy.
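If you prefer calling ffmpeg directly instead of Handbrake, that go-to setting looks roughly like this (a sketch with placeholder file names; note that raw ffmpeg may also need the HDR mastering metadata passed through via -x265-params, which Handbrake handles for you):

from subprocess import run

# CRF 20, slow preset, 10-bit; audio copied as-is (placeholder file names)
run(
    'ffmpeg -i my_movie.mkv -c:v libx265 -preset slow -crf 20 '
    '-pix_fmt yuv420p10le -c:a copy my_movie_x265.mkv',
    shell=True, check=True)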

I hope you found this information as useful as I did, if you have any thoughts or feedback please let me know!


Paint, Paper, Panoramas, and Python

I’m an artist and a python developer, two things that rarely occupy the same worlds, let alone the same sentence. However, I have recently found a way to combine these two passions: Panoramas.

My current smartphone takes excellent pictures. It does a great job at figuring out colors, lighting, and focus, even in low light. As an artist, this is important to me because I often use my phone to snap quick pictures of a scene as a reference to take back to my studio. It’s a huge improvement over the technology I had in my hands even five years ago. There is one thing about my old phone that I miss though – its ‘panorama’ photo mode, but not because it was better.

I miss how amazingly awful it used to be, and more importantly, the freedom to make awful pictures it allowed. I’d point the lens out the window of the car as it sped along (as a passenger of course) to make jagged and confusing images of tiny bits of the landscape that the phone struggled to hodgepodge together. I’d tilt and move the phone in random directions to make weird swirls of the horizon. Even when being used ‘as directed’, it would usually struggle with focus and lighting, coming up with spontaneous and wonderfully terrible photos full of abstract light glare or menacing dark patches. It’s hard to explain, but sometimes as an artist, a terrible photo can be just as inspiring as those picture perfect reference pics I take with me back to the studio.

My current phone is too smart for that though, and it snatches away any joy of bad photography by making consistently beautiful and seamless panoramas. Not only that, but it accomplishes this mostly by yelling at you (“You’re going too fast!”) or by using angry arrows to make sure you can only move the phone in one direction, and then abruptly ending the photograph when you don’t cooperate. So, I did what anyone does when they get nostalgic for awful photography – I made a python script to make my own terrible panoramas.

My plan was simple. First, I would shoot short videos where my phone wouldn’t yell at me for moving, tilting, and spinning the image as much as I wanted. Next, I’d use Python to convert each frame of the video clip to an image, crop each frame down to a tiny sliver from its center, and then glue them all together. The results are imperfect. And gloriously so.

Side note: Although I used my smartphone to shoot some video, this script could be applied to any video. Think of the wild panoramas you could create from some Russian dash cam footage, or a GoPro strapped to a fish, or a tiny clip from the Lord of the Rings. However, this script works best on videos that are less than 10 seconds long, or else it produces mile-long panoramic images. Currently, I don’t bother limiting the image size at all, but theoretically I could, by using only one out of every five frames for instance, or by cutting down the image slice size based on video length.

The Python

I used ffmpeg for turning each video frame into an image. It was simple to install: just download and unzip. Here’s a handy installation guide -> https://github.com/adaptlearning/adapt_authoring/wiki/Installing-FFmpeg

The Python Imaging Library (Pillow) is the only other requirement, installed with pip.

pip3 install pillow

The script works by pulling all videos out of a source directory based on file suffix and creating a panorama for each. This could easily be modified to convert just one video at a time by removing the loops and passing the path to the desired video directly to ffmpeg.


import os
import shutil
from pathlib import Path
from subprocess import run, PIPE
from PIL import Image

directory = Path('my\\videos\\dir')

vids = []
for vid in directory.iterdir():
    if vid.suffix.lower() in ('.mp4', '.mkv'):
        vids.append(vid)

Every frame pulled out by ffmpeg is stored as a file. I delete the directory and recreate it before ffmpeg runs to clear out the old frames from the last run.


for vid in vids:
    shutil.rmtree("pics", ignore_errors=True)
    os.makedirs("pics", exist_ok=True)

    print(f'Creating panoramic {vid.stem}')
    result = run(
        f'ffmpeg -i {vid.absolute()} '
        f'-y pics\\thumb%04d.jpg -hide_banner', 
        shell=True, stderr=PIPE)
    result.check_returncode()
    print(result.stderr.decode('utf-8'))

After it finishes pulling out all the frames, I start the panorama by creating an empty image. I need the dimensions of the finished image to create it. To get the final width, I multiply the number of frames ffmpeg pulled out by the width of my image slice (40 pixels). For the height, I open up one of the frames and use its size as a reference. I also use the sample image’s dimensions to figure out the center of the image for cropping everything down later.

Then, I loop through all the frame images in reverse order (because … long story short, it usually looks better that way) and then work on slicing each image down to 40 pixels wide to glue into the panorama.

    
    sample = Image.open("pics/{}".format(os.listdir("pics")[0]))
    width, height = sample.size
    center = width // 2

    panoramic = Image.new('RGB', (len(os.listdir("pics")) * 40, height))

    # This offset is so PIL knows where to start adding
    # each image slice to the panorama
    x_offset = 0

    # Sort the frame names so they glue together in (reverse) order
    for i in sorted(os.listdir("pics"), reverse=True):
        img = Image.open("pics/{}".format(i))
        area = (center - 20, 0, center + 20, height)
        cropped_img = img.crop(area)
        panoramic.paste(cropped_img, (x_offset, 0))
        x_offset += 40

    panoramic.save(f'{vid.stem}.jpeg')
    panoramic.close()

The Painting

So far, I am quite happy with the results of this adorable little script. It has definitely given me the creative inspiration I was missing. In the past two weeks, I have done three series of paintings based on panoramas I have created using it, with plans for many more. Here’s an example of how I used it to create some artwork!

I took this video:

turned it into this panorama:

played with some paint and smudged around some charcoal and pastels:

and came up with this:

Final Thoughts

It feels really amazing to apply Python to unusual problems, even if that challenge is finding a unique way of creating original art. Plus, if the inspiration ever dries up, I have some ideas for making this script even more fun:

  • grab each slice from a random spot rather than dead center of the image for something much more jumbled and abstract (sketched below)
  • options to use only every nth frame for longer videos (also sketched below)
  • PIL ‘effects’, like a black and white mode, over-saturation, or extra blurry images
  • an ‘up and down’ mode for tall panoramas
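For instance, here is a rough sketch of the first two ideas, reusing the "pics" frame directory from the main script (jumbled_panorama is a hypothetical helper, not part of the script above):

import os
import random
from PIL import Image

def jumbled_panorama(step=5, slice_width=40):
    # Keep only every nth frame so long videos stay a reasonable width
    frames = sorted(os.listdir('pics'), reverse=True)[::step]
    sample = Image.open(f'pics/{frames[0]}')
    width, height = sample.size
    panoramic = Image.new('RGB', (len(frames) * slice_width, height))
    for index, name in enumerate(frames):
        img = Image.open(f'pics/{name}')
        # Slice from a random spot instead of dead center
        left = random.randint(0, width - slice_width)
        sliver = img.crop((left, 0, left + slice_width, height))
        panoramic.paste(sliver, (index * slice_width, 0))
    return panoramic

jumbled_panorama().save('jumbled.jpeg')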

I hope you enjoyed! Feel free to check out my website or my instagram for more artwork if you are interested.

Replacing NZXT’s CAM software on Windows for Kraken

NZXT Kraken coolers are awesome for CPUs or GPUs. Their CAM software on the other hand is slow, bloated and possibly stealing your data. Thankfully, there are open source alternatives available.

The option that I will walk you through using is a command line tool that doesn’t need to be constantly running in the background, and doesn’t require any internet connection to work.

APRIL 2019 UPDATE: liquidctl now has the best support all around and automated Windows builds! Use it and forget about everything below!

UPDATE: I have created a standalone executable for Windows for those who do not want to bother with the steps below. It is available on github. (For x61 and other second gen users, you will still need to update the driver as part of step five below.)

There are a few different libraries to control third generation coolers (Kraken x62, x72, x52 and x42). The only one I have found that supports fan control on Windows for the second generation (Kraken x61, x41 and x31) as well as the third is liquidctl (as of 2/6/2019 still in an experimental dev branch; you can check the issue directly for updates).

Please note: none of this is my own software, and this is only a guide based on my own experience. This is fully “as-is”, no warranty or guarantee it won’t harm your hardware or other software.

I am going to get a little more detailed for these steps than usual, as I want to make this easy for anyone’s skill level. For power users, here are the abbreviated steps:

  1. Download and install the Python 3.7+ x86 version
  2. Create a Python virtual env
    • run command python -m venv venv
    • Activate it venv\Scripts\activate.bat
  3. Download libusb
    • extract the files from MS32/dll into the venv\Scripts folder
    • You may need an extractor program like 7zip
  4. Download liquidctl
    • Make sure you are in the activated venv
    • pip install liquidctl
  5. Kraken x61 / gen two users only – Download zadig and install the libusbK driver for the Kraken device
    • WARNING – CAM will no longer be able to use this device unless you uninstall this new driver
    • Select Options > List all Devices
    • Find the device “690LC” in the dropdown list; it should have a USB ID of 2433 B200
  6. Run liquidctl to change your fan speed!
    • liquidctl set fan speed 60

As of time of writing, their software does not by default include the code for changing the logo color on the second generation Krakens (x61, etc.), but I’ll show you how to add it easily. Check out the liquidctl repo, which now has great support for older devices, as well as some EVGA and Corsair support!

Download Python

For those of you who do not already have it, we will need to make sure there is a version of Python 3 on your system. Go to the official download page from the Python Software Foundation at https://www.python.org/downloads/windows/ and click on the top link for “Latest Python 3 Release”.

Scroll down to the bottom of the next page, and select the “Windows x86 Executable Installer”. Yes, install the x86 version, aka 32-bit, even if you have 64-bit Windows. If you do happen to download and install the x86_64 version instead, you will just have to use a different libusb dll (thankfully included in the same download bundle).

After the exe is downloaded, double click on it, and let’s make a few changes during the installation to make your life easier. First make sure to add Python to PATH. This will make it possible to run python from the command line without specifying the full path to its executable. Then click on “Customize installation”.

On the next page we don’t need to touch anything. Though if you are only going to use Python for this, you can remove Documentation and the Python test suite; it will still work fine and reduces bloat.

On the final page we just want to make sure it’s installed to the Program Files directory instead of the obscure user app dir that it uses by default. Then hit install.

You should now be able to open python easily in the command prompt. You can open it by hitting the Windows key on your keyboard (or manually opening start menu), typing “Command” and then click on “Command Prompt”.

C:\Users\me>python --version
Python 3.7.2

Create a virtual environment

By creating a virtual environment you isolate the new libraries you are going to install from the system version of Python, eliminating possible future headaches if you need to install conflicting packages.

Thankfully it is super simple to do: open a command prompt, navigate to the directory where you want the new virtual environment folder, and type:

python -m venv venv

This is simply saying “run python, using the venv module (-m venv), to create a virtual env at the venv directory”. Then you want to “activate” it so that your command prompt will instead use that new isolated version of Python and Python tools (like pip).

venv\Scripts\activate.bat

This will cause your command prompt to give a nice little notification that you are now using that venv:

C:\Users\me> venv\Scripts\activate.bat

(venv) C:\Users\me>

Remember the path you installed the virtual env to, we will use it later!

Install libusb DLLs

Just because life has to be a little extra difficult, libusb only pushes their releases as 7z and tar.bz2 archives. Which means, if you don’t already have a tool that can open these types of files, head over to https://www.7-zip.org/download.html and download either the exe or msi for your version of Windows (if you are unsure, just get the 32-bit exe one).

Installing it should be fine with the defaults, feel free to change as you want. When done, go over and download libusb. The developers of liquidctl suggest you use 1.0.21 at time of writing, so we are linking directly to that version: https://github.com/libusb/libusb/releases/tag/v1.0.21.

After download, extract those files with 7zip.

Then you will want to go into the new directory, and find the proper DLLs for your installed version of Python. If you followed this guide so far, that will be the MS32 folder.

You are going to copy those into your virtual environment’s Scripts directory, as that is by default added to the PATH when you activate the virtual env. So if I was in the C:\Users\me directory when I ran python -m venv venv, I would copy those three files into C:\Users\me\venv\Scripts.

They look a little out of place, but meh, it works.

Change the Kraken drivers (x61, x41, x31 Only)

Sadly (or not so sadly) we can’t use the default second gen Kraken drivers installed by CAM. Which also means that when we switch these, CAM will no longer be able to interface with the Kraken unless this new driver is removed via Device Manager.

ONLY FOR KRAKEN x61, x41 and x31! Skip ahead to the next section for anything else.

Go to https://zadig.akeo.ie/ and download their awesome driver switcher tool.

Run it as an administrator, then go to Options and select “List all Devices”.

Choose 690LC from the dropdown list. Its USB ID should be 2433 B200. Select either the libusb-win32 or libusbK driver from the tiny select buttons on the right. (I had trouble the first time, timing out on the libusb-win32 driver, so I switched to libusbK; your mileage may vary.) Then hit ‘Replace Driver’.

This will probably take a while. Hopefully it pops up a success window in a few seconds or minutes. If not, I suggest waiting a full ten minutes, then switching to the other driver type (i.e. select libusbK if you first tried libusb-win32) and trying again before running to their github for help.

Install liquidctl

We’re almost there!

Switch back to the command prompt and make sure you are in the active virtual env. If you don’t see (venv) at the start of the line, go back and take a look at the “Create a virtual environment” section.

If you are using this for a newer third generation device, like the Kraken x62, you can use the regular module.

pip install liquidctl

However, if you do need second generation support for the Kraken x61, x41 or x31, you will need to grab the experimental branch. Please be aware that this may be in an unstable state at the time of your download. You can follow current issues directly in the pull request.

pip install git+https://github.com/jonasmalacofilho/liquidctl.git@add-second-gen-krakens

This will install the command in the active virtual environment as liquidctl! Keep in mind, you will need to activate this virtual environment to use this command.

Using liquidctl

You can find the full list of what you can do via --help or from their github. But to start you want to list out devices detected on the system, initialize them, and view their status.

(venv) C:\>liquidctl list
Device 0, Asetek 690LC (NZXT, EVGA or other) (experimental)

(venv) C:\>liquidctl initialize
report: failed to (pre) open (control: open), ignoring

(venv) C:\>liquidctl status
Device 0, Asetek 690LC (NZXT, EVGA or other) (experimental)
report: failed to (pre) open (control: open), ignoring
Liquid temperature          31.2  °C
Fan speed                   1320  rpm
Pump speed                  3600  rpm
Firmware version         0.7.0.0

For the Kraken x61 and gen-two devices, the only thing you can do* is set the fan speed. (* As of time of writing; it could have come a long way by whenever ‘now’ is. Also, you can use my “hack” below to change the logo color.)

(venv) C:\> liquidctl set fan speed 100

If you are using third gen devices, you can have fun with pump speeds or colors as well!

liquidctl set ring color fading 350017 ff2608
liquidctl set logo color spectrum-wave --speed slowest
liquidctl set pump speed 90

Multiple Devices

If you happen to have multiple supported devices on your system, you will have to select which one you want to use. First figure out which device is numbered as which.

(venv) C:\> liquidctl list
Device 0, Asetek 690LC (NZXT, EVGA or other) (experimental)
Device 1, NZXT Kraken X (X42, X52, X62 or X72)

Then just append --device <num> to any follow up commands.

liquidctl --device 1 set ring color spectrum-wave
liquidctl --device 0 set fan speed 100

You can see more detailed options including color modes for Kraken devices here.

Getting color working for the Kraken x61

If you don’t code and are never going to use this virtual environment for anything other than this tool, don’t fret: it’s a copy-paste operation! (For others that don’t want to modify their site-packages files and know what that means, you can do a local develop-mode install via setup.py and play to your heart’s content.)

Okay, so all we need to do is simply add a new section of code to one of the files. You’ll have to navigate a few layers deep into the virtual environment you created: in venv\Lib\site-packages\liquidctl\driver there is a file called asetek.py that you need to open. If you installed the full Python suite, you should be able to easily right click the file and edit in IDLE.

Scroll to the bottom of the file, and paste the following block of code:

    def set_color(self, channel, mode, colors, speed):
        """Set the color of the logo."""
        modes = ('fixed', 'alternating', 'blinking', 'off')
        speeds = {
            'fastest': 1,
            'faster': 2,
            'normal': 3,
            'slower': 4,
            'slowest': 5
        }
        try:
            speed = int(speed)
        except ValueError:
            if speed not in speeds:
                LOGGER.warning('Speed must be a value between 1 and 255, setting to 1')
                speed = 1
            else:
                speed = speeds[speed]
        else:
            if speed < 1 or speed > 255:
                speed = 1
                LOGGER.warning('Speed must be a value between 1 and 255, setting to 1')

        if channel != 'logo':
            LOGGER.warning('Only "logo" channel supported for this device, falling back to that')

        if mode not in modes:
            raise NotImplementedError('Modes available are: {}'.format(",".join(modes)))

        if mode == 'off':
            color1, color2 = [0x00, 0x00, 0x00], [0x00, 0x00, 0x00]
        else:
            color1, *color2 = colors
            if len(color2) > 1:
                LOGGER.warning('Only maximum of 2 colors supported, ignoring further colors')
            color2 = color2[0] if color2 else [0x00, 0x00, 0x00]

        self._begin_transaction()
        data = ([0x10] +
                color1 +
                color2 +
                [0xff, 0x00, 0x00, 0x37,
                 speed,
                 speed,
                 0x00 if mode == 'off' else 0x01,
                 0x01 if mode == 'alternating' else 0x00,
                 0x01 if mode == 'blinking' else 0x00,
                 0x01, 0x00, 0x01])
        self._write(data)
        self._end_transaction_and_read()

Because Python is spacing specific, make sure the def set_color lines up with the def _write above it.

Then make sure to save the file, either by hitting Ctrl+s or going to File > Save. The options for colors are quite limited for second gen devices versus third gen, but at least you can use them!

Mode           Colors to supply
off            0
fixed          1
alternating    2
blinking       1
liquidctl --device 0 set logo color blinking ffffff --speed 2

Now you can issue a command to set the colors like normal. Provide --speed to set the number of seconds between alternating colors or blinks.

If you have any issues with the colors command, please let me know. If you have issues with the program itself, please open an issue directly on liquidctl github page.

Windows Auto-start

Alright, I’ve played around and have everything set perfectly. But wait, I rebooted and it’s all gone! NOOOO!

Thankfully, it’s rather simple to have your configuration auto-load on startup. We simply have to put the commands into a script, and put that script into the Windows auto-start directory.

Open your run menu by hitting the Windows key plus R at the same time (or typing run into Windows search). When the little box pops up, type in shell:startup and hit OK. It will take you to a deep dark folder like C:\Users\me\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup

Open your favorite text editor (or, if you know what you’re doing, just right click in the folder, create a new text file, then rename it with a .bat extension). Keep that window open, so we can copy its path.

Now to create the small script. The script’s first command is to go into the venv\Scripts directory, so make sure to change it to the full path of wherever that is on your machine. Then, simply modify the liquidctl commands to the ones you want to issue.

cd C:\YOUR_OWN_FULL_PATH_HERE_MAKE_SURE_TO_CHANGE\venv\Scripts
call activate.bat

liquidctl --device 0 set logo color fixed ffffff
liquidctl --device 0 set fan speed 30 25  40 50  50 100

liquidctl --device 1 set logo color fixed ffffff
liquidctl --device 1 set ring color fixed ffffff
liquidctl --device 1 set pump speed 30 50  40 75  50 100
liquidctl --device 1 set fan speed 30 25  50 50  60 75  70 100

Now when you go to save this new file, copy that long path from the open shell:startup directory into the save bar so it is saved into that directory. Then save the file under a name you will remember, like kraken_control.bat; just make sure it ends in .bat. In notepad, you may have to change the Save as type dropdown to All files.

You should now see that file in the startup directory. Go ahead and double click on it to make sure it works. The easiest way to test is to first manually change your Kraken colors to something different, then make sure running the script changes them back.

If it doesn’t work, most likely a command is mistyped. Copy it line by line into a cmd prompt and make sure each works for you there. (Or just run the .bat file from an open cmd prompt, if you know how, so you don’t have to copy-paste.)

That’s all for this tutorial. I hope you found it useful and stress free! If there is something that needs improving, please let me know in the comments below.

Personal Cloud Media Server – Encrypted, Streamable, Affordable…Possible?

Is it possible to build a personal media server that is hosted in the cloud, while making privacy, security, and accessibility paramount? I wanted to find out, and this post will dive into the options available to achieve such a possibility. (Spoiler: I did end up making my own software to do just this!)

First, of course: why even try this when other options already exist? For example, Plex and Subsonic are some great options if you want to host a media server from your own home. The catch is that you then have to have good upload speeds, storage space, and an always-running server or NAS, and you have to be concerned about how private your data really is. Because at the end of the day these are companies, not just software, and they are beholden to the requests of government agencies. They also hold all user data in a single, potentially hackable, silo.

Cost Breakdown

So first, fast upload speeds. If you’ve got them, you’re golden, but if your ISP doesn’t offer high enough speeds, you’re screwed. And even if you do have fast upload speeds, you now need a server with hard drives that is always being fed electricity. Time to figure out what it costs to remove that need entirely.

On the flip side, a media server is pretty simple, logistics-wise. You need a server to host the web page or API, and a storage provider. These could even be the same thing, however I have yet to find a price-conscious option that includes both. Instead, for my personal needs, I priced out the difference between buying a 2-bay NAS (as it offers low electric use and data redundancy) with a local server, and constantly paying for an online setup storing roughly 2TB of data (BackBlaze for storage, DigitalOcean for the webserver).

Local            Low    High      Cloud          Low    High
2-Bay NAS        $150   $300      2TB Storage    $140   $600
2TB HDDs         $70    $130      Web server     $25    $80
Electric         $5     $30
Internet         $0     $50

Immediate Cost   $220   $430
Yearly Cost      $5     $80       Yearly Cost    $165   $680

And the numbers speak for themselves: it is much, much cheaper and more viable to buy a NAS and use the established software. Over five years, even the low end of the cloud costs ($165 x 5 = $825) matches the entire high-end local setup plus its running costs ($430 + 5 x $80 = $830). Only an idiot with a paranoiac need for total control and security would even think about making their own software and paying so much to host it online.

I naturally started working on the architecture for how to build my cloud media player after the price analysis.

Design

Having content that is both easily streamable and encrypted is a doozy. It wouldn’t really have been possible for a regular home user a few years ago. But thanks to MPEG DASH and HLS, we now have video formats with those features built in!

HLS is far more common, but it is a proprietary format developed by Apple and doesn’t have nearly the same feature set as DASH. (Note: Apple should rename their company to Sour Apple, because they refuse to support the internationally standardized DASH format because they hate competition.) So for my own purposes, I chose MPEG DASH.

The real downside to either of these formats, though, is that now you have to re-encode all your videos before uploading them, ugh. But it really can’t be helped, and at least it standardizes your library. After figuring out a bit more of how DASH works, I created a super basic structure I wanted to follow:

The webserver needs to allow for finding and playing the encrypted movies. DASH supports multiple DRM methods, but the best option for a home user is “clear key” DRM, aka a password. Well, you don’t want to store raw passwords in the database, so that means anything in there also has to be encrypted. Oh, and if you really want the storage provider to have no clue what’s on there if they scan your stuff, that means subtitles and cover files need to be encrypted too. Oy.
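To make that concrete, the idea for everything outside the DASH streams looks something like this (a Python sketch of the concept only; the actual app is Node, and this uses the cryptography package’s Fernet as a stand-in for whichever symmetric scheme you prefer):

from cryptography.fernet import Fernet

# Generate a key once and keep it out of the database
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a subtitle file before it ever reaches the storage provider
with open('movie.srt', 'rb') as f:
    encrypted = fernet.encrypt(f.read())
with open('movie.srt.enc', 'wb') as f:
    f.write(encrypted)

# And decrypt it back on the serving side
original = fernet.decrypt(encrypted)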

But I really wanted to see if this was possible and to learn this new tech, so I plowed on. I was also working heavily with JavaScript at work, so I wrote it to use Node instead of my beloved Python. Two weeks, forty dependencies and a job change later, I had ZABAVA!

Solution?

Zabava translates to “fun” or “entertainment” from Bosnian / Czech / Croatian (and maybe more?) and I thought it was a cool sounding word. Every time I say it aloud, I imitate Jim Carrey from Ace Ventura saying “shikaka”. (No idea if that’s correct, but no one’s stopped me yet.)

It has user authentication with JWT tokens. Though, admittedly, it’s just a single user with admin rights right now.

It of course supports editing video information and changing the cover file.

And it has a script that handles automatically converting videos to DASH, adding them to the DB, and uploading them to the storage provider. I only designed the backend for BackBlaze B2 currently, as that is what I use, but it has a fairly agnostic provider setup to allow easy creation of others.
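The conversion step itself boils down to something like this bare-bones sketch using ffmpeg’s dash muxer (file names are placeholders; the real script also has to handle the DB entries, the encryption, and the B2 upload):

import os
from subprocess import run

# The dash muxer writes a manifest (.mpd) plus segment files into the folder
os.makedirs('my_movie', exist_ok=True)
run(
    'ffmpeg -i my_movie.mkv -c:v libx264 -c:a aac '
    '-use_timeline 1 -use_template 1 -f dash my_movie/manifest.mpd',
    shell=True, check=True)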

Sexy right? I am quite proud I was able to get nearly everything to work as envisioned.

Of course, not all fairy tales have the ending we imagine. Some videos still don’t like being converted to or played via the DASH format, and it takes forever and a day to convert and upload terabytes of media files. The code is also not up to my personal quality standard, as a lot was written while figuring out how the tech worked, without consideration for overall architecture.

In the end, after uploading all my media and then not using it for a few months, I stopped work on the project and bought a home NAS anyway (as I needed some solution for tons of other files as well).

I may go back and refactor it some weekend when I am in a crazy mood, but I don’t think it will fit my personal standard for code quality. However, if you are interested in the code and working on it yourself, you can find it on my github.

Summary

I learned a lot about building a streaming site and different security methodologies for it, so even though I wouldn’t qualify the code as a success, it’s surely a personal win.

If I were to do this again I would of course do things a little differently:

  • Try using HLS instead of DASH for wider support
  • Write the backend in Python instead of Node
  • SQL instead of Mongo (in my defense I was using Node at the time)

So, let’s go over the criteria again:

  • Encrypted – Yes!
    • Everything was secure
  • Streamable – Yes!
    • Some conversions might have issues until tweaked just right, but the majority of the content worked as expected.
  • Affordable – No
    • The NAS that I ended up buying cost more than streaming the media, but it was used for a lot more than just the few VHS backups I have. So not the cheapest option, but not a bank breaker.
    • Oh wait, factor in the time spent building the new app… yeah, it ain’t cheap.

As expected, you can’t have all the perks with no downsides, but if Security and Accessibility are your goals more than cost, something like this might interest you.

Is the world ready for Python 3?

The trek from Python 2 to Python 3 has been drawn-out, arduous and fraught with perils. How close are our dear developer knights to reaching the long-sought glory of Python 3?

Quest for the Python 3 – Artwork by Clara Griffith (Link may contain NSFW art)

PIP Downloads

Let’s first jump into what is being used the most currently. This data examines fourteen different libraries downloaded via PIP for a particular Python version. We are only including 2.7 and 3.4+, the Python versions that are currently supported.

The libraries analyzed are ones that have over 10K stars on github and have been downloaded via PIP. The contenders are: celery, django, flask, ipython, keras, mitmproxy, numpy, pandas, python-box, requests, scrapy, selenium, tensorflow, and tornado. (To be fair, numpy and python-box didn’t have 10K stars, but I used them in the script to make these graphics, so I gave them some spotlight too.)

As of January 2019, Python 3 downloads are eclipsing Python 2 by over 20%, with Python 3.6 bringing in over 39% of them, almost directly matching Python 2.7’s total.

That is good, but not great news. Thankfully Python 2 won’t just stop working at the end of this year, but those are rookie Python 3 numbers; we’ve got to pump them up!

Of course, we have to remember this is a small subset of all downloads. Pip downloads by themselves don’t tell the whole tale, but this does give us an idea of how things are going.

This is accomplished by using the PyPI BigQuery data and some SQL (adapted from Artem Golubin’s post about this from last year), then throwing it into matplotlib.

SELECT
  SUBSTR(details.python, 0, 3) as python_version,
  COUNT(*) as download_count
FROM
  TABLE_DATE_RANGE(
    [the-psf:pypi.downloads],
    DATE_ADD(CURRENT_TIMESTAMP(), -30, "DAY"),
    CURRENT_TIMESTAMP()
  )
WHERE
 details.installer.name='pip' and
 file.project = 'requests' -- change project name here
GROUP BY
  python_version
ORDER BY
  download_count DESC
LIMIT 100
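Once the query results are exported, charting them only takes a few lines. Here is a stripped-down sketch (the numbers are made-up placeholders, not real query output):

import matplotlib.pyplot as plt

# (python_version, download_count) rows as returned by the query above
results = [('2.7', 17000000), ('3.6', 13000000), ('3.7', 5000000),
           ('3.5', 4000000), ('3.4', 1500000)]

versions = [row[0] for row in results]
downloads = [row[1] for row in results]

plt.bar(versions, downloads)
plt.title('requests downloads by Python version (last 30 days)')
plt.ylabel('pip downloads')
plt.savefig('requests_downloads.png')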

Library Brawl: Who are the Python 3 champs?

In this head to head, we are going to compare two similar libraries, and see how they are doing on the switch to Python 3.

Web Frameworks

The first two up are very popular web frameworks to develop in, Flask and Django.

It’s a dead heat! Both libraries are doing well at attracting developers with a fresh mindset.

Machine Learning

The most popular github package by far was tensorflow, with over a hundred thousand stars. Here it’s paired against its younger brother keras, which actually depends on it (or other AI tools) to operate.

Machine learning needs to teach its developers how to update! It’s a sad day for AI.

Hacker vs Web Scraper

Okay, a man-in-the-middle proxy and a web scraper aren’t really directly comparable tools, but it’s still an interesting match up.

With this duo I was surprised they didn’t have a higher correlation. I was honestly expecting the mitm tool to have less Python 3 love, as a lot of “hacker” tools depend on the broken way Python 2 handles strings vs unicode and are thus hard to update.

Good job hackers, always keep your tool belt fresh! Scrapers….scrape it together.

Data Science

The last head to head is for the data scientists out there. You’ve got science in your name and numbers in your veins; you should be at the bleeding edge of tech!

Ouch, yinz need to get with the times.

Python Version Developers Use More Often

This data is hard to gather as an individual, so I’m going to have to cheat and just base this information off JetBrain’s yearly state of the ecosystem reports from 2017 and 2018.

In 2017, 53% of devs reported using Python 3 as their main language, which went up 22 points in 2018 to 75%. Based on those two points of data, we can come to a crystal clear, no doubt conclusion about how many developers will be using Python 3 as their main language in 2019.

That’s right, based on the past two year trend, 97% of developers should be using Python 3 in 2019.

Okay, well, maybe not. But I personally expect that number to be over 90% by the time Python 2 is EOL, which is excellent news.

Operating System Default Language

OSes have a fun time being in the crosshairs of everyone from desktop to server users, trying to figure out the right combo of what’s best for their users and for their own technology stack going forward. Every major Linux distribution agrees Python 3 is the way of the future and that they will need to change over. The hard part is deciding when it will impact the users least and fit best in their own release cycle. This has caused lots of headaches over the years. So where do we stand now?

OS              Python Version
Windows 10      None
OSX 10.8        2.7
Debian 9        2.7
RedHat 8*       3.6
Fedora 29       3.7
Ubuntu 19.04*   3.7

(* denotes upcoming releases this year)

Windows has the easy stance of just saying “do it yourself”, and Mac is, as usual, not bothering to innovate, just humming along until it breaks. Thankfully most Linux distros, which power the internet, are either already updated or updating this year. I haven’t seen for sure that Debian 10 will be released with Python 3, or that it will be out before year’s end, but I would be surprised if either were not true. Then there’s Arch Linux. Arch has had Python 3 as the standard for almost as long as it has existed, good boy!

Are we ready?

In all honesty, we are. We are far more prepared for this than the financial sector was for Y2K, and we all survived that. There are always going to be code bases that can’t update to the latest version easily, but that’s true across the entire software development world. And the Python Software Foundation has given an extended eleven years, which has allowed even the slowest of companies ample time to migrate to Python 3.

Python 3 everywhere? Bring it on!