
Let it Djan-go!

Artwork by Clara Griffith (https://www.claragriffith.com/)

Do you know someone who hails from the strange land of Django development? Where settings files are code, it is customary to use an ORM for all database calls, and classes reign supreme. It's a dangerous place that sits perilously close to the Javalands. Most who originally traveled there were told there would be riches, but they lost themselves along the way and simply need our help to come home again.

Classless classes

The first bad habit that must be broken is the horrendous overuse of classes. While classes are considered the building blocks of most object-oriented languages, Python has the power of first-class functions and a clean namespacing system, which negates many of the usual reasons for classes.

When using Django's framework, where a lot of its powerful operations are performed by subclassing, classes actually make sense. But when creating a small class for a single-use instance in a library or script, you are most likely just over-complicating and slowing down your code.

Let's use an example I saw recently, where someone wanted to run a command, but needed to know if a custom shell wrapper existed before running it.

import subprocess
import os

class DoIt:

    def __init__(self, command):
        self.command = command
        self.shell = "/bin/secure_shell" if os.path.exists("/bin/secure_shell") else "/bin/bash"

    def execute(self):
        return subprocess.run([self.shell, "-c", self.command], stdout=subprocess.PIPE)

runner = DoIt('echo "order 66" ')
runner.execute() 

Most of this class is setup and unnecessary, especially if the state of the system won't change while the script is running (i.e. "/bin/secure_shell" won't be added or removed during execution). Let's clean this up a little.

import subprocess
import os

shell = "/bin/secure_shell" if os.path.exists("/bin/secure_shell") else "/bin/bash"


def do_it(command):
    return subprocess.run([shell, "-c", command], stdout=subprocess.PIPE)

do_it('echo "order 66" ')

Shorter, cleaner, faster, easier to read. It doesn't get much better than that. Now, if the system state may change while the script is running, you would need to move the shell check inside the function itself, but that is also possible.
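For instance, a minimal sketch of that variation, re-checking for the wrapper on every call:

import subprocess
import os


def do_it(command):
    # Re-check on every call in case the wrapper appears or disappears at runtime
    shell = "/bin/secure_shell" if os.path.exists("/bin/secure_shell") else "/bin/bash"
    return subprocess.run([shell, "-c", command], stdout=subprocess.PIPE)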

To learn more, view the great PyCon video entitled "Stop Writing Classes." The video goes over more details of why classes can stink up your code.

Help me ORM-Kenobi!

Now onto probably the hardest change, and the one that will have the most significant impact. If you or someone you know learned how to interact with databases using an ORM (Django's Models) and doesn't know how to hand-write SQL queries, it's time to make an effort to learn.

An ORM is always going to be slower than its raw SQL equivalent, and in many cases it will be much, much slower. ORMs are brilliant, but they still can't always optimize queries. Even when they do, they then load the retrieved data into Python objects, which takes longer and uses more memory. Their abstraction also comes at the price of not always being clear about what is actually happening under the hood.

I am not suggesting never using an ORM. They have a lot of benefits for simple datasets, such as ease of development, cross-database compatibility, custom field transforms, and so on. However, if you don't know SQL to begin with, your ORM usage will most likely be extremely inefficient and under-powered, and your SQL tables will be non-normalized.

For example, one of the worst sins in SQL is to use SELECT * in production, which is the default behavior for ORM queries. In fact, the main Django query page doesn't even go over how to limit the columns returned!
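As an illustration, here is how limiting columns looks with a hypothetical Django model (the app, model, and field names are made up for this example):

from django.db import connection
from myapp.models import Entry  # hypothetical model with many columns

# Default: fetches every column of the table, SELECT * style
entries = Entry.objects.filter(published=True)

# Limiting to just the needed column with the ORM
headlines = Entry.objects.filter(published=True).values_list("headline", flat=True)

# The hand-written SQL equivalent
with connection.cursor() as cursor:
    cursor.execute("SELECT headline FROM myapp_entry WHERE published = %s", [True])
    headlines = [row[0] for row in cursor.fetchall()]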

Now, I am not going to tell you when to use an ORM or not, or link to a dozen other articles that fight over this point further. All I want is for everyone who thinks they should use an ORM to at least sit down and learn a moderate amount of SQL before making that decision.

Text Config is King

When was the last time you had to convince an IT admin to update a source code file? Yeah, good luck with that.

By separating settings from code you gain a lot of flexibility and safety. It's the difference between "Mr. CEO, you need the default from email changed? So for our Django docker deploy we'll need to make the change in code, commit it to the feature branch, PR it to develop, pray we don't have other changes and require a hotfix for this instead, then PR again for release to master, wait for the CI/CD to finish and then the auto-deployment should be good by end of the day." vs. "One sec, let me update the config file and restart it. Okay, good to go."

“Code based setting files are more powerful!” – Old Sith Proverb

That's the path to the dark side. If you think you need programmatic help in a configuration file, your logic is in the wrong place. A config file shouldn't validate itself, shouldn't have if statements, and shouldn't have to figure out the absolute path of the relative directory structure the user entered.

The best configuration handling I have seen is a text file that is read, sanity checked, and loaded into memory, with paths extrapolated and any necessary safety checks run as needed, before any additional logic. This allows almost anyone to update a setting safely and quickly.

The scary Django way

Let's take a chunk from the "one Django settings.py file to rule them all":

import os 
import json 

DEBUG = os.environ.get("APP_DEBUG") == "1"

ALLOWED_HOSTS = os.environ \
  .get("APP_ALLOWED_HOSTS", "localhost") \
  .split(",")

DEFAULT_DATABASES = {...}
DATABASES = (
  json.loads(os.environ["APP_DATABASES"])
  if "APP_DATABASES" in os.environ
  else DEFAULT_DATABASES
)

The basic design is to allow for overrides at runtime based on environment variables, which is very common and good practice. However, the way this example has been implemented honestly scares me. If the json.loads fails, you are going to have an exception raised, in your freaking settings file. Depending on the project, that may even happen before logging is set up.

Second, say you accidentally have "APP_ALLOWED_HOSTS" (or worse, "APP_DATABASES") set to an empty string. Then "ALLOWED_HOSTS" would be set to [""]!
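It's easy to verify that pitfall yourself:

import os

# Splitting an empty string does not give an empty list
print("".split(","))  # ['']

# So an accidentally empty env var becomes a bogus single-entry host list
os.environ["APP_ALLOWED_HOSTS"] = ""
print(os.environ.get("APP_ALLOWED_HOSTS", "localhost").split(","))  # ['']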

Finally, all that work spent grabbing environment variables makes it very unclear where to actually update the settings. For example, to update ALLOWED_HOSTS you would have to change the default in the os.environ.get call, which wouldn't make much sense to someone who didn't know Python.

The safe way

Why not use a much more easily readable file? In this case, a YAML file paired with python-box (there are lots of other options as well; this was simply fast to write and easy to digest).

# config.yaml
---
debug: false
allowed_hosts:
  - 'localhost'
databases:
  - 'sqlite3'

Wonderful, now we have a config file that is super easy to digest and update. Let's implement the parser for it.

import os
from box import Box

config = Box.from_yaml(filename="config.yaml")

config.debug |= os.getenv("DEBUG") == "1"

# Doing it this way will make sure it exists and is not empty
if os.getenv("ALLOWED_HOSTS"):
    # This is overwritten like in settings.py above instead of extended 
    config.allowed_hosts = os.getenv("ALLOWED_HOSTS").split(",")

if os.getenv("APP_DATABASES"):
    try:
        databases = Box.from_json(filename=os.environ["APP_DATABASES"])
    except Exception as err:
        # Can add proper logging or print message as well / instead of exception
        raise Exception(f"Could not load database set from {os.environ['APP_DATABASES']}: {err}")
    else:
        config.databases.extend(databases)

Tada! Now it is a lot easier to understand the logic that is happening, the config file is much easier to update, and the whole thing is a crazy amount safer.

Now the only thing to argue about with your coworkers is what type of config file to use. cfg is old-school, json doesn’t support comments, yaml is slow, toml is fake ini, hocon is overcomplicated, and you should probably just stop coding if you’re even considering xml. But no matter which you go with, they are all better (except xml) than using a language based settings file.

Wrap Up

So you've heard me prattle on and on about the dangers Djangoers might bring with them to your next project, but also keep in mind the good things they will bring. Django helps people learn how to document well and keep a rigid project structure, and it has a very warm and supportive community that we should all strive to match.

Thanks for reading, and I hope you enjoyed and found this informative or at least amusing!

On demand gaming server for pennies a month!

The only downside with most dedicated gaming servers is the cost. If you need a 24/7 server or someone to manage it for you, you can't avoid it. However, if you are self-sufficient in setting up the gaming software, and only play certain games intermittently, your costs can be significantly reduced.

Let's use Factorio as an example. A dedicated server will cost around $5-10 per month, used or not. Now, if you are like me and play with friends and family only a few nights a month, that's an unnecessary expense. Sure, you can self-host, but then everyone is dependent on a single person to have it up and running. Instead, I switched to using a Digital Ocean Droplet, and have paid less than a quarter this past month for an on demand gaming server.

Even if we played every night after work and on weekends, it would only be $1 or $2 a month. You may be thinking "oh, you just turn off the droplet when you're not using it!", but alas, it is not that simple. A turned-off droplet or server is still holding onto resources, so it still incurs cost; that is standard across all the hosting giants I looked into.

The trick to saving cash? When you're not playing, turn the droplet off, snapshot it, then destroy the droplet. When you want to play again, restore the snapshot to a new droplet. That way you are only paying server costs while it is running, and only paying the super cheap snapshot storage rate the rest of the time.

This is painful to do by hand, but really easy with a helper script. (If you want to take it further, you could make it a website with access controls so only who you want could control it whenever they wanted.)
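To give an idea of what the helper script automates, here is a rough sketch of the core Digital Ocean API v2 calls (endpoint paths are from their public docs; waiting for actions to complete and looking up the latest snapshot ID are left out for brevity):

import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer <your api token>"}


def shutdown_and_snapshot(droplet_id):
    # Power the droplet down, then snapshot its disk
    requests.post(f"{API}/droplets/{droplet_id}/actions", headers=HEADERS,
                  json={"type": "shutdown"}).raise_for_status()
    # ...wait for the shutdown action to finish before snapshotting...
    requests.post(f"{API}/droplets/{droplet_id}/actions", headers=HEADERS,
                  json={"type": "snapshot", "name": "factorio-backup"}).raise_for_status()


def destroy(droplet_id):
    # Destroying the droplet stops the hourly server billing
    requests.delete(f"{API}/droplets/{droplet_id}", headers=HEADERS).raise_for_status()


def restore(snapshot_id):
    # Create a fresh droplet from the saved snapshot
    resp = requests.post(f"{API}/droplets", headers=HEADERS, json={
        "name": "ubuntu-1804-factorio",
        "region": "nyc1",
        "size": "s-1vcpu-2gb",
        "image": snapshot_id,
    })
    resp.raise_for_status()
    return resp.json()["droplet"]["id"]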

Digital Ocean Gaming Service (DOGS)

The code repo is available on GitHub and the instructions are boringly standard.

git clone https://github.com/cdgriffith/dogs.git
# Alternatively, just download and extract the zip file
# https://github.com/cdgriffith/dogs/archive/master.zip

# Create a venv if you are python savvy
pip install -r requirements.txt
cp config.yaml.example config.yaml
# Update the config file to match your digital ocean settings
python -m dogs

This script does have some prerequisites. You need to have a Digital Ocean account token, and you will have to manually create and start up the server once before you can manage it via this script. (Pull Requests always considered if you want to ease that pain for others.)

Droplet setup

When you create the droplet you want, make sure to pick the smallest specs you think you will need. You can always upgrade to a larger disk size, but you cannot go back down to a smaller one. Also during this process, record the hostname, which we will use as our server name in the config file.

You will also need your SSH key id, which is the digits at the very end of the public key. For example, if the key ends with rsa-key-20190721, the ssh id is 20190721. You can also find this info in the Accounts > Security section by selecting a key and hitting "Edit".

If you hook a firewall up to the droplet, you will also need its ID, which you can retrieve via the API.
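For example, a quick way to skim the API for the firewall ID (the same approach works for SSH keys, and for the regions and sizes mentioned below); endpoint paths per Digital Ocean's docs:

import json

import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer <your api token>"}

# Each endpoint returns a JSON listing you can skim for the IDs you need
for endpoint in ("firewalls", "account/keys", "regions", "sizes"):
    data = requests.get(f"{API}/{endpoint}", headers=HEADERS).json()
    print(endpoint, "->", json.dumps(data, indent=2)[:300])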

DOGS Server Config

Once you have all that info, add it to the config.yaml file.

token: <your 64 character hex string>
servers:
  ubuntu-1804-factorio:
    region: nyc1
    size: s-1vcpu-2gb
    firewall_id: <unique hex string separated by dashes>
    snapshot_max: 2
    ssh_key: <8 digits from end of ssh public cert>

You can get a full list of regions and sizes from their API as well.

Again, to run you just need to be in the dogs directory and run python -m dogs.

Binary (EXE) files

If you want to package it into an easy-to-use exe (the scripts can also be modified for Mac or Linux binaries), just use the included build scripts.

# Windows specific requirements, otherwise just install 'pyinstaller'
pip install -r requirements-build.txt
python dogs\build.py

And now you should have a super handy dogs.exe in the dist directory. Don't forget to keep your config.yaml in the same directory as the exe!

Gaming Setup Scripts

I have a directory for files that set up and update game servers. Right now I only have Factorio, but feel free to add your own; a PR would be very appreciated! All you need is a way to near-automatically create / install the service and a way to have it auto-start (my example uses systemd for standard Ubuntu servers).

Check the scripts out on GitHub.

Build your own Pi Powered Enlarger Timer!

I cringed when my wife told me her digital enlarger timer broke. Thankfully it didn't cost us a lot at the time, but it should have. I was lucky enough to restore a broken one I bought off eBay; no such luck this time. But then I thought: a timer is a really simple thing. Why should buying a new one cost over $200? I decided to build my own, and now you can too, for less than $60!

An enlarger timer is very simple: it just needs to turn on the power to the enlarger for an exact period of time and then turn it back off. I currently have it set to tenth-of-a-second accuracy. I also went ahead and made the darkroom light switch off during the process, as the IoT relay has built-in negative gate logic for one plug.
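The core of that behavior is just a relay toggle around a sleep. A minimal sketch with RPi.GPIO (the pin number here is arbitrary for illustration, not necessarily what my build uses):

import time

import RPi.GPIO as GPIO

RELAY_PIN = 17  # hypothetical BCM pin driving the IoT relay

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT)


def expose(seconds):
    # Power the enlarger for the exact exposure time, then cut it off
    GPIO.output(RELAY_PIN, GPIO.HIGH)
    time.sleep(seconds)  # sleep granularity is plenty for 0.1s accuracy
    GPIO.output(RELAY_PIN, GPIO.LOW)


expose(12.5)
GPIO.cleanup()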

Required equipment for the timer

A quick warning before we begin: I did all this over a year ago, and while it is still working flawlessly, I can't promise I remember everything. If there is anything missing and you get it working, please leave a comment so I can update the post or code! Also, standard disclaimer that I am not an electrical engineer, and I am not responsible for you hurting your equipment or yourself by trying to follow this guide.

Software Install

If you haven't already, get your Raspberry Pi ready to roll.

Then, before you plug all the toys into it, make sure the system is up to date. Next we need to enable SPI; make sure to go through the prerequisites and the install section. Then install the Python requirements.

git clone https://github.com/cdgriffith/darkroom.git --depth 1
cd darkroom
pip install -r requirements.txt

Hardware Install

Now turn off and unplug the Raspberry Pi, and let's connect stuff! Follow the luma guide to hook up the display, copied below for convenience.

Let's take a look at the Raspberry Pi pin layout. The Pi 2 that I used only has the first 26 GPIO pins, and even if you have a 3 or 4, the first 26-pin layout is the same. Below is my PowerPoint diagram reference. I am going to color the boxes the same as the wires in my pictures so they match up visually.

If you look closely at the image of mine, you'll note I was naughty and put the +5v (pin 2) wire into +3.3v (pin 1) instead. I think I did it on purpose to make the display not as bright, though I can't really remember if that was intentional or just a mistake. Either way, it works for me. Next we're going to add in the IoT Relay.

So now it should look kinda like how I have mine (just the two positives in different positions at the top).

Running the Enlarger Timer

Now it’s time to turn on the Pi and see if stuff works!

First, run through the example program provided by luma. For my 4×4 display I needed to run:

python examples/matrix_demo.py --block-orientation -90 --cascaded 4

When that is working, plug in a light (or your enlarger) to the IoT relay, go back into the darkroom directory and give it a try!

cd darkroom
python darkroom/main.py

(If you don't like the startup message "LOVE U" I left for my wife, feel free to change it at the bottom of main.py.) Usage is pretty simple: press *, enter the time you want, and hit enter. View all the available commands on the GitHub project page.

To have the code run on startup, I simply added it to my /etc/rc.local file.

PYTHONPATH=/home/pi/darkroom python /home/pi/darkroom/darkroom/main.py

I hope this guide was useful for you!

Introducing FastFlix – AV1 encoder GUI and more!

First, straight to the fun: download FastFlix here to try it out! (Windows-only builds right now.) FastFlix started out as a small clip and GIF maker for myself, but I quickly realized I could expand it for larger usage. And it just so happens that AV1 is an emerging codec that doesn't have a lot of GUI options yet. It's like it was meant to be!

The main GUI

Before going out and re-encoding everything with AV1, which will be the next standard codec as everyone is on board with it, there are a few catches, so make sure to read on.

What makes FastFlix unique?

AV1 Support for multiple libraries

FastFlix is designed as a general command wrapper, so it can support multiple different programs. Right now it supports both the libaom-av1 and SVT-AV1 libraries.

Totally MIT open source code, reuse to your heart's content!

Unlike most converters, which are limited to the GPL license thanks to the libx265/libx264 libraries and others, FastFlix has been designed in a way that keeps it legal to use any of its core code in your own projects without forcing them to be open source.

Extensible

FastFlix was designed with a plugin architecture. That way anyone can develop or use their own plugins on top of what is already available to bring additional functionality.
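To illustrate the general idea (this is a made-up, minimal shape, not FastFlix's actual plugin API):

class Plugin:
    """Hypothetical base: a plugin names its codec and builds a conversion command."""

    name = "base"

    def build_command(self, source, output, **options):
        raise NotImplementedError


class AV1Plugin(Plugin):
    name = "libaom-av1"

    def build_command(self, source, output, crf=30, **options):
        # The argument list the GUI would hand off to ffmpeg
        return ["ffmpeg", "-i", source, "-c:v", "libaom-av1",
                "-crf", str(crf), "-b:v", "0", output]


PLUGINS = {cls.name: cls() for cls in (AV1Plugin,)}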

What’s the catch?

New program using an experimental codec

There will be bugs, both on the GUI side and on the codec side. Report anything weird you see and we'll try to figure out if it's a GUI problem or needs to be passed along to the codec team.

SVT-AV1 makes it difficult to convert videos

Right now SVT-AV1 requires the source input to be broken up into smaller chunks (if it is longer than the segment size) as raw YUV video, which can take up gigabytes of space. They are automatically cleaned up as the encode goes along, but it is still silly that SVT-AV1 cannot take a regular video file as input yet.

I’m just one guy, and this is a side project

I don’t make any money (nor take donations. $10K+ bribes, please email me 😉) and FastFlix is not something I will spend all my free time on. So I am always looking for help and feedback!

Wrapping up

Please give a GitHub star if you like FastFlix, and be sure to send your love to SVT-AV1 as well if you find their program useful!

Again, you can download FastFlix on the GitHub release page.

For those of you interested more in how FastFlix works or was created I hope to do a follow up post that goes into using PySide2 and the full workflow of using Appveyor to deliver releases.

Encoding settings for HDR 4K videos using 10-bit x265

There is currently a serious lack of data on compressing 4K HDR videos out there, so I took it upon myself to get learned in the ways of the x265 encoding world.

First things first: this is NOT a guide for Dolby Vision or HDR10. This is simply for videos using the BT.2020 color primaries. Please read the newer article for preserving HDR.

I have historically been using the older x264 mp4s for my videos, as they just work on everything. However, most devices finally have some native h.265 decoding. (As a heads up, h.265 is the specification and x265 is an encoder for it. I may mix them up myself in this article; don't worry about the letter, just the numbers.)

Updated: 6/29/2020 – Please refer to the new guide

Updated: 4/14/2019 – New Preset Setting (tl;dr: use slow)

What are the best settings for me to use when encoding x265 videos?

The honest to god true answer is “it depends”, however I find that answer unsuitable for my own needs. I want a setting that I can use on any incoming 4K HDR video I buy.

I now mainly use ffmpeg instead of Handbrake, because I learned Handbrake only has an 8-bit internal pipeline. In the past, I went straight to Handbrake's documentation, which states that for 4K videos with x265 they suggest Constant Rate Factor (CRF) encoding in the range of 22-28 (the larger the number, the lower the quality).

Through some experimentation I found that I personally can never really see a difference with anything lower than 22 using the Slow preset. Therefore I played it safe, bumped it down a notch, and just encode all of my stuff with x265 10-bit at a CRF of 20 on the Slow preset. That way I know I should never be disappointed.

Then I recently read YouTube's suggested guidelines for bitrates. They claim that a 4K video coming into their site should optimally be 35-45Mbps when encoded with the older x264 codec.

Now, I know that x265 can be around 50% more efficient than x264, and that YouTube needs higher quality coming in so that when they re-compress it, it will still look good. But when I looked at the videos I was enjoying just fine at CRF 22, they were mostly coming out at less than a 10Mbps bitrate. So I had to ask myself:

How much better is x265 than x264?

To find out, I would need a lot of comparable data. I started with a 4K HDR example video. The first thing I did was chop out a one-minute segment and promptly remove the HDR, so I could compare the two encoders via their default 8-bit compressors.

I found the code to convert the 10-bit "HDR" yuv420p10le colorspace down to the standard yuv420p 8-bit colorspace on the colourspace blog, so props to them for having a handy guide just for this.

ffmpeg -y -ss 07:48 -t 60 -i my_movie.mkv -vf "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" -c:v libx265 -preset ultrafast -x265-params lossless=1 -an -sn -dn -reset_timestamps 1 movie_non_hdr.mkv

Average Overall SSIM

Then I ran multiple two-pass ABR runs using ffmpeg for both x264 and x265 at the same target bitrates. Afterwards, I compared them to the original using the Structural Similarity Index (SSIM). Put simply, the closer the result is to 1, the better: it means there are fewer differences between the original and the compressed version.

Generated via Python and matplotlib

The SSIM result is calculated frame by frame, so we have to average them all together to see which is best overall. On the section of video I chose, x264 needed considerably more bitrate to achieve the same score. The horizontal line shows this: x264 needs 14Mbps to match x265's 9Mbps, a 5000kbps difference! If we go by YouTube's recommendations for a video file that will be re-encoded again, you would only need a 25Mbps x265 file instead of a 35Mbps x264 video.

Sample commands I used to generate these files:

ffmpeg -i movie.mkv -c:v libx265 -b:v 500k -x265-params pass=1 -an -f mp4 NUL

ffmpeg -i movie.mkv -c:v libx265 -b:v 500k -x265-params pass=2 -an h265\movie_500.mp4

ffmpeg -i my_movie.mkv -i h265\movie_500.mp4 -lavfi  ssim=265_movie_500_ssim.log -f null -
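For the curious, the ssim filter writes per-frame scores to that log file as lines like "n:1 Y:0.98 U:0.99 V:0.99 All:0.985 (…)". Here is a small sketch of how the averages (and the lowest 1% used later in this post) can be pulled out of it, using only the standard library:

import statistics

scores = []
with open("265_movie_500_ssim.log") as log:
    for line in log:
        if "All:" not in line:
            continue
        # "All:" holds the combined SSIM score for the frame
        all_field = [field for field in line.split() if field.startswith("All:")][0]
        scores.append(float(all_field.split(":")[1]))

scores.sort()
worst = scores[:max(1, len(scores) // 100)]
print("average SSIM:", statistics.mean(scores))
print("lowest 1% average:", statistics.mean(worst))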

Lowest 1% SSIM

However, the averages don't tell the whole story. If every frame were that good, we shouldn't need more than a 6Mbps x265 or 10Mbps x264 4K video. So let's take a step back and look at the lowest 1% of the frames.

Generated via Python and matplotlib

Here we can see x264 has a much harder time at lower bitrates. Also note that the highest marker on this chart is 0.98, compared to the total average chart's 0.995.

This information alone confirmed for me that I will only be using x265 or newer encodings (maybe AV1 in 2020) for storing videos going forward.

Download the SSIM data as CSV.

How does CRF compare to ABR?

I have always read to use Constant Rate Factor over Average BitRate for stored video files (and especially over Constant Quality). CRF is the best of both worlds. If you have an easily compressible video, it won't bloat the encoded video to meet some arbitrary bitrate, and bitrate directly correlates to file size. It also won't be constrained to that limit if the video requires a lot more information to capture a complex scene.
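For comparison with the two-pass ABR commands above, a CRF encode is a single pass; something like this (same flags as the earlier test commands, with a CRF value of your choosing):

ffmpeg -i movie.mkv -c:v libx265 -crf 22 -an movie_crf22.mkv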

But that is all hypothetical. We have some hard data, so let's use it. Remember, Handbrake recommends a range of 22-28 CRF, and I personally cannot see any visual loss at CRF 20. So where does that show up on our chart?

Generated via Python and matplotlib

Now, this is an apples-to-oranges comparison. The CRF videos were done via Handbrake using x265 10-bit, whereas everything else was done via ffmpeg using x265 or x264 8-bit. Still, we get a good idea of where these show up. At both CRF 24 and CRF 22, even the lowest frames don't dip below SSIM 0.95. I personally think the extra 2500kbps for the large jump in minimum quality from CRF 24 to CRF 22 is a must. To some, including myself, the extra 4000kbps jump from CRF 22 to CRF 20 could also be worth it.

So let's get a little more apples to apples. In this test, I encoded all videos with ffmpeg using the default presets. I did three CRF videos first, at 22, 20, and 18, then used their resulting bitrates to create three ABR videos.

Generated via Python and matplotlib

Their overall average SSIM scores were nearly identical. However, CRF shows its true edge on the lowest 1%, easily beating out ABR at every turn.

To 10-bit or not to 10-bit?

Thankfully there is a simple answer. If you are encoding to x264 or x265, encode to 10-bit if your devices support it. Even if your source video doesn’t use the HDR color space, it compresses better.

There is only one time to not use it. When the device you are going to watch it on doesn’t support it.

Which preset should I use?

The normal wisdom is to use the slowest preset you can stand for the encoding time. Slower = better video quality and compression. However, that does not mean a smaller file size at the same CRF.

Even though others have tackled this issue, I wanted to use the same material I was already testing and make sure it held true with 4K HDR video.

Generated via Python and matplotlib

I used a three-minute 4K HDR clip, using Handbrake to modify only which preset was used. The results were surprising to me, to be honest; I was expecting medium to have a better margin between fast and slow. But based on just the average, slow was the obvious choice, as even bumping the CRF from 18 to 16 didn't match its quality. Even though the file size was much larger for the CRF 16 Medium encoding than for the CRF 18 Slow! (We'll get to that later.)

Okay, okay, let's back up a step and look at the bottom 1% again as well.

Generated via Python and matplotlib

Well, well, wishing well, that is even more definitive. The jump from medium to slow is very significant in multiple ways. Even though it does cost double the encoding time of medium, it really delivers in the quality department, easily beating the lowest 1% of even CRF 16 medium, two entire steps away.

Generated via Excel

The bitrates are as expected: the higher the quality, the more bitrate it needs. What is interesting is that if we put the CRF 16 - Medium encoding's bitrate on this chart, it would shoot off the top at a staggering 15510kbps! Keep in mind that is while still being lower quality than CRF 18 - Slow.

In this data set, slow is the clear winner in multiple ways, which is very similar to others' results as well, so I'm personally sticking to it. (If I had run these tests first, I would have used slow for all the other testing!)

Conclusion

If you want a single go-to setting for encoding, based on my personal testing, CRF 20 with the Slow preset looks amazing (but may take too long if you are using older hardware).
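For reference, here is roughly what that recommendation looks like as an ffmpeg command (a sketch only: it keeps 10-bit, but as noted at the top, it does not carry over HDR metadata; see the newer article for that):

ffmpeg -i movie.mkv -c:v libx265 -preset slow -crf 20 -pix_fmt yuv420p10le -c:a copy movie_x265.mkv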

Now, if I had a supercomputer and unlimited storage, I might lean towards CRF 18 or maybe even 16, but I still wouldn't feel the need to take it the whole way to CRF 14 and veryslow or anything crazy.

I hope you found this information as useful as I did. If you have any thoughts or feedback, please let me know!