Tutorial

Posts designed as walk-through tutorials showing how to achieve a desired behavior.

Create an image in Task Manager from CPU Usage

A lot of people have passed around an internet video of someone using a crazy high core count computer to display a video via Task Manager’s CPU usage graphs. The one problem with it: it’s probably fake. Task Manager displays the last 60 seconds of activity, not the instant view shown in the video. But what if we kept the usage the same for sixty seconds, could we at least make an image?

The first problem is how to generate enough load for CPU usage to noticeably increase. Luckily there is an old go-to in the benchmarking world that is really simple to implement: square roots. Throw a few million square roots at a CPU and watch it light up.

from itertools import count
import math

def run_benchmark():
    # Warning, no way to stop other than Ctrl+C or killing the process
    for num in count():
        math.sqrt(num)

run_benchmark()

The real problem is making sure we have a way to stop it. The easiest way is with a timeout, but even the slight work of gathering the system time can throw off how much work we get done. So let’s only check it, say, every 100,000 operations.

from itertools import count
import math
import time 

def run_benchmark(timeout=60):
    start_time = time.perf_counter()
    for num in count():
        if num % 100_000 == 0:
            if time.perf_counter() > start_time + timeout:
                return
        math.sqrt(num)

run_benchmark()

Excellent, now we can pump a core up to max workload. However, we currently have no control over which CPU core it will run on! For that we have to set the “affinity” of the process. There is no easy way to do that with Python directly, so we have to go straight to the Win32 API, using the pywin32 library to access win32process.

pip install pywin32

We will then use the SetProcessAffinityMask function to lock our program to a specific CPU, which also requires grabbing a process handle from the GetCurrentProcess Win32 API function. One thing not really documented anywhere I could find: you set the CPU core affinity not by the core’s actual number, but by a bitmask, which for a single core we can create by taking 2 ** cpu_core.
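To make the bitmask idea concrete, here is a quick sketch (plain Python, the names are just illustrative): each bit position in the mask corresponds to a core, so combining bits would allow multiple cores at once.

# Affinity masks are bit fields: bit N set means core N is allowed
core_0 = 2 ** 0                  # 0b0001 -> only core 0
core_3 = 2 ** 3                  # 0b1000 -> only core 3
cores_0_and_3 = core_0 | core_3  # 0b1001 -> cores 0 and 3 together
print(bin(cores_0_and_3))        # prints 0b1001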

from itertools import count
import math
import time 

import win32process  # pywin32

def run_benchmark(cpu_core=0, timeout=60):
    start_time = time.perf_counter()
    process_handle = win32process.GetCurrentProcess()  # a handle, not a numeric PID
    win32process.SetProcessAffinityMask(process_handle, 2 ** cpu_core)

    for num in count():
        if num % 100_000 == 0:
            if time.perf_counter() > start_time + timeout:
                return
        math.sqrt(num)

run_benchmark()

So now every time we run the program, it should only max out the first core. If we want to do this across multiple cores, we will have to create multiple processes at the same time. My favorite way to do that is with multiprocessing maps. So instead of lighting up a single processor, let’s push them all through the roof.

from itertools import count
import math
import time
from multiprocessing.pool import Pool
from multiprocessing import cpu_count

import win32process  # pywin32


def run_benchmark(cpu_core=0, timeout=60):
    start_time = time.perf_counter()
    process_handle = win32process.GetCurrentProcess()  # a handle, not a numeric PID
    win32process.SetProcessAffinityMask(process_handle, 2 ** cpu_core)

    for num in count():
        if num % 100_000 == 0:
            if time.perf_counter() > start_time + timeout:
                return
        math.sqrt(num)


if __name__ == '__main__':
    arguments = [(core, 60) for core in range(cpu_count())]
    # e.g. [(0, 60), (1, 60), (2, 60), (3, 60)]
    # each tuple is unpacked and passed to run_benchmark as its arguments

    with Pool(processes=cpu_count()) as p:
        p.starmap(run_benchmark, arguments)

We have to put the multiprocessing code after the `if __name__ == '__main__':` block due to how CPython spawns new processes on Windows. (It’s also just a good idea for any script.)

At this point you could also change which cores it runs on to see how they correspond to the graphs in Task Manager. For example, you could change the range to step by two so it only launches on every other core: range(0, cpu_count(), 2)

On my 8 core machine (so 16 logical cores) I can make a quick X shape by selecting certain cores.

arguments = [(core, 100) for core in [0, 3, 5, 6, 9, 10, 12, 15]]

Now remember there are two types of CPU cores according to the operating system: physical and logical. Physical is the number of actual cores on the CPU, but if the CPU has SMT (simultaneous multithreading) the logical count will be double that. That means every odd numbered core is actually a “fake” one (remember cores start at 0), which has to share resources with the previous core. Hence why cores 2, 4, 8 and 14 show higher usage.
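If you want to double check the physical versus logical counts on your own machine, the third-party psutil package (not otherwise used in this post) can report both; a quick sketch:

import psutil  # pip install psutil

print(psutil.cpu_count(logical=False))  # physical cores only
print(psutil.cpu_count(logical=True))   # logical cores, double the physical count with SMT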

But what if we want even cooler graphics and don’t want to be limited to just 100% CPU usage? Then we need to tell the computer to not work for very small amounts of time, aka sleep. So let’s try adding a sleep every 100K ops of, oh say, 60 milliseconds.

def run_benchmark(cpu_core=0, timeout=60):
    start_time = time.perf_counter()
    process_handle = win32process.GetCurrentProcess()  # a handle, not a numeric PID
    win32process.SetProcessAffinityMask(process_handle, 2 ** cpu_core)

    for num in count():
        if num % 100_000 == 0:
            if time.perf_counter() > start_time + timeout:
                return
            time.sleep(0.06)
        math.sqrt(num)

This time I am also going to run it just on physical cores.

arguments = [(core, 100) for core in range(0, cpu_count(), 2)]

How about that, now it’s sitting at just about 50% usage on each core of my computer. If you’re trying this on your own, this is where the fun begins. I suggest trying out different sleep times to see if you can build a list of times for 10~90% usage. For example, mine is close to:

usage_to_sleep_time = {
    100: 0,
    90: 0.014,
    80: 0.02,
    70: 0.03,
    60: 0.04,
    50: 0.06,
    40: 0.08,
    30: 0.1,
    20: 0.2,
    10: 0.5
}

Then let’s throw that into the run_benchmark function to be able to set a precise amount of usage per core.

def run_benchmark(cpu_core=0, timeout=60, usage=100):
    start_time = time.perf_counter()
    process_handle = win32process.GetCurrentProcess()  # a handle, not a numeric PID
    win32process.SetProcessAffinityMask(process_handle, 2 ** cpu_core)

    for num in count():
        if num % 100_000 == 0:
            if time.perf_counter() > start_time + timeout:
                return
            if usage != 100:
                time.sleep(usage_to_sleep_time[usage])
        math.sqrt(num)

Then if you have enough cores, you can see them all in action at the same time.

arguments = [(core, 100, usage) for core, usage in enumerate(usage_to_sleep_time)]

(I was impatient and didn’t do this one while the computer was idle, hence the messy lower ones.)

From here I am sure some of you could go crazy making individual cores ramp up and down and create something truly spectacular, but my curiosity is satisfied. It is totally possible to control exactly how much CPU core usage is shown in Task Manager with Python.

Here is the full final code, hope you enjoyed!

from itertools import count
import math
import time
from multiprocessing.pool import Pool
from multiprocessing import cpu_count

import win32process  # pywin32

usage_to_sleep_time = {
    100: 0,
    90: 0.014,
    80: 0.02,
    70: 0.03,
    60: 0.04,
    50: 0.06,
    40: 0.08,
    30: 0.1,
    20: 0.2,
    10: 0.5
}

def run_benchmark(cpu_core=0, timeout=60, usage=100):
    start_time = time.perf_counter()
    process_handle = win32process.GetCurrentProcess()  # a handle, not a numeric PID
    win32process.SetProcessAffinityMask(process_handle, 2 ** cpu_core)

    for num in count():
        if num % 100_000 == 0:
            if time.perf_counter() > start_time + timeout:
                return num
            if usage != 100:
                time.sleep(usage_to_sleep_time[usage])
        math.sqrt(num)



if __name__ == '__main__':
    arguments = [(core, 100, usage) for core, usage in enumerate(usage_to_sleep_time)]

    with Pool(processes=len(arguments)) as p:
        print(p.starmap(run_benchmark, arguments))

A Raspberry Pi Streaming Camera using MPEG-DASH, HLS or RTSP

This is an improvement on my previous article, Raspberry Pi Hardware Accelerated RTSP Camera, now with the option of using more modern technology, MPEG-DASH and HLS!

First off, if you don’t care about the technicalities and just want a script to do everything for you, here you go! If you’re still interested in how it all works or want to tweak the settings, read on.

This article will walk you through how to either copy or convert video from your webcam or pi camera, set it up as a systemd service, and finally view it on a webpage or access it remotely.

MPEG-DASH vs HLS vs RTSP

To clear this up first of all, these are “containers” that wrap around the actual video, which is encoded with a particular “codec” (such as h264). DASH and RTSP are fully codec agnostic, meaning they are capable of wrapping around any type of video codec. The big gotcha is which codecs the viewer supports (and in RTSP’s case, the middleman server as well).

So if DASH and RTSP can handle everything, why even bother with HLS? Long story short: Apple, who developed HLS, is a bully, so their devices don’t support the open MPEG-DASH. Meaning if you are trying to share these video streams with the public or view them on an Apple device, you will get the most compatibility with HLS.

So now the difference really comes down to how DASH/HLS are HTTP based protocols that are easily supported in browsers. This makes it super easy to set up an all-in-one device that can host its own webpage for viewing the video.

RTSP, by contrast, requires additional software to view it, such as VLC or a security camera system. The real advantage of RTSP is that it really is nearly “real time” compared to DASH/HLS. On my Raspberry Pis, DASH/HLS has a 10~20 second delay, compared to about 1 second for RTSP. As I already went over how to set that up, I won’t repeat it here and will only cover DASH. But I personally still use RTSP for my own home setup.

Setting up required software

The two programs you will need are a file server (nginx, apache, python -m http.server, etc.) to host the DASH/HLS content, and ffmpeg. And you don’t have to hand compile either!

Super short version:

sudo apt install nginx ffmpeg -y

To better understand why we need them and how to test them to make sure they are working properly, read on. Otherwise, skip ahead to “Gather Camera Details”.

File Server

Last time we used RTSP, which required a special service of its own. Now we are using HLS and MPEG-DASH, which produce a manifest and accompanying stream files on the local filesystem. For example, MPEG-DASH will create a manifest.mpd file that contains links to *.m4s files in the same directory, which are the chunked up video files.

That means if we make those files accessible remotely, we can use standard HTTP to transport the video. Hence the need for a basic file server. I personally use nginx for the final setup, as it’s fast, easy to use, and has defaults we can use out of the box. So let’s install it!

 sudo apt install nginx -y

Now all you need to do is open a web browser on another computer on that network and connect to http://raspberrypi (if you changed the hostname, or are having trouble connecting, run hostname -I on the Pi to see its IP address and use http://<ip_address> instead). You should see a simple webpage that says “Welcome to nginx!”

FFmpeg

Since the last article came out, FFmpeg has finally started shipping with hardware acceleration built in! If you still want to compile in some custom libraries or try and optimize it for your needs, check out my Raspberry Pi FFmpeg compile guide. Otherwise, just download it from the distribution repositories.

sudo apt install ffmpeg -y

You can verify it’s part of the package by checking the encoders for h264_omx.

ffmpeg -hide_banner -encoders | grep omx

It should produce V..... h264_omx OpenMAX IL H.264 video encoder (codec h264) or similar. If for some reason it doesn’t have that or other libraries you are looking for, such as the popular fdk-aac, look into my article on compiling FFmpeg yourself, or use the helper script with the option --compile-ffmpeg.

Gather camera details

If you have the helper script, simply run it with the option --camera-details and it will print out each device and their formats with their highest resolution for each.

python3 streaming_setup.py --camera-details
# /dev/video0: {'yuyv422': '1280x800', 'mjpeg': '1280x720'}

Under the hood, this is running the following command for every device found.

ffmpeg -hide_banner -f video4linux2 -list_formats all -i /dev/video0

To also see what frame rates are supported per resolution, you will have to run the v4l2-ctl command for that device.

v4l2-ctl -d /dev/video0 --list-formats-ext

Create the FFmpeg command

The FFmpeg command is particular about argument order for input and output details. Ours will be broken down into the following blocks:

ffmpeg <incoming video details> -i <device> <conversion details> <output>

So let’s say you are using a raspberry pi camera and want to stream 1080p video without re-encoding it. We first have to tell FFmpeg about the camera details it will pull from.

Applying the Camera Details

In longhand, it would look like this:

-input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0

Hopefully each of those parts is pretty self explanatory. We can also shorten -video_size (the incoming resolution) to -s, and -framerate (the fps) to -r.
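For example, here is the same input block from above rewritten with the shorthand flags; both versions should behave identically:

-f video4linux2 -input_format h264 -s 1920x1080 -r 30 -i /dev/video0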

Network Bandwidth Considerations

Internet streamers beware: you may not be able to upload the camera’s full 1080p at 30fps directly. I did a quick test using vnstat over a wired connection with a Pi Zero, and found my 5MP OV5647 camera was using almost 20Mbit/s. Keep in mind the official Pi Camera with the Sony sensor is 8MP, so it may be even higher than that.

The following tests were done as two minute averages while the stream was being watched. The averages were recorded, and generally the peaks were 2x the average.

Resolution   fps   Average Mbit/s
1920×1080    30    20.12
1920×1080    15    5.12
1280×720     60    15.81
1280×720     30    3.94
1280×720     15    2.75
640×480      90    5.66
640×480      60    4.43
640×480      30    0.93
640×480      15    0.43
Using a Pi Zero with 5MP OV5647 camera over ethernet

Conversion options

Since the incoming stream is already h264, we don’t need anything other than to tell FFmpeg to copy it. That option is -codec:v copy, or the shorthand -c:v copy, which says: set the codec of v (video) tracks to copy, aka don’t convert.

-c:v copy

If you instead had a webcam that only supported mjpeg input, or if you needed to add a text overlay to the video, you would have to re-encode the video. With the Raspberry Pi, you’ll want to use the built in hardware encoder, h264_omx. You would then also have to set the bitrate (-b:v) of the outgoing video. That is really camera / network dependent, but my rule of thumb is video width x height x 2. So 1920 x 1080 x 2 == 4,147,200, so I would set the bitrate to 4M (aka ~4,000 kbit/s).

-c:v h264_omx -b:v 4M

The Raspberry Pi OpenMAX (omx) hardware encoder has very limited options, and doesn’t support constant quality or rate factors like libx264 does. So the only way to adjust quality is with the bitrate. As for general quality, it sits between libx264’s ultrafast and superfast presets, which is somewhat disappointing but not surprising for a real-time hardware encoder.

MPEG-DASH and HLS output

Personally I would never recommend HLS over MPEG-DASH to a friend, as DASH is an all around more open and powerful muxer. But I understand some legacy systems don’t have DASH support yet. Thankfully, FFmpeg’s dash muxer gives us HLS for free! (Note that some systems don’t even support that, and you may end up having to use the hls muxer on its own.)

DASH and HLS both create playlist files locally, with the chunked up video files beside them. This creates a few problems, the first being the cleanup and management of those files. Thankfully, the DASH muxer has options to delete all those files on exit, as well as the ability to only keep so many video chunks on disk at a time.

The bigger problem is the constant writing to the disk, in this case an SD card, which is a wear item whose error rate climbs the more writes it experiences. So to save the SD card, and ourselves future headaches, we are going to write these files to memory instead!

sudo mkdir -p /dev/shm/streaming/

Tada, we now have a folder in shared memory space we can use. The caveat is that it will be removed on restart, so we will have to make sure it’s recreated before our FFmpeg service is started. But let’s not get ahead of ourselves. We just need to finish the rest of our FFmpeg command.

-f dash -window_size 10 -remove_at_exit 1 -hls_playlist 1 /dev/shm/streaming/manifest.mpd

We are using just a few of the options FFmpeg’s DASH muxer supports; check its documentation if you need further customization, though I doubt most cases will.

I am setting the max number of video chunks to be kept at 10 via -window_size, and telling FFmpeg to delete them and the manifest file when it stops running with -remove_at_exit 1. Then we enable HLS with -hls_playlist 1, which creates a master.m3u8 file in the same directory as the manifest.mpd. (Feel free to disable HLS if you don’t need it.)

Putting it all together

If you have that camera with native h264 encoding, like the Pi Camera, here is your copy and paste code!

# sudo mkdir -p /dev/shm/streaming/
sudo ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -f dash -window_size 10 -remove_at_exit 1 -hls_playlist 1 /dev/shm/streaming/manifest.mpd

You should soon start seeing messages about the manifest and chunks being updated, along with the current frame rate.

[dash @ 0x20bff00] Opening '/dev/shm/streaming/manifest.mpd.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/media_0.m3u8.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/chunk-stream0-00003.m4s.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/manifest.mpd.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/media_0.m3u8.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/chunk-stream0-00004.m4s.tmp' for writing
frame=  631 fps= 30 q=-1.0 size=N/A time=00:00:20.95 bitrate=N/A speed=1.01x

If you are seeing errors like Operation not permitted or Cannot find a proper format, please check your input formats and try lower resolutions. Sometimes cameras list their photo taking resolutions, which are much higher than their streaming resolutions. If you are still receiving errors even with the right codec selected, turn the Pi off, check the connections to the camera, and turn it back on, as the camera can sometimes get into a bad state or have a loose wire.

To add audio or a text overlay like a timestamp please refer to those linked sections of my previous guide!

Setting up Remote Viewing

First we need to allow nginx to serve up that manifest file. By default nginx serves the /var/www/html directory, so it is easy enough to link our in-memory folder as a subfolder there.

ln -s /dev/shm/streaming /var/www/html/streaming

Then we need either a way to view it via a webpage, or to connect to it with a remote player such as VLC. If you have VLC or another viewer for DASH content, you can point it at http://raspberrypi/streaming/manifest.mpd and should start seeing the stream! (If you have a custom hostname or want to use the IP, run the hostname -I command and use that in place of raspberrypi.) To create a webpage to view the content, we will have to put it in a folder that won’t be deleted on reboot.

I personally chose /var/lib/streaming/index.html, as I will also be putting a script in there that will help us set things up again after each reboot. Make sure to create the directory first:

mkdir -p /var/lib/streaming

So open up your favorite text editor and copy the following html code into /var/lib/streaming/index.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Raspberry Pi Camera</title>
    <style>
        html body .page {
            height: 100%;
            width: 100%;
        }

        video {
            width: 800px;
        }

        .wrapper {
            width: 800px;
            margin: auto;
        }
    </style>
</head>
<body>
<div class="page">
    <div class="wrapper">
        <h1> Raspberry Pi Camera</h1>
        <video data-dashjs-player autoplay controls type="application/dash+xml"></video>
    </div>
</div>
<script src="https://cdn.dashjs.org/latest/dash.all.min.js"></script>
<script>
    var player = init();

    function init() {
        var video = document.querySelector("video");
        player = dashjs.MediaPlayer().create();
        player.initialize(video, "manifest.mpd", true);
        player.updateSettings({
            'streaming': {
                'lowLatencyEnabled': true,
                'liveDelay': 1,
                'liveCatchUpMinDrift': 0.05,
                'liveCatchUpPlaybackRate': 0.5
            }
        });
        return player;
    }
</script>
</body>
</html>

Now let’s link it up to the nginx directory.

ln -s /var/lib/streaming/index.html /var/www/html/streaming/index.html

Now you should be able to view your streaming camera webpage at http://raspberrypi/streaming!

We are using the very expansive dash.js open source library, which has a lot of customization options. We are using a few basic ones here to make it better at live streaming, but please check out their project to see how best to tweak it for your needs.

Reboot Script

Now that we have a sexy webpage and a working ffmpeg command, we need to save ’em and make sure they survive reboots. Therefore, we need a script that will run on restart to recreate the folder in memory and link the index file over. I put mine right beside the permanent index file, at /var/lib/streaming/setup_streaming.sh, with the following contents.

#!/bin/bash
# /var/lib/streaming/setup_streaming.sh
mkdir -p /dev/shm/streaming
if [ ! -e /var/www/html/streaming ]; then
    ln -s /dev/shm/streaming /var/www/html/streaming
fi
if [ ! -e /var/www/html/streaming/index.html ]; then
    ln -s /var/lib/streaming/index.html /var/www/html/streaming/index.html
fi

Don’t forget to make it executable.

chmod +x /var/lib/streaming/setup_streaming.sh

Now to run it on restart, we are going to add this script to /etc/rc.local. Open the /etc/rc.local file and add these lines before the exit 0 at the bottom:

# Streaming Shared Memory Setup
if [ -f /var/lib/streaming/setup_streaming.sh ]; then
    /bin/bash /var/lib/streaming/setup_streaming.sh || true
fi

Notice we are being extra extra careful to not throw errors here, as the top of the rc.local file makes it clear that it should never exit with anything but a clean exit code of 0.

Camera Streaming Service

After we have the location in memory set up, we can start the camera. I chose to do so via a systemd service, so it can restart on errors and is easy to manage. In this example it’s called stream_camera, but you can change the service file name to suit your fancy.

Add a new file at /etc/systemd/system/stream_camera.service.

# /etc/systemd/system/stream_camera.service
[Unit]
Description=Camera Streaming Service
After=network.target rc-local.service

[Service]
Restart=always
RestartSec=20s
ExecStart=/usr/bin/ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -f dash -window_size 10 -remove_at_exit 1 -hls_playlist 1 /dev/shm/streaming/manifest.mpd

[Install]
WantedBy=multi-user.target

Notice we set it to run after rc-local, to make sure /dev/shm/streaming exists before ffmpeg tries to write to it.
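One step the unit file doesn’t do on its own: systemd has to be told to load it, start it now, and run it on every boot. The standard commands:

sudo systemctl daemon-reload
sudo systemctl enable stream_camera.service
sudo systemctl start stream_camera.service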

On demand gaming server for pennies a month!

The only downside with most dedicated gaming servers is the cost. If you need a 24/7 server or someone to manage it for you, you can’t avoid it. However, if you are self-sufficient in setting up the gaming software, and only play certain games intermittently, your costs can be significantly reduced.

Let’s use Factorio as an example. A dedicated server will cost around $5~10 per month, used or not. Now, if you are like me and play with friends and family only a few nights a month, that’s an unnecessary expense. Sure, you can self host, but then everyone is dependent on a single person to have it up and running. Instead, I switched to using a Digital Ocean Droplet, and have paid less than a quarter this past month for an on demand gaming server.

Even if we played every night after work and weekends, it would only be $1 or $2 a month. You may be thinking “oh, you just turn off the droplet when you’re not using it!”, but alas, it is not that simple. A turned off droplet or server is still holding onto resources, so it still incurs cost, which is standard across all the hosting giants I looked into.

The trick on how to save cash? When you’re not playing, turn the droplet off, snapshot it, then destroy the droplet. When you want to play again, restore the snapshot to a new droplet. That way you are only paying server costs while it is running, then paying the super cheap snapshot storage the rest of the time.

This is painful to do by hand, but really easy with a helper script. (If you want to take it further, you could make it a website with access controls, so only the people you choose could control it whenever they wanted.)
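To show the core of the trick without the full DOGS code, here is a minimal sketch against Digital Ocean’s v2 API using Python’s requests library. The droplet ID, snapshot name, and droplet specs are placeholders, and a real script has to poll each action until it completes (snapshots can take several minutes) rather than fire and forget like this:

import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": "Bearer <your token>"}

def stash(droplet_id):
    # Power off, snapshot the disk, then destroy the droplet so it stops billing
    requests.post(f"{API}/droplets/{droplet_id}/actions", headers=HEADERS,
                  json={"type": "shutdown"})
    # ...wait for the shutdown action to complete...
    requests.post(f"{API}/droplets/{droplet_id}/actions", headers=HEADERS,
                  json={"type": "snapshot", "name": "factorio-save"})
    # ...wait for the snapshot to complete...
    requests.delete(f"{API}/droplets/{droplet_id}", headers=HEADERS)

def restore(snapshot_id):
    # Recreate the droplet from the saved snapshot when it's game time
    requests.post(f"{API}/droplets", headers=HEADERS, json={
        "name": "factorio", "region": "nyc1", "size": "s-1vcpu-2gb",
        "image": snapshot_id})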

Digital Ocean Gaming Service (DOGS)

The code repo is available on github and the instructions are boringly standard.

git clone https://github.com/cdgriffith/dogs.git
# Alternatively, just download and extract the zip file
# https://github.com/cdgriffith/dogs/archive/master.zip

# Create a venv if you are python savvy
pip install -r requirements.txt
cp config.yaml.example config.yaml
# Update the config file to match your digital ocean settings
python -m dogs

This script does have some prerequisites. You need to have a Digital Ocean account token, and will have to manually create and start up the server once before you can manage it via this script. (Pull Requests always considered if you want to ease that pain for others.)

Droplet setup

When you do create the droplet you want, make sure to pick the smallest specs you think you will need. You can always upgrade to a larger disk size, but you cannot go back down to a smaller one. Also during this process, record the hostname, which we will use as our server name in the config file.

You will also need your SSH key id, which is the set of digits at the very end of the public key. For example, if the key ends with rsa-key-20190721, the ssh id is 20190721. You can always find this info in the Accounts > Security section as well, by selecting a key and hitting “Edit”.

Also, if you hook a firewall up to the droplet, you will need that ID as well, which you can retrieve via the API.
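For example, with your token in hand, these should list your firewalls and SSH keys (the IDs are in the JSON responses):

curl -H "Authorization: Bearer <your token>" https://api.digitalocean.com/v2/firewalls
curl -H "Authorization: Bearer <your token>" https://api.digitalocean.com/v2/account/keys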

DOGS Server Config

When you have all that info, add it to the config.yaml file.

token: <your 64 character hex string>
servers:
  ubuntu-1804-factorio:
    region: nyc1
    size: s-1vcpu-2gb
    firewall_id: <unique hex string separated by dashes>
    snapshot_max: 2
    ssh_key: <8 digits from end of ssh public cert>

You can get a full list of regions and sizes from their API as well.
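Same pattern as before:

curl -H "Authorization: Bearer <your token>" https://api.digitalocean.com/v2/regions
curl -H "Authorization: Bearer <your token>" https://api.digitalocean.com/v2/sizes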

Again, to run you just need to be in the dogs directory and run python -m dogs.

Binary (EXE) files

If you want to package it into an easy to use exe (you can also modify this for mac or linux binaries), just use the included build scripts.

# Windows specific requirements, otherwise just install 'pyinstaller'
pip install -r requirements-build.txt
python dogs\build.py

And now you should have a super handy dogs.exe in the dist directory. Don’t forget to keep your config.yaml in the same directory!

Gaming Setup Scripts

I have a directory for files that set up and update game servers. Right now I only have factorio, but feel free to add your own, a PR would be very appreciated! All you need is a way to near-automatically create / install the service and a way to have it auto start (my example uses systemd for standard Ubuntu servers).

Check the scripts out on github.

Replacing NZXT’s CAM software on Windows for Kraken

NZXT Kraken coolers are awesome for CPUs or GPUs. Their CAM software on the other hand is slow, bloated and possibly stealing your data. Thankfully, there are open source alternatives available.

The option that I will walk you through using is a command line tool that doesn’t need to be constantly running in the background, and doesn’t require any internet connection to work.

APRIL 2019 UPDATE: liquidctl now has the best support all around and automated windows builds! Use this and forget about everything below!

UPDATE: I have created a standalone executable for Windows for those that do not want to bother with the steps below. It is available on github. (For x61 and other second gen users, you will still need to update the driver as part of step five below.)

There are a few different libraries to control third generation coolers (Kraken x62, x72, x52 and x42). The only one I have found that also supports fan control on Windows for second generation coolers (Kraken x61, x41 and x31) is liquidctl (as of 2/6/2019 still in an experimental dev branch; you can check the issue directly for updates).

Please note: none of this is my own software, and this is only a guide based on my own experience. This is fully “as-is”, no warranty or guarantee it won’t harm your hardware or other software.

I am going to get a little more detailed for these steps than usual, as I want to make this easy for anyone’s skill level. For power users, here are the abbreviated steps:

  1. Download and install the Python 3.7+ x86 version
  2. Create a Python virtual env
    • Run the command python -m venv venv
    • Activate it: venv\Scripts\activate.bat
  3. Download libusb
    • Extract the files from MS32/dll into the venv\Scripts folder
    • You may need an extractor program like 7zip
  4. Install liquidctl
    • Make sure you are in the activated venv
    • pip install liquidctl
  5. Kraken x61 / gen two users only – Download zadig and install the libusbK driver for the Kraken device
    • WARNING – CAM will no longer be able to use this device unless you uninstall this new driver
    • Select Options > List all Devices
    • Find the device “690LC” in the dropdown list; it should have USB ID 2433 B200
  6. Run liquidctl to change your fan speed!
    • liquidctl set fan speed 60

As of the time of writing, their software does not by default include the code for changing the logo color on the second generation Krakens (x61, etc.), but I’ll show you how to add it easily. Check out the liquidctl repo, which now has great support for older devices, as well as some EVGA and Corsair support!

Download Python

For those of you who do not already have it, we will need to make sure there is a version of Python 3 on your system. Go to the official download page from the Python Software Foundation at https://www.python.org/downloads/windows/. Click on the top link for “Latest Python 3 Release” (version may be newer than shown below.)

Scroll down to the bottom of the next page, and select the “Windows x86 Executable Installer”. Yes, install the x86 version, aka 32-bit, even if you have 64-bit Windows. If you do happen to download and install the x86_64 version instead, you will just have to use a different libusb dll (thankfully included in the same download bundle).

After the exe is downloaded, double click on it, and let’s make a few changes during the installation to make your life easier. First make sure to add Python to PATH. This will make it possible to run python from the command line without specifying the full path to its executable. Then click on “Customize installation”.

On the next page we don’t need to touch anything. If you are only going to use Python for this, you can remove Documentation and the Python test suite; everything will still work fine with less bloat.

On the final page we just want to make sure it’s installed to the Program Files directory instead of the obscure user app dir that it uses by default. Then hit install.

You should now be able to open python easily in the command prompt. You can open it by hitting the Windows key on your keyboard (or manually opening start menu), typing “Command” and then click on “Command Prompt”.

C:\Users\me>python --version
Python 3.7.2

Create a virtual environment

By creating a virtual environment you isolate the new libraries you are going to install from the system version of Python, eliminating possible future headaches if you need to install conflicting packages.

Thankfully it is super simple to do, open a command prompt, navigate to the directory you want a new folder with the virtual environment in, and type:

python -m venv venv

This is simply saying “run python, using the venv module (-m venv), to create a virtual env in the venv directory”. Then you want to “activate” it so that your command prompt will use that new isolated version of Python and Python tools (like pip).

venv\Scripts\activate.bat

This will cause your command prompt to give a nice little notification that you are now using that venv:

C:\Users\me> venv\Scripts\activate.bat

(venv) C:\Users\me>

Remember the path you installed the virtual env to, we will use it later!

Install libusb DLLs

Just because life has to be a little extra difficult, libusb only pushes their releases as 7z and tar.bz2 archives. Which means, if you don’t already have a tool that can open these types of files, head over to https://www.7-zip.org/download.html and download either the exe or msi for your version of Windows (if you are unsure, just get the 32-bit exe one).

Installing it should be fine with the defaults, feel free to change as you want. When done, go over and download libusb. The developers of liquidctl suggest you use 1.0.21 at the time of writing, so we are linking directly to that version: https://github.com/libusb/libusb/releases/tag/v1.0.21.

After download, extract those files with 7zip.

Then you will want to go into the new directory, and find the proper DLLs for your installed version of Python. If you followed this guide so far, that will be the MS32 folder.

You are going to copy those into your virtual environment’s Scripts directory, as that is by default added to the PATH when you activate the virtual env. So if I was in the C:\Users\me directory when I ran python -m venv venv, I would copy those three files into C:\Users\me\venv\Scripts.
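If you would rather do that from the command prompt, something along these lines should work (adjust both paths to wherever you extracted libusb and created the venv):

copy C:\path\to\libusb\MS32\dll\*.dll C:\Users\me\venv\Scripts\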

They look a little out of place, but meh, it works.

Change the Kraken drivers (x61, x41, x31 Only)

Sadly (or not so sadly) we can’t use the default second gen Kraken drivers installed by CAM. This also means that once we switch drivers, CAM will no longer be able to interface with the Kraken unless the new driver is removed via Device Manager.

ONLY FOR KRAKEN x61, x41 and x31! Skip ahead to the next section for anything else.

Go to https://zadig.akeo.ie/ and download their awesome driver switcher tool.

Run it as an administrator, then go to Options and select “List all Devices”.

Choose 690LC from the drop down list. Its USB ID should be 2433 B200. Select either the libusb-win32 or libusbK driver from the tiny select buttons on the right. (I had trouble the first time with the libusb-win32 driver timing out, so I switched to libusbK; your mileage may vary.) Then hit “Replace Driver”.

This will probably take a while. Hopefully it pops up a success window in a few seconds or minutes. If not, I suggest waiting a full ten minutes, then switching to the other driver type (aka select libusbK if you first tried libusb-win32) and trying again before running to their github for help.

Install liquidctl

We’re almost there!

Switch back to the command prompt and make sure you are in the active virtual env. If you don’t see (venv) at the start of the line, go back and take a look at the “Create a virtual environment” section.

If you are using this for a newer third generation device, like the Kraken x62, you can use the regular module.

pip install liquidctl

However, if you do need second generation support for the Kraken x61, x41 or x31, you will need to grab the experimental branch. Please be aware that it may be in an unstable state at the time of your download. You can follow current issues directly in the pull request.

pip install git+https://github.com/jonasmalacofilho/liquidctl.git@add-second-gen-krakens

This will install the command liquidctl in the active virtual environment! Keep in mind, you will need to activate this virtual environment whenever you want to use the command.
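A quick sanity check that the install took: this should print a version number instead of erroring out.

(venv) C:\>liquidctl --version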

Using liquidctl

You can find the full list of what you can do via --help or from their github. But to start you want to list out devices detected on the system, initialize them, and view their status.

(venv) C:\>liquidctl list
Device 0, Asetek 690LC (NZXT, EVGA or other) (experimental)

(venv) C:\>liquidctl initialize
report: failed to (pre) open (control: open), ignoring

(venv) C:\>liquidctl status
Device 0, Asetek 690LC (NZXT, EVGA or other) (experimental)
report: failed to (pre) open (control: open), ignoring
Liquid temperature          31.2  °C
Fan speed                   1320  rpm
Pump speed                  3600  rpm
Firmware version         0.7.0.0

For the Kraken x61 and other gen-two devices, the only thing you can do* is set the fan speed. (*As of the time of writing; it could have come a long way by whenever “now” is. Also, you can use my “hack” below to change the logo color.)

(venv) C:\> liquidctl set fan speed 100

If you are using third gen devices, you can have fun with pump speeds or colors as well!

liquidctl set ring color fading 350017 ff2608
liquidctl set logo color spectrum-wave --speed slowest
liquidctl set pump speed 90

Multiple Devices

If you happen to have multiple supported devices on your system, you will have to select which one you want to use. First figure out which device is numbered as which.

(venv) C:\> liquidctl list
Device 0, Asetek 690LC (NZXT, EVGA or other) (experimental)
Device 1, NZXT Kraken X (X42, X52, X62 or X72)

Then just append --device <num> to any follow up commands.

liquidctl --device 1 set ring color spectrum-wave
liquidctl --device 0 set fan speed 100

You can see more detailed options including color modes for Kraken devices here.

Getting color working for the Kraken x61

If you don’t code and are never going to use this virtual environment for anything other than this tool, don’t fret. It’s a copy paste operation! (For others that don’t want to modify their site-packages file and know what that means, you can do a local develop mode install via setup.py and play to your heart’s content.)

Okay, so all we need to do is add a new section of code to one of the files. You’ll have to navigate a few layers deep into the virtual environment you created: in venv\Lib\site-packages\liquidctl\driver there is a file called asetek.py that you need to open. If you installed the full Python suite, you should be able to right click and edit it in IDLE.

Scroll to the bottom of the file, and paste the following block of code:

    def set_color(self, channel, mode, colors, speed):
        """Set the color of the logo."""
        modes = ('fixed', 'alternating', 'blinking', 'off')
        speeds = {
            'fastest': 1,
            'faster': 2,
            'normal': 3,
            'slower': 4,
            'slowest': 5
        }
        try:
            speed = int(speed)
        except ValueError:
            if speed not in speeds:
                LOGGER.warning('Speed must be a value between 1 and 255, setting to 1')
                speed = 1
            else:
                speed = speeds[speed]
        else:
            if speed < 1 or speed > 255:
                speed = 1
                LOGGER.warning('Speed must be a value between 1 and 255, setting to 1')

        if channel != 'logo':
            LOGGER.warning('Only "logo" channel supported for this device, falling back to that')

        if mode not in modes:
            raise NotImplementedError('Modes available are: {}'.format(",".join(modes)))

        if mode == 'off':
            color1, color2 = [0x00, 0x00, 0x00], [0x00, 0x00, 0x00]
        else:
            color1, *color2 = colors
            if len(color2) > 1:
                LOGGER.warning('Only maximum of 2 colors supported, ignoring further colors')
            color2 = color2[0] if color2 else [0x00, 0x00, 0x00]

        self._begin_transaction()
        data = ([0x10] +
                color1 +
                color2 +
                [0xff, 0x00, 0x00, 0x37,
                 speed,
                 speed,
                 0x00 if mode == 'off' else 0x01,
                 0x01 if mode == 'alternating' else 0x00,
                 0x01 if mode == 'blinking' else 0x00,
                 0x01, 0x00, 0x01])
        self._write(data)
        self._end_transaction_and_read()

Because Python is spacing sensitive, make sure the def set_color lines up with the def _write above it.

Then, make sure to save the file by either hitting Ctrl+s or going to File > Save. The options for colors are quite limited for second gen devices versus third gen, but at least you can use them!

Mode          Colors to supply
off           0
fixed         1
alternating   2
blinking      1
liquidctl --device 0 set logo color blinking ffffff --speed 2

Now you can issue a command to set the colors like normal. Provide --speed to set the number of seconds between alternating colors or blinks.

If you have any issues with the colors command, please let me know. If you have issues with the program itself, please open an issue directly on liquidctl github page.

Windows Auto-start

Alright, I’ve played around and have everything set perfectly. But wait, I rebooted and it’s all gone! NOOOO!

Thankfully, it’s rather simple to have your configuration auto load on startup. We simply have to put the commands into a script, and put it into the windows auto start directory.

Open your run menu by hitting the Windows key plus R at the same time (or typing “run” into Windows search). When the little box pops up, type in shell:startup and hit OK. It will take you to a deep dark folder like C:\Users\me\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup

Open your favorite text editor (or if you know what you’re doing, just right click in the folder, create a new text file, then rename it with a .bat extension). Keep that window open, so we can copy its path.

Now to create the small script. The script’s first command is to go into the venv\Scripts directory, so make sure to change it to the full path of where that is on your system. Then simply modify the liquidctl commands to the ones you want to issue.

cd C:\YOUR_OWN_FULL_PATH_HERE_MAKE_SURE_TO_CHANGE\venv\Scripts
call activate.bat

liquidctl --device 0 set logo color fixed ffffff
liquidctl --device 0 set fan speed 30 25  40 50  50 100

liquidctl --device 1 set logo color fixed ffffff
liquidctl --device 1 set ring color fixed ffffff
liquidctl --device 1 set pump speed 30 50  40 75  50 100
liquidctl --device 1 set fan speed 30 25  50 50  60 75  70 100

When you go to save this new file, copy that long path from the open shell:startup directory into the save bar so it is saved into that directory. Then save the file with a name you will remember, like kraken_control.bat, just make sure it ends in .bat. In notepad, you may have to change the Save as type dropdown to All files.

You should now see that file in the startup directory. Go ahead and double click on it to make sure it works. The easiest way to test is to first manually change your Kraken colors to something different, then make sure running the script changes them back.

If it doesn’t work, most likely a command is mistyped. Copy it line by line into a cmd prompt and make sure each works for you there. (Or just run the .bat file from an open cmd prompt if you know how, so you don’t have to copy-paste.)

That’s all for this tutorial. I hope you found it useful and stress free! If there is something that needs improving, please let me know in the comments below.

Truffle: going from ganache to testnet (ropsten)

Truffle is an amazing suite of tools created by ConsenSys to develop smart contracts for the Ethereum blockchain network. However, it can be a bit jarring to make the leap from local development to the real test network, Ropsten.

Required Setup

For this walk through, you will need Node.js with npm installed, as well as the truffle suite.

I will be using the default example truffle project, MetaCoin, which you can learn how to unbox here, or follow along using your own project.

First things first: if you do NOT have a package.json file yet, make sure to run npm init. This will turn the directory into a node package that we can easily manage with the npm package manager, so we don’t have to download dependencies into the global package scope.

Now we can download all the things we are going to need:

npm install bip39 dotenv truffle-hdwallet-provider --save
  • bip39 – used to generate the wallet mnemonic
  • dotenv – simple way to read environment variable files
  • truffle-hdwallet-provider – lets truffle sign transactions with a wallet derived from our mnemonic

We now have everything we need, development-wise.

Storing Secrets outside the code

We will have to create a private key or mnemonic, and that means we need somewhere relatively secure to store it. For testnet stuff, this can be as simple as making sure it’s not being put into version control alongside the code. To that end, we are going to use environment variables, and will store them in a file called .env (that’s it, basically just an extension. Make sure to add it to your .gitignore if you’re using git). To learn more, check out the github page for dotenv. But for our purposes, all you need to know is that this file has a format of:

ENV_VARIABLE_NAME=someting
ANOTHER_ENV=something else
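You can sanity check that the file loads with a node one-liner, run from the project directory (using the example variable name above):

node -e "require('dotenv').config(); console.log(process.env.ENV_VARIABLE_NAME)"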

Accessing testnet

The easiest way to reach testnet is by using a provider. I personally like using infura.io (free, just requires registration). After you register and have your API key emailed to you, make sure you select the URL for the test network and add it to the .env file using a variable named ROPSTEN_URL.

ROPSTEN_URL=https://ropsten.infura.io/<your-api-key>

It’s also possible to use your own geth node set to testnet, but that is not required.

Next we are going to create our own wallet; if you already have one set up, like with MetaMask, you can skip this next part.

Creating your testnet wallet

So now you have a place to put your secrets, let’s create some. This is where bip39 comes in: it will create random mnemonics, which can be used as the basis for the private key of a wallet. It will be a series of 12 random words.

We could put this generation in a file, but it’s easy enough to just do it straight from the command line:

node -e "console.log(require('bip39').generateMnemonic())"

This will output 12 words. DO NOT SHARE THESE ANYWHERE. The ones I am using below are examples, and also should NOT be used. Put them in the .env file as the variable MNEMONIC. Your .env file should now contain:

MNEMONIC=candy maple cake sugar pudding cream honey rich smooth crumble sweet treat
ROPSTEN_URL=https://ropsten.infura.io/<your-api-key>

We have our seed, so it’s time to hook it into our code. In your truffle.js or truffle-config.js file, you will now need to import the environment variables and a wallet provider at the top of the file.

require('dotenv').config()
const HDWalletProvider = require('truffle-hdwallet-provider')

After that is added, move down to the exports section, where we are going to add a new network named ropsten. We are going to use the HDWalletProvider and supply it with the mnemonic and Infura URL provided via environment variables.

module.exports = {
  networks: {
    ropsten: {
      provider: () => new HDWalletProvider(
        process.env.MNEMONIC,
        process.env.ROPSTEN_URL),
      network_id: 3
    },
  },
}

Test and make sure everything’s working by opening a truffle console, specifying our new network.

truffle console --network ropsten

We can then get our public account address via the console.

truffle(ropsten)> web3.eth.getAccounts((err, accounts) => console.log(accounts))
[ '0x627306090abab3a6e1400e9345bc60c78a8bef57' ]

If you are seeing this same wallet address, you did it wrong. Go back and make your own mnemonic, don’t copy the candy one from above.

Funding the wallet

In your development environment, the wallet already has ETH in it to pay for gas and deploying the contract. On the mainnet, you will have to buy some real ETH. On testnet, you can get some for free by using a Faucet, such as https://faucet.ropsten.be/ or if you’re using MetaMask just use https://faucet.metamask.io/.

Make sure to use the address you gathered from the console for the faucet, and soon you should have test funds to play around with and actually deploy your contract.
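You can confirm the funds arrived from the same truffle console, using the same web3 0.20.x callback style as the getAccounts call above:

truffle(ropsten)> web3.eth.getAccounts((err, accounts) => web3.eth.getBalance(accounts[0], (e, bal) => console.log(web3.fromWei(bal, 'ether').toString())))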

Deploying the Contract

Now is where the rubber meets the road: getting your contract out into the real (test) world.

truffle deploy --network ropsten

If everything is successful, you’ll get messages like these:

Using network 'ropsten'.

Running migration: 1_initial_migration.js
  Deploying Migrations...
  ... 0xefe70115c578c92bfa97154f70f9c3fbaa2b8400b1da1ee7cdxxxxxxxxxxxxxx
  Migrations: 0x6eeedefb64bd6ee6618ac54623xxxxxxxxxxxxxx
Saving successful migration to network...
  ... 0xd4294e35c166e2dca771ba9bf5eb3801bc1793e30db6a53d4dxxxxxxxxxxxxxx
Saving artifacts...
Running migration: 2_deploy_contracts.js
  Deploying Capture...
  ... 0x446d5e92d6976bb05c85bb95b243d6f7405af6bb12b3b6fe08xxxxxxxxxxxxxx
  Capture: 0x1d2f60c6ef979ca86f53af1942xxxxxxxxxxxxxx
Saving successful migration to network...
  ... 0x0b6f918ccc8e3b82cdf43038a2c32fe1fef66d0fa9aeb2260bxxxxxxxxxxxxxx
Saving artifacts...

Tada! You now have your custom contracts deployed to testnet!

Or, you got an out of gas error, as it is not uncommon to have to adjust the gas price to get a contract onto the network, and truffle does not automatically figure that out for you. A follow up post will show how to calculate and adjust the gas price as needed.