How NOT to call Robinhood’s secret API with Python

This article is meant to go over the basics of calling an API via Python, as well as to critique a top Google result I recently ran across. We will start with Robinhood’s undocumented API, as I found an article whose Python code was so bad I had to try and fix it.

Also, because I can get us each a free stock if you sign up with this Robinhood referral link (it’s usually around a $10 stock, but they like to advertise that you might get Facebook or Visa).

Murder – Artwork by Clara Griffith

At the base of the bad example (a non-working call), the author was using curl calls like:

curl -v https://api.robinhood.com/quotes/XIV/ -H "Accept: application/json"

Which is a great use of curl. However, the author then simply wrapped it in a subprocess call and manually parsed the output. He goes on to state, “You could also use PYCurl but I didn’t feel like learning it.” Well, thankfully there is no need to learn it.

How to interact with a REST API, the basic GET and POST

The most common text-based web APIs these days are JSON REST-like. I add the “like” because companies usually don’t fully stick to the standard, and to be honest that is a good decision a lot of the time. Thankfully that doesn’t usually change the behavior of the simplest calls, GET and POST.

A properly implemented GET call won’t change anything on the server’s side, so those are the best to practice with. It is possible to do this all with pure Python, but the requests library is so well known and has exactly what we want already, so we will stick with it for the examples. To install it, simply pip install requests.
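
If you are curious, here is a minimal sketch of what the same thing looks like in pure Python using only the built-in urllib — it works, but it also shows why everyone reaches for requests instead.

import json
from urllib.request import urlopen

# Standard-library equivalent of the requests example below
with urlopen('https://api.robinhood.com') as response:
    # urlopen raises an HTTPError on bad status codes all by itself
    print(json.load(response))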

Example GET call

import requests

response = requests.get('https://api.robinhood.com')
# Check that we got a 'good' response (any status code below 400)
if response.ok:
    # Display the result of the parsed json
    print(response.json())

This should simply print out {}.

Next I thought it would be a good idea to try to log in using that article’s example. If you don’t have an account (not necessary for this), you can sign up here with a referral link that gives you and me both a free random stock.

Example (Non-working) POST call

import requests

response = requests.post('https://api.robinhood.com/api-token-auth/', 
                         json={'username': YOUR_USERNAME, 'password': YOUR_PASSWORD})
if response.ok:
    print(response.json())
else:
    print(response.status_code)
    print(response.text)

Now wait a minute, we are getting an error.

404
<h1>Not Found</h1><p>The requested resource was not found on this server.</p>

Oh gosh darn it. Not only did the author have bad python code, they have outdated examples! How dare they! (*quickly scans my most recent articles for anything obviously outdated*)

Now we get a quick lesson on the dangers of using undocumented APIs: they come with absolutely no guarantees. Anything can change at any time, including access being removed entirely. In this case, some people reverse engineered how to log in to Robinhood using a different endpoint, and there is an unofficial API wrapped around it.

So for now, let’s use Postman’s excellent echo API, which returns whatever you send it. This time, let’s put it in a more reusable function.

Example (good) POST call

import requests

api_root = 'https://postman-echo.com'

def post(path='/post', payload=None, timeout=10, **request_args):
    # path already starts with a '/', so don't add another one
    response = requests.post(f'{api_root}{path}',
                             json=payload, timeout=timeout, **request_args)
    if response.ok:
        return response.json()
    raise Exception(f'error while calling "{path}", returned status "{response.status_code}" with message: "{response.text}"')

print(post(payload={'username': 'chris'}))

The endpoint returns a JSON object, and requests will automatically translate it into a Python dictionary when .json() is called on the response.

{'args': {}, 'data': {'username': 'chris'}, 'files': {}, 'form': {},  'json': {'username': 'chris'}, 'url': 'https://postman-echo.com/post'}

A lot of fluff, but it is nice that Postman’s API spells out exactly what the endpoint receives. For example, you can test sending form data by changing the post parameter from json to data.
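
Here is a quick sketch of that same call sent as form data instead of JSON — the echo response will then show the values under form rather than json.

import requests

# Same payload, but form-encoded this time by using data= instead of json=
response = requests.post('https://postman-echo.com/post',
                         data={'username': 'chris'})
print(response.json()['form'])  # {'username': 'chris'}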

Now this is doing a better job of checking the response and using much more Pythonic methods. But let’s take it one step further and do better error catching. If this is going into a bigger program, you probably have custom exceptions and want to be able to catch them in a specific way.

This is going to be a little exception heavy, but it showcases all the ways you might need to deal with certain errors. Personally, I would probably only have one wrapper around the call and the JSON parsing, but it is really program dependent.

import requests

api_root = 'https://postman-echo.com'

class MyError(Exception):
    """Exception for anything in my program"""

class MyConnectionIssue(MyError):
    """child exception for issues with connections"""


def post(path='/post', payload=None, timeout=10, **request_args):
    try:
        response = requests.post(f'{api_root}{path}', json=payload, timeout=timeout, **request_args)
    except requests.Timeout:
        raise MyConnectionIssue(f'Timeout of {timeout} seconds reached while calling "{path}"') from None
    except requests.RequestException as err:
        raise MyConnectionIssue(f'Error while calling {path}: {err}')
    else:
        if response.ok:
            try:
                return response.json()
            except ValueError:
                raise MyError(f'Expected output from {path} was not in JSON format. Output: {response.text}')
        raise MyError(f'error while calling "{path}", returned status "{response.status_code}" with message: "{response.text}"')

print(post(payload={'username': 'chris'}))

If you’re a little rusty on exceptions, feel free to brush up on them here.

How NOT to parse JSON

In the terrible code below, we see the other article manually parsing the JSON output. Please, for the love of all that is holy, do not copy this!

import subprocess

class data():
    def __init__(self, stock):
            parameter_list = ['open', 'high', 'low', 'volume', 'average_volume', 'last_trade_price', 'previous_close']

            # replacing CURL call
            out = '{"open": 54 "high": 55 "low": 52 "volume": 6000 "average_volume": 6000 "last_trade_price": 52 "previous_close": 54 "}'
            # back to copied code

            string = out  
            for p in parameter_list:  
                parameter = p  
                if parameter in string:  
                    x = 0  
                    output = ''  
                    iteration = 0  
                    for i in string:  
                        if i != parameter[x]:  
                            x = 0  
                        if i == parameter[x]:  
                            x = x+1  
                        if x == len(parameter):  
                            eowPosition = iteration  
                            break  
                        iteration = iteration + 1

                    target_position = eowPosition + 4
                    for i in string[target_position:]:  
                        if i == '"':  
                            break  
                        elif i == 'u':
                            output = 'NULL'  
                            break  
                        else:  
                            output = output+i

                    if output != 'NULL':  
                        output = float(output)  
                    if p == 'open':  
                        self.open = output  
                    if p == 'high':  
                        self.high = output  
                    if p == 'low':  
                        self.low = output  
                    if p == 'volume':  
                        self.volume = output  
                    if p == 'average_volume':  
                        self.average_volume = output  
                    if p == 'last_trade_price':  
                        self.current = output  
                    if p == 'previous_close':  
                        self.close = output  
XIV = data('XIV')

print('Current price: ', XIV.current)

So let’s do a critique, starting from the top with class data():.
First, classes should be CamelCase; second, this is a single function and shouldn’t even be a class at all. It makes a tiny bit of sense considering they are basically using it as a namespace, but there are better ways, as sketched below.
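
For example, here is a minimal sketch using a dataclass — note that StockQuote and its defaults are just my illustrative names, not from the original code.

from dataclasses import dataclass
from typing import Optional

@dataclass
class StockQuote:
    """Container for the fields the original class was collecting."""
    open: Optional[float] = None
    high: Optional[float] = None
    low: Optional[float] = None
    volume: Optional[float] = None

quote = StockQuote(open=54, high=55, low=52, volume=6000)
print(quote.open)  # 54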

Next is the use of subprocess to run a curl command instead of the built-in urllib or requests. (I replaced that call with a hardcoded string above so this code runs.)

Then onto string = out, which is just… why? There is no reason to copy it to a new variable; just name it what you want in the first place.

Finally, the for loop that actually parses the JSON, which is just painful to look at. If you ever need to parse JSON without going through requests, just use the built-in json library!

import json

data = json.loads('{"open": 54}')
print(data["open"]) 
# 54

Bam, 42 lines into 3.
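
In fact, assuming the quote endpoint still worked, the entire class could collapse into something like this sketch — the URL and field names come from the original code, but since this API is undocumented, treat it purely as an illustration.

import requests

def get_quote(symbol):
    """Fetch a quote as a plain dict, e.g. {'open': 54, 'high': 55, ...}."""
    response = requests.get(f'https://api.robinhood.com/quotes/{symbol}/',
                            headers={'Accept': 'application/json'}, timeout=10)
    response.raise_for_status()  # raise on any 4XX/5XX status
    return response.json()

quote = get_quote('XIV')
print('Current price:', quote['last_trade_price'])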

Finishing on a high note

Now, for as much grief as I am giving this author, I also respect him a lot. He was able to accomplish what he wanted using Python, a language he had little experience with. To his credit, he was clear up front that “I’m not an expert in stock trading nor coding so I can’t say the code is the cleanest”.

He was also willing to write up an article on how to do it for others, which most people never do, so in all earnestness, good job Spencer!

Last mention: free stock (legit actual money) from Robinhood, just use this referral!

Raspberry Pi Hardware Accelerated RTSP Camera

Raspberry Pis are wonderful little computers; they just sometimes lack the oomph to get stuff done. That may change with the new Raspberry Pi 4, but what to do with all those old ones? Or how about that pile of old webcams? Well, this article will help turn all of those into a full-on security system. (You can also use a Raspberry Pi camera if you have one!)

Other posts I have read on this subject often only use motion to capture detection events locally. Sometimes they go a bit further and set the Raspberry Pi up to stream MJPEG as an IP camera, or set up MotionEyeOS to make it into a singular video surveillance system.

With our IP camera, we are going to take it further and encode the video stream locally, then send it over the network via rtsp. This saves huge amounts of bandwidth! It also does not require the client to re-encode the stream before saving, distributing the work. That way we can hook it into a larger security suite without draining any of its resources; in this case I will use Blue Iris.

Now, the first thing I am going to do is discourage you. If you don’t already have a Pi and a webcam or Pi camera for the cause, don’t run out to buy them just for this. It’s just not economical. A 1080p WiFi camera with ONVIF capabilities can be had for less than $50. So why do this at all? Well, because A.) it’s all under your control with no worry about Chinaware, 2.) if you already have the equipment, it’s another free security eye, and 𓀀) it’s fun.

Standard Raspbian setup

Not going to go into too much detail here. If you haven’t already, download Raspbian and get it onto an SD card. (I used Raspbian Buster for this tutorial.) If you aren’t going to connect a display and keyboard, make sure to add an empty file named ssh to the root of the boot (SD card) drive. That way you can just SSH to the Raspberry Pi via the command line, or PuTTY on Windows.

# Default settings
host: raspberrypi 
username: pi
password: raspberry

Remember to run sudo raspi-config to change your password, and don’t forget to set up WiFi, then reboot. It is also a good idea to update the system before continuing.

sudo apt update --fix-missing
sudo apt upgrade -y 
sudo reboot 

Install node rtsp server

To start with, we need a place for ffmpeg to connect to for the rtsp stream. Most security systems expect to connect to an rtsp server, instead of listening as a server themselves, so we need a middleman.

There are a lot of rtsp server options out there; I wanted a lightweight one that we can run on the Pi itself and that is easy to install and run. This is what I run at my own house, so don’t think I’m skimping out for this post 😉

First off, we need to install Node JS. The easiest way I have found is to use the pre-created scripts that add the proper package links to the apt system for us.

If you are on an arm6-based system, such as the Pi Zero, you will need to do a little extra work to install Node. For arm7 systems, like anything Raspberry Pi 3 or newer, we will use Node 12. Find out your arm version with the uname -a command and see whether the string “arm6” or “arm7” appears.
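
(If you would rather check from Python, a one-liner does it too — armv6l means the extra work applies. The exact strings below are my assumption for standard Raspbian builds.)

import platform

# 'armv6l' on a Pi Zero/1, 'armv7l' on a Pi 2/3/4 running 32-bit Raspbian
print(platform.machine())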

Now, let’s install Node JS and the other needed libraries, such as git and coffeescript. If you want to view the script itself before running it, it is available here.

curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -

sudo apt-get install -y nodejs git
sudo npm install -g coffeescript

Once that is complete, we want to download the node rtsp server code and install all its dependencies. Note, I am assuming you are doing this in the root of your home folder, which we will later use as the working directory for the service.

cd ~ 
git clone https://github.com/iizukanao/node-rtsp-rtmp-server.git --depth 1
cd node-rtsp-rtmp-server
npm install -d

Now you should be good to go, you can test it out by running:

sudo coffee server.coffee

It takes about 60 seconds or more to start up, so give it a minute before you will see any text. Example output is below.

2019-12-16 14:24:18.465 attachRecordedDir: dir=file app=file
(node:6812) [DEP0005] DeprecationWarning: Buffer() is deprecated ...
2019-12-16 14:24:18.683 [rtmp] server started on port 1935
2019-12-16 14:24:18.691 [rtsp/http/rtmpt] server started on port 80

Simply make sure it starts up, then you can stop it by hitting Ctrl+C. At this point you can also go into the server.coffee file and edit it to your heart’s content, however I keep it standard myself.

Create rtsp server service

You probably want this to always start on boot, so let’s add it as a systemd service. Copy and paste the following into /etc/systemd/system/rtsp_server.service.

# /etc/systemd/system/rtsp_server.service

[Unit]
Description=rtsp_server
After=network.target rc-local.service

[Service]
Restart=always
WorkingDirectory=/home/pi/node-rtsp-rtmp-server
ExecStart=coffee server.coffee

[Install]
WantedBy=multi-user.target

Now we can start it up via the service, and enable it to start on boot.

sudo systemctl start rtsp_server
# Can make sure it works with sudo systemctl status rtsp_server
sudo systemctl enable rtsp_server

Compile FFMPEG with Hardware Acceleration

If you are just using the Raspberry Pi camera, or another one with built-in h264 or h265 support, you can use the distribution version of ffmpeg instead.

This is going to take a while to build. I suggest reading a good blog post or watching some Red vs Blue while it compiles. This guide is just a small modification of another one. We are adding the libfreetype font package so we can draw text (like a datetime) onto the video stream, as well as the default libx264 so we can still use the build with the Pi camera if you have one.

sudo apt-get install libomxil-bellagio-dev libfreetype6-dev libmp3lame-dev checkinstall libx264-dev fonts-freefont-ttf libasound2-dev -y
cd ~
git clone https://github.com/FFmpeg/FFmpeg.git --depth 1
cd FFmpeg
sudo ./configure --arch=armel --target-os=linux --enable-gpl --enable-omx --enable-omx-rpi --enable-nonfree --enable-libfreetype --enable-libx264 --enable-libmp3lame --enable-mmal --enable-indev=alsa --enable-outdev=alsa

# For old hardware / Pi zero remove the `-j4` 
sudo make -j4

When that is finally done, run the steps below to install it. We take the additional precaution of turning it into a standard system package and holding it, so apt doesn’t overwrite our custom ffmpeg build.

sudo checkinstall --pkgname=ffmpeg -y 
sudo apt-mark hold ffmpeg 
echo "ffmpeg hold" | sudo dpkg --set-selections

Figure out your camera details

If you haven’t already, plug the webcam into the Raspberry Pi. Then we are going to use video4linux2 to discover what it is capable of.

v4l2-ctl --list-devices

Mine lists my webcam and the two paths it’s located at. Sometimes a camera will have multiple devices for the different formats it supports, so it’s a good idea to check each one out.

Microsoft® LifeCam Cinema(TM): (usb-3f980000.usb-1.2):
        /dev/video0
        /dev/video1

Now we need to see what resolutions and FPS it can handle. Be warned, MJPEG streams are much more taxing to encode than some of their counterparts. In this example we are going to specifically look for YUYV 4:2:2 streams, as they are a lot easier to encode. (Unless you see h264, then use that!)

In my small test group, MJPEG streams averaged only 70% of the FPS of YUYV, while running the CPU up to 60%. Comparatively, YUYV encoding only took 20% of the CPU on average.

v4l2-ctl -d /dev/video0 --list-formats-ext

This pumps out a lot of info. Basically, you want to find the subset under YUYV and figure out which resolution and FPS you want. Here is an example of some of the modes my webcam supports.

ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'YUYV' (YUYV 4:2:2)
                Size: Discrete 640x480
                        Interval: Discrete 0.033s (30.000 fps)
                        Interval: Discrete 0.050s (20.000 fps)
                        Interval: Discrete 0.067s (15.000 fps)
                        Interval: Discrete 0.100s (10.000 fps)
                        Interval: Discrete 0.133s (7.500 fps)
                Size: Discrete 1280x720
                        Interval: Discrete 0.100s (10.000 fps)
                        Interval: Discrete 0.133s (7.500 fps)
                Size: Discrete 960x544
                        Interval: Discrete 0.067s (15.000 fps)
                        Interval: Discrete 0.100s (10.000 fps)
                        Interval: Discrete 0.133s (7.500 fps)

I am going to use the max resolution of 1280×720 and the highest FPS available there, 10. If that looks perfect as is, you can skip to the next section. Though if you need to tweak the brightness, contrast, or other camera settings, read on.

Image tweaks

Let’s figure out what settings we can play with on the camera.

v4l2-ctl -d /dev/video0 --all
brightness (int)                : min=30 max=255 step=1 default=-8193 value=135
contrast (int)                  : min=0 max=10 step=1 default=57343 value=5
saturation (int)                : min=0 max=200 step=1 default=57343 value=100
power_line_frequency (menu)     : min=0 max=2 default=2 value=2
sharpness (int)                 : min=0 max=50 step=1 default=57343 value=27
backlight_compensation (int)    : min=0 max=10 step=1 default=57343 value=0
exposure_auto (menu)            : min=0 max=3 default=0 value=3
exposure_absolute (int)         : min=5 max=20000 step=1 default=156 value=156
pan_absolute (int)              : min=-201600 max=201600 step=3600 default=0 
tilt_absolute (int)             : min=-201600 max=201600 step=3600 default=0 
focus_absolute (int)            : min=0 max=40 step=1 default=57343 value=12
focus_auto (bool)               : default=0 value=0
zoom_absolute (int)             : min=0 max=10 step=1 default=57343 value=0

Plenty of options, excellent. Now, if you don’t have a way to look at the camera output just yet, come back to this part after you have the live stream going. Thankfully, you can change these settings while the stream is running.

v4l2-ctl -d /dev/video0 --set-ctrl <setting>=<value>

The main problems I had with my camera were that it was a little dark and liked to auto-focus every 5~10 seconds. So I added the following lines to my rc.local file, though there are various ways to run commands on startup.

#  I added these lines right before the exit 0

# dirty hack to make sure v4l2 has time to initialize the cameras
sleep 10 

v4l2-ctl -d /dev/video0 --set-ctrl focus_auto=0
v4l2-ctl -d /dev/video0 --set-ctrl focus_absolute=12
v4l2-ctl -d /dev/video0 --set-ctrl brightness=135

Now onto the fun stuff!

Real Time Encoding

Now we are going to use the hardware accelerated ffmpeg encoder h264_omx to encode the webcam stream. That is, unless you happen to be using a camera that already supports h264 natively, like the built-in Raspberry Pi camera. If you are lucky enough to have one, you can just copy its output directly to the rtsp stream.

# Only for cameras that support h264 natively!
ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -an -f rtsp rtsp://localhost:80/live/stream

If at any point you receive the error ioctl(VIDIOC_STREAMON) failure : 1, Operation not permitted, go into raspi-config and up the video memory (memory split) to 256 and reboot.

In the code below, make sure to change the -s 1280x720 to your video resolution (can also use -video_size instead of -s) and both -r 10 occurrences to your frame rate (can also use -framerate).

ffmpeg -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M -an -f rtsp rtsp://localhost:80/live/stream

So let’s break this down. The first part is telling ffmpeg what to expect from your device, -i /dev/video0, which means all those arguments must go before the declaration of the device itself.

-input_format yuyv422 -f video4linux2 -s <your resolution> -r <your framerate>

We are making clear that we only want the yuyv format, as it is the best available for my two cameras (yours may be different), then specifying what resolution and FPS we want. Be warned: if you set one of them wrong, it may seem to work (it still encodes) but will print an error message to look out for:

[video4linux2] The V4L2 driver changed the video from 1280x8000 to 1280x800
[video4linux2] The driver changed the time per frame from 1/30 to 1/10

The next section is our conversion parameters.

-c:v h264_omx -r <your framerate> -b:v 2M

Here, -c:v h264_omx is saying to use the h264 video codec, with the special omx hardware encoder. We are then telling it what the output frame rate will be, -r 10, and specifying the quality with -b:v 2M (aka the bitrate), which determines how much bandwidth will be used when transmitting the video. Play around with different settings like -b:v 500k to see where you want it. You will need a higher bitrate for higher resolutions and framerates, and a lot less for lower ones.

After that, we are telling it to disable audio with -an for the moment. If you do want audio, there is an optional section below going over how to enable that.

-f rtsp rtsp://localhost:80/live/stream

Finally, we are telling it where to send the video, and to send it in the expected rtsp format (rtsp is just the transport wrapper; the video inside is still h264). Notice that with the rtsp server we can have as many cameras as we want, each with its own sub-URL, so instead of live/stream at the end it could be live/camera1 and live/camera2.

Adding audio

Optional, not included in my final service script

As most webcams have built-in microphones, it is easy to add audio to our stream if you want. First we need to identify our audio device.

arecord -l

You should get a list of possible capture devices back; in this case only my webcam shows up, as expected. If you have more than one, make sure you check out the ffmpeg article on “surviving the reboot” so they don’t get randomly re-ordered.

**** List of CAPTURE Hardware Devices ****
card 1: CinemaTM [Microsoft® LifeCam Cinema(TM)], device 0: USB Audio [USB Audio]
  Subdevices: 0/1
  Subdevice #0: subdevice #0

Notice it says card 1 at the very beginning of the webcam line, and specifically device 0; that is the ID we are going to reference it with in ffmpeg. I’m going to show the full command first, like before, and then break it down.

ffmpeg -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -f alsa -ac 1 -ar 44100 -i hw:1,0 -map 0:0 -map 1:0 -c:a aac -b:a 96k -c:v h264_omx -r 10 -b:v 2M -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:80/live/webcam

So to start with, we are adding a new input of type ALSA (Advanced Linux Sound Architecture): -f alsa -i hw:1,0. Because it’s a webcam, which generally only has a single channel of audio (aka mono), it needs -ac 1 passed to it, as ffmpeg by default tries to interpret it as stereo (-ac 2). If you get the error cannot set channel count to 1 (Invalid argument), that means it probably actually does have stereo, so you can remove the flag or set it to -ac 2.

Finally, I am setting a custom sampling rate of 44.1kHz, -ar 44100, the same used on CDs. All of that gives us the new input of -f alsa -ac 1 -ar 44100 -i hw:1,0.

Next we do a custom mapping to make sure our output streams are set up as we expect. ffmpeg is usually pretty good about doing this by default when there is a single video input and a single audio input; this is really just to make sure nobody out there has weird issues. -map 0:0 -map 1:0 says we want the first track from the first source (0:0) and the first track from the second source (1:0).

Finally, our audio encoding is set up with -c:a aac -b:a 96k, which says to use the AAC codec with a bitrate of 96k. This could be a lot higher, as the theoretical bitrate of this source is now about 352k (sample rate × bit depth × channels, i.e. 44,100 × 8 × 1 ≈ 352k), but I can’t tell the difference past 96k with my mic, which is why I stuck with that.

One gotcha with sound: if the ffmpeg encoding can’t keep up with the source, aka the output FPS isn’t the same as the input, the audio will probably skip weirdly, so you may need to step down to a lower framerate or resolution if it can’t keep up.

Adding timestamp

Optional, but is included in my service script

This is optional, but I find it handy to draw the current timestamp directly onto the stream. I also like to have the timestamp in a box so I can always read it, in case the background is close to the same color as the font. Here is what we are going to add into the middle of our ffmpeg command.

-vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%{localtime}':x=8:y=8:fontcolor=white: box=1: boxcolor=black"

It’s a lot of text, but pretty self-explanatory. We specify which font file to use, drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf, and that the text will be the local time (make sure you have set your locale right!). Next we start the text 8 pixels in and 8 pixels down from the top left corner. Then we set the font’s color, and that it will have a box around it in a different color.

ffmpeg -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%{localtime}':x=8:y=8:fontcolor=white: box=1: boxcolor=black" -an -f rtsp rtsp://localhost:80/live/stream

Making it systemd service ready

When running ffmpeg as a service, you probably don’t want to pollute the logs with standard output info. I also had a random issue with it trying to read from stdin when run as a service, so I added -nostdin for my own sake. You can add these at the start of the command.

-nostdin -hide_banner -loglevel error

You can hide even more if you want to up it to -loglevel panic, but I personally want to see any errors that come up just in case.

So now our full command is pretty hefty.

ffmpeg -nostdin -hide_banner -loglevel error -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%{localtime}':x=8:y=8:fontcolor=white: box=1: boxcolor=black" -an -f rtsp rtsp://localhost:80/live/stream

Our new full command is a lot in one line, but it gets the job done!

Viewing the stream

When you have the stream running, you can pull up VLC or another network enabled media player and point it to rtsp://raspberrypi:80/live/stream (if you changed your hostname, you will have to use the Pi’s IP address instead).
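
If you would rather sanity check the stream from code, here is a rough sketch using OpenCV (pip install opencv-python) — assuming your OpenCV build has FFmpeg support, and adjusting the URL to your own hostname or IP.

import cv2

# Open the Pi's rtsp stream (change the host/path to match your setup)
stream = cv2.VideoCapture('rtsp://raspberrypi:80/live/stream')

while stream.isOpened():
    ok, frame = stream.read()
    if not ok:
        break  # stream ended or dropped
    cv2.imshow('webcam', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

stream.release()
cv2.destroyAllWindows()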

When you have the command massaged exactly how you want it, we are going to create a systemd file for it, just like we did for the rtsp server. In this case we will save it to /etc/systemd/system/encode_webcam.service, keeping -nostdin right after ffmpeg for safety. Two gotchas in unit files: each continued line must end with a backslash, and the % in the drawtext text has to be doubled to %% so systemd doesn’t treat it as a specifier. sudo vi /etc/systemd/system/encode_webcam.service

# /etc/systemd/system/encode_webcam.service

[Unit]
Description=encode_webcam
After=network.target rtsp_server.service rc-local.service

[Service]
Restart=always
RestartSec=20s
User=pi
ExecStart=ffmpeg -nostdin -hide_banner -loglevel error -input_format yuyv422 \
  -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M \
  -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%%{localtime}':x=8:y=8:fontcolor=white: box=1: boxcolor=black" \
  -an -f rtsp rtsp://localhost:80/live/stream


[Install]
WantedBy=multi-user.target

Now start it up, and enable it to run on boot.

sudo systemctl start encode_webcam
sudo systemctl enable encode_webcam

Connect it to your security center

I have looked at a few different security suites for my personal needs, including iSpy (Windows) and ZoneMinder (Linux), but I finally decided upon the industry standard Blue Iris (Windows). I like it because of its feature set: mobile app, motion detection, mobile alerts, NAS and cloud storage, etc. Blue Iris also has a 15 day evaluation period to try before you buy. You don’t even need to register or provide credit info!

For our needs, the best part about Blue Iris is that it supports direct-to-disk recording, so we don’t have to re-encode the stream! So let’s get rolling: on the top left, select the first menu and hit “Add new camera”.

add new camera

It will then show a popup to name and configure the camera; here make sure to select the last option, “Direct to disk recording”.

Next it will need the network info for the camera; put in the same info as you did for VLC. Blue Iris should auto-parse it into the fields it wants. Hit OK.

Voila! Your Raspberry Pi is now added to your security suite!

Now you can have fun setting up recording schedules, motion detection recording, mobile alerts, and more!

Do heatsinks and fans make a difference for the Raspberry Pi 4?

If you have been looking at cases for your shiny new Raspberry Pi 4, I am positive you have come across ones that include heatsinks, a fan, or both. The question is: is that just a needless extra cost, or does it really help?

Well I ended up buying a case that came with both, and decided to test if it was worth running the fan or just keeping it off.

Methodology

Now, as this is my own hardware, I wasn’t about to put undue stress on it. I decided to arbitrarily run the stress tests for two minutes each, which actually turned out to be a pretty good window to let the core heat up and even out.

I would then wait for the temperature to return to idle before attempting another run. This was all done in a 26.5°C room. I followed this guide to run the programs stress and cpuburn.
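
If you want to watch the core temperature yourself during a run, a quick sketch like this works — it polls vcgencmd, the firmware tool that ships with Raspbian.

import subprocess
import time

def core_temp():
    """Return the SoC temperature in °C; vcgencmd prints e.g. temp=48.3'C"""
    out = subprocess.check_output(['vcgencmd', 'measure_temp'], text=True)
    return float(out.split('=')[1].split("'")[0])

# Sample once a second over the two minute stress window
for _ in range(120):
    print(core_temp())
    time.sleep(1)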

No Fan, No Heatsinks

          Open air (°C)   Enclosed (°C)
idle      52              54
stress    78              82 FAIL
cpuburn   81              FAIL

So by itself, as long as the Raspberry Pi is in the open air, it seemed to just hang in there for the stress test before hitting the 80°C thermal limit. Once that limit is hit, the Raspberry Pi will automatically start throttling the speed of the processor to cool it.

No Fan, Heatsinks

          Open air (°C)   Enclosed (°C)
idle      52              54
stress    75              81 FAIL
cpuburn   81              FAIL

We can see that the heatsink helped out minimally when in the open air, but it couldn’t keep up in an enclosed case without moving air.

Fan, No Heatsinks

          Enclosed (°C)
idle      48
stress    64
cpuburn   70

Now that’s a difference! That tiny little fan really does do more than I expected it to.

Fan and Heatsinks

          Enclosed (°C)
idle      44
stress    60
cpuburn   66

Well-well-well, looky there. Putting everything together really does make a difference.

Conclusion and Chart

Let’s make this a little more digestible with a chart.

It’s pretty obvious that yes, both fans and heatsinks help. However, if you have to choose just one, pick the fan any day.

On demand gaming server for pennies a month!

The big downside with most dedicated gaming servers is the cost. If you need a 24/7 server or someone to manage it for you, you can’t avoid it. However, if you are self-sufficient in setting up the gaming software, and only play certain games intermittently, your costs can be significantly reduced.

Let’s use Factorio as an example. A dedicated server will cost around $5~10 per month, used or not. Now, if you are like me and play with friends and family only a few nights a month, that’s an unnecessary expense. Sure, you can self-host, but then everyone is dependent on a single person to have it up and running. Instead, I switched to using a Digital Ocean droplet, and have paid less than a quarter this past month for an on demand gaming server.

Even if we played every night after work and on weekends, it would only be $1 or $2 a month. You may be thinking “oh, you just turn off the droplet when you’re not using it!”, but alas, it is not that simple. A turned off droplet (or server) is still holding onto resources, so it still incurs cost; that is standard across all the hosting giants I looked into.

The trick to saving cash? When you’re not playing, turn the droplet off, snapshot it, then destroy the droplet. When you want to play again, restore the snapshot to a new droplet. That way you are only paying full server costs while it is running, and the super cheap snapshot storage rate the rest of the time.

This is painful to do by hand, but really easy with a helper script, as sketched below. (If you want to take it further, you could make it a website with access controls, so anyone you choose could control it whenever they wanted.)
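
To give a feel for what the script automates, here is a bare sketch of the snapshot-then-destroy cycle against Digital Ocean’s public v2 REST API — an illustration of the idea, not the actual DOGS code; real code also needs to poll each action until it completes.

import requests

API = 'https://api.digitalocean.com/v2'
HEADERS = {'Authorization': 'Bearer <your account token>'}

def store(droplet_id, snapshot_name):
    """Shut down a droplet, snapshot it, then destroy it."""
    requests.post(f'{API}/droplets/{droplet_id}/actions',
                  headers=HEADERS, json={'type': 'shutdown'})
    # ...wait here for the shutdown action to finish...
    requests.post(f'{API}/droplets/{droplet_id}/actions',
                  headers=HEADERS, json={'type': 'snapshot',
                                         'name': snapshot_name})
    # ...wait for the snapshot to finish, then release the resources
    requests.delete(f'{API}/droplets/{droplet_id}', headers=HEADERS)

def restore(snapshot_id, name, region='nyc1', size='s-1vcpu-2gb'):
    """Create a fresh droplet from a stored snapshot."""
    requests.post(f'{API}/droplets', headers=HEADERS,
                  json={'name': name, 'region': region,
                        'size': size, 'image': snapshot_id})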

Digital Ocean Gaming Service (DOGS)

The code repo is available on github and the instructions are boringly standard.

git clone https://github.com/cdgriffith/dogs.git
# Alternatively, just download and extract the zip file
# https://github.com/cdgriffith/dogs/archive/master.zip

# Create a venv if you are python savvy
pip install -r requirements.txt
cp config.yaml.example config.yaml
# Update the config file to match your digital ocean settings
python -m dogs

This script does have some prerequisites: you need a Digital Ocean account token, and you will have to manually create and start up the server once before you can manage it via this script. (Pull requests are always considered if you want to ease that pain for others.)

Droplet setup

When you create the droplet, make sure to pick the smallest specs you think you will need. You can always upgrade to a larger disk size, but you cannot go back down to a smaller one. Also during this process, record the hostname, which we will use as the server name in the config file.

You will also need your SSH key id, which is the digits at the very end of the public key. For example, if the key ends with rsa-key-20190721, the ssh id is 20190721. You can also find this info in Account > Security by selecting a key and hitting “Edit”.

Also, if you have a firewall hooked up to the droplet, you will need its ID, which you can retrieve via the API.

DOGS Server Config

When you have all that info, add it to the config.yaml file.

token: <your 64 character hex string>
servers:
  ubuntu-1804-factorio:
    region: nyc1
    size: s-1vcpu-2gb
    firewall_id: <unique hex string separated by dashes>
    snapshot_max: 2
    ssh_key: <8 digits from end of ssh public cert>

You can get a full list of regions and sizes from their API as well.
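
For instance, here is a quick sketch of dumping the available slugs with requests (same bearer token as the config):

import requests

headers = {'Authorization': 'Bearer <your account token>'}

# Each region and size has a short 'slug' you can drop into config.yaml
regions = requests.get('https://api.digitalocean.com/v2/regions', headers=headers).json()
print(sorted(r['slug'] for r in regions['regions']))

sizes = requests.get('https://api.digitalocean.com/v2/sizes', headers=headers).json()
print(sorted(s['slug'] for s in sizes['sizes']))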

Again, to run you just need to be in the dogs directory and run python -m dogs.

Binary (EXE) files

If you want to package it into an easy-to-use exe (the scripts can also be modified for Mac or Linux binaries), just use the included build scripts.

# Windows specific requirements, otherwise just install 'pyinstaller'
pip install -r requirements-build.txt
python dogs\build.py

And now you should have a super handy dogs.exe in the dist directory. Don’t forget to keep your config.yaml in the same directory as it!

Gaming Setup Scripts

I have a directory for files that set up and update game servers. Right now I only have Factorio, but feel free to add your own; a PR would be very appreciated! All you need is a way to near-automatically create/install the service and a way to have it auto start (my example uses systemd for standard Ubuntu servers).

Check the scripts out on github.

Build your own Pi Powered Enlarger Timer!

I cringed when my wife told me her digital enlarger timer broke. Thankfully it didn’t cost us a lot at the time, but it should have: I was lucky enough to restore a broken one I bought off eBay. No such luck this time. But then I thought, a timer is a really simple thing. Why should a new one cost over $200? I decided to build my own, and now you can too for less than $60!

An enlarger timer is very simple: it just needs to turn on the power to the enlarger for an exact period of time and then turn it back off (a sketch of that core logic is below). I currently have it set for tenth-of-a-second accuracy. I also went ahead and made the darkroom light switch off during the exposure, as the IoT relay has built-in negative gate logic for one plug.
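
At its core, the logic is just “relay on, sleep, relay off”. Here is a stripped-down sketch of that idea using RPi.GPIO — the pin number is a placeholder for illustration, not necessarily what my build is wired to.

import time
import RPi.GPIO as GPIO

RELAY_PIN = 17  # placeholder: match this to your own wiring

def expose(seconds):
    """Power the enlarger for an exact duration, then cut it off."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(RELAY_PIN, GPIO.OUT)
    GPIO.output(RELAY_PIN, GPIO.HIGH)  # relay on, enlarger light on
    time.sleep(seconds)                # sleep easily gives tenth-of-a-second accuracy
    GPIO.output(RELAY_PIN, GPIO.LOW)   # relay off
    GPIO.cleanup()

expose(12.5)  # a 12.5 second exposure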

Required equipment for the timer

A quick warning before we begin: I did all this over a year ago, and while it is still working flawlessly, I can’t promise I remember everything. If anything is missing and you get it working, please leave a comment so I can update the post or code! Also, the standard disclaimer: I am not an electrical engineer, and I am not responsible for you hurting your equipment or yourself by following this guide.

Software Install

If you haven’t already, get your raspberry pi ready to roll.

Then, before you plug all the toys into it, make sure the system is up to date. Next we need to enable SPI. Make sure to go through the prerequisites and the install section, then install the Python requirements.

git clone https://github.com/cdgriffith/darkroom.git --depth 1
cd darkroom
pip install -r requirements.txt

Hardware Install

Now turn off and unplug the Raspberry Pi, and let’s connect stuff! Follow the luma guide to hook up the display, copied below for convenience.

Let’s take a look at the Raspberry Pi pin layout. The Pi 2 that I used only has the first 26 GPIO pins, and even if you have a 3 or 4, the first 26 pins have the same layout. Below is my PowerPoint diagram for reference. I am going to color the boxes the same as the wires in my pictures so they match up visually.

If you look closely at the image of mine, you’ll note I was naughty and put the +5v wire (pin 2) into +3.3v (pin 1) instead. I think I did it on purpose to make the display not as bright, though I can’t really remember if it was that or just a mistake; either way, it works for me. Next we’re going to add in the IoT Relay.

So now it should kinda look like how I have mine (just with the two positives in different positions at the top).

Running the Enlarger Timer

Now it’s time to turn on the Pi and see if stuff works!

First, run through the example program provided by luma. For my 4×4 display I needed to run:

python examples/matrix_demo.py --block-orientation -90 --cascaded 4

When that is working, plug a light (or your enlarger) into the IoT relay, go back into the darkroom directory, and give it a try!

cd darkroom
python darkroom/main.py

(If you don’t like the startup message “LOVE U” I left for my wife, feel free to change it at the bottom of main.py.) Usage is pretty simple: press *, enter the time you want, and hit enter. View all the available commands on the github project page.

To have the code run on startup, I simply added it to my /etc/rc.local file.

PYTHONPATH=/home/pi/darkroom python /home/pi/darkroom/darkroom/main.py

I hope this guide was useful for you!