The GPU hardware encoder in the Raspberry Pi can greatly speed up H.264 video encoding, and it is well suited to transcoding live streams. It can be accessed in FFmpeg with the h264_omx encoder. But is it fast enough to live stream a 1080p webcam?
You might have already seen plenty of people using the built-in Raspberry Pi cameras to stream crisp 1080p video, so why is this even a question? Well, the catch is that the Pi Camera itself supports native H.264 encoding. Some webcams do as well, and they are honestly the better choice if you have one, rather than constantly battering the GPU encoder when you don't need to.
However, you may just happen to have an old cheap webcam that only produces MJPEG streams. Those streams are generally too large to pump over the Raspberry Pi's wifi at full fps. Would using the hardware encoder help you?
The Results First
This is why you're here, so let's cut to the chase and compare the two latest Raspberry Pis available, the Pi 4 B and the Pi 3 B+ (we'll throw in the little Pi Zero Wireless for fun too). We'll talk about the two test videos later; suffice it to say, Trackday is easier to encode and closer to what an average webcam would produce, while Artist is more of a torture test.
Boom! The Raspberry Pi 4 B is right in the butter zone. Most 30fps webcams would be handled just fine by the Pi 4 (depending on the quality of the sensor and what you're filming). The Pi 3 B+ isn't terrible, but it wouldn't be able to encode a realtime stream smoothly.
The little Pi Zero? Well, it did its best and we’re proud of it!
The first video, Trackday, was captured from a dash cam in a car on a racetrack. It is 1920×1080 at 30fps.
The original bitrate was 10.5MB/s and was cut down to 5MB/s in all our encodes.
The second file, artwork in progress by Clara Griffith, is also 1920×1080 at 30fps. However, it uses the BT.709 color space and started out at 35MB/s!
If you see a webcam that advertises as “HDR” it is most likely using the BT.709 color space as well, and may give your Pi a headache.
This one was also compressed down to only 5MB/s. Why 5MB/s, you ask? Well, as it turns out, on the standard 2.4GHz wifi band the Pi 3 and Pi 4 can each sustain about 6.5MB/s download speed over my wireless. That means I know these videos could be played smoothly over wifi. The Pi Zero W, on the other hand, could only sustain around 3MB/s.
All three systems were set up to use 256MB of GPU RAM.
This actually took me by surprise, to be honest. The quality of the hardware encode is quite good compared to what a software encoder can do. I didn't pull any punches either: the x264 encoder was set to dual pass, using the veryslow preset with the film tune.
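A two-pass x264 run at those settings looks something like this (a sketch: the preset, tune, dual pass, and 5M target are as described above; the filenames are placeholders):

# pass 1 only gathers statistics; the video output is discarded
ffmpeg -y -i trackday_source.mp4 -c:v libx264 -preset veryslow -tune film -b:v 5M -pass 1 -an -f null /dev/null
# pass 2 performs the real encode using those statistics
ffmpeg -i trackday_source.mp4 -c:v libx264 -preset veryslow -tune film -b:v 5M -pass 2 -an trackday_x264.mp4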
Of the two videos, Trackday is more realistic to what a webcam would capture, and on it the two encoders are near equal. So why did the Artist video come out so much better from the software encode, even though it started at a much higher bitrate? My informed guess is that because the original is so crisp and the content is slow moving, H.264 was able to reuse larger parts of each frame for subsequent frames.
That means the software encoder x264 wins by virtue of being able to effectively use B-frames, whereas the OMX hardware encoder doesn't support B-frames at all. Therefore the Pi is on even ground when B-frames aren't effective, but lags behind when they come into play.
A Note on Pi Camera Native H.264
I have found very little information about which Pi Cameras actually support H.264 natively. I only have "knock off" Raspberry Pi cameras that use the ribbon cable. They all support H.264 streams, which you can check for yourself.
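Assuming the camera shows up as /dev/video0 under the V4L2 driver, the supported format list should include H264:

v4l2-ctl -d /dev/video0 --list-formats-ext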
I was kinda worried they were using some hackery to "pretend" to have native H.264 while actually using the GPU. However, if the Pi Zero results show anything, it's that the GPU encoder has a really hard time with 1080p video, so I do believe the cameras have native support.
Raspberry Pis are wonderful little computers; they just sometimes lack the oomph to get stuff done. That may change with the new Raspberry Pi 4, but what to do with all those old ones? Or how about that pile of old webcams? Well, this article will help turn all of those into a full-on security system. (You can also use a Raspberry Pi camera if you've got one!)
Other posts I have read on this subject often only use motion to capture detection events locally. Sometimes they go a bit further and set up the Raspberry Pi to stream MJPEG as an IP camera, or install MotionEyeOS and make it into a standalone video surveillance system.
With our IP camera, we are going to take it further and encode the video stream locally, then send it over the network via RTSP. This saves huge amounts of bandwidth! It also means the client does not have to re-encode the stream before saving it, distributing the work. That way we can hook it into a larger security suite without draining any of its resources; in this case I will use Blue Iris.
Now, the first thing I am going to do is discourage you. If you don't already have a Pi and a webcam or Pi camera for the cause, don't run out to buy them just for this; it's just not economical. A 1080p WiFi camera with ONVIF capabilities can be had for less than $50. So why do this at all? Because 1) it's all under your control, with no worrying about questionable firmware phoning home, 2) if you already have the equipment, it's another free security eye, and 3) it's fun.
Update: If you’re just looking for results, check out my helper script that does all the work for you!
I'm not going to go into too much detail here. If you haven't already, download Raspbian and get it onto an SD card (I used Raspbian Buster for this tutorial). If you aren't going to connect a display and keyboard to it, make sure to add an empty file named ssh to the root of the boot (SD card) drive. That way you can just SSH to the Raspberry Pi via command line, or PuTTY on Windows.
To start with, we need a place for ffmpeg to send the RTSP stream. Most security systems expect to connect to an RTSP server rather than listening as a server themselves, so we need a middleman.
There are a lot of RTSP server options out there; I wanted a lightweight one that is easy to install and can run on the Pi itself. This is what I run at my own house, so don't think I'm skimping out for this post 😉
UPDATE: I have stopped using the below server, and instead use rtsp-simple-server, which has pre-compiled ARM builds. This is what is used with the helper script. (Not because the Node based one gave me issues; this other one is just much more lightweight and easier to install.)
First off, we need to install Node JS. The easiest way I have found is to use the pre-created scripts that add the proper package links to the apt system for us.
If you are on an armv6 based system, such as the Pi Zero, you will need to do a little extra work to install Node. For armv7 systems, which include anything Raspberry Pi 3 or newer, we will use Node 12. Find out your arm version by running uname -a and seeing if the string "armv6" or "armv7" appears.
Now, let's install Node JS and the other needed tools, such as git and coffeescript. If you want to view the script itself before running it, it is available to view here.
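For armv7, that boils down to something like this (a sketch; the setup script URL is NodeSource's standard one for Node 12, and coffeescript goes on through npm):

# add the NodeSource apt repository and install Node 12
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt-get install -y nodejs git
# the rtsp server is written in CoffeeScript
sudo npm install -g coffeescript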
Once that is complete, we want to download the node rtsp server code and install all its dependencies. Note, I am assuming you are doing this in the root of your home folder, which we will later use as the working directory for the service.
git clone https://github.com/iizukanao/node-rtsp-rtmp-server.git --depth 1
cd node-rtsp-rtmp-server
npm install -d
Now you should be good to go, you can test it out by running:
sudo coffee server.coffee
It takes about 60 seconds or more to start up, so give it a minute before you will see any text. Example output is below.
2019-12-16 14:24:18.465 attachRecordedDir: dir=file app=file
(node:6812) [DEP0005] DeprecationWarning: Buffer() is deprecated ...
2019-12-16 14:24:18.683 [rtmp] server started on port 1935
2019-12-16 14:24:18.691 [rtsp/http/rtmpt] server started on port 80
Simply make sure it starts up, then you can stop it by hitting Ctrl+C. At this point you can also go into the server.coffee file and edit it to your heart's content; however, I keep it standard myself.
Create rtsp server service
You probably want this to start on boot, so let's add it as a systemd service. Copy and paste the following code into /etc/systemd/system/rtsp_server.service.
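Here is a minimal sketch of what that unit file can look like, assuming the repo was cloned to /home/pi/node-rtsp-rtmp-server and coffee ended up at /usr/bin/coffee:

[Unit]
Description=node rtsp-rtmp server
After=network.target

[Service]
Restart=always
WorkingDirectory=/home/pi/node-rtsp-rtmp-server
# listening on port 80 needs root, which is the systemd default user
ExecStart=/usr/bin/coffee server.coffee

[Install]
WantedBy=multi-user.target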
Now we can start it up via the service, and enable it to start on boot.
sudo systemctl start rtsp_server
# Can make sure it works with sudo systemctl status rtsp_server
sudo systemctl enable rtsp_server
Compile FFMPEG with Hardware Acceleration
If you are just using the Raspberry Pi camera, or another camera with built-in h264 or h265 support, you can use the distribution version of ffmpeg instead.
UPDATE: The built-in FFmpeg now has hardware acceleration compiled in, so you can skip the compilation, or use my helper script to compile it for you with a lot of extras.
This is going to take a while to build. I suggest reading a good blog post or watching some Red vs Blue while it compiles. This guide is just a small modification of another one. We are also adding the libfreetype font package so we can draw text (like a datetime) onto the video stream, as well as the default libx264 so that we can use it with the Pi Camera if you have one.
sudo apt-get install libomxil-bellagio-dev libfreetype6-dev libmp3lame-dev checkinstall libx264-dev fonts-freefont-ttf libasound2-dev -y
git clone https://github.com/FFmpeg/FFmpeg.git --depth 1
cd FFmpeg
sudo ./configure --arch=armel --extra-libs="-lpthread -lm" --extra-ldflags="-latomic" --target-os=linux --enable-gpl --enable-omx --enable-omx-rpi --enable-nonfree --enable-libfreetype --enable-libx264 --enable-libmp3lame --enable-mmal --enable-indev=alsa --enable-outdev=alsa
# For old hardware / Pi zero remove the `-j4`
sudo make -j4
When that is finally done, run the steps below to install it. We take the additional precaution of turning it into a standard system package and holding it, so a future upgrade doesn't overwrite our ffmpeg version.
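Something like the following should do it (the checkinstall flags here are my assumption; the apt-mark hold is the important part):

# build ffmpeg into a .deb package and install it in one step
sudo checkinstall --pkgname=ffmpeg --pkgversion="99.99" --default
# stop apt from ever replacing it with the distribution version
sudo apt-mark hold ffmpeg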
Now we need to see what resolutions and FPS the webcam can handle. Be warned, MJPEG streams are much more taxing to encode than some of their counterparts. In this example we are specifically going to look for YUYV 4:2:2 streams, as they are a lot easier to encode. (Unless you see h264, then use that!)
In my small testing group, MJPEG streams averaged only 70% of the FPS of the YUYV, while running the CPU up to 60%. Comparatively, YUYV encoding only took 20% of the CPU usage on average.
v4l2-ctl -d /dev/video0 --list-formats-ext
This pumps out a lot of info. Basically you want to find the subset under YUYV and figure out which resolution and fps you want. Here is an example of some of the ones my webcam supports.
I am going to be using the max resolution of 1280×720 and the highest fps of 10. Now if it looks perfect as is, you can skip to the next section. Though if you need to tweak the brightness, contrast or other camera settings, read on.
Let’s figure out what settings we can play with on the camera.
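v4l2-ctl can list every tunable control, which is presumably what produced the list of options here:

v4l2-ctl -d /dev/video0 --list-ctrls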
Plenty of options, excellent. Now, if you don't have a way to look at the camera output just yet, come back to this part after you have the live stream going. Thankfully, you can change these settings while the stream is running.
The main problems I had with my camera were that it was a little dark and that it liked to auto-focus every 5~10 seconds. So I added the following lines to my rc.local file, though there are various ways to run commands on startup.
# I added these lines right before the exit 0
# dirty hack to make sure v4l2 has time to initialize the cameras
sleep 10  # the delay length is a guess; tune it for your system
v4l2-ctl -d /dev/video0 --set-ctrl focus_auto=0
v4l2-ctl -d /dev/video0 --set-ctrl focus_absolute=12
v4l2-ctl -d /dev/video0 --set-ctrl brightness=135
Now onto the fun stuff!
Real Time Encoding
Now we are going to use the hardware accelerated h264_omx encoder to compress the webcam stream. That is, unless you happen to be using a camera that already supports h264, like the built-in Raspberry Pi camera. If you are lucky enough to have one, you can just copy its output directly to the RTSP stream.
# Only for cameras that support h264 natively!
ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -an -f rtsp rtsp://localhost:80/live/stream
If at any point you receive the error ioctl(VIDIOC_STREAMON) failure : 1, Operation not permitted, go into raspi-config and up the video memory (memory split) to 256 and reboot.
In the code below, make sure to change the -s 1280x720 to your video resolution (you can also use -video_size instead of -s) and both -r 10 occurrences to your frame rate (you can also use -framerate).
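Put together, the command looks roughly like this (a sketch using my resolution and framerate; adjust it to match your camera):

# yuyv422 is the ffmpeg name for the YUYV 4:2:2 format found above
ffmpeg -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M -an -f rtsp rtsp://localhost:80/live/stream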
We are making clear that we only want the yuyv format, as it is the best available for my two cameras; yours may be different. Then we specify what resolution and fps we want. Be warned that if you set one of them wrong, it may seem like it works (it still encodes) but will print an error message to look out for:
[video4linux2] The V4L2 driver changed the video from 1280x8000 to 1280x800
[video4linux2] The driver changed the time per frame from 1/30 to 1/10
The next section is our conversion parameters.
-c:v h264_omx -r <your framerate> -b:v 2M
With -c:v h264_omx we are telling ffmpeg to use the h264 video codec, via the special omx hardware encoder. We are then telling it what frame rate the output should be, -r 10, and specifying the quality with -b:v 2M (aka the bitrate), which determines how much bandwidth will be used when transmitting the video. Play around with different settings like -b:v 500k to see where you want it. You will need a higher bitrate for higher resolutions and framerates, and a lot less for lower ones.
After that, we are telling it to disable audio with -an for the moment. If you do want audio, there is an optional section below going over how to enable that.
-f rtsp rtsp://localhost:80/live/stream
Finally, we are telling it where to send the video, in the expected RTSP format (RTSP is just the transport; the video inside is still plain H.264). Notice that with the RTSP server we can have as many cameras as we like, each with its own sub-URL, so instead of live/stream the endings could be live/camera1 and live/camera2.
Optional, not included in my final service script
As most webcams have built-in microphones, it's easy to add audio to our stream if you want it. First we need to identify our audio device.
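arecord, part of alsa-utils, will list the ALSA capture devices:

arecord -l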
You should get a list of possible devices back, in this case only my webcam is showing up as expected. If you have more than one, make sure you check out the ffmpeg article on “surviving the reboot” so they don’t get randomly re-ordered.
**** List of CAPTURE Hardware Devices ****
card 1: CinemaTM [Microsoft® LifeCam Cinema(TM)], device 0: USB Audio [USB Audio]
Subdevice #0: subdevice #0
Notice it says card 1 at the very beginning of the webcam line, and specifically device 0; that pair is the ID we are going to use to reference it with ffmpeg. I'm going to show the full command first like before, then break it down.
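Roughly like so (a sketch assembled from the pieces explained below, keeping the earlier video settings):

ffmpeg -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -f alsa -ac 1 -ar 44100 -i hw:1,0 -map 0:0 -map 1:0 -c:v h264_omx -r 10 -b:v 2M -c:a aac -b:a 96k -f rtsp rtsp://localhost:80/live/stream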
So to start with, we are adding a new input of type ALSA (Advanced Linux Sound Architecture): -f alsa -i hw:1,0. Because it's a webcam, which generally only has a single channel of audio (aka mono), it needs -ac 1 passed to it, as ffmpeg by default tries to interpret it as stereo (-ac 2). If you get the error cannot set channel count to 1 (Invalid argument), that means it probably does have stereo, so you can remove the option or set it to -ac 2.
Finally, I am setting a custom sampling rate of 44.1kHz, -ar 44100, the same used on CDs. All that gives us the new input of -f alsa -ac 1 -ar 44100 -i hw:1,0.
Next we do a custom mapping to make sure our output streams are set up as we expect. ffmpeg is usually pretty good about doing this by default when there is a single input with video and a single input with audio; this is really just to guard against weird issues. -map 0:0 -map 1:0 says we want the first track from the first source (0:0) and the first track from the second source (1:0).
Finally, the encoding for the audio is set up with -c:a aac -b:a 96k, which says to use the AAC codec with a bitrate of 96k. This could be a lot higher, as the theoretical bitrate of this source is 352k (sample rate × bit depth × channels), but I can't tell the difference past 96k with my mic, which is why I stuck with that.
One gotcha with sound: if the ffmpeg encode can't keep up with the source, i.e. the output fps isn't the same as the input, the audio will probably skip weirdly, so you may need to step down to a lower framerate or resolution.
Optional, but included in my service script
This is optional, but I find it handy to directly add the current timestamp to the stream. I also like to have the timestamp in a box so I can always read it in case the background is close to the same color as the font. Here is what we are going to add into the middle of our ffmpeg command.
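Something along these lines (the colors are stand-ins you can change; the font path comes from the fonts-freefont-ttf package we installed earlier):

-vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%{localtime}':x=8:y=8:fontcolor=black:box=1:boxcolor=white"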
It's a lot of text, but pretty self explanatory. We specify which font file to use, drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf, and that the text will be the local time (make sure you have set your locale right!). Next we start the box 8 pixels in and 8 pixels down from the top left corner. Then we set the font's color, and that it will have a box around it in a different color.
When running ffmpeg as a service, you probably don't want to pollute the logs with standard output info. I also had a random issue with it trying to read from stdin when run as a service, so I added -nostdin for my own sake. You can add these at the start of the command.
-nostdin -hide_banner -loglevel error
You can hide even more if you want to up it to -loglevel panic, but I personally want to see any errors that come up just in case.
Our new full command is a lot in one line, but it gets the job done!
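Assembled from everything above, it ends up something like this (still a sketch; swap in your own resolution, framerate, and stream URL):

ffmpeg -nostdin -hide_banner -loglevel error -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -vf "drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:text='%{localtime}':x=8:y=8:fontcolor=black:box=1:boxcolor=white" -c:v h264_omx -r 10 -b:v 2M -an -f rtsp rtsp://localhost:80/live/stream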
Viewing the stream
When you have the stream running, you can pull up VLC or another network enabled media player and point it to rtsp://raspberrypi:80/live/stream (if you changed your hostname, you will have to use the IP address instead).
When you have the command massaged exactly how you want it, we are going to create a systemd file for it, just like we did for the RTSP server. In this case we will save it to the file /etc/systemd/system/encode_webcam.service, and we will also add the argument -nostdin right after ffmpeg for safety. sudo vi /etc/systemd/system/encode_webcam.service
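A minimal sketch of the unit file, assuming the compiled ffmpeg landed in /usr/local/bin and reusing the example command from above:

[Unit]
Description=Encode webcam to rtsp stream
After=network.target rtsp_server.service

[Service]
Restart=always
RestartSec=5
ExecStart=/usr/local/bin/ffmpeg -nostdin -hide_banner -loglevel error -input_format yuyv422 -f video4linux2 -s 1280x720 -r 10 -i /dev/video0 -c:v h264_omx -r 10 -b:v 2M -an -f rtsp rtsp://localhost:80/live/stream

[Install]
WantedBy=multi-user.target

Then start and enable it just like we did for the rtsp_server service.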
I have looked at a few different security suites for my personal needs, including iSpy (Windows) and ZoneMinder (Linux), but I finally decided upon the industry standard Blue Iris (Windows). I like it because of its feature set: mobile app, motion detection, mobile alerts, NAS and cloud storage, etc. Blue Iris also has a 15 day evaluation period to try before you buy. You don't even need to register or provide credit info!
For our needs, the best part about Blue Iris is that it supports direct to disk recording. That way we don't have to re-encode the stream! So let's get rolling: on the top left, select the first menu and hit "Add new camera".
It will then have a popup to name and configure the camera, here make sure to select the last option “Direct to disk recording”.
Next it will need the network info for the camera, put in the same info as you did for VLC. Blue Iris should auto parse it into the fields it wants, and hit OK.
Voila! Your Raspberry Pi is now added to your security suite!
Now you can have fun setting up recording schedules, motion detection recording, mobile alerts, and more!
If you have been looking at cases for your shiny new Raspberry Pi 4, I am positive you have come across ones that either include heatsinks, a fan, or both. The question is, is that just a needless extra cost, or does it really help?
Well I ended up buying a case that came with both, and decided to test if it was worth running the fan or just keeping it off.
Now, as this is my own hardware, I wasn't about to put undue stress on it. I decided to arbitrarily run the stress tests for two minutes each, which actually turned out to be pretty good at letting the core heat up and even out.
I would then wait for the temperature to return to idle before attempting another run. This was all done in a 26.5°C room. I followed this guide to run the programs stress and cpuburn.
No Fan, No Heatsinks
So by itself, as long as the Raspberry Pi is in the open air, it seemed to just hang in there for the stress test before hitting the 80°C thermal limit. Once that limit is hit, the Raspberry Pi automatically starts throttling the processor speed to cool itself.
No Fan, Heatsinks
We can see that the heatsink helped out minimally when in the open air, but it couldn’t keep up in an enclosed case without moving air.
Fan, No Heatsinks
Now that's a difference! That tiny little fan really does do more than I expected it to.
Fan and Heatsinks
Well-well-well, looky there. Putting everything together really does make a difference.
Conclusion and Chart
Let's make this a little more digestible with a chart.
It's pretty obvious that yes, both fans and heatsinks help. However, if you have to choose just one, pick the fan any day.
I cringed when my wife told me her digital enlarger timer broke. Thankfully it didn’t cost us a lot at the time, but it should have. I was lucky enough to restore a broken one I bought off eBay, no such luck this time. But then I thought, a timer is a really simple thing. Why should buying a new one cost over $200? I decided to build my own, and now you can too for less than $60!
An enlarger timer is very simple: it just needs to turn on the power to the enlarger for an exact period of time and then turn it back off. I currently have mine set for tenth-of-a-second accuracy. I also went ahead and made the darkroom light switch off during the exposure, as the IoT Relay has built-in negative gate logic for one of its plugs.
A quick warning before we begin: I did all this over a year ago, and while it is still working flawlessly, I can’t promise I remember everything. If there is anything missing and you get it working, please leave a comment so I can update the post or code! Also standard disclaimer that I am not an electric engineer and I am not responsible for you hurting your equipment or yourself by trying to follow this guide.
Then, before you plug all the toys into it, make sure the system is up to date. Next we need to enable SPI. Make sure to go through the pre-requisites and the install section, then install the darkroom software.
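In practice that amounts to something like this (the nonint call is the scripted equivalent of the SPI toggle in raspi-config's Interfacing Options menu):

sudo apt update && sudo apt full-upgrade -y
# enable the SPI interface without going through the menus
sudo raspi-config nonint do_spi 0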
Launch sudo raspi-config, go into 3 Boot Options, then B1 Desktop / CLI, and select B2 Console Autologin.
Add the user to the proper groups: sudo usermod -a -G spi,gpio pi
Now power down and unplug the Raspberry Pi, and let's connect stuff! Follow the luma guide to hook up the display, copied below for convenience.
Let's take a look at the Raspberry Pi pin layout. The Pi 2 that I used only has the first 26 GPIO pins, but even if you have a 3 or 4, the layout of the first 26 pins is the same. Below is my PowerPoint diagram for reference. I am going to color the boxes the same as the wires in my pictures so they match up visually.
If you look closely at the image of mine, you'll note I was naughty and put the +5v wire (pin 2) into +3.3v (pin 1) instead. I think I did it on purpose to make the display not as bright, though I can't really remember if that was deliberate or just a mistake; either way, it works for me. Next we're going to add in the IoT Relay.
So now it should kinda look like how I have mine (just with the two positives in different positions at the top).
Running the Enlarger Timer
Now it’s time to turn on the Pi and see if stuff works!
First, run through the example program provided by luma. For my 4×4 display I needed to run:
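The exact invocation depends on your display, but assuming a cascaded MAX7219 matrix driven through luma.led_matrix, it would be along these lines (the repo path and both flags are assumptions based on the luma examples):

git clone https://github.com/rm-hull/luma.examples.git --depth 1
cd luma.examples
sudo -H pip3 install -e .
# --cascaded sets the number of chained 8x8 blocks; orientation may need tweaking
python3 examples/matrix_demo.py --cascaded 4 --block-orientation -90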