This is an improvement on my previous article, Raspberry Pi Hardware Accelerated RTSP Camera, now with the option of using more modern technology, MPEG-DASH and HLS!
First off, if you don’t care about the technicalities and just want a script to do everything for you, here you go! If you’re still interested in how it all works or want to tweak the settings, read on.
This article will walk you through how to either copy or convert video from your webcam or Pi camera, set it up as a systemd service, and finally view it on a webpage or access it remotely.
MPEG-DASH vs HLS vs RTSP
So to clear this up first of all: these are “containers” that wrap around the actual video, which is encoded with a particular “codec” (such as h264). DASH and RTSP are fully codec agnostic, meaning they are capable of wrapping around any type of video codec. The big gotcha is which codecs the viewer supports (and, in RTSP’s case, the middleman server as well).
So if DASH and RTSP can handle everything, why even bother with HLS? Long story short, Apple, who developed HLS, is a bully, so they don’t support the open MPEG-DASH standard on their devices. That means if you are trying to share these video streams with the public or view them on an Apple device, you will get the most compatibility with HLS.
So now the difference really comes down to the fact that DASH/HLS are HTTP-based protocols that can easily be supported in the browser. This makes it super easy to set up an all-in-one device that hosts its own webpage for viewing the video.
RTSP, by contrast, requires additional software to view it, such as VLC or a security system. The real advantage of RTSP is that it really is nearly “real time”: on my Raspberry Pis, DASH/HLS seem to have a 10 to 20 second delay, compared to about 1 second for RTSP. As I already went over how to set that up, I won’t repeat it here and will only cover DASH. But I personally still use RTSP for my own home setup.
Setting up required software
The two programs you will need are a file server (nginx, apache, python -m http.server, etc.) to host the DASH/HLS content, and ffmpeg. And you don’t have to hand compile either!
Super short version:
sudo apt install nginx ffmpeg -y
To better understand why we need them and how to test them to make sure they are working properly, read on. Otherwise, skip ahead to “Gather Camera Details”.
File Server
Last time we used RTSP, which required a special service of its own. Now we are using HLS and MPEG-DASH, which produce a manifest and accompanying stream files on the local filesystem. For example, MPEG-DASH will create a manifest.mpd file that contains links to *.m4s files in the same directory, which are the chunked-up video files.
That means if we make those files accessible remotely, we can use standard HTTP to transport the video. Hence the need for a basic file server. I personally use nginx for the final setup, as it’s fast, easy to use, and has defaults we can use out of the box. So let’s install it!
sudo apt install nginx -y
Now all you need to do is open up a web browser on another computer on that network and connect to http://raspberrypi. (If you changed the hostname, or are having trouble connecting, run hostname -I on the Pi to see its IP address and use http://<ip_address> instead.) You should see a simple webpage that says “Welcome to nginx!”
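If you’d rather check from the Pi itself before involving another computer, here is a quick sanity check (assuming nginx’s stock default page is still in place):
# fetch the default page locally and confirm nginx is answering
curl -s http://localhost | grep 'Welcome to nginx!'
If that prints the welcome line, nginx is up and serving.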
FFmpeg
Since the last article came out, FFmpeg has finally started shipping with hardware acceleration built in! If you still want to compile in some custom libraries or try and optimize it for your needs, check out my Raspberry Pi FFmpeg compile guide. Otherwise, just download it from the distribution repositories.
sudo apt install ffmpeg -y
You can verify it’s part of the package by checking the encoders for h264_omx.
ffmpeg -hide_banner -encoders | grep omx
That should produce V..... h264_omx OpenMAX IL H.264 video encoder (codec h264) or similar. If for some reason it doesn’t have that or other libraries you are looking for, such as the popular fdk-aac, look into my article on compiling FFmpeg yourself, or use the helper script with the option --compile-ffmpeg.
Gather Camera Details
If you have the helper script, simply run it with the option --camera-info and it will print out each device and its formats, with the highest resolution for each.
sudo python3 streaming_setup.py --camera-info
# /dev/video0: {'yuyv422': '1280x800', 'mjpeg': '1280x720'}
Under the hood, this is running the following command for every device found.
ffmpeg -hide_banner -f video4linux2 -list_formats all -i /dev/video0
To also see what frame rates are supported per resolution, you will have to run the v4l2-ctl command for that device.
v4l2-ctl -d /dev/video0 --list-formats-ext
Create the FFmpeg command
The FFmpeg command is particular about order when talking about input and output details. Ours will be broken down into the following blocks:
ffmpeg <incoming video details> -i <device> <conversion details> <output>
So let’s say you are using a Raspberry Pi camera and want to stream 1080p video without re-encoding it. We first have to tell FFmpeg about the camera details it will pull from.
Applying the Camera Details
In longhand, it would look like this:
-input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0
Hopefully each of those parts is pretty self explanatory. We can also shorten -video_size, aka the incoming resolution, to -s, and -framerate, aka the fps, to -r.
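For reference, here is the same input block written with the short options; it should behave identically to the longhand version above:
-f video4linux2 -input_format h264 -s 1920x1080 -r 30 -i /dev/video0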
Network Bandwidth Considerations
Internet streamers, beware: you may not be able to upload the camera’s full 1080p at 30fps directly. I did a quick test using vnstat over a wired connection with a Pi Zero, and found my 5MP OV5647 camera was using almost 20Mbit/s. Keep in mind the official Pi Camera with the Sony sensor is 8MP, so it may be even higher than that.
The following tests were done as two-minute averages while the stream was being watched. The averages were recorded, and generally the peaks were 2x the average.
Resolution | fps | Average Mbit/s
1920×1080  | 30  | 20.12
1920×1080  | 15  | 5.12
1280×720   | 60  | 15.81
1280×720   | 30  | 3.94
1280×720   | 15  | 2.75
640×480    | 90  | 5.66
640×480    | 60  | 4.43
640×480    | 30  | 0.93
640×480    | 15  | 0.43
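If you want to take your own measurements, a vnstat live sample like this is one way to do it (assuming the Pi streams over eth0; swap in your actual interface):
# watch traffic on eth0 for 120 seconds while the stream is being viewed
vnstat -i eth0 -tr 120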
Conversion options
Since this is already h264, we don’t need anything other than to tell FFmpeg to copy the incoming stream. That option is -codec:v copy, or shorthand -c:v copy, which says set the codec of v (video tracks) to copy, aka don’t convert.
-c:v copy
If you instead had a webcam that only supported mjpeg input, or if you needed to add a text overlay to the video, you would have to re-encode the video. With the Raspberry Pi, you’ll want to use the built in hardware encoder, h264_omx. You would then also have to set the bitrate (-b:v) of the outgoing video. That is really camera / network dependent, but my rule of thumb is video width x height x 2. So 1920 x 1080 x 2 == 4,147,200, so I would set the bitrate to 4M (aka ~4000k, or ~4,000,000 bits per second).
-c:v h264_omx -b:v 4M
The Raspberry Pi OpenMAX (omx) hardware encoder has very limited options, and doesn’t support constant quality or rate factors like libx264 does. So the only way to adjust quality is with the bitrate. As for general quality, it sits between libx264’s ultrafast and superfast presets, which is somewhat disappointing but not surprising for a real-time hardware encoder.
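Putting the re-encode pieces together, here is a sketch of the input and conversion blocks for a hypothetical MJPEG-only webcam at 720p30 (1280 x 720 x 2 is roughly 1.8M by the rule of thumb above; your camera’s supported modes may differ):
# hypothetical MJPEG webcam, re-encoded with the hardware encoder
-input_format mjpeg -f video4linux2 -video_size 1280x720 -framerate 30 -i /dev/video0 -c:v h264_omx -b:v 1.8M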
MPEG-DASH and HLS output
Personally I would never recommend HLS to a friend, as MPEG-DASH is all around a more open and powerful muxer. But I understand some legacy systems don’t have DASH support yet. Thankfully, FFmpeg’s dash muxer gives us HLS for free! (Note that some systems don’t even support that, and you may end up having to use only the hls muxer.)
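If you do end up stuck on an HLS-only system, a minimal sketch using the hls muxer instead might look like this (the segment length and list size here are my assumptions; run ffmpeg -h muxer=hls to see what your build supports):
# keep 10 four-second segments, deleting old ones as it goes
-f hls -hls_time 4 -hls_list_size 10 -hls_flags delete_segments /dev/shm/streaming/stream.m3u8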
DASH and HLS both create playlist files locally, with chunked-up video files beside them. This creates a few problems. The first is the cleanup and management of those files. Thankfully, the DASH muxer has options to delete all those files on exit, as well as the ability to only keep so many video chunks on disk at a time.
The bigger problem is the constant writing to the disk, in this case an SD card, which is a wear item and has higher error rates the more writes it experiences. So to save the SD card, and ourselves future headaches, we are going to write these files to memory instead!
sudo mkdir -p /dev/shm/streaming/
Tada, we now have a folder in shared memory space we can use. The caveat is that it will be removed on restart, so we will have to make sure it’s recreated before our FFmpeg service is started. But let’s not get ahead of ourselves; we just need to know the rest of our FFmpeg command.
-f dash -window_size 10 -remove_at_exit 1 -hls_playlist 1 /dev/shm/streaming/manifest.mpd
We are using just a few of the options FFmpeg’s DASH muxer supports; it has plenty more if you need further customization, though I doubt most cases will.
I am setting the max number of video chunks to be kept at 10 via -window_size, and telling FFmpeg to delete them and the manifest file when it stops running with -remove_at_exit 1. Then we enable HLS with -hls_playlist 1, which creates a master.m3u8 file in the same directory as the manifest.mpd. (Feel free to disable HLS if you don’t need it.)
Putting it all together
If you have a camera with native h264 encoding, like the Pi Camera, here is your copy and paste code!
sudo mkdir -p /dev/shm/streaming/
sudo ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -f dash -window_size 10 -remove_at_exit 1 -hls_playlist 1 /dev/shm/streaming/manifest.mpd
You should soon start seeing messages about the manifest and chunks being updated, as well as the current frame rate.
[dash @ 0x20bff00] Opening '/dev/shm/streaming/manifest.mpd.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/media_0.m3u8.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/chunk-stream0-00003.m4s.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/manifest.mpd.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/media_0.m3u8.tmp' for writing
[dash @ 0x20bff00] Opening '/dev/shm/streaming/chunk-stream0-00004.m4s.tmp' for writing
frame=  631 fps= 30 q=-1.0 size=N/A time=00:00:20.95 bitrate=N/A speed=1.01x
If you are seeing errors like Operation not permitted or Cannot find a proper format, please check your input formats and try lower resolutions. Sometimes cameras list their photo-taking resolutions, which are much higher than their streaming resolutions. If you are still receiving the errors even with the right codec selected, turn the Pi off, check the connections to the camera, and turn it back on, as the camera can sometimes get into a bad state or have a loose wire.
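One way to rule the camera itself in or out (a hedged example, assuming your camera supports 640x480 at 15fps) is to decode a few seconds of video and throw it away; if even this fails, the problem is the camera or its settings rather than the DASH output:
# decode 5 seconds from the camera and discard it, just to prove capture works
ffmpeg -f video4linux2 -video_size 640x480 -framerate 15 -i /dev/video0 -t 5 -f null -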
To add audio or a text overlay like a timestamp please refer to those linked sections of my previous guide!
Setting up Remote Viewing
First we need to allow nginx to serve up that manifest file. By default, nginx serves the /var/www/html directory, so it is easy enough to link our in-memory folder as a subfolder there.
ln -s /dev/shm/streaming /var/www/html/streaming
Then we need to either have a way to view it via a webpage, or connect to it with a remote player such as VLC. If you have VLC or another viewer for DASH content, you can point it at http://raspberrypi/streaming/manifest.mpd and should start seeing the stream! (If you have a custom hostname or want to use the IP, you can run hostname -I and use that in place of raspberrypi.)
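For example, from another machine with VLC installed (substitute your hostname or IP as needed):
vlc http://raspberrypi/streaming/manifest.mpd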
To create a webpage to view the content, we will have to put it in a folder that won’t be deleted on reboot.
I personally chose /var/lib/streaming/index.html, as I will also be putting a script there that will help us set things up again after each reboot. Make sure to create the directory first:
mkdir -p /var/lib/streaming
So open up your favorite text editor and copy the following HTML code into /var/lib/streaming/index.html:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Raspberry Pi Camera</title>
    <style>
        html body .page { height: 100%; width: 100%; }
        video { width: 800px; }
        .wrapper { width: 800px; margin: auto; }
    </style>
</head>
<body>
<div class="page">
    <div class="wrapper">
        <h1>Raspberry Pi Camera</h1>
        <video data-dashjs-player autoplay controls type="application/dash+xml"></video>
    </div>
</div>
<script src="https://cdn.dashjs.org/latest/dash.all.min.js"></script>
<script>
    var player = init();

    function init() {
        var video = document.querySelector("video");
        player = dashjs.MediaPlayer().create();
        player.initialize(video, "manifest.mpd", true);
        player.updateSettings({
            'streaming': {
                'lowLatencyEnabled': true,
                'liveDelay': 1,
                'liveCatchUpMinDrift': 0.05,
                'liveCatchUpPlaybackRate': 0.5
            }
        });
        return player;
    }
</script>
</body>
</html>
Now let’s link it up to the nginx directory.
ln -s /var/lib/streaming/index.html /var/www/html/streaming/index.html
Now you should be able to view your streaming camera webpage at http://raspberrypi/streaming!
We are using the very expansive open source dash.js library, which has a lot of customization options. We are using a few basic ones here to make it better at live streaming, but please check out their project and see how best to tweak it for your needs.
Reboot Script
Now that we have a sexy webpage and a working ffmpeg command, we need to save ’em and make sure they can survive reboots. Therefore, we need to create a script that will run on restart to recreate the folder in memory and link the index file over. I put mine right beside the permanent index file, at /var/lib/streaming/setup_streaming.sh, with the following text.
# /var/lib/streaming/setup_streaming.sh
mkdir -p /dev/shm/streaming
if [ ! -e /var/www/html/streaming ]; then
    ln -s /dev/shm/streaming /var/www/html/streaming
fi
if [ ! -e /var/www/html/streaming/index.html ]; then
    ln -s /var/lib/streaming/index.html /var/www/html/streaming/index.html
fi
Don’t forget to make it executable.
chmod +x /var/lib/streaming/setup_streaming.sh
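It’s worth giving the script a dry run now rather than finding out at the next boot; the ls is just to confirm the links landed where we expect:
sudo bash /var/lib/streaming/setup_streaming.sh
ls -l /var/www/html/streaming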
Now, to run it on restart, we are going to add this script to /etc/rc.local. Open the /etc/rc.local file and add these lines before the exit 0 at the bottom:
# Streaming Shared Memory Setup
if [ -f /var/lib/streaming/setup_streaming.sh ]; then
    /bin/bash /var/lib/streaming/setup_streaming.sh || true
fi
Notice we are being extra, extra careful not to throw errors here, as the top of the rc.local file makes it clear that it should never exit without a clean exit code of 0.
Camera Streaming Service
After we have the location in memory set up, we can start the camera. I chose to do so via a systemd service, so it can restart on errors and is easy to manage. In this example it’s called stream_camera, but you can change the actual service file name to suit your fancy.
Add a new file at /etc/systemd/system/stream_camera.service.
# /etc/systemd/system/stream_camera.service
[Unit]
Description=Camera Streaming Service
After=network.target rc-local.service

[Service]
Restart=always
RestartSec=20s
ExecStart=ffmpeg -input_format h264 -f video4linux2 -video_size 1920x1080 -framerate 30 -i /dev/video0 -c:v copy -f dash -window_size 10 -remove_at_exit 1 -hls_playlist 1 /dev/shm/streaming/manifest.mpd

[Install]
WantedBy=multi-user.target
Notice we set it to run after rc-local to make sure the folder is ready for ffmpeg to write to /dev/shm/streaming.
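Finally, reload systemd so it picks up the new unit, then enable and start the service; these are the standard systemctl commands:
sudo systemctl daemon-reload
sudo systemctl enable stream_camera.service
sudo systemctl start stream_camera.service
# confirm it is up and streaming
systemctl status stream_camera.service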
Comments
Good day Chris. I like your project. I see you know a lot more than me about code.
I built the Raspberry Pi robot from Allan H’s tutorial on YouTube, and that went quite well.
I now have a different project, as I need to see the monitor of a kiosk with as little latency as possible and control the buttons on the kiosk. I would like to know if I can continue using lighttpd as my webserver, as it works with the CGI scripts that control the buttons.
I have an HDMI capture card connected to the Raspberry Pi on /dev/video0; it shows up as an EZCAP U3 Capture when I list devices, and it does MJPEG only. On the case it says USB 3.0 HD capture EZCAP261201908 1080p 60fps capture.
I want to stream 640×480 with a framerate of 30.
I can use ffplay to play the video and ffmpeg to record the MJPEG stream. However, I struggle to send it to the web. I have been at it for 2 weeks and I am pulling my hair out, as I am mostly an electronics guy. This HTML and FFmpeg stuff is getting to me.
Even with mjpeg-streamer I am struggling, as I tried to see if it would work for my needs.
Can I get ffmpeg to convert it to h264 and then send it to nginx to serve to my network?
I tried today with v4l2rtspserver. I can only view it on my Ubuntu machine and not on the phone, as the phone needs h264 input from the card to create the HTTP stream.
Please let me know if you can help.
So the methods in this tutorial are for an HQ stream that can be incorporated into a security suite; it is compressed to save bandwidth, but that causes delay. It sounds like what is more important in your case is being as close to real time as possible.
I would personally just access the MJPEG stream directly on the device with the “Motion” package (tutorial here: https://pimylifeup.com/raspberry-pi-webcam-server/) and then, in your existing webpage with the controls, add a video tag in the HTML like https://stackoverflow.com/a/34024692/3244542