Dealing with communications across programs, processes, or even threads can be a real pain in the patella. Each situation usually calls for something slightly different and has to work with a limited set of options. On top of that, a lot of the tutorials I see simply start one program from inside the other (for example with `subprocess`), but that just won't do for my taste.
Just show me the table of which IPC method I need
I’m going to go through options that let two independent programs, started on their own, talk with each other. And when I say talk, I mean able to at least execute tasks in the other’s process and, in most cases, transfer data as well.
Pipes
Probably one of the most old school ways of transferring data is to just pipe it back and forth as needed. This is how terminals and shells interact with programs, by using the standard pipes of `stdin`, `stdout`, and `stderr` (standard in, standard out, standard error).
Every time you `print` you are writing to `stdout`, and it is very common to use this in Python when running another program from within it (i.e. if we did `subprocess.run('bob.py')` we could easily communicate with it via the pipes). These are anonymous pipes that exist only while the program is running. To communicate between different programs you should use a named pipe, which creates a file descriptor that you connect to as an entry point.
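As a quick aside, that anonymous-pipe style looks roughly like this sketch (assuming a hypothetical `bob.py` that reads a line from its `stdin` and prints a reply):

```python
import subprocess

# Run a hypothetical bob.py and talk to it over its anonymous pipes:
# 'input' is fed to its stdin, capture_output collects its stdout/stderr
result = subprocess.run(
    ['python', 'bob.py'],
    input='HOWDY\n',
    capture_output=True,
    text=True,
)
print(result.stdout)  # whatever bob.py printed to its stdout
```

That only works because we started `bob.py` ourselves, which is exactly the situation this article is trying to get away from.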
Here are some short examples showing named pipes in use on Linux. It is also possible on Windows with the pywin32 module, or you can do it yourself with ctypes, but I find it easier to just use other methods on Windows.
Our example will be two programs, the first of which, `Alice.py`, is simply a text converter. The message sent will be in three parts: a starting identifier, X, a four-digit message length, and the message itself. (This is by no means a common standard or practical, just something I made up for a quick example.)
So to send ‘Howdy’, it would look like `X0005HOWDY`, `X` being the identifier of a new message, `0005` denoting the length of the message to come, and `HOWDY` being the message body.
Alice.py
```python
import time
import os

# This is a full path on the file system,
# in this scenario both are run from the same directory
pipename = 'fifo'
os.mkfifo(pipename)

# For non-blocking pipes, you have to be reading from it
# before you can have something else writing to it
pipein = os.open(pipename, os.O_NONBLOCK | os.O_RDONLY)
p = os.fdopen(pipein, 'rb', buffering=0)


# This program will simply make the output of the other program more readable
def converter(message):
    return message.decode('utf-8').replace("_", " ").replace("-", " ").lower()


while True:
    # Wait until we have a message identifier
    new_data = p.read(1)
    if new_data == b'X':
        # Figure out the length of the message
        raw_length = p.read(4)
        # Read and convert the message
        message = p.read(int(raw_length))
        print(converter(message))
    elif new_data:
        # If we read a single byte that isn't an identifier, something went wrong
        raise Exception('Out of sync!')
    else:
        time.sleep(1)
```
That’s all our conversion server is. It creates a pipe that something else can connect to, and will convert the incoming messages to lower case and replace dashes and underscores with spaces.
So let’s have `Bob.py` talk to `Alice.py`, but as Bob is a computer program, he sometimes spits out gobbledygook messages that need some help.
Bob.py
```python
import os

# Connect to the pipe created by Alice.py
pipename = 'fifo'
pipeout = os.open(pipename, os.O_NONBLOCK | os.O_WRONLY)
p = os.fdopen(pipeout, 'wb', buffering=0, closefd=False)


def write_message(message):
    """Convert a string into our message format and send it"""
    length = '{0:04d}'.format(len(message))
    p.write(b'X')
    p.write(length.encode('utf-8'))
    p.write(message.encode('utf-8'))


write_message('TERriBLe_looking-machine-oUTput')
```
Start up `Alice.py` first, then `Bob.py`.
Alice will print out a pretty message of: `terrible looking machine output`
While pipes are super handy for terminal usage and running programs inside each other, it has become uncommon to use named pipes as actual comms between two independent programs, mainly because of the required setup procedure and non-uniformity across operating systems. This has led to more modern, cross-compatible options being preferred.
Pros:
- Fast and Efficient
- No external services
Cons:
- Not cross-platform compatible code
- Difficult to code well
Files
Another ye olde (yet perfectly valid) way to communicate between programs is to simply create files that each program can interpret. Sometimes it’s as simple as having a lockfile: if the lockfile exists, it can serve as a message to other programs to let its owner finish before they do something, or even to stop new instances of the same program from running. For example, running system updates in most Linux environments will create a lock file to make sure that two different update processes aren’t started at the same time.
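That lockfile trick is only a few lines on its own. Here is a minimal sketch (the file name is made up for this example) that refuses to start a second copy of a program and cleans up after itself:

```python
import os
import sys

lockfile = 'myprogram.lock'  # hypothetical name both instances agree on

try:
    # O_EXCL makes the open fail if the file already exists,
    # so only one instance can hold the lock at a time
    fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
except FileExistsError:
    sys.exit('Another instance is already running, stopping here')

try:
    print('Lock acquired, doing the real work')
finally:
    # Clean up, otherwise the next run will refuse to start
    os.close(fd)
    os.remove(lockfile)
```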
It’s possible to take that idea further and share a file or two to actually transfer information. In these examples, the two programs will both work out of the same file, with a lock file for safety. (You can write your own code for file lock control, but I will be using the py-filelock package for brevity.)
There are a lot of possible ways to format the shared file; this example will keep it very basic, giving each command a unique id (used to know whether a command has been run before or not), then the command and its arguments. The same dictionary will also leave room for a result or an error message.
The shared JSON file will have the following format:
{ "commands": { "<random uuid>": { "command": "add", "args": [2,5], "result": 7 # "result" field only exists after server has responded # if "error" key exists instead of "result", something bad happened } } }
The server, `Alice.py`, will then keep looping, waiting for the shared file to change. Once it does, Alice will obtain the lock for safety (so there isn’t any corrupt JSON from writing being interrupted), read the file, run the desired command, and save the result. In a real world scenario the lock would only be obtained during the individual reading and writing phases, so as to keep the lock held by a single program for as short a time as possible. But that complicates the code (you would have to do a second read before writing and only update the sections you ran, in case new ones were added in the meantime) and makes it a bit much for an off the shelf example.
Alice.py
```python
import json
import time
import os

from filelock import FileLock

# Keep track of commands that have been run
completed = []
# Track file size to know when new commands have been added
last_size = 0

lock_file = "shared.json.lock"
lock = FileLock(lock_file)
shared_file = "shared.json"


# Not totally necessary, but if you ever need to raise exceptions
# it's better to create your own than use the built-ins
class AliceBroke(Exception):
    """Custom exception for catching our own errors"""


# Functions that can be executed by Bob.py
def adding(a, b):
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise AliceBroke('Yeah, we are only going to add numbers')
    return a + b


def multiply(a, b):
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise AliceBroke('Yeah, we are only going to multiply numbers')
    return a * b


# Right now just have a way to map the incoming strings from Bob to functions
# This could be expanded to include what arguments it expects for pre-validation
translate = {
    "add": adding,
    "multiply": multiply
}


def main():
    global last_size, completed
    while True:
        # Poor man's file watcher
        file_size = os.path.getsize(shared_file)
        if file_size == last_size:
            # File hasn't changed
            time.sleep(.1)
            continue
        else:
            print(f'File has changed from {last_size} to {file_size} bytes,'
                  f' lets see what they want to do!')
            last_size = file_size
            run_commands()
            last_size = os.path.getsize(shared_file)


def run_commands():
    # Grab the lock, once it is acquired open the file
    with lock, open(shared_file, 'r+') as f:
        data = json.load(f)
        # Iterate over the command keys, if we haven't run it yet, do so now
        for name in data['commands']:
            if name not in completed:
                command = data['commands'][name]['command']
                args = data['commands'][name]['args']
                print(f'running command {command} with args {args}')
                try:
                    data['commands'][name]['result'] = translate[command](*args)
                except AliceBroke as err:
                    # Arguments weren't the type we were expecting
                    data['commands'][name]['error'] = str(err)
                except TypeError:
                    data['commands'][name]['error'] = "Incorrect number of arguments"
                finally:
                    completed.append(name)
        # As we are writing the data back to the same file that is still
        # open, we need to go back to the beginning of it before writing
        f.seek(0)
        json.dump(data, f, indent=2)


if __name__ == '__main__':
    # Create / blank out the shared file
    with open(shared_file, 'w') as f:
        json.dump({"commands": {}}, f)
    last_size = os.path.getsize(shared_file)
    try:
        main()
    finally:
        # Be nice and clean up after ourselves
        os.unlink(shared_file)
        os.unlink(lock_file)
```
Our client, `Bob.py`, will send some simple commands that Alice supports and wait until it gets the answer back.
Bob.py
```python
import json
import time
import uuid

from filelock import FileLock

completed = []
last_size = 0
lock = FileLock("shared.json.lock")
shared_file = "shared.json"


def ask_alice(command, *args, wait_for_answer=True):
    # Create a new random ID for the command
    # could be as simple as incremented numbers
    cmd_uuid = str(uuid.uuid4())
    with lock, open(shared_file, "r+") as f:
        data = json.load(f)
        data['commands'][cmd_uuid] = {'command': command, 'args': args}
        f.seek(0)
        json.dump(data, f)
    if wait_for_answer:
        return get_answer(cmd_uuid)
    return cmd_uuid


def get_answer(cmd_uuid):
    # Wait until we get an answer back for a command
    # Ideally this would be more of an asynchronous callback, but there
    # are plenty of cases where serialized processes like this must happen
    while True:
        with lock, open(shared_file) as f:
            data = json.load(f)
            command = data['commands'][cmd_uuid]
            if 'result' in command:
                return command['result']
            elif 'error' in command:
                raise Exception(command['error'])
        time.sleep(.2)


print(f"Lets add 2 and 5: {ask_alice('add', 2, 5, wait_for_answer=True)}")
print(f"Lets multiply 8 and 5: {ask_alice('multiply', 8, 5, wait_for_answer=True)}")
print("Lets break it and cause an exception!")
ask_alice('add', 'bad', 'data', wait_for_answer=True)
```
Start up `Alice.py` first, then `Bob.py`.
Alice will return:
```
File has changed from 16 to 90 bytes, lets see what they want to do!
running command add with args [2, 5]
File has changed from 103 to 184 bytes, lets see what they want to do!
running command multiply with args [8, 5]
File has changed from 198 to 283 bytes, lets see what they want to do!
running command add with args ['bad', 'data']
```
Bob will return:
```
Lets add 2 and 5: 7
Lets multiply 8 and 5: 40
Lets break it and cause an exception!
Traceback (most recent call last):
  ...
Exception: Yeah, we are only going to add numbers
```
Pros:
- Cross-platform compatible
- Simple to implement and understand
Cons:
- Have to worry about File System security if anything sensitive is being shared
- Programs are now responsible for cleaning up after themselves
Message Queue
Welcome to the 21st century, where message queues serve as quick and efficient ways to transfer commands and information. I like to think of them as always-running middlemen that can survive outside your process.
Here you are spoiled for choice with options: ActiveMQ, ZeroMQ, Redis, RabbitMQ, Sparrow, Starling, Kestrel, Amazon SQS, Beanstalk, Kafka, IronMQ, and POSIX IPC message queues are the ones I know of. You can even use NoSQL databases like MongoDB or CouchDB in a similar manner, though for simple IPC I suggest against using those.
I suggest looking into RabbitMQ’s tutorials to get a good in-depth look at how you can write code for it (and across multiple different languages). RabbitMQ is cross-platform and even provides Windows binaries directly, unlike some others.
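To give a flavor of what those tutorials walk through, here is a rough sketch using the pika client against a local RabbitMQ server with a made-up queue name; in practice the producer and consumer halves would live in separate programs:

```python
import pika

# Connect to a RabbitMQ server running locally and declare a queue
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='tasks')

# Producer side: drop a message onto the queue
channel.basic_publish(exchange='', routing_key='tasks', body=b'hello from Bob')


# Consumer side: register a callback and block waiting for messages
def handler(ch, method, properties, body):
    print(f'Got: {body}')


channel.basic_consume(queue='tasks', on_message_callback=handler, auto_ack=True)
channel.start_consuming()
```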
For my own examples we will use Redis, as they have the best summary I can quote from their website https://redis.io/, yet there is a surprising lack of Python tutorials for it.
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, [etc…]
That’s it: it stores data somewhere that is easily accessed from multiple programs, just what you need for IPC. You can use it as a key-value store (for data transfer and storage) or, as I am about to show, as a publisher / subscriber model, which is better for command and control.
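The key-value side is about as short as it gets. A quick sketch, assuming a Redis server running locally on the default port:

```python
import redis

r = redis.StrictRedis()

# One program leaves some data behind
r.set('latest_status', 'all systems go')

# Any other program on the machine can pick it up later
print(r.get('latest_status'))  # b'all systems go'
```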
`Alice.py` is our service; it simply subscribes to a channel, waits for any updates, and prints them to the screen.
```python
import time

import redis

r = redis.StrictRedis()
p = r.pubsub()


def handler(message):
    print(message['data'].decode('utf-8'))


p.subscribe(my_channel=handler)
thread = p.run_in_thread(sleep_time=0.001)

try:
    # Runs in the background while your program otherwise operates
    time.sleep(1000)
finally:
    # Shut down cleanly when all done
    thread.stop()
```
`Bob.py` is another program that simply publishes a message to that channel.
```python
import redis

r = redis.StrictRedis()
r.publish('my_channel', 'example data'.encode('utf-8'))
```
Start up `Alice.py` first, then `Bob.py`.
`Alice.py` will print `example data`, and can be stopped by pressing CTRL+C.
Pros:
- Built-in publisher / subscriber methodology, don’t have to do manual checks
- Built-in background thread running
- (Can be) Cross-platform compatible
- State can be saved if a program exits unexpectedly (depending on setup)
Cons:
- Requirement for external service
- More overhead
- Have to worry about the external service’s security and how you connect to it
Shared Memory
If written correctly, this can be one of the fastest ways to transfer data or execute tasks between programs, but it is also the most low-level, meaning a lot more coding and error handling yourself.
Think of using memory in the same way as the single shared file with a lock. While one program writes to memory, the other has to wait until it finishes, then it can read it and write its own thing back. (It’s also possible to use multiple sections of memory, but that turns into a lot of example code really fast, so I am holding off.)
So now you have to create the semaphore (basically a memory lockfile) and the mapped memory that each program can access.
On Linux (and Windows with Cygwin) you can use posix_ipc, and check out their example here.
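The general shape of that approach looks roughly like the sketch below; the shared memory and semaphore names are made up, so treat posix_ipc’s own demo as the reference:

```python
import mmap

import posix_ipc

# Create (or open) a named shared memory block and a semaphore to guard it
memory = posix_ipc.SharedMemory('/example_shm', posix_ipc.O_CREAT, size=1024)
semaphore = posix_ipc.Semaphore('/example_sem', posix_ipc.O_CREAT, initial_value=1)

# Map the shared memory into this process; the raw fd is no longer needed after
mapped = mmap.mmap(memory.fd, memory.size)
memory.close_fd()

# Only touch the memory while holding the semaphore
semaphore.acquire()
try:
    mapped.seek(0)
    mapped.write(b'hello from shared memory')
finally:
    semaphore.release()
```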
A lot simpler is to just use the built-in `mmap` module directly and share a file in memory. This example won’t even go as far as creating the lockfile, it just shows off the basics.
`Alice.py` is going to create and write stuff to the memory mapped file.
Alice.py
```python
import time
import mmap
import os

# Create the file and fill it with line ends
fd = os.open('mmaptest', os.O_CREAT | os.O_TRUNC | os.O_RDWR)
os.write(fd, b'\n' * mmap.PAGESIZE)

# Map it to memory and write some data to it, then change it 10 seconds later
buf = mmap.mmap(fd, mmap.PAGESIZE, access=mmap.ACCESS_WRITE)
buf.write(b'now we are in memory\n')
time.sleep(10)

buf.seek(0)
buf.write(b'again\n')
```
`Bob.py` will simply read the content of the memory mapped file, showing that it does know when the contents change.
Bob.py
```python
import mmap
import os
import time

# Open the file for reading only
fd = os.open('mmaptest', os.O_RDONLY)
buf = mmap.mmap(fd, mmap.PAGESIZE, access=mmap.ACCESS_READ)

# Print when the content changes
last = b''
while True:
    buf.seek(0)
    msg = buf.readline()
    if msg != last:
        print(msg)
        last = msg
    else:
        time.sleep(1)
```
This example isn’t the most friendly to run, as you have to start up `Alice.py` and then `Bob.py` within ten seconds after that, but it shows the basics of how memory mapping is very similar to just using a file.
Pros:
- Fast
- Cross-platform compatible code
Cons:
- Can be difficult to write
- Limited size to accessible memory
- Using mmap without posix_ipc will also create a physical file with the same content
Signals
On Linux? Don’t want to send information, just need to toggle state on something? Send a signal!
Imagine you have a service, `Alice`, that anything local can connect to, but you want to be able to tell the clients when the service goes down or comes back up.
In your clients’ code, they should register their process identification number (PID) with `Alice` when they first connect to her, and have handlers that capture custom signals so they know `Alice`’s current state.
Bob.py
```python
import signal
import os

service_running = True
my_pid = os.getpid()

# 'Touch' a file as the name of the program's PID
# in Alice service's special directory
# Make sure to delete this file when your program exits!
open(f"/etc/my_service/pids/{my_pid}", "w").close()


def service_off(signum, frame):
    global service_running
    service_running = False


def service_on(signum, frame):
    global service_running
    service_running = True


signal.signal(signal.SIGUSR1, service_off)
signal.signal(signal.SIGUSR2, service_on)
```
`SIGUSR1` and `SIGUSR2` are signals reserved for custom use, so they are safe to use in this manner without fear of secondary actions happening. (For example, if you send something like `SIGINT`, aka interrupt, it will just kill your program if not caught properly.)
`Alice.py` will then simply go through the directory of PID files when it starts up or shuts down, and send each one the matching signal to let them know whether she’s back online or going down.
```python
import os
import signal


def let_them_know(startup=True):
    # SIGUSR2 tells the clients we are up, SIGUSR1 that we are going down
    signal_to_send = signal.SIGUSR2 if startup else signal.SIGUSR1
    for pid_file in os.listdir("/etc/my_service/pids/"):
        # Put in try catch block for production
        os.kill(int(pid_file), signal_to_send)
```
Pros:
- Super simple
Cons:
- Not cross-platform compatible
- Cannot transfer data
Sockets
The traditional IPC. Every time I look for different IPC methods, sockets always come up. It’s easy to understand why: they are cross-platform and natively supported by most languages.
However, dealing directly with raw sockets is very low-level and requires a lot more coding and configuration. The Python standard library has a great example of an echo server to get you started.
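Condensed down, that sort of echo server looks like the sketch below (the port number is arbitrary); a client connects, and anything it sends comes straight back:

```python
import socket

# Server: accept one connection and echo back whatever it receives
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(('localhost', 50007))
    server.listen()
    conn, addr = server.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

# A client, run from another program, would be roughly:
# with socket.create_connection(('localhost', 50007)) as s:
#     s.sendall(b'Hello')
#     print(s.recv(1024))
```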
But this is batteries-included, everyone-else-has-already-done-the-work-for-you Python. Sure, you could set up everything yourself, or you can use the higher level `Listener`s and `Client`s from the multiprocessing library.
`Alice.py` is hosting a server party, and executing incoming requests.
```python
from multiprocessing.connection import Listener


def function_to_execute(*args):
    """
    Our handler function that will run when an incoming
    request is a list of arguments
    """
    return args[0] * args[1]


with Listener(('localhost', 6000), authkey=b'Do not let eve find us!') as listener:
    # Looping here so that the clients / party goers can
    # always come back for more than a single request
    while True:
        print('waiting for someone to ask for something')
        with listener.accept() as conn:
            args = conn.recv()
            if args == b'stop server':
                print('Goodnight')
                break
            elif isinstance(args, list):
                # Very basic check, must be more secure in production
                print('Someone wants me to do something')
                result = function_to_execute(*args)
                conn.send(result)
            else:
                conn.send(b'I have no idea what you want me to do')
```
`Bob.py` is going to go for just a quick function and then call it a night.
```python
from multiprocessing.connection import Client

with Client(('localhost', 6000), authkey=b'Do not let eve find us!') as conn:
    conn.send([8, 8])
    print(f"What is 8 * 8? Why Alice told me it's {conn.recv()}")

# We have to connect again because Alice can only handle one request at a time
with Client(('localhost', 6000), authkey=b'Do not let eve find us!') as conn:
    # Bob's a party pooper and going to end it for everyone
    conn.send(b'stop server')
```
Start up `Alice.py` first, then run `Bob.py`. Bob will quickly exit with the message `What is 8 * 8? Why Alice told me it's 64`.
Alice will have four total messages:
```
waiting for someone to ask for something
Someone wants me to do something
waiting for someone to ask for something
Goodnight
```
Pros:
- Cross-platform compatible
- Can be extended to RPC
Cons:
- Slower
- More overhead
RPC
Believe it or not, you can use a lot of the methods from remote procedure calls locally. Even sockets and message queues can already be set up to work for either IPC or RPC.
Now, barring IR receivers and transmitters or laser communications, you are probably going to connect remote programs via the internet. That means most RPC standards are going to be networking based, aka built on sockets. So it’s less about deciding which protocol to use, and more about choosing which data transmission standard to use. Two that I have used are JSONRPC, for normal humans, and XMLRPC, for if you are an XML fan that likes sniffing glue and running with scissors 1. There is also SOAP, for XML fans who need their acronyms to also spell something, and Apache Thrift, which I found while doing research for this article but have not touched. Those standards transfer data as text, which makes it easier to read, but inefficient. Newer options, like gRPC, use protocol buffers to serialize the data and reduce overhead.
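Python even ships XMLRPC in the standard library, so a local-only server and client sketch (arbitrary port, an example `add` function) is only a few lines each:

```python
# Alice-style server
from xmlrpc.server import SimpleXMLRPCServer


def add(a, b):
    return a + b


server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(add, 'add')
server.serve_forever()
```

```python
# Bob-style client
from xmlrpc.client import ServerProxy

proxy = ServerProxy('http://localhost:8000')
print(proxy.add(2, 5))  # 7
```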
It’s also very common and easy to just write up a simple HTTP REST interface for your programs, using a lightweight framework like bottle or flask, and communicate that way.
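As a rough sketch with flask (the route and port are made up), the same sort of `add` command could be exposed over HTTP; any other program, or even curl, could then POST `{"a": 2, "b": 5}` to it and get `{"result": 7}` back:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route('/add', methods=['POST'])
def add():
    # Expects a JSON body like {"a": 2, "b": 5}
    data = request.get_json()
    return jsonify(result=data['a'] + data['b'])


if __name__ == '__main__':
    app.run(port=5000)
```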
In the future (if they don’t already exist) I expect to see even more choices with WebSocket or WebRTC based communications.
Summary
Let’s wrap this all up with a comparison table:
Method | Cross Platform Compatible | Requires File or File Descriptor | Requires External Service | Easy to write / read code 2 |
---|---|---|---|---|
Pipes | No | Yes | No | No |
Files | Yes | Yes | No | Yes |
Message Queues | Yes3 | No | Yes | Yes |
Shared Memory | Yes4 | Yes4 | No | No |
Sockets | Yes | No | No | Yes |
Signals | No | No | No | Yes |
Hopefully that at least clears up why sockets are the go-to IPC method, as they have the most favorable traits. For me though, I usually want something either a little more robust, like message queues, or REST APIs, so that it can be used locally or remotely. Which, to be fair, are built on top of sockets.