The Birthday Paradox – The Proof is in the Python

This month my wife’s, my mother’s, and my own birthdays all fall within a span of nine days. What are the odds of that? No idea, I’m no statistician. However, as a developer, I thought it would be fun to prove (or disprove?) the Birthday Paradox with some Python coding.


If you haven’t heard of the Birthday Paradox, it states that as soon as you have 23 random people in a room, there is a 50 percent chance two of them share a birthday. Once the number of people in the room is at least 70, there is a 99.9 percent chance. It sounds counterintuitive, as it takes a full 366 people (a full year + 1) to reach a 100% chance.

So instead of doing this from the statistical side, let’s do it from the experimentation angle. We are going to generate 23 random days in a year and see if there is a duplicate day.

First, we create a function that returns a list of N datetime objects within a given year. We will then use that list to see if there are duplicates within it. In the first case, with 23 people, about half the time we run this function there should be at least one duplicate date.

from random import randint
from datetime import datetime, timedelta

def random_birthdays(number_of_people):
    first_day_of_year = datetime(2017, 1, 1)
    return [first_day_of_year + timedelta(days=randint(0, 365))
            for _ in range(number_of_people)]

We need to run this function many times to get an average of how often there is a duplicate. We’ll create a function that runs it a thousand times and returns the percentage of runs in which a duplicate was found. (As a proportion is between 0 and 1, such as .50, we multiply it by 100 so that .5 reads as a “50% chance” later.)

def determine_probability(number_of_people, run_amount=1_000):
    dups_found = 0
    for _ in range(run_amount):
        birthdays = random_birthdays(number_of_people)
        duplicates = set(x for x in birthdays if birthdays.count(x) > 1)
        if len(duplicates) >= 1:
            dups_found += 1

    return dups_found/run_amount * 100

Feel free to run it even more times than 1,000 for higher accuracy; the most I tried was 100,000, which gave the most consistent results.
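As an aside, the birthdays.count(x) scan re-walks the whole list for every element, which gets slow if you crank run_amount way up. An equivalent set-based check (a minimal sketch, doing the same duplicate detection in one pass) looks like this:

```python
def has_duplicate(birthdays):
    # A set drops duplicates, so any shrinkage means a repeated date
    return len(set(birthdays)) < len(birthdays)

print(has_duplicate([1, 2, 2]))  # True
print(has_duplicate([1, 2, 3]))  # False
```

Swapping this in for the `duplicates = set(...)` line gives identical results with far less work per run.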

Finally, we add that last bit of code that actually calls the function and prints our findings.

if __name__ == '__main__':
    msg = ("{people} Random people have a {chance:.1f}%"
           " chance of having a birthday on the same day")
    for people in (23, 70):
        print(msg.format(people=people, chance=determine_probability(people)))

And wouldn’t ya know it, those fancy statisticians seem to be right.

23 Random people have a 50.4% chance of having a birthday on the same day
70 Random people have a 99.9% chance of having a birthday on the same day
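For the curious, the experiment can also be checked against the closed-form math. A quick sketch using the standard distinct-birthdays formula (this is the statisticians’ side, not part of the experiment above):

```python
from math import perm

def shared_birthday_probability(people, days=365):
    """Exact chance that at least two of `people` share a birthday."""
    if people > days:
        return 1.0  # pigeonhole principle: a collision is guaranteed
    # P(all distinct) = days! / ((days - people)! * days**people)
    return 1 - perm(days, people) / days ** people

print(shared_birthday_probability(23))  # about 0.507
print(shared_birthday_probability(70))  # about 0.999
```

Those exact values line up nicely with the simulated 50.4% and 99.9%.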

At the top of the post, you saw a plot generated by calculating the probabilities for the first 100 people, with red vertical markers at 23 and 70. It is rather smooth due to increasing run_amount to 10,000. Below is a plot of an entire 366 people, but with only 1,000 runs per number of people.


To accomplish this, I used matplotlib and mixed in some multiprocessing to speed up the probability generation. Here is the final entire file.

from random import randint
from datetime import datetime, timedelta
from multiprocessing import Pool, cpu_count

import matplotlib.pyplot as plt

def random_birthdays(number_of_people):
    first_day_of_year = datetime(2017, 1, 1)
    return [first_day_of_year + timedelta(days=randint(0, 365))
            for _ in range(number_of_people)]

def determine_probability(number_of_people, run_amount=1000):
    dups_found = 0
    print(f"Generating day {number_of_people}")
    for _ in range(run_amount):
        birthdays = random_birthdays(number_of_people)
        duplicates = set(x for x in birthdays if birthdays.count(x) > 1)
        if len(duplicates) >= 1:
            dups_found += 1

    return number_of_people, dups_found/run_amount * 100

def plot_yearly_probabilities(max_people, vertical_markers=(23, 70)):
    with Pool(processes=cpu_count()) as p:
        percent_chances = p.map(determine_probability, range(max_people))

    plt.plot([z[1] for z in sorted(percent_chances, key=lambda x: x[0])])

    plt.xlabel("Number of people")
    plt.ylabel('Chance of sharing a birthday (Percentage)')

    for marker in vertical_markers:
        if max_people >= marker:
            plt.axvline(x=marker, color='red')


if __name__ == '__main__':
    plot_yearly_probabilities(366)
    plt.show()

Notice that our generated image lines up near perfectly alongside the statistically generated image on Wikipedia*.

Hope you enjoyed, and maybe even learned something along the way!

* By Rajkiran g (Own work) [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons


Reusables – Part 2: Wrappers

Spice up your code with wrappers! In Python, a wrapper, also known as a decorator, simply encapsulates one function within another function.

def my_func(a, b=2):
    return a, b

def my_other_func(*args, **kwargs):
    # my_other_func "wraps" my_func, and could change
    # the arguments or result along the way
    return my_func(*args, **kwargs)

In Reusables, all the wrappers take arguments, aka they are meta decorators, so you will at minimum have to end them with parens ()s. Let’s take a look at the one I use the most first.
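If you haven’t written a meta decorator before, here is a generic sketch of the pattern (repeat is a made-up example, not part of Reusables): the outer function receives the arguments, the middle one receives the function, and the inner one runs on every call.

```python
from functools import wraps

def repeat(times):                 # outer layer: takes the decorator's arguments
    def decorator(func):           # middle layer: takes the function to wrap
        @wraps(func)
        def wrapper(*args, **kwargs):  # inner layer: runs at call time
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

calls = []

@repeat(times=3)
def greet():
    calls.append(1)
    return "hi"

print(greet())      # hi
print(len(calls))   # 3
```

The trailing parens on `@repeat(times=3)` are exactly why the Reusables wrappers need their `()`s too.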


import reusables

@reusables.time_it()
def test_func():
    import time, random
    time.sleep(random.randint(2, 5))

test_func()
# Function 'test_func' took a total of 5.000145769345911 seconds

time_it documentation
By default the message is printed. However, it can also be sent to a log with a customized message. If log=True it will log to the Reusables logger; you can also specify a logger by name (string) or pass in a logger object directly.


@reusables.time_it(log=True, message="{seconds:.2f} seconds")
def test_time(length):
    return "slept {0}".format(length)

result = test_time(5)
# 2016-11-09 16:59:39,935 - reusables.wrappers  INFO      5.01 seconds

# slept 5

It’s also possible to capture time taken in a list.

my_times = []

@reusables.time_it(log='no_log', append=my_times)
def test_func():
    import time, random
    length = random.random()
    time.sleep(length)

for _ in range(10):
    test_func()

print(my_times)
# [0.4791555858872698, 0.8963652232890809, 0.05607090172793505,
# 0.03099917658380491, 0.6567622821214627, 0.4333975642063024,
# 0.21456404424395714, 0.5723555061358638, 0.0734819056269771,
# 0.13208268856499217]


The oldest wrapper in the Reusables library forces the output of a function to be unique; otherwise it will raise an exception or, if specified, return an alternative output.

import reusables
import random

@reusables.unique(max_retries=100)
def poor_uuid():
    return random.randint(0, 10)

print([poor_uuid() for _ in range(10)])
# [8, 9, 6, 3, 0, 7, 2, 5, 4, 10]

print([poor_uuid() for _ in range(1000)])
# Exception: No result was unique

unique documentation


A very simple wrapper to use while threading. It makes sure a function is only being run once at a time.

import reusables
import time

def func_one(_):
    time.sleep(5)

@reusables.lock_it()
def func_two(_):
    time.sleep(5)

@reusables.time_it(message="test_1 took {0:.2f} seconds")
def test_1():
    reusables.run_in_pool(func_one, (1, 2, 3), threaded=True)

@reusables.time_it(message="test_2 took {0:.2f} seconds")
def test_2():
    reusables.run_in_pool(func_two, (1, 2, 3), threaded=True)


# test_1 took 5.04 seconds
# test_2 took 15.07 seconds
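Under the hood, a locking wrapper like this can be sketched in a few lines with a shared threading.Lock (a generic sketch, not Reusables’ actual code):

```python
import threading
from functools import wraps

def lock_it_sketch(func):
    # One lock shared by every call to the wrapped function,
    # so only one thread can be inside it at a time
    lock = threading.Lock()

    @wraps(func)
    def wrapper(*args, **kwargs):
        with lock:
            return func(*args, **kwargs)
    return wrapper

counter = {"n": 0}

@lock_it_sketch
def bump():
    for _ in range(1000):
        counter["n"] += 1  # safe: the whole function runs under the lock

threads = [threading.Thread(target=bump) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["n"])  # 5000
```

Without the lock, the competing `+=` updates could lose increments; with it, the threads simply queue up.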


It’s good practice to catch and explicitly log exceptions, but sometimes it’s just easier to let a function fail naturally at any point and log the failure for later refinement or debugging.

@reusables.log_exception()
def test():
    raise Exception("Bad")

test()

# 2016-12-26 12:38:01,381 - reusables   ERROR  Exception in test - Bad
# Traceback (most recent call last):
#     File "<input>", line 1, in <module>
#     File "reusables\wrappers.py", line 200, in wrapper
#     raise err
# Exception: Bad


Add the result of the function to the specified queue instead of returning it normally.

import reusables
import queue

my_queue = queue.Queue()

@reusables.queue_it(my_queue)
def func(a):
    return a

func(10)
print(my_queue.get())
# 10

New in 0.9


Catch specified exceptions and return an alternative output or send it to an exception handler.

import reusables

def handle_error(exception, func, *args, **kwargs):
    print(f"{func.__name__} raised {exception} when called with {args}")

@reusables.catch_it(handler=handle_error)
def will_raise(message="Hello"):
    raise Exception(message)

will_raise("Oh no!")
# will_raise raised Oh no! when called with ('Oh no!',)


Once is never enough, keep trying until it works!

@reusables.retry_it(exceptions=(Exception, ), tries=10, wait=1)
def may_fail(dont_do=[]):
    dont_do.append(1)  # the mutable default keeps state between retries
    if len(dont_do) < 6:
        raise OSError("So silly")
    print("Much success!")

may_fail()
# Much success!


This post is a follow-up of Reusables – Part 1: Overview and File Management.

Is it easier to ask for forgiveness in Python?


Is Python more suited for EAFP1 or LBYL2 coding styles? It has been debated across message boards, email chains, Stack Overflow, Twitter and workplaces. It’s not as heated as some other battles (like how spaces are better than tabs, or that nano is indisputably the best terminal editor 3), but people seem to have their own preference on the subject. What many gloss over is the fact that Python as a language doesn’t have a preference; rather, different design patterns lend themselves better to one or the other.

I disagree with the position that EAFP is better than LBYL, or “generally recommended” by Python – Guido van Rossum 4

If you haven’t already, I suggest taking a look at Brett Cannon’s blog post about how EAFP is a valid method with Python that programmers from other languages may not be familiar with. The benefits of EAFP are simple: explicit code, fail fast, succeed faster, and DRY (don’t repeat yourself). In general I find myself using it more than LBYL, but that doesn’t mean it’s always the right way.

Use Case Dependent

First, a little code comparison to see the differences between them. Let’s try to work with a list of lists of strings. Our goal: if the inner list is at least three items long, copy its third item into another list.

messages = [["hi there", "how are you?", "I'm doing fine"]]

# LBYL
out = []
if messages and messages[0] and len(messages[0]) >= 3:
    out.append(messages[0][2])

# EAFP
out = []
try:
    out.append(messages[0][2])
except IndexError as err:
    print(err)  # In the real world, this should be a logging `.exception()`

Both pieces of code are short, and it is easy to follow what they are accomplishing. However, when there are at least three items in the inner list, LBYL will take almost twice the amount of time! When no errors are occurring, the checks are just additional weight slowing the process down.

But if we prune the messages down to [["hi there", "how are you?"]]   then EAFP will be much slower, because it will always try a bad lookup, fail and then have to catch that failure. Whereas LBYL will only perform the check, see it can’t do it, and move on. Check out my speed comparison gist yourself.
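If you don’t want to dig through the gist, here is a condensed version of that comparison you can run yourself (absolute timings will of course vary by machine):

```python
import timeit

messages = [["hi there", "how are you?", "I'm doing fine"]]

def lbyl():
    out = []
    # Check everything up front before touching the data
    if messages and messages[0] and len(messages[0]) >= 3:
        out.append(messages[0][2])
    return out

def eafp():
    out = []
    # Just try it and handle the failure if it comes
    try:
        out.append(messages[0][2])
    except IndexError:
        pass
    return out

print("LBYL:", timeit.timeit(lbyl, number=100_000))
print("EAFP:", timeit.timeit(eafp, number=100_000))
```

Try it once with the three-item inner list, then again after pruning it to two items, and watch the winner flip.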

Some people might think putting both the append and the lookup in the same try block is wrong. However, it is both faster (we only have to do the lookup once) and the proper way (we are only catching a possible IndexError, not anything that could arise from the append).

The real takeaway from this is to know which side of the coin you are dealing with beforehand. Is your incoming data almost always going to be correct? Then I’d lean towards EAFP, whereas if it’s a crap shoot, LBYL may make more sense.

Sometimes one is always faster

There are some instances where one is almost always faster than the other (though speed isn’t everything). For example, file operations are faster with EAFP, because the less IO wait, the better.

Let’s try making a directory when we are not sure whether it has been created yet, while pretending it’s still ye olde times and os.makedirs(..., exist_ok=True) doesn’t exist yet.

import os

# LBYL
if not os.path.exists("folder"):
    os.mkdir("folder")

# EAFP
try:
    os.mkdir("folder")
except FileExistsError:
    pass

In this case, it’s always preferable to use EAFP: it’s faster (even when executed on an M.2 SSD), there are no side effects, and the error is highly specific, so there is no need for special handling.

Be careful though, many times when dealing with files you don’t want to leave something in a bad state, in those cases, you would use LBYL.

What bad things can happen when you don’t ask permission?

Side effects

In many cases, if something fails to execute, the state of the program or associated files might have changed. If it’s not easy to revert to a known good state in the exception clause, EAFP should be avoided.


# DON'T
try:
    with open("file.txt", "w") as f:
        f.write("Hi there!\n")
        f.write(data[10])  # may raise IndexError after the file is already truncated
except IndexError:
    pass  # Don't do, need to be able to get file back to original state

Catching too much

If you are wrapping something with except Exception or, worse, the dreaded bare except:, then you shouldn’t be using EAFP (and if you’re still using a bare except:, you need to read up more on that and stop it!)


# DON'T
try:
    do_something()
except:  # If others see this you will be reported to the Python Secret Police
    pass

Also, sometimes errors that seem specific, like an OSError, can actually cover multiple different child errors that must be parsed through.

# DO
import errno

try:
    os.rmdir("some_folder")  # some file operation that may fail
except FileNotFoundError:  # A child of OSError
    # could be put as 'elif err.errno == errno.ENOENT' below
    log.error("Yo dawg, that directory doesn't exist, "
              "we'll try the backup folder")
except OSError as err:
    if err.errno in (errno.EACCES, errno.EPERM):
        log.error("We don't have permission to access that capt'n! "
                  "We'll try the backup folder")
    else:
        log.exception(f"Unexpected OSError: {err}")
        raise err  # If you don't expect an error, don't keep going.
                   # This isn't Javascript

When to use what?

EAFP (Easier to Ask for Forgiveness than Permission)

  • IO operations (Hard drive and Networking)
  • Actions that will almost always be successful
  • Database operations (when dealing with transactions and can rollback)
  • Fast prototyping in a throw away environment

LBYL (Look Before You Leap):

  • Irrevocable actions, or anything that may have a side effect
  • Operations that may fail more often than they succeed
  • When an exception that needs special attention could be easily caught beforehand

Sometimes you might even find yourself using both for complex or mission-critical applications. Like programming the space shuttle, or getting your code above that 500-line count the teacher wants.


First steps into GUI design with Kivy

Boring Background

I have always had an interest in making a local photo organization tool that supports albums and tagging the way I want to do it. Maybe down the line even allowing others to connect and use it too. So I started on one a few years ago, using what I knew best: a Python REST back-end with an AngularJS web-based front end.

And it worked decently well.

However, I was never happy with its performance past 10K images (and I am dealing with hundreds of thousands), and when I tried halfway through development to switch to Angular 2, everything blew up.

Now that I am older and less stupid, I can face the fact that local data is best served with a native desktop application. It also removes the need to keep up with ever-changing web standards and libraries, in exchange for ones that might last through an entire development process (crazy thought, I know).

Choosing Kivy

To be honest, I didn’t even think about using Kivy for a desktop application at first. I went straight to PySide, as it has less restrictive licensing than PyQt. However, at the time, it didn’t want to play ball with a halfway modern Python version: 3.5 is the oldest I will create new content on (3.7-dev is preferred, or 3.6 for stable). I then looked into alternatives, such as wx and Kivy. Let’s just say a quick look at both of their home pages tells you plenty about which one to trust more with front-end design right off the bat.

Even though Kivy seems to be aimed more at mobile development, including touch capabilities, I have found it to be extremely capable for a desktop application. It is also very appealing to me, as it is no-BS, MIT licensed.

First Steps

Just like any good Python GUI, Kivy was a pain in the arse to install. On Windows, I just ended up grabbing the pre-compiled binaries from a well-loved unofficial binaries page. On my Linux VirtualBox, I ended up having to disable 3D acceleration for it to start.

However, once I was able to start coding, it became stupid simple to get content on the screen fast and manipulate it. After about four hours of coding in one night, I had this:

An image viewer app that could display any directory of images. It included a preview bar from which you could select past or upcoming images, controllable with either mouse clicks or keyboard presses. The actual code for it can be found here (just be aware it isn’t ready for the limelight yet, plenty of fixes to go).

Application Layout

The application is composed of six widget classes overall. Here is a simple visual diagram of what they look like and the Kivy widget classes they use. (The padding is for visual ease only; it is not part of the program or to scale.)

First Lessons learned

The two biggest issues I ran into were proper widget alignment, and the fact that there is no built-in ‘on_hover’ like I am used to with CSS. I wrote up a quick class to do the latter, and am including it here, as it should be a drop-in for others’ Kivy projects.

Kivy Mouse Over Code

from kivy.factory import Factory
from kivy.properties import BooleanProperty, ObjectProperty
from kivy.core.window import Window
from kivy.uix.widget import Widget

class MouseOver(Widget):

    def __init__(self, **kwargs):
        Window.bind(mouse_pos=self._mouse_move)
        self.hovering = BooleanProperty(False)
        self.poi = ObjectProperty(None)
        self.register_event_type('on_hover')
        self.register_event_type('on_exit')
        super(MouseOver, self).__init__(**kwargs)

    def _mouse_move(self, *args):
        if not self.get_root_window():
            return  # Widget is not displayed, nothing to do
        is_collide = self.collide_point(*self.to_widget(*args[1]))
        if self.hovering == is_collide:
            return  # Nothing has changed since last event
        self.poi = args[1]
        self.hovering = is_collide
        self.dispatch('on_hover' if is_collide else 'on_exit')

    def on_hover(self):
        """Mouse over"""

    def on_exit(self):
        """Mouse leaves"""

Factory.register('MouseOver', MouseOver)

Then you simply have to subclass it as well as the original widget class you want your object to be, and override the on_hover and on_exit methods.

class Selector(Button, MouseOver):
    """ Base class for Prev and Next Image Buttons"""

    def on_hover(self):
        self.opacity = .8

    def on_exit(self):
        self.opacity = 0

Widget alignment

Proper widget alignment was fun to figure out, because Kivy has both hard-set width and height properties for objects, as well as a looser size_hint feature. By default, everything has a size_hint of (1, 1); think of this as a percentage of the screen, so (1, 1) == (100% height, 100% width).

Simple, right? Well, now add two of those objects to a base layout. Some layouts, like BoxLayout and GridLayout, will then make each of them 50% of the screen. Others, like FloatLayout, will let each of them take up the entire layer, so one will be hidden. Even more fun: to disable the hint, you manually have to set size_hint to (None, None).

Then on top of that, you can start mixing and matching hints and absolutes with different elements on the screen. I chose to do that in this case, because I always wanted the preview bar of images at the bottom to be only 100px tall, but I wanted the large image to expand to the full height. However, if you set the bar to 100px tall and leave the image at (1, 1) while just raising its starting position 100px from the bottom, the image now runs 100px off the top of the page. So on top of the size_hint you need to add an on_size method that reduces the height of the image object every time the window is resized, like so:

    def on_size(self, obj, size):
        """Make sure all children sizes adjust properly"""
        self.image.height = self.height - 100
        self.next_button.pos = (self.width - 99, self.next_button.hpos)

Notice we also have to make sure that the right-hand button always looks attached to the right side of the screen. (Which I later learned could probably be accomplished with a RelativeLayout, but I am still unsure how that would work compared to everything else I already have with my FloatLayout.)

So the takeaway for me was, even without CSS, you’re going to have fun with alignment issues.

Next Steps

So I have an image viewer, great! But images need organizing into more manageable groups, so I want to create a folder / album view to be able to select at a higher level what to view.

After that I am sure I will want to create a database system to store tags, album info and thumbnails (for faster preview loading).

Python Box 3 – The reddit release

Box has surpassed my expectations in popularity, even appearing in the Python Weekly newsletter and on the Python Bytes podcast. I am very grateful to everyone that uses it and hope that you find it to be another great tool in your Python tool-belt! Also a huge shout out to everyone that has provided feedback or code contributions!

I first posted about Box about a month ago and linked to it on /r/python, which brought great feedback and critique. With such interest, I have been working diligently the past few weeks to try and make it better, and I feel confident it isn’t just better, it’s substantially improved.

Box 3 brings two huge changes to the table. First, Box can now take arguments that allow it to work in new and different ways, such as becoming a default_box or a frozen_box. Second, Box no longer converts all sub dicts into Boxes upon creation, but rather upon retrieval. These two things mean that Box is now more powerful, customizable and faster than ever before.
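The convert-on-retrieval idea itself can be sketched in a few lines (a toy illustration only, not Box’s actual code):

```python
class LazyBox(dict):
    """Toy sketch of convert-on-retrieval: sub dicts are
    only wrapped when they are actually accessed."""

    def __getattr__(self, item):
        try:
            value = self[item]
        except KeyError:
            raise AttributeError(item)
        if type(value) is dict:
            value = LazyBox(value)
            self[item] = value  # cache so the wrap only happens once
        return value

lb = LazyBox({"a": {"b": 1}})
print(lb.a.b)  # 1
```

Creation is just a plain dict copy; the cost of wrapping each sub dict is only paid the first time you reach into it.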

New Features

 Conversion Box

This automatically ‘fixes’ all those pesky keys that normally could not be accessed as an attribute, turning them into safe attribute lookups.

my_box = Box({"Send him away!": True})

my_box["Send him away!"] == my_box.Send_him_away

To be clear, this NEVER modifies the actual dictionary key; it only allows attribute lookup to search for the converted name. This is the only feature that is enabled by default. Also keep in mind it does not modify case or attempt to find ASCII equivalents for extended-charset letters; it simply removes anything outside the allowed range.
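As an illustration only (this is not Box’s actual code), the conversion behaves roughly like turning every disallowed character into a space and joining the remaining pieces with underscores:

```python
import re

def safe_attr(key):
    # Hypothetical sketch of the conversion rule: anything outside
    # [A-Za-z0-9] becomes a space, then the pieces join with underscores
    return "_".join(re.sub(r"[^A-Za-z0-9]", " ", key).split())

print(safe_attr("Send him away!"))  # Send_him_away
print(safe_attr("a~b"))            # a_b
```

Note how the trailing "!" simply disappears rather than leaving a dangling underscore.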

You can also set those values through the attribute. The danger is that if two improper strings would transform into the same attribute, such as a~b and a!b both becoming the attribute a_b, then Box would only modify or retrieve whichever one it happened to find first.

my_box.Send_him_away = False

my_box.items()
# dict_items([('Send him away!', False)])

Default Box

If you don’t like getting errors on failed lookups, or bothering to create all those in-between boxes, this is for you!

default_box = Box(default_box=True)

default_box.a
# <Box: {}>

default_box.a.b.c.d.c.e.f.g.h = "x"
default_box
# <Box: {'a': {'b': {'c': {'d': {'c': {'e': {'f': {'g': {'h': 'x'}}}}}}}}}>

You can specify what you want the default return value to be with default_box_attr, which by default is the Box class. If something callable is specified, it will be returned as an instantiated version; otherwise the specified value is returned as is.

bx3 = Box(default_box=True, default_box_attr=3)

bx3.hello
# 3

However, it then only acts like a standard defaultdict, and you lose the magical recursive creation.

bx3.hello.there
# AttributeError: 'int' object has no attribute 'there'

Frozen Box

Brrrrr, you feel that? It’s a box that’s too cool to modify. Handy if you want to ensure incoming data is not mutilated during its usage. An immutable box cannot have data added or deleted. Like a tuple, it can be hashed if all its contents are immutable; but if it holds mutable content, those inner objects can still be modified and the box is unhashable.

frigid_box = Box({"Kung Fu Panda": "Pure Awesome"}, frozen_box=True)

frigid_box.Kung_Fu_Panda = "Overrated kids movie"
# Traceback (most recent call last):
# ...
# box.BoxError: Box is frozen

hash(frigid_box)
# -156221188

Camel Killer Box

DontYouHateCamelCaseKeys? Well I do too, kill them with fire camel_killer_box=True.

snake_box = Box({"BadKey": "silly value"}, camel_killer_box=True)
assert snake_box.bad_key == "silly value"

This transformation, including lower-casing, is applied to everything, which means the conversion box example above would also come out lower case.

snake_box["Send him away!"] == snake_box.send_him_away

Box It Up

Force the conversion of all sub objects on creation, just like old times. This is useful if you expect to reference every value and every sub-box value, and want to front-load the time at creation instead of during operations. It is also a necessity if you plan to keep a reference to, or re-use, the original dictionary that was boxed.

boxed = Box({"a": {"b": [{"c": "d"}]}}, box_it_up=True)


Dangerous Waters

As stated above, sub box objects are now not recreated until they are retrieved or modified. This means that if you keep references to the original dict and modify it, you may inadvertently be updating the same objects that are in the box.

a = {"a": {"b": {"c": {}}}}
a_box = Box(a)
a_box
# <Box: {'a': {'b': {'c': {}}}}>

a["a"]["b"]["d"] = "2"
a_box
# <Box: {'a': {'b': {'c': {}, 'd': '2'}}}>

So if you plan to keep the original dict around, make sure to box_it_up or do a deepcopy first.

safe_box = Box(a, box_it_up=True)
a["a"]["b"]["d"] = "2"
safe_box
# <Box: {'a': {'b': {'c': {}}}}>

Obviously not an issue if you are creating it from scratch, or with a new and unreferenced dict, just something to be aware of.

Speed and Memory Usage

Something a lot of people were worried about was speed and memory footprint compared to alternatives and the built-in dict. With the previous version of Box, I was of the mindset that it lived to be an aid for scripters, sysadmins and developers while writing code, and I was less worried about performance. Simply put, it was foolish to think that way. If I really want this to be the go-to dot-notation library, it needs to be as fast as, if not faster than, its peers while still providing more functionality.

Because Box is the only dot-notation library I know of that works by converting upon retrieval, it is hard to do a true 1-to-1 test versus the others. To the best of my abilities, I have attempted to capture the angles where the performance differences are most noticeable.

This test uses a modified version of HearthStone card data, available on GitHub, which is 6.57MB and has 2224 keys, with sub dictionaries and data. To gather metrics, I am using reusables.time_it and memory_profiler.profile, and the file can be run yourself. Box is being compared against the two most applicable competitors I know of, addict and DotMap (as Bunch and EasyDict do not account for recursion or have other common features), as well as the built-in dict type. All charts except the memory usage are in seconds.

Lower is always better, these are speed and memory footprint charts

Creation and Conversion

So first off, how long does it take to convert a JSON file to the object type, and back again.

Figure 1: Convert JSON file to object and back to dict


The load is the first thing done; between it and the transformation back into a dictionary, all first-level items are retrieved and 1000 new elements are added to the object. Thanks to the new style of only transforming sub items upon retrieval, Box is greatly sped up, making it very comparable to the built-in dict type, as seen in figure 1.

Load code comparison:

# addict
with open("hearthstone_cards.json", encoding="utf-8") as f:
    ad = Dict(json.load(f))

# DotMap
with open("hearthstone_cards.json", encoding="utf-8") as f:
    dm = DotMap(json.load(f))

# Box
bx = Box.from_json(filename="hearthstone_cards.json", encoding="utf-8")

# dict
with open("hearthstone_cards.json", encoding="utf-8") as f:
    dt = json.load(f)

Value Lookup

The load speed-up does come at the price of a heavier first lookup call for each value. The following graph, figure 2, shows a retrieval of all 2224 first-level values from the object.

Figure 2: Lookup of 2224 first level values


As expected, Box took far and away the most time, but only on the initial lookup, and we are talking about nearly an order of magnitude less than the creation times of the other libraries.

The lookup is done the same with every object type.

for key in obj:
    obj[key]

Inserting items

Inserting 1000 new entities into the objects takes a very small amount of time all around.

Figure 3: Inserting 1000 new dict objects that will be accessible as those native object types


The insert code looks the most different, as the objects behave differently on insertion. Box will insert the new item as a plain dict and not convert it until retrieval. Addict will not automatically make new items into addict.Dicts, so it must be done manually. DotMap behaves as a default dict out of the box, so we can simply access the attribute we want to create directly off the bat.

# addict 
for i in range(1000):
    ad["new {}".format(i)] = Dict(a=i)

# DotMap
for i in range(1000):
    dm["new {}".format(i)]['a'] = i

# Box
for i in range(1000):
    bx["new {}".format(i)] = {'a': i}

# dict
for i in range(1000):
    dt["new {}".format(i)] = {'a': i}

Total Times

To summarize the times, dict is obviously the fastest, but Box is in a comfortable second fastest.

Figure 4: The total amount of seconds for all above scenarios


Memory Footprint

What I was most surprised by was memory usage. I have not had a lot of experience measuring this, so I leave room open for something obvious I am missing, but at first look, it seems that Box uses less memory than even the built-in dict.

Figure 5: Total memory usage in MBs


        Line #    Mem usage    Increment   Line Contents
        85     28.9 MiB      0.0 MiB   @profile()
        86                             def memory_test():
        87     41.9 MiB     13.0 MiB       ad = load_addict()
        88     57.1 MiB     15.2 MiB       dm = load_dotmap()
        89     64.5 MiB      7.4 MiB       bx = load_box()
        90     74.6 MiB     10.1 MiB       dt = load_dict()
        92     74.6 MiB      0.0 MiB       lookup(ad)
        93     74.6 MiB      0.0 MiB       lookup(ad)
        95     74.6 MiB      0.0 MiB       lookup(dm)
        96     74.6 MiB      0.0 MiB       lookup(dm)
        98     75.6 MiB      1.0 MiB       lookup(bx)
        99     75.6 MiB      0.0 MiB       lookup(bx)
       101     75.6 MiB      0.0 MiB       lookup(dt)
       102     75.6 MiB      0.0 MiB       lookup(dt)
       104     76.0 MiB      0.3 MiB       addict_insert(ad)
       105     76.6 MiB      0.6 MiB       dotmap_insert(dm)
       106     76.8 MiB      0.2 MiB       box_insert(bx)
       107     76.9 MiB      0.2 MiB       dict_insert(dt)


Questions I can answer

How does it work like a traditional dict and still take keyword arguments?

You can create a Box object just like you would create a dict; the new keyword arguments simply will not become part of the dict object (they are stored in a hidden ._box_config attribute). They were also given rather unique names, such as ‘conversion_box’ and ‘frozen_box’, so that it would be very rare for them to collide with kwargs others are already using.
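The kwarg-swallowing trick itself is plain Python; here is a toy sketch (not Box’s actual code, and MiniBox is a made-up name): pop your own config names out of kwargs before the dict constructor sees them.

```python
class MiniBox(dict):
    """Toy sketch of swallowing config kwargs before dict.__init__."""

    def __init__(self, *args, **kwargs):
        # Pull our config options out first; what remains is normal dict data
        conf = {"frozen_box": kwargs.pop("frozen_box", False),
                "conversion_box": kwargs.pop("conversion_box", True)}
        self._box_config = conf  # instance attribute, not a dict key
        super().__init__(*args, **kwargs)

mb = MiniBox(a=1, frozen_box=True)
print(mb)                            # {'a': 1}
print(mb._box_config["frozen_box"])  # True
```

Because the options live on the instance rather than in the mapping, `"frozen_box" in mb` is still False, just like a regular dict built with `dict(a=1)`.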

Is it safe to wait to convert dictionaries until retrieval?

As long as you aren’t storing malformed objects that subclass dict, yes.