
# Articles tagged with "python"

In my previous post on hierarchical loss for multi-label classification I gave an implementation of a specific algorithm for calculating the loss between two trees. I then added a quick edit mentioning that "this algorithm doesn't work too well in practice", and today I want to delve into why.

Imagine you want to predict the cities where someone lived based on some data. The hierarchy of locations is a tree with country at the first level, province or state second, and city at its third level. This tree has ca. 195 nodes on its first level and a lot more as we go down the tree.

Let's now say that I was supposed to choose `Argentina.Misiones.Posadas` (which corresponds to a city in Argentina) but I predicted `Congo.Bouenza.Loutété` (which is the 10th most popular city in the Republic of Congo). The loss for this prediction is 0.01, which is surprisingly low - seeing as I wasn't even close to the real answer, I would have expected something near 1. The reason is simple: with ca. 195 countries at the first level, each country node carries a weight of 1/195, and getting both the true and the predicted country wrong costs 2/195 ≈ 0.01.

As we go deeper into the tree, the loss goes down real quick. If I had predicted `Argentina.Chubut.Puerto Madryn` (a city 1900km away in one of the other 23 possible provinces) the loss would be 0.00043, and if I had predicted `Argentina.Misiones.Wanda` (one of the other 64 cities in the correct province) my loss would have been 0.000019. If your tree is deeper than this then you will soon start running into numerical issues.

The problem here is the nature of the problem itself. Because my predictions are multi-label there is no limit to the number of cities where a person may have lived and, simultaneously, no limit to how many cities I may predict. If I predict that a person has lived in every single city in the Americas, from Ward Hunt Island Camp in Canada down to Ushuaia in Argentina and everything in between, but it turns out that the person has lived in every other city in the world, only then would my loss be 1. And if it turns out that the person has briefly lived in `Argentina.Misiones.Posadas` then my loss goes down to ~0.995, because getting one city right also means that I got the country right.

Now you see why this algorithm is very good in theory but not useful in practice: if you are trying to predict one or two points in a big tree then your losses will always be negligible. No matter how wrong your prediction is, the loss for a "normal" person will never be high enough to be useful.

On the other hand, if you are expecting your predictions to cover a good chunk of the tree then this algorithm is still right for you. Otherwise a good alternative is to use the Jaccard distance instead and represent `Argentina.Misiones.Posadas` as the set `{"Argentina", "Argentina.Misiones", "Argentina.Misiones.Posadas"}`. This is not as fair a measure as I would like (it punishes small errors a bit too harshly) but it still works well in practice. You could also look deeper into the paper and see if the non-normalized algorithms work for you.
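To make that alternative concrete, here is a minimal sketch of the Jaccard distance over prefix sets (my own illustration, not code from the paper):

```python
def path_to_set(path):
    # "Argentina.Misiones.Posadas" -> {"Argentina", "Argentina.Misiones",
    #                                  "Argentina.Misiones.Posadas"}
    parts = path.split('.')
    return {'.'.join(parts[:i + 1]) for i in range(len(parts))}

def jaccard_distance(path1, path2):
    # 1 - |intersection| / |union| over the two prefix sets
    set1, set2 = path_to_set(path1), path_to_set(path2)
    return 1 - len(set1 & set2) / len(set1 | set2)

# Sibling cities share two out of four prefixes
print(jaccard_distance("Argentina.Misiones.Posadas",
                       "Argentina.Misiones.Wanda"))    # 0.5
# Completely disjoint paths
print(jaccard_distance("Argentina.Misiones.Posadas",
                       "Congo.Bouenza.Loutété"))       # 1.0
```

Note how a sibling city already costs 0.5: that's the "punishes small errors a bit too harshly" part.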

So, this is a thing that happened:

I was invited to give a talk at the social event organized by LatinX in AI during the NAACL 2021 conference.

I talked about best practices for publishing your code on the internet for everyone to see: starting with how to collaborate with your future self (aka "please write comments"), then with scientists, then with the nice API people who will do the web design for you, and finally directly with end users. I have published the slides in this PDF, and will publish the video (or even better, a transcription) as soon as I get my hands on it.

Update July 11th: the presentation with notes is now available here.

Here's one of those problems that sounds complicated but, when you take a deep dive into it, turns out to be just as complicated as it sounds.

Suppose you build a classifier that takes a book and returns its classification according to the Dewey Decimal System. This classifier would take a book such as "The return of Sherlock Holmes" and classify it as, say, "Fiction".

Of course, life is rarely this easy. This book in particular is more often than not classified as 823.8, "Literature > English > Fiction > Victorian period 1837-1900". The stories, however, were written between 1903 and 1904, meaning that some librarians would rather file it under 823.912, "Literature > English > Fiction > Modern Period > 20th Century > 1901-1945".

Other books are more complicated. Tina Fey's autobiography Bossypants can be classified under any of the following categories:

• Arts and Recreation > Amusements and Recreation > Public Entertainments, TV, Movies > Biography And History > Biography
• Arts and Recreation > Amusements and Recreation > Stage presentations > Biography And History > Biography
• Literature > American And Canadian > Authors, American and American Miscellany > 21st Century

This is known as a hierarchical multi-label classification problem:

• It is hierarchical because the expected classification is part of a hierarchy. We could argue whether Sherlock Holmes should be classified as "Victorian" or "Modern", but we would all agree that either case is not as bad as classifying it under "Natural Science and Mathematics > Chemistry".
• It is multi-label because there is more than one possible valid class. Tina Fey is both a Public entertainer and an American. There is no need to choose just one.
• It is classification because we need to choose the right bin for this book.
• It is a problem because I had to solve it this week and it wasn't easy.

There seems to be exactly one paper on this topic, Incremental algorithms for hierarchical classification, and it is not as easy to read as one would like (and not just because it refers to Section 4 when in reality it should be Section 5). Luckily, this survey on multi-label learning presents a simpler version.

I ended up writing a test implementation to ensure I had understood the solution correctly, and decided that it would be a shame to just throw it away. So here it is. This version separates levels in a tree with '.' characters and is optimized for clarity.

Edit June 17: this algorithm doesn't work too well in practice. I'll write about its shortcomings soon, but until then you should think twice about using it as it is.

```python
#!/usr/bin/python
from collections import defaultdict


def parent(node):
    """ Given a node in a tree, returns its parent node.

    Parameters
    ----------
    node : str
        Node whose parent I'm interested in.

    Returns
    -------
    str
        Parent node of the input node, or None if the input node is already
        a root node.

    Notes
    -----
    In truth, returning '' for root nodes would be acceptable. However,
    None values force us to think really hard about our assumptions at every
    moment.
    """
    parent_str = '.'.join(node.split('.')[:-1])
    if parent_str == '':
        parent_str = None
    return parent_str


def nodes_to_cost(taxonomy):
    """ Calculates the cost associated with errors for every node in a
    taxonomy.

    Parameters
    ----------
    taxonomy : set
        Set of all subtrees that can be found in a given taxonomy.

    Returns
    -------
    dict
        A cost for every possible node in the taxonomy.

    References
    ----------
    Implements the weight function from
    Cesa-Bianchi, N., Zaniboni, L., and Collins, M. "Incremental algorithms
    for hierarchical classification". In Journal of Machine Learning
    Research, pages 31-54. MIT Press, 2004.
    """
    assert taxonomy == all_subtrees(taxonomy), \
        "There are missing subnodes in the input taxonomy"

    # Set of nodes at every depth
    depth_to_nodes = defaultdict(set)
    # How many children a node has
    num_children = defaultdict(int)
    for node in taxonomy:
        depth = len(node.split('.')) - 1
        depth_to_nodes[depth].add(node)
        parent_node = parent(node)
        if parent_node is not None:
            num_children[parent_node] += 1

    cost = dict()
    for curr_depth in range(1 + max(depth_to_nodes.keys())):
        for node in depth_to_nodes[curr_depth]:
            if curr_depth == 0:
                # Base case: root node
                cost[node] = 1.0 / len(depth_to_nodes[curr_depth])
            else:
                # General case: node guaranteed to have a parent
                parent_node = parent(node)
                cost[node] = cost[parent_node] / num_children[parent_node]
    return cost


def all_subtrees(leaves):
    """ Given a set of leaves, ensures that all possible subtrees are
    included in the set too.

    Parameters
    ----------
    leaves : set
        A set of selected subtrees from the overall category tree.

    Returns
    -------
    set
        A set containing the original subtrees plus all possible subtrees
        contained in these leaves.

    Notes
    -----
    Example: if leaves = {"01.02", "01.04.05"}, then the returned value is
    the set {"01", "01.02", "01.04", "01.04.05"}.
    """
    full_set = set()
    for leaf in leaves:
        parts = leaf.split('.')
        for i in range(len(parts)):
            full_set.add('.'.join(parts[:i + 1]))
    return full_set


def h_loss(labels1, labels2, node_cost):
    """ Calculates the hierarchical loss between the given two sets.

    Parameters
    ----------
    labels1 : set
        First set of labels
    labels2 : set
        Second set of labels
    node_cost : dict
        A map between tree nodes and the weight associated with them.

    Returns
    -------
    float
        Loss between the two given sets.

    Notes
    -----
    If you want a loss between 0 and 1, the `nodes_to_cost` function
    implements such a weighting.

    References
    ----------
    The nicer version of the algorithm is to be found in
    Sorower, Mohammad S. "A literature survey on algorithms for multi-label
    learning." Oregon State University, Corvallis (2010).
    """
    # We calculate the entire set of subtrees, just in case.
    all_labels1 = all_subtrees(labels1)
    all_labels2 = all_subtrees(labels2)
    # Symmetric difference between sets
    sym_diff = all_labels1.union(all_labels2) - \
        all_labels1.intersection(all_labels2)
    loss = 0
    for node in sym_diff:
        parent_node = parent(node)
        if parent_node not in sym_diff:
            loss += node_cost[node]
    return loss


if __name__ == '__main__':
    # Simple usage example
    taxonomy = set(["01", "01.01", "01.02", "01.03", "01.04", "01.05",
                    "02", "02.01", "02.02", "02.03", "02.03.01"])
    weights = nodes_to_cost(taxonomy)
    node_1 = set(['01'])
    node_2 = set(['01.01', '02'])
    print(h_loss(node_1, node_2, weights))
```

The compiler as we know it is generally attributed to Grace Hopper, who also popularized the notion of machine-independent programming languages and served as technical consultant in 1959 in the project that would become the COBOL programming language. The second part is not important for today's post, but not enough people know how awesome Grace Hopper was and that's unfair.

It's been at least 60 years since we moved from assembly-only code into what we now call "good software engineering practices". Sure, punching assembly code into perforated cards was a lot of fun, and you could always add comments with a pen, right there on the cardboard like well-educated cavemen and cavewomen (cavepeople?). Or, and hear me out, we could use a well-designed programming language instead with fancy features like comments, functions, modules, and even a type system if you're feeling fancy.

None of these things will make our code run faster. But I'm going to let you in on a tiny secret: the time programmers spend actually coding pales in comparison to the time programmers spend thinking about what their code should do. And that time is dwarfed by the time programmers spend cursing other people who couldn't add a comment to save their life, using variables named `var` and cramming lines of code as tightly as possible because they think it's good for the environment.

The type of code that keeps other people from strangling you is what we call "good code". And we can't talk about "good code" without its antithesis: "write-only" code. The term is used to describe languages whose syntax is, according to Wikipedia, "sufficiently dense and bizarre that any routine of significant size is too difficult to understand by other programmers and cannot be safely edited". Perl was heralded for a long time as the most popular "write-only" language, and it's hard to argue against it:

```perl
open my $fh, '<', $filename or die "error opening $filename: $!";
my $data = do { local $/; <$fh> };
```

This is far from the worst that Perl can look, but it highlights the type of code you get when readability is put aside in favor of shorter, tighter code.

Some languages are more prone to this problem than others. The International Obfuscated C Code Contest is a prime example of the type of code that can be written when you really, really want to write something badly. And yet, I am willing to give C a pass (and even Perl, sometimes) for a couple of reasons:

• C was always supposed to be a thin layer on top of assembly, and was designed to run in computers with limited capabilities. It is a language for people who really, really need to save a couple CPU cycles, readability be damned.
• We do have good practices for writing C code. It is possible to write okay code in C, and it will run reasonably fast.
• All modern C compilers have to remain backwards compatible. While some edge cases tend to go away with newer releases, C wouldn't be C without its wildest, foot-meet-gun features, and old code still needs to work.

Modern programming languages, on the other hand, don't get such an easy pass: if they are allowed to have as many abstraction layers and RAM as they want, have no backwards compatibility to worry about, and are free to follow 60+ years of research in good practices, then it's unforgivable to introduce the type of features that lead to write-only code.

Which takes us to our first stop: Rust. Take a look at the following code:

```rust
let f = File::open("hello.txt");
let mut f = match f {
    Ok(file) => file,
    Err(e) => return Err(e),
};
```

This code is relatively simple to understand: the variable `f` contains a file descriptor to the `hello.txt` file. The operation can either succeed or fail. If it succeeded, you can read the file's contents by extracting the file descriptor from `Ok(file)`, and if it failed you can either do something with the error `e` or propagate `Err(e)` further. If you have seen functional programming before, this concept may sound familiar. More importantly: this code makes sense even if you have never programmed in Rust before.

But once we introduce the `?` operator, all that clarity is thrown out the window:

```rust
let mut f = File::open("hello.txt")?;
```

All the explicit error handling that we saw before is now hidden from you. In order to save 3 lines of code, we have now put our error handling logic behind an easy-to-overlook, hard-to-google `?` symbol. It's literally there to make the code easier to write, even if it makes it harder to read.

And let's not forget that the operator also facilitates the "hot potato" style of catching exceptions1, in which you simply... don't:

```rust
File::open("hello.txt")?.read_to_string(&mut s)?;
```

Python is perhaps the poster child of "readability over conciseness". The Zen of Python explicitly states, among other things, that "readability counts" and that "sparse is better than dense". The Zen of Python is not only a great programming language design document, it is a great design document, period.

Which is why I'm still completely puzzled that both f-strings and the infamous walrus operator have made it into Python 3.6 and 3.8 respectively.

I can probably be convinced of adopting f-strings. At their core, they are designed to bring variables closer to where they are used, which makes sense:

```python
"Hello, {}. You are {}.".format(name, age)
f"Hello, {name}. You are {age}."
```

This seems to me like a perfectly sane idea, although not one without drawbacks. For instance, the fact that the `f` is both important and easy to overlook. Or that there's no way to know what the `=` here does:

```python
some_string = "Test"
print(f"{some_string=}")
```

(for the record: it will print `some_string='Test'`). I also hate that you can now mix variables, functions, and formatting in a way that's almost designed to introduce subtle bugs:

```python
print(f"Diameter {2 * r:.2f}")
```

But this all pales in comparison to the walrus operator, an operator designed to save one line of code2:

```python
# Before
my_var = some_value
if my_var > 3:
    print("my_var is larger than 3")

# After
if (my_var := some_value) > 3:
    print("my_var is larger than 3")
```

And what an expensive line of code it was! In order to save one or two variables, you need a new operator that behaves unexpectedly if you forget parentheses, has enough edge cases that even the official documentation brings them up, and led to an infamous dispute that ended up with Python's creator taking a "permanent vacation" from his role. As a bonus, it also opens the door to questions like this one, which is answered with (paraphrasing) "those two cases behave differently, but in ways you wouldn't notice".
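To illustrate the "behaves unexpectedly if you forget parentheses" part, here is a small self-contained example (my own, not from the linked discussion):

```python
numbers = [1, 2, 3]

# With parentheses, x is bound to len(numbers) and then compared to 2:
if (x := len(numbers)) > 2:
    print(x)        # prints 3

# Move the parenthesis and the walrus binds the whole comparison instead,
# i.e. y := (len(numbers) > 2), because := has very low precedence:
if (y := len(numbers) > 2):
    print(y)        # prints True, not 3
```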

I think software development is hard enough as it is. I cannot convince the Rust community that explicit error handling is a good thing, but I hope I can at least persuade you to really, really use this type of construction only when it is the only alternative that makes sense.

Source code is not for machines - they are machines, and therefore they couldn't care less whether we use tabs, spaces, one operator, or ten. So let your code breathe. Make the purpose of your code obvious. Life is too short to figure out whatever it is that the K programming language is trying to do.

## Footnotes

• 1: Or rather "exceptions", as mentioned in the RFC
• 2: If you're not familiar with the walrus operator, this link gives a comprehensive list of reasons both for and against.

April 21: see the "Update" section at the end for a couple extra details.

A very common operation when programming is iterating over elements with a nested loop: iterate over all entities in a collection and, for each element, perform a second iteration. A simple example in a bank setting would be a job where, for each customer in our database, we want to sum the balance of each individual operation that the user performed. In pseudocode, it could be understood as:

```
for each customer in database:
    customer_balance = 0
    for each operation in that customer:
        customer_balance = customer_balance + operation.value
    # At this point, we have the balance for this one customer
```

Of all the things that Python does well, this is one where Python makes it very easy for users to get it wrong. But for new users it might not be entirely clear why. In this article we'll explore what the right way is and why, by following a simple experiment: we create an n-by-n list of random values, measure how long it takes to sum all of its elements three times, and display the average time in seconds to see which algorithm is the fastest.

```python
import random

values = [[random.random() for _ in range(n)] for _ in range(n)]
```
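The timing code itself is not shown in this post; a minimal harness along these lines (the `measure` helper and `total` function are my own) matches the described setup of averaging three runs:

```python
import random
import time

def measure(fn, repetitions=3):
    """Runs fn() several times and returns the average wall time in seconds."""
    times = []
    for _ in range(repetitions):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

n = 1000  # the post uses values of n up to 20000
values = [[random.random() for _ in range(n)] for _ in range(n)]

def total(values):
    # Plain nested sum, used here just to exercise the harness
    acum = 0
    for row in values:
        for cell in row:
            acum += cell
    return acum

print(f"{measure(lambda: total(values)):.5f} seconds")
```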

We will try several algorithms for the sum, and see how they improve over each other. Method 1 is the naive one, in which we implement a nested for-loop using variables as indices:

```python
acum = 0
for i in range(n):
    for j in range(n):
        acum += values[i][j]
```

This method takes 42.9 seconds for n=20000, which is very bad. The main problem here is the use of the `i` and `j` variables. Python's dynamic types and duck typing means that, at every iteration, the interpreter needs to check...

• ... what the type of `i` is
• ... what the type of `j` is
• ... whether `values` is a list
• ... whether `values[i][j]` is a valid list entry, and what its type is
• ... what the type of `acum` is
• ... whether `values[i][j]` and `acum` can be summed and, if so, how - summing two strings is different from summing two integers, which is also different from summing an integer and a float.

All of these checks make Python easy to use, but they also make it slow. If we want to get reasonable performance, we need to get rid of as many variables as possible.
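You can see this machinery for yourself with the `dis` module from the standard library: every subscript and addition in the loop body compiles to its own generic bytecode instruction that has to inspect its operands at runtime (the exact opcode names vary between Python versions):

```python
import dis

def naive_sum(values, n):
    # Method 1 from this post, wrapped in a function so we can disassemble it
    acum = 0
    for i in range(n):
        for j in range(n):
            acum += values[i][j]
    return acum

# Prints the bytecode; each [i][j] lookup and each += is a separate
# dynamically-dispatched instruction
dis.dis(naive_sum)
```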

Method 2 still uses a nested loop, but now we get rid of the indices and iterate directly over the lists instead:

```python
acum = 0
for row in values:
    for cell in row:
        acum += cell
```

This method takes 17.2 seconds, which is a lot better but still kind of bad. We have reduced the number of type checks (from 4 to 3), we removed two unnecessary objects (by getting rid of `range`), and `acum += cell` only needs one type check. Given that we still need checks for `cell` and `row`, we should consider getting rid of them too. Method 3 and Method 4 are alternatives that use even fewer variables:

```python
# Method 3
for i in range(n):
    acum += sum(values[i])

# Method 4
for row in values:
    acum += sum(row)
```

Method 3 takes 1.31 seconds, and Method 4 pushes it even further with 1.27 seconds. Once again, removing the `i` variable speeds things up, but it's the `sum` function where the real performance gain comes from.

Method 5 replaces the first loop entirely with the `map` function.

```python
acum = sum(map(lambda x: sum(x), values))
```

This doesn't really do much, but it's still good: at 1.30 seconds, it is faster than Method 3 (although barely). We also don't have much code left to optimize, which means it's time for the big guns.

NumPy is a Python library for scientific applications. NumPy has a stronger type check (goodbye duck typing!), which makes it not as easy to use as "regular" Python. In exchange, you get to extract a lot of performance out of your hardware.

NumPy is not magic, though. Method 6 replaces the nested list `values` defined above with a NumPy array, but uses it in a dumb way.

```python
import numpy as np

array_values = np.random.rand(n, n)
for i in range(n):
    for j in range(n):
        acum += array_values[i][j]
```

This method takes an astonishing 108 seconds, making it by far the worst performing of all. But fear not! If we make it just slightly smarter, the results will definitely pay off. Take a look at Method 7, which looks a lot like Method 5:

```python
acum = sum(sum(array_values))
```

This method takes 0.29 seconds, comfortably taking the first place. And even then, Method 8 can do better with even less code:

```python
acum = np.sum(array_values)
```

This brings the code down to 0.16 seconds, which is as good as it gets without any esoteric optimizations.

As a baseline, I've also measured the same code in single-threaded C code. Method 9 implements the naive method:

```c
float **values;
// Initialization of 'values' skipped

for(i = 0; i < n; i++)
{
    for(j = 0; j < n; j++)
    {
        acum += values[i][j];
    }
}
```

Method 9 takes 0.9 seconds, which the compiler can optimize to 0.4 seconds if we compile with the `-O3` flag (listed in the results as Method 9b).

All of these results are listed in the following table, along with all the values of `n` I've tried. While results can jump a bit depending on circumstances (memory usage, initialization, etc), I'd say they look fairly stable.

|           | N=10    | N=100   | N=1000  | N=10000  | N=20000   |
|-----------|---------|---------|---------|----------|-----------|
| Method 1  | 0.00001 | 0.00078 | 0.07922 | 8.12818  | 42.96835  |
| Method 2  | 0.00001 | 0.00043 | 0.04230 | 4.34343  | 17.18522  |
| Method 3  | 0.00000 | 0.00004 | 0.00347 | 0.33048  | 1.30787   |
| Method 4  | 0.00000 | 0.00004 | 0.00329 | 0.32733  | 1.27049   |
| Method 5  | 0.00000 | 0.00004 | 0.00329 | 0.32677  | 1.30128   |
| Method 6  | 0.00003 | 0.00269 | 0.26630 | 26.61225 | 108.61357 |
| Method 7  | 0.00001 | 0.00006 | 0.00121 | 0.06803  | 0.29462   |
| Method 8  | 0.00001 | 0.00001 | 0.00031 | 0.03640  | 0.15836   |
| Method 9  | 0.00000 | 0.00011 | 0.00273 | 0.22410  | 0.89991   |
| Method 9b | 0.00000 | 0.00006 | 0.00169 | 0.09978  | 0.40069   |

## Final thoughts

I honestly don't know how to convey to Python beginners what the right way to do loops in Python is. With Python being beginner-friendly and Method 1 being the most natural way to write a loop, running into this problem is not a matter of if, but when. And any discussion of Python that includes terms like "type inference" is likely to go poorly with the crowd that needs it the most. I've also seen advice of the type "you have to do it like this because I say so and I'm right" which is technically correct but still unconvincing.

Until I figure that out, I hope at least this short article will be useful for intermediate programmers like me who stare at their blank screen and wonder "two minutes to sum a simple array? There has to be a better way!".

If you're a seasoned programmer, Why Python is slow answers the points presented here with a deep dive into what's going on under the hood.

## April 21 Update

A couple good points brought up by my friend Baco:

• The results between Methods 3, 4, and 5 are not really statistically significant. I've measured them against each other and the best I got was a marginal statistical difference between Methods 3 and 5, also known as "not worth it".
• Given that they are effectively the same, you should probably go for Method 4, which is the easiest one to read out of those three.
• If you really want to benchmark Python, you should try something more challenging than a simple sum. Matrix multiplication alone will give you different times depending on whether you use liblapack3 or libopenblas as a dependency. Feel free to give it a try!
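If you want to check the significance point yourself, `timeit.repeat` is a simple way to collect multiple samples per method (a sketch of my own; the statistical test itself is left out):

```python
import random
import timeit

n = 500
values = [[random.random() for _ in range(n)] for _ in range(n)]

def method_3():
    acum = 0
    for i in range(n):
        acum += sum(values[i])
    return acum

def method_4():
    acum = 0
    for row in values:
        acum += sum(row)
    return acum

# Five samples of ten runs each; compare the minima, or feed the raw
# samples to a proper significance test
samples_3 = timeit.repeat(method_3, number=10, repeat=5)
samples_4 = timeit.repeat(method_4, number=10, repeat=5)
print(min(samples_3), min(samples_4))
```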