Articles tagged with "ramblings"

How not to sell tickets

Let's say you are a band and you want to keep scalpers from buying all of your tickets and reselling them at ridiculous prices.

By now we know what works: you sell tickets with assigned names on them, and check at the door that the name on the person's ID (or, alternatively, credit card) matches the name on the ticket. You can also offer a "+1 option" for bringing someone along, but they must be accompanied by the person who bought the tickets in the first place. No ID, no entrance.

By contrast, here's what doesn't work:

Screenshot from the band's website: Please don't buy from scalpers or scammers. We have launched a scalper-free ticket exchange.

How do I know that it doesn't work? For starters, because all of the regular $40 tickets are sold out:

Screenshot from the ticket seller's website: ticket at $49.52 sold out

And because the only available tickets are those sold by scalpers at 10 times their value:

Screenshot from another ticket website: tickets starting at $500

You know what I would do if I were a scalper? I would buy as many tickets as I could, and then I would also sign up for the exchange with as many fake addresses as possible. It doesn't matter that I can't use the exchanged tickets afterwards - as long as the supply is reduced, the value of the tickets I bought first will only go up. Since every overpriced ticket offsets the cost of the other 9 I got early, it all works out for me in the end.
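To put rough numbers on that claim, here's a back-of-the-envelope sketch. The figures are made up for illustration (a $50 face value and a 10x markup, echoing the numbers above):

```python
# Hypothetical scalper math: buy 10 tickets at face value,
# resell at 10x. One sale covers the entire outlay.
face_value = 50
tickets_bought = 10
resale_price = 10 * face_value           # $500 per resold ticket

outlay = tickets_bought * face_value     # $500 spent up front
sales_to_break_even = outlay / resale_price
print(sales_to_break_even)               # 1.0 - one sale pays for all ten
```

Everything sold after that first ticket is pure profit, which is why reducing supply (even with tickets you'll never use) is a winning move.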

By now it is well known why ticket companies don't get rid of scalpers: they don't want to. The method I mentioned in the first paragraph is proven to work, but why would ticket companies get rid of scalpers when they can reach deals with them and get a second cut on the same ticket?

Which brings me to my final point. I think there are three types of bands:

  • those that care about their fans and successfully prevent scalpers from getting all the tickets,
  • those that care about their fans but don't know what to do (and/or do it wrong), and
  • those that don't care about scalpers because they get paid anyway (or get paid even more)

If you are a member of the first type of band, I applaud you for standing up for your fans. And if you know someone in the second situation, feel free to forward them this post.

I could share some rough words with those who belong to the third group. But honestly, why bother? It's not like there's a shortage of good musicians. I'll go watch those instead.

April 2022 edit: John Oliver has a funny segment on this topic. You can watch it on YouTube.

Brain dump

Here's a list of short thoughts that are too long for a tweet (or a toot) but too short for a post.

On old computer hardware

I spent the last month of my life fixing my family's computers. That meant installing Roblox on a tablet with 1 GB of RAM, fixing an antivirus on Windows 7, dealing with Alexa in Spanish, and trying to find cheap ink for printers with DRM. Fun fact: HP uses DRM to forbid you from importing ink, and then stopped delivering ink to my family's city.

Modern hardware can have a long, long life, but that won't happen if software developers don't start optimizing their code at least a bit. Sure, Barry Collins may not have a problem with an OS that requires 4 GB of RAM, but I feel I speak for tens of thousands of users when I say that he doesn't know what he's talking about.

On new computer hardware

I know that everyone likes to dump on Mark Zuckerberg, and with good reason: the firm formerly known as Facebook is awful and you should stay away from everything and anything they touch. Having said that, there's a reasonable chance that VR's moment is finally here. If you are a software developer, I encourage you to at least form an informed opinion before the VR train leaves the station.

On movies

I wasn't expecting to enjoy Ready or Not as much as I did. I also wasn't expecting to enjoy a second watch of Inception almost as much as the first time, but both things happened anyway. I was, however, expecting to enjoy Your Name, so no surprises there.

I also got into a discussion about Meat Grinder, a Thai film that is so boring and incoherent that it cured me of bad movies forever. No matter how bad a film is, my brain can always relax and say "sure, it's bad, but at least it's not Meat Grinder". I hold a similar opinion about Funny Games, a movie where even the actors on the poster seem to be ashamed of themselves. At least here I have the backing of film critic Mark Kermode, who called it "a really annoying experience". Take that, people from my old blog who said I was the one who didn't "get it".

On books

Michael Lewis' book Liar's Poker is not as good as The Big Short, but if you read the latter without the former you are doing yourself a disservice. I wasn't expecting to become the kind of person who shudders when reading that "the head of mortgage trading at First Boston who helped create the first CMO, lists it (...) as the most important financial innovation of the 1980s", and yet here we are.

I really, really, really like Roger Zelazny's A Night in the Lonesome October, which is why I'm surprised at how little I liked his earlier, award-winning book This Immortal. I mean, it's not bad, but I wouldn't have tied it with Dune for the 1966 Hugo Award for Best Novel. I think it will end up overtaking House of Leaves in the category of books that disappointed me the most.

And finally, I can't make any progress with Katie Hafner and Matthew Lyon's book Where Wizards Stay Up Late, because every time I try to get back to it I get an irresistible urge to jump onto my computer and start programming. Looks like Masters of Doom will have to keep waiting.

On music

My favorite song that I've discovered this year so far is Haunted by Poe. While looking for more of her music, I learned how thoroughly lawyers and the US music industry destroyed her career. This made me pretty angry until I read that her net worth is well into the millions of dollars, so I guess she came out fine after all. And since the album was written in collaboration with her brother while he was writing "House of Leaves", I guess I did get something out of that book in the end.

Some thoughts on learning about computers

Here's a thought that has been rattling around in my head for the last 10 years or so.

My first word processor was WordStar, but I never got to use it fully. After all, what use does a 10-year-old have for a word processor in 1994?

I then moved on to MicroCom's PC-Flow for several reasons. First and foremost, it was already installed on my PC. Either that or I pirated it early enough not to make a difference - in either case, it had the major advantage of already being there. Second, it was "graphical" - it may have been a program for flow charts, but that didn't stop me from making some awful early attempts at writing music that are now lost to the sands of time and 5.25-inch floppy disks. And third, because it printed - not all of my programs played nice with my printer, but this one did. The end result was that I wrote plenty of silly texts using a program designed for flow charts.

The late 90s were a tumultuous time for my computer skills. At home I had both MS Office '97 and my all-time-favorite Ami Pro, while in high school I had to use a copy of Microsoft Works that was already old at the time. The aughts brought Linux and StarOffice, which would eventually morph into OpenOffice and LibreOffice. Shortly afterwards I also learned LaTeX, making me care about fonts and citations to a degree that I would never have imagined.

And finally, the '10s brought Google Docs and MS Office 365. But the least said about them, the better.

The point of this incomplete stroll down memory lane is the following: if there's a way to mark text as bold in a document, I've done it. Cryptic command? Check. Button in a status bar that changes places across versions? Check. Textual markers that a pre-processor removes before generating the final file? Check. Nothing? Also check. If you ever need to figure out how some piece of software expects you to mark something in bold, I'm your man.

And precisely because this function kept jumping around and changing shape throughout my entire life, I have developed a mental model that differentiates between the objective and the way to get there. If I have a reasonable belief that a piece of software can generate bold text, I'll poke around until I find the magic incantation that achieves this objective. But my story is not everybody's story. Most regular people I know learned how to make bold text exactly once, and then they stuck to that one piece of software for as long as possible. For them, text being bold and that one single button in that one single place are indistinguishable, and if you move the button around they'll suddenly panic because someone has moved their cheese.

I have long wondered what the long-term effect of this "quirk" is on our education system. And the forecasts are not looking good: according to The Verge, students nowadays have trouble even understanding what a file is. Instead of teaching people to understand the relation between presentation and content, we have been abstracting the underlying system away to the point of incomprehension. The fact that Windows 10 makes it so damn hard to select a folder makes me fear that this might be deliberate - I'm not one to think of shady men ruining entire generations in the name of profit, but it's hard to find a better explanation for this specific case.

Based on my experience learning Assembly, pointers, and debugging, I believe the best cure for this specific disease is a top-down approach1 with pen and paper. If I were to teach an "Introduction to computers" class, I would split it into two stages. First, my students would write their intended content down, using their own hands on actual paper. They would then use highlighters to identify headers, text that should be emphasized, sections, and so on. At this stage we would only talk about content while completely ignoring presentation, in order to emphasize that...

  • ... yes, you might end up using bold text both to emphasize a word and for sub-sub headers, but they mean different things.
  • ... once you know what the affordances of a word processor are, all you need to figure out is where the interface has hidden them.

We would then move on to the practical part, using a word processor they have never seen, so the students get a rough idea of what such an interface looks like in real life. And finally, my students would go home and practice with whatever version of MS Office is installed on their computers. If at least one of them tries to align text with multiple spaces only to feel dirty and re-do it the right way, I will consider my class a success.

Would this work? Pedagogically, I think it would. But I am painfully aware that my students would hate it. And good luck selling a computer course that doesn't interact with a computer. It occurs to me that perhaps it could be done as an interactive program, one that "unlocks" interface perks as you learn them. If I'm ever unemployed and with enough time on my hands, I'll give it a try and let you know.


[1] A top-down approach would be learning the concepts first and the implementation details later. Its counterpart would be bottom-up, in which you first learn how to do something and only later learn why you did it. Bottom-up gets your hands dirty earlier, similar to Mr. Miyagi's teaching style, while top-down keeps you from developing bad habits.

Write-only code

The compiler as we know it is generally attributed to Grace Hopper, who also popularized the notion of machine-independent programming languages and served in 1959 as a technical consultant on the project that would become the COBOL programming language. The second part is not important for today's post, but not enough people know how awesome Grace Hopper was, and that's unfair.

It's been at least 60 years since we moved from assembly-only code to what we now call "good software engineering practices". Sure, punching assembly code into perforated cards was a lot of fun, and you could always add comments with a pen, right there on the cardboard, like well-educated cavemen and cavewomen (cavepeople?). Or, and hear me out, we could instead use a well-designed programming language with fancy features like comments, functions, modules, and even a type system if you're feeling fancy.

None of these things will make our code run faster. But I'm going to let you in on a tiny secret: the time programmers spend actually coding pales in comparison to the time programmers spend thinking about what their code should do. And that time is dwarfed by the time programmers spend cursing other people who couldn't add a comment to save their life, using variables named var and cramming lines of code as tightly as possible because they think it's good for the environment.

The type of code that keeps other people from strangling you is what we call "good code". And we can't talk about "good code" without its antithesis: "write-only" code. The term describes languages whose syntax is, according to Wikipedia, "sufficiently dense and bizarre that any routine of significant size is too difficult to understand by other programmers and cannot be safely edited". Perl was long heralded as the most popular "write-only" language, and it's hard to argue against that:

open my $fh, '<', $filename or die "error opening $filename: $!";
my $data = do { local $/; <$fh> };

This is far from the worst that Perl has to offer, but it highlights the type of code you get when readability is put aside in favor of shorter, tighter code.
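For readers who don't speak Perl, here's what that snippet does, spelled out in Python. This is a hedged sketch, not a literal translation (`slurp` and `filename` are names I made up for illustration):

```python
import sys

def slurp(filename):
    """Read a whole file into one string, exiting with an error
    message on failure - the same behavior as the Perl snippet
    (open-or-die, then disable the line separator and read it all)."""
    try:
        with open(filename) as fh:
            return fh.read()  # read the entire file at once
    except OSError as err:
        sys.exit(f"error opening {filename}: {err}")
```

The Perl version packs the same logic into two lines by abusing `local $/` (temporarily unsetting the input record separator), which is exactly the kind of idiom you only recognize if you already know it.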

Some languages are more prone to this problem than others. The International Obfuscated C Code Contest is a prime example of the type of code that can be written when you really, really want to write something badly. And yet, I am willing to give C a pass (and even Perl, sometimes) for a couple of reasons:

  • C was always supposed to be a thin layer on top of assembly, and was designed to run on computers with limited capabilities. It is a language for people who really, really need to save a couple of CPU cycles, readability be damned.
  • We do have good practices for writing C code. It is possible to write okay code in C, and it will run reasonably fast.
  • All modern C compilers have to remain backwards compatible. While some edge cases tend to go away with newer releases, C wouldn't be C without its wildest, foot-meet-gun features, and old code still needs to work.

Modern programming languages, on the other hand, don't get such an easy pass: if they are allowed to have as many abstraction layers and RAM as they want, have no backwards compatibility to worry about, and are free to follow 60+ years of research in good practices, then it's unforgivable to introduce the type of features that lead to write-only code.

Which takes us to our first stop: Rust. Take a look at the following code:

let f = File::open("hello.txt");
let mut f = match f {
    Ok(file) => file,
    Err(e) => return Err(e),
};
This code is relatively simple to understand: the variable f contains a handle to the hello.txt file. The operation can either succeed or fail. If it succeeded, you can read the file's contents by extracting the file handle from Ok(file), and if it failed you can either do something with the error e or propagate Err(e) further up. If you have seen functional programming before, this concept may sound familiar to you. But more importantly: this code makes sense even if you have never programmed in Rust before.

But once we introduce the ? operator, all that clarity is thrown out the window:

let mut f = File::open("hello.txt")?;

All the explicit error handling we saw before is now hidden from view. In order to save three lines of code, we have put our error-handling logic behind an easy-to-overlook, hard-to-google ? symbol. It's literally there to make the code easier to write, even if it makes it harder to read.

And let's not forget that the operator also facilitates the "hot potato" style of catching exceptions1, in which you simply... don't:

File::open("hello.txt")?.read_to_string(&mut s)?;

Python is perhaps the poster child of "readability over conciseness". The Zen of Python explicitly states, among other things, that "readability counts" and that "sparse is better than dense". The Zen of Python is not only a great programming language design document, it is a great design document, period.

Which is why I'm still completely puzzled that both f-strings and the infamous walrus operator have made it into Python 3.6 and 3.8 respectively.

I can probably be convinced to adopt f-strings. At their core, they are designed to bring variables closer to where they are used, which makes sense:

"Hello, {}. You are {}.".format(name, age)
f"Hello, {name}. You are {age}."

This seems to me like a perfectly sane idea, although not one without drawbacks. For instance, the fact that the f is both important and easy to overlook. Or that there's no way to know what the = here does:

some_string = "Test"
print(f"{some_string=}")

(for the record: it will print some_string='Test'). I also hate that you can now mix variables, functions, and formatting in a way that's almost designed to introduce subtle bugs:

print(f"Diameter {2 * r:.2f}")

But this all pales in comparison to the walrus operator, an operator designed to save one line of code2:

# Before
my_var = some_value
if my_var > 3:
    print("my_var is larger than 3")

# After
if (my_var := some_value) > 3:
    print("my_var is larger than 3")

And what an expensive line of code it was! In order to save one or two variables, you need a new operator that behaves unexpectedly if you forget the parentheses, has enough edge cases that even the official documentation brings them up, and led to an infamous dispute that ended with Python's creator taking a "permanent vacation" from his role. As a bonus, it also opens the door to questions like this one, which is answered with (paraphrasing) "those two cases behave differently, but in ways you wouldn't notice".
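That parenthesis pitfall is easy to demonstrate. A minimal sketch (the variable names are made up for illustration):

```python
some_value = 10

# Intended: bind some_value to my_var, then compare the value to 3
if (my_var := some_value) > 3:
    print(my_var)  # prints 10

# Forget the parentheses and the comparison binds first:
# my_var now holds the boolean result, not the value
if my_var := some_value > 3:
    print(my_var)  # prints True
```

Both versions run without complaint, which is precisely what makes the mistake so easy to miss.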

I think software development is hard enough as it is. I cannot convince the Rust community that explicit error handling is a good thing, but I hope I can at least persuade you to really, really use this type of construction only when it is the only alternative that makes sense.

Source code is not for machines - they are machines, and therefore couldn't care less whether we use tabs, spaces, one operator, or ten. So let your code breathe. Make the purpose of your code obvious. Life is too short to figure out whatever it is that the K programming language is trying to do.


  • 1: Or rather "exceptions", as mentioned in the RFC
  • 2: If you're not familiar with the walrus operator, this link gives a comprehensive list of reasons both for and against.

A new and improved homeopathy

You might have heard of this new, alternative take on medicine called Homeopathy. If you haven't, the basic idea is that you take a (possibly active) substance, dilute it with alcohol or distilled water, and repeat the process until only the "vital energy" of the original substance remains. According to Hahnemann, the creator of Homeopathy (or, to be precise, according to the Wikipedia article), each dilution increases the potency of the preparation while ensuring that all traces of the original substance are effectively gone.

The efficacy of this practice has been called into question several times, which to me sounds less like a problem and more like an opportunity: how do we bring Homeopathy into the 21st century?

Enter the Nocebo effect. Unlike its big brother the Placebo effect, the nocebo effect is at play when a treatment has a negative effect simply because the patient believes it will - the common example being patients who suffer "side effects" when receiving an inert substance. While precise numbers are impossible to obtain, around 5% of all patients are considered susceptible to this nocebo effect.

If a nocebo "weakens" a patient's positive response to a medication, and Homeopathy is based on diluting substances, we can combine the two! In what I have decided to call "Martinopathy" in honor of its creator (me), I suggest the following clinical procedure: when a patient is prescribed a Martinopathic treatment for (say) the common cold, they are first directed to a standard pharmacy, where they buy a common, over-the-counter, non-homeopathic cold drug. They are then sent to their physician. The doctor will take a look at the medicine and repeat to the patient "this medicine will not work" around 20 times, after which the patient is free to continue their treatment with their now-Martinopathic medication. In this way the effect of the regular medicine has been "diluted" down to homeopathic standards, but this time in a scientifically sound way.

There is still some room for improvement. If costs are an issue, they could be kept low by martinopathing the medicine at the source - instead of yelling at a patient, a medical professional could yell at the boxes directly on the factory floor. It is not entirely clear whether the medical professional would have to be certified in this new treatment or not. But those are small details that we can sort out after I get my Nobel prize.
