I’ve done a lot of interviews recently, and a common theme among them — and among prior interviews over the years too — is companies that want to go from an existing “legacy” system to a shiny “new” system: They’ve concluded that the existing “legacy” system isn’t meeting their needs, and that a “new” system is necessary.
I call these “version 2.0” projects, because quite a lot of them involve taking an “original” system that’s been keeping the company alive since its inception, and making a replacement for it. I’ve been on several teams doing “version 2.0” projects over the years, and I’ve even started a couple of those projects, and there’s one truth that has been consistently valid among every single one of those exercises:
The recent hullabaloo over Roald Dahl’s works being edited — censored, really — has me utterly incensed. Sure, Roald Dahl was kind of a terrible person, and he wrote several things I too find offensive. But that gives no-one other than Roald Dahl himself the right to censor his writing. He wrote what he wrote, and if you don’t like it, read something else. There are plenty of sanitized, safe, bland, milquetoast books out there if you don’t like having your sensitivities offended.
But it occurred to me that the reason that Puffin Books can get away with this censorship is that they (and Netflix) own the copyright, and by law copyright terms are ridiculously long. It’ll be four decades yet before anyone else can re-publish the stories the way Dahl wrote them.
I don’t want to be a part of this.
But I’m a creator: I make things, I write things, I draw things, I code things, I build things. I’m constantly contributing to a system I never signed up for. In my life a ton of work has been fixed in a tangible medium by me, to use the legal copyright terminology. Per copyright law, I hold the rights to a mountain of content, and because I keep creating, the mountain keeps getting bigger. These very words will join that pile, and if I die just after writing this sentence, my heirs or estate would hold the copyright in it for another 70 years — these words would enter the public domain in February 2093, which is utterly insane.
So I’m making an addendum right here to my last will and testament, and as soon as I’m done typing it, I’m going to print a copy and sign it to give it the force of law. And this addendum is simply this:
Not 70 years. One year. Every story, every essay, every picture, every pixel, every line of code, every last byte, everything I’ve made that could possibly be copyrightable and in which I hold the copyright will be up for grabs to the world one year later.
Does the world want it? Probably not, but you all get it anyway. Once I’m gone, my family gets a year to prepare for its release, and then it belongs to everybody, the whole kit and kaboodle. People can have whatever debates they want over what I might have intended for some picture or some character or some design or whatever, but everybody is free to put their own spin on them all after the one-year mark. Once I’m gone, I don’t have a say in it anymore.
Presumably, I still have a lot of years left in me, and I can state my intentions and control my works for a few more decades. But whenever I’m gone, there’s one year on my copyrights left, and that’s it.
The copyright system is pretty broken, but with this, I believe I’m doing my own small part to help right the ship. Maybe Congress will have some sense someday and shorten copyright terms to match, but until they take care of it, this declaration will have to do.
Beginner and intermediate programmers often think that programming is math. After all, a lot of computer science is math, and computers run on math, and core concepts of the field like Turing Machines and lambda calculus are really pure hard math. You can’t get started in this field without knowing some math.
But programming — or software engineering if you prefer — really isn’t math. Programming, as Donald Knuth rightly noted all the way back in the 1970s, is really literature. We tell the computer stories about how to do things: We write plays, and the computer acts out the play for us. Some of the words that we use in those plays are based on math, but most programming is really a form of storytelling. Code is sometimes compared to poetry, but I think it has the most in common with prose — which is arguably why systems like ChatGPT are so good at it.
So today, my Internet connection went out. The router got stuck overnight, and I rebooted it, and no big deal. Windows, however, still shows this, even though I have a perfectly fine network connection:
If you Google it, you’ll find lots of people have the problem, across multiple versions of Windows, going back years. The solutions vary from “just reboot” to complicated registry hacks to “reinstall Windows.” 🤦‍♂️
I can’t even.
I hear anecdotal stories about “weird problems” like the one above all the time: My friend’s father can’t get the printer to work without reinstalling the drivers every time he uses it. Your cousin’s word processor crashes every time she clicks the “Paste” button and there’s an image on the clipboard. My colleague’s video glitches, but only in a video call with more than three people. And invariably, the solutions are always the same: Reinstall something. This one weird registry hack. Try my company’s cleaning software!
So I’d like to let all of the ordinary, average, nontechnical people in the room in on a little secret:
This is bullshit.
All of it is bullshit. Start to finish. Nearly every answer you hear about how to “fix” your bizarre issues is lies and garbage.
I was asked on Discord today why some languages require semicolons and others don’t, and this is one of those surprisingly deep questions that to the best of my knowledge hasn’t been answered very well elsewhere:
Why do some languages end statements with semicolons?
Why do other languages explicitly not end statements with semicolons?
Why do some languages require them when it seems like the compiler could just “figure it out” — after all, it knows when you’ve forgotten one?
Why are they optional in some languages?
And, of all things, why the weird shape that is the semicolon? Why not | or $ or even ★ instead?
So let’s talk about semicolons, and try to answer this as well as we can.
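Before diving in, here’s a quick illustration of the “optional” case from the list above, using Python as an example (it sits in the middle of the design space: newlines normally end statements, and semicolons are allowed but rarely needed):

```python
# In Python, the end of a line terminates a statement, so semicolons
# are optional: they act only as separators when you deliberately put
# two statements on one line.
x = 1; y = 2          # semicolon as an explicit statement separator
total = x + y         # newline ends the statement; no semicolon needed

# An open bracket suspends the end-of-line rule, so this is still
# parsed as a single statement spanning two lines:
parts = [1,
         2]
```

Languages like this push the “figure it out” work into the lexer, with special rules (like the open-bracket rule above) for the cases where a statement genuinely needs to span lines.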
I work with ideas about computers. I think about the things computers can do, and I try to find ways to make computers do those things better or faster, and I write all those ideas down. And I try to find ways to stop computers from ever being slow, so that we don’t have to make them faster. I also think a lot about if there are things that computers can or can’t do, and if it’s important that computers can or can’t do them. It’s bad for people when computers are slow or when computers can’t do things because we want computers to help us with things we want to do. But making computers do things that they can’t do is hard, and making computers go faster can be hard too. So sometimes I use ideas from other people to make the things computers do faster or better, and sometimes I find my own ideas too, and then I write those ideas down and tell everyone about them so all of us can make computers do more things better.
Lately, I’ve been growing increasingly obsessed with this problem. While my solution is very fast (O(n) is pretty fast!), I’ve been concerned about a few possible issues with it:
First, I wasn’t certain it was anything better than locally-optimal. It’s guaranteed not to produce a bad result, but will it produce a good result? I couldn’t be sure.
Second, it relies on floating-point arithmetic, while most other solutions don’t.
It has the nice upside of being able to operate in constant space (not including the O(k) output space), and linear time, but those two caveats are potentially problematic. If it turned out to be a really bad solution, who’d use it? And the floating-point numbers felt too fuzzy to be safe. So I started poking at it again.
Being a computer scientist is a funny thing. You live on the edge of knowledge in a weird realm that’s not quite mathematics and not quite physical machinery. Like most of the sciences, of course, anything you “discover” was likely already discovered several decades before. But every great once in a while, you bump into a problem that seems to have received very little attention, and you see the existing solutions, and you think, “Huh, it seems like there must be a better way.”
I did that, once, a long time ago: I discovered a really neat technique for computing treemaps on-the-fly for arbitrarily large datasets, and that’s why SpaceMonger 2+ have such incredibly fast renderings. I don’t know for sure if it’s a novel solution, but I haven’t seen anyone else do it. One of these years, I need to properly publish a paper on how that works: I haven’t been intentionally keeping that algorithm private; I just haven’t gotten around to writing up a really good description of how it works.
But today, I’m going to talk about another algorithm entirely: Linear partitioning a sequence of numbers.
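For context, here’s the textbook approach to that problem — splitting a sequence into k contiguous ranges so that the largest range sum is as small as possible. This is not the O(n) floating-point method discussed above (which isn’t described here); it’s a standard binary-search-plus-greedy sketch, and it assumes all the values are positive:

```python
def linear_partition(seq, k):
    """Return the smallest achievable maximum range sum when splitting
    seq into at most k contiguous ranges. Assumes positive values.
    Runs in O(n log(sum)) time and O(1) extra space."""

    def ranges_needed(cap):
        # Greedily pack values into ranges whose sums stay <= cap,
        # counting how many ranges that takes.
        count, running = 1, 0
        for x in seq:
            if running + x > cap:
                count += 1
                running = x
            else:
                running += x
        return count

    # The answer lies between the largest single value and the total;
    # binary-search for the smallest capacity that fits in k ranges.
    lo, hi = max(seq), sum(seq)
    while lo < hi:
        mid = (lo + hi) // 2
        if ranges_needed(mid) <= k:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, `linear_partition([1, 2, 3, 4, 5, 6, 7, 8, 9], 3)` yields 17, splitting the sequence as [1..5], [6, 7], [8, 9] with range sums 15, 13, and 17. A dynamic-programming formulation can recover the split points as well, at the cost of O(k·n) table space.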
✗ The election of November 2020 will be suspended. Thank God it wasn’t. There was a giant mess following it, but we had the election on-time, and it went surprisingly smoothly.
✓✗ Trump will declare himself president “until further notice.” I think we all know that Biden is the President, but some of Trump’s die-hard supporters are convinced he’ll somehow still be reinstated, and I’m not entirely sure even he believes he lost, even though, y’know, those pesky facts say he did. But I’m gonna claim half-credit on this, since, y’know, on January 6, he did try to commit a coup d’état.
✗ The year-zero rule will take out Trump. Didn’t happen. And here’s hoping President Biden stays safe.
✓ Historians will rank Trump as the worst president ever. Okay, there’s debate on this, but among respectable historians, the only real question is whether he’s at the bottom or just in the bottom five. I’m still claiming it, though.
That’s 3½ out of 12! I officially suck at predicting the future!
Of course, Trump did try to commit a coup d’état, and he is still tearing apart the country, and the hard right is still so far off the crazy deep end that every day I expect those lunatics to march on the Capitol again — but at least as regards the “Trump becomes the American Dictator” timeline, I’ve never been happier to be wrong.