Well, that was a quick month.
So what’s happened? I’m a married father with a baby, so family eats up a lot of time. And there’s that full-time employment thing I do most of the week. And a secret project I can’t tell you much about but that you’re gonna love (psst, that link’s not a link yet :-P). And yard work, and chores around the house; with spring comes mowing and mulch and weeds. We also had our basement flood when the big rainstorm came through last week, and pumping out that water and cleaning up afterwards was just loads of fun.
But you probably don’t care about all that; what most of you in reader-land care about is Smile, this suddenly-notorious little programming language I’ve spent so much of the last decade-plus thinking about and working on. So let’s talk about that.
I’ve been busily shoring up the implementation of the interpreter, trying to knock out bugs and fix issues, getting it to the point where it at least parses and executes the whole language correctly, and is a full implementation of both the core language and the base-level libraries. I’ve also been writing documentation, lots and lots of documentation, so that when you do get your hands on a copy of it, you won’t just throw your hands up in disgust and confusion, and you’ll be able to learn it and try demos and look up answers when you don’t understand things. This has been a lot of work, to put it mildly, and there’s a lot more to go.
There are a few posts on Reddit about this that I just had to give direct replies to.
> I always thought lisp was a platonic form
Lisp is not really a “platonic form”; what should really be argued is the number of operators required for Turing-completeness. Lambda calculus could make a better claim to something like that, although it’s a Turing tarpit. Lisp had a certain degree of elegance in being able to describe its own evaluation using seven fundamental operators (or five, or six, or eight, depending on who you talk to). But you can define a Turing-complete language using a single operator, and even the simplest Lisp is a lot bigger than that. Smile has about a dozen fundamental operators (plus its unique parsing/translation layer): That core is not as small as McCarthy’s Lisp, but it’s not exactly a galumphing dirigible either. That said, only six or seven of those operators are truly necessary; you can write a simple but working eval function for Smile in Smile with just that much, and one of the sample programs I’ll include is exactly that.
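To make the “seven operators” idea concrete, here’s a rough Python sketch of an evaluator built from McCarthy’s classic set (quote, atom, eq, car, cdr, cons, cond, plus lambda for application). To be clear: this is just an illustration of the old Lisp core, not Smile’s eval, and all the names in it are my own.

```python
# A toy McCarthy-style evaluator. Symbols are Python strings; lists are
# Python lists. Illustration only -- NOT Smile's eval.

def evaluate(expr, env):
    if isinstance(expr, str):              # a symbol: look it up
        return env[expr]
    op = expr[0]
    if op == "quote":                      # (quote x) -> x, unevaluated
        return expr[1]
    if op == "atom":                       # (atom x) -> is x not a list?
        return not isinstance(evaluate(expr[1], env), list)
    if op == "eq":                         # (eq a b) -> are a and b equal?
        return evaluate(expr[1], env) == evaluate(expr[2], env)
    if op == "car":                        # (car x) -> first element
        return evaluate(expr[1], env)[0]
    if op == "cdr":                        # (cdr x) -> rest of the list
        return evaluate(expr[1], env)[1:]
    if op == "cons":                       # (cons a b) -> prepend a to b
        return [evaluate(expr[1], env)] + evaluate(expr[2], env)
    if op == "cond":                       # (cond (test val) ...) -> first true branch
        for test, value in expr[1:]:
            if evaluate(test, env):
                return evaluate(value, env)
        return None
    if op == "lambda":                     # a lambda evaluates to itself
        return expr
    # Application: resolve the function, bind arguments, evaluate the body.
    fn = evaluate(op, env) if isinstance(op, str) else op
    _, params, body = fn
    args = [evaluate(a, env) for a in expr[1:]]
    return evaluate(body, {**env, **dict(zip(params, args))})

# (car (cons (quote a) (quote (b c)))) -> "a"
print(evaluate(["car", ["cons", ["quote", "a"], ["quote", ["b", "c"]]]], {}))
```

Smile’s core is a different (and somewhat larger) set of operators, but the shape of the exercise — an eval small enough to write in the language itself — is the same.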
> I suggest Sean release the language specification and let alternative implementations begin.
Doesn’t there have to be a first implementation before there can be alternative ones? I’d like to get the formal language specification nailed to the floor and etched in stone — and build good conformance tests — before really thinking about alternative implementations. When I’m done squishing bugs in the interpreter, and I have enough documentation that people can learn Smile and become comfortable coding in it, that’s when it’s time to start talking about alternative (faster) implementations. Right now is just too early.
That said, the current interpreter has the Apache open-source license stamped all over its source code. You will be able to get your hands on it from stem to stern, optimize it, clone it, and rebuild it to your heart’s content — when it’s ready for the general public, which it definitely isn’t yet.
> Actually Smile code is very similar to Rebol however it’s exponentially more powerful
> what does that mean?
I don’t know either.
I’ve been pondering this question all day: What the heck do I do next?
The intarnetz has discovered Smile, which is kinda cool, in a terrifying sort of way.
But the language is still barely in its infancy. I have a semi-working interpreter (release package 0.3!), but there are lots of bugs and some stuff is far from built. The language grammar is pretty well nailed-down at this point. But the libraries are severely lacking, even for really common operations, and even a few of the standard data types don’t exist. I have some documentation, but it’s far from complete.
Yet people want to know more. They got a taste of something interesting, and for a fleeting moment, it’s become news, after a fashion. I don’t want to starve and disappoint potentially-interested folks, but I don’t want to give out junk and crush what could be something pretty neat.
So tomorrow, I’m going to talk with some folks at work who’ve been following Smile through its development, and we’ll see if our brain trust can figure out what a good next step is. I’m a coder: I develop algorithms, not marketing plans, and I’d rather stick to what I know.
For tonight, I got the current Smile interpreter running under Mono on Linux, so w00t on that.
Anyway, uh, stay tuned, I guess.
Whoa, linky. Who knew people actually read this blog? Zounds.
I should answer some of the questions being asked. (Why here and not on Reddit? Well, I don’t have a Reddit account, for one, and for two, I’d like to keep my answers about Smile in a place where they’re easier to find, rather than buried deep in a Reddit thread somewhere.)
So here we go.
I’ve wanted to talk about Smile for a long time now.
Smile is a programming language. A decade ago when I named it, it stood for “Syntax Makes Interpreting Lisp Easier,” but don’t tell anybody there’s a Lisp in there, because it doesn’t look, act, or feel like Lisp.
So let’s start at the beginning.
Smile is a programming language.
It is built on extremely solid theoretical foundations, on a cross between the untyped lambda calculus and a message-passing model. The core of the language is built on very simple concepts, rigorously applied at every level to build mathematically-provable abstractions. It’s a language that is designed to make computer scientists giggle with glee at its elegance and simplicity.
Stdout print "Hello, World.\n"
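If the message-passing reading of that line isn’t obvious, here’s a toy sketch in Python: `Stdout` is an object, `print` is a message sent to it, and the string is the message’s argument. This is just a model of the idea, not how Smile is actually implemented.

```python
# A toy model of message passing: an object is just a thing that
# responds to named messages. Illustration only -- not Smile's internals.

import sys

class MessageReceiver:
    def __init__(self, handlers):
        self.handlers = handlers           # message name -> handler function

    def send(self, message, *args):
        return self.handlers[message](*args)

# A toy "Stdout" object that understands one message, "print".
Stdout = MessageReceiver({"print": lambda text: sys.stdout.write(text)})

Stdout.send("print", "Hello, World.\n")
```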
We have a baby.
Those of you that have kids know that this is pretty much complete and sufficient explanation for where the heck I’ve been for the last six months. Those of you who don’t have kids, well, just trust me on this.
His name is Alan Thomas, after my grandfathers and my father. It’s a good, strong name, carried by WWII veterans, by leaders of men, by the men who taught me by example what it is to be a man, and chosen to honor our shared ancestry as he carries our lineage into the future. No pressure, kid. But I’m sure you’ll do fine.
He’s an amazing little thing. He’s three months old now, and not yet able to sit up, but he’s healthy and happy.
It somehow still boggles my mind that I’m a father. Fathers? Aren’t they adults? Big strong men who drink beer and watch football and lift heavy things and talk in short sentences about weighty matters? Is that me? Guess it must be. Sure don’t feel that grown-up, but I have a job and a wife and a dog and cats and a house and a mortgage and car payments, and now I have a son, too. I keep wondering who he’s going to be like, keep hoping I have something in common with him, which is a hard thing to divine when he mostly squeaks and squirms and hasn’t yet learned how to hold a spoon, much less why you’d want to.
“They say you should lead, follow, or get out of the way.”
So began an essay I once wrote for a college entrance exam to a very prestigious university. I was rejected, and they claimed it was for my less-than-perfect grades, but I don’t doubt that essay played its part. As a seventeen-year-old, I was ill-equipped to state the message well, but it’s a message that I live by, and it’s a message that bears repeating:
If you must lead,
Follow,
Or get out of the way,
Choose to get out of the way.
I have a gripe.
Computer science is a branch of mathematics. It’s a powerful, amazing tool based on logic and reasoning and the work of giants. And it seems to get no respect from the programmers who blithely don’t realize they’re using it every day.
In the business world, programming problems are solved by either gluing preexisting stuff together or “just hack it until it works.” I can’t count the number of times I’ve heard a colleague say, “Well, what if we just try this,” or “I know this trick,” or “I don’t know why that broke, maybe sunspots.” There seems to be little recognition of the value that computer science brings to the table: Everything is just tips and tricks and technologies, not reasoning and technique.
And if (like me) you’re one of the rare ones who uses computer science to solve a problem, it’s not attributed to all those proofs and techniques and hard science you learned: You’re just a “programming wizard.” It’s as if everybody around you is building houses out of pine, and they see you build a house out of stone and think you found really hard, strong pine somewhere. And worse, they then ask you to point out which trees you got it from.
I don’t understand this disconnect.
I love CorelDRAW*. It’s one of my favorite go-to tools for just about everything graphics-related. I’ve been using it since a friend gave me a bootleg copy of CorelDRAW 3, all the way back in 1994. Four years later, I had scraped and saved enough to buy a legitimate copy of CorelDRAW 5 (see! piracy really can lead to sales sometimes!), and I’ve been upgrading ever since. Boxed copies of everything from version 5 to version X5 are sitting on the shelf behind me as I type this.
* Lest there be any doubt, I’ve tried Adobe Illustrator. I gave it a fair shot, I really did. But it drives me crazy trying to use it. I spend a lot of my time bending and tweaking nodes, and I use the heck out of CorelDRAW’s PowerClip and Blend for everything from simple clipping to really complicated shading effects. Illustrator was pretty weak in all those categories the last time I used it. I tried Inkscape, too, and stopped using it right about the point where their coders asked on their forum, “Why would you ever want PowerClip?”
That said, as much as I love CorelDRAW, there are a few things I’d really like to see either changed or fixed. Some of them are downright bugs that they keep not fixing. Some are existing functionality that just doesn’t work well. And one’s a “please steal this technique from your competitors already.” So here’s my list:
I’ve watched with great interest the chase of Edward Snowden. And it’s resulted in a change in my attitude toward government.
Once upon a time, I distrusted government, but I generally assumed that they were simply too incompetent to do anything truly malicious. These are the same people that can’t decide what color to paint a wall, my reasoning went, so there’s no way they could possibly be capable enough to be able to apply the level of evil so many of their detractors accuse them of. That, and there are so many bureaucratic checks and balances that they’re at best handicapped; I envisioned NSA spying on a level only slightly less primitive than tying an extra string to a tin-can telephone.
And then Snowden. Good news, crackpots, you’re not paranoid. Instead of the government being a well-paid collection of Mr. Magoo’s closest relatives, they suddenly morphed into the worst Orwellian nightmare Hollywood can depict. Even as I write this, I’m not at all sure that these very words aren’t placing me on a watch list — if I’m not on one already for having a brain in my head and a tendency to ask probing and awkward questions.