I finally have a project that justifies learning prolog

Sometimes when I’m feeling batty, I’ll put the Three Laws of Robotics in my source code. Usually, this is a veiled insult aimed at myself; the comment means my script has gotten so morbidly complex that it threatens to wake up and kill me.

On a completely unrelated note, I’ve been picking at the edges of Prolog for the last couple of years. I’ve worked my way through a free Prolog textbook, and now I’m very slowly working my way through Language, Proof and Logic in order to learn me some predicate calculus.

Now I’ve thought of a project that combines the two. I’m gonna daydream about defining an ontology suitable for making robots comply with those Three Laws of Robotics. Once it’s finished, it would work like this:


> Eat baby
Violates law #1!
> Mop floor
OK
> Burn down abandoned house
OK

You get the idea.

PyOhio was a smashing success

The Columbus Metro Library offered a fantastic location for us: wireless internet, multiple meeting rooms, one room with about 30 workstations, and so on. A $15 donation makes you a “friend” of the library and gets you a 15% discount at the coffee booth.

Catherine Devlin led the charge of organizing this conference, and she did it amazingly well.

The slides from my decorator talk are available here. I’ll be breaking them down into a series of blog posts with a lot more commentary, so stay tuned.

Howard Roark!

From the article:

The drink request Sunday, said Simmermon, who was visiting from Brooklyn, was denied by a barista who told him that Murky doesn’t do espresso over ice. Irked, Simmermon said he asked for a triple espresso and a cup of ice, which he said the barista provided, grudgingly.

Apparently Murky Coffee would prefer to do it right or not at all. Brilliant. I completely respect that.


Notes from clerb meeting on Thursday, July 17th

DimpleDough provided a great location for this month’s Cleveland Ruby Users Group meeting, and they even shelled out for dinner. We heard a really good talk about Ruby and F#.

The Ruby material covered some neat corners of the language, like the method_missing method, which operates like Python’s __getattr__ hook. Here’s a toy example of how it can be used:

irb(main):005:0> class C
irb(main):006:1> def foo
irb(main):007:2> 1
irb(main):008:2> end
irb(main):009:1> end
=> nil
irb(main):010:0> c = C.new
=> #<C:0x...>
irb(main):011:0> c.foo
=> 1
irb(main):012:0> class C
irb(main):013:1> def method_missing(m, *args)
irb(main):014:2> puts "you tried to call a method #{m}"
irb(main):015:2> end
irb(main):016:1> end
=> nil
irb(main):017:0> c.baz
you tried to call a method baz
=> nil
irb(main):018:0>

Incidentally, note how I reopened class C and added a new method to it after I originally defined it. That’s a cute trick in Ruby. I can imagine a lot of nasty misuses of that, but I think the “we’re all consenting adults” rule should apply. And when a class has dozens of methods, it might be helpful to divide them across different files.
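Since I keep comparing method_missing to Python, here’s a minimal Python sketch that mirrors the class C above using the __getattr__ hook. This is my own analogy, not something from the talk:

class C(object):
    def foo(self):
        return 1

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, which is roughly
        # the hook Ruby's method_missing gives you.
        def missing(*args):
            print("you tried to call a method %s" % name)
        return missing

c = C()
c.foo()    # => 1
c.baz()    # prints: you tried to call a method baz

One difference worth remembering: __getattr__ fires on any missing attribute access, not just method calls, so returning a callable is what makes c.baz() work.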

We talked about currying as well, in the context of F#. I tend to use currying in this scenario:

  • I recognize that two separate functions could be refactored to be a single function with a whole bunch more parameters;
  • I remake the original functions as curried versions of the new super function.

In other words, if I already have two methods, like paint_it_red(it) and paint_it_green(it), it’s trivial to realize I could write a paint_it_some_color(it, color) and then replace the original paint_it_red with a curried version.

I’ve found this really useful when it isn’t just a single parameter I’m fixing to a constant value, but maybe a whole bunch of them.
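Since Python is where I live, here’s the same idea sketched with functools.partial; the paint functions are toy stand-ins for whatever the real code would do:

from functools import partial

def paint_it_some_color(it, color):
    # the general version with the extra parameter
    return "painted %s %s" % (it, color)

# the original, narrower functions recreated by fixing the color parameter
paint_it_red = partial(paint_it_some_color, color='red')
paint_it_green = partial(paint_it_some_color, color='green')

print(paint_it_red('the fence'))     # painted the fence red
print(paint_it_green('the barn'))    # painted the barn green

Strictly speaking this is partial application rather than currying, but it scratches the same itch.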

Apparently, Ruby will add currying support in 1.9. I tried to see if I could “fake it” in the irb interpreter, but I just made a mess:

irb(main):036:0> def f(a, b)
irb(main):037:1> a + b
irb(main):038:1> end
=> nil

Nothing interesting so far. f adds its two parameters. So now I’m going to try to make a new function that returns a version of f with the first parameter a set to a fixed value:

irb(main):046:0> def curried_f(a)
irb(main):047:1> def g(b)
irb(main):048:2> a+b
irb(main):049:2> end
irb(main):050:1> return g
irb(main):051:1> end
irb(main):053:0> curried_f(1)
ArgumentError: wrong number of arguments (0 for 1)
from (irb):50:in `g'
from (irb):50:in `curried_f'
from (irb):53
from :0

The problem (I think) stems from how in Ruby, if I just type the name of the function, the function gets called. So in line 50, when I’m trying to return a reference to the new function I just created, Ruby evaluates the result of calling g without giving it any parameters.

I bet I’m doing something very un-ruby-tastic with this approach. I’m probably supposed to leverage those anonymous blocks instead.
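For what it’s worth, here’s the Python version of what I was going for. It works because a nested def builds a closure and a bare function name is just a reference, not a call. (In Ruby, a nested def doesn’t even capture the local a, so I’d need a lambda or a block there anyway.)

def curried_f(a):
    def g(b):
        # g closes over a, and the bare name g is a reference rather
        # than a call, so returning it hands back the function itself.
        return a + b
    return g

add_one = curried_f(1)
print(add_one(2))    # 3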

F# looks really interesting. It supports all those weird Prolog/Erlang/Haskell-style features like single assignment, pattern matching, and tail-call optimization, with the benefit of having access to the .NET libraries as well.

One of the best professors I studied under remarked that in COBOL, you think for five minutes and then type for two hours, but in Prolog, you think for two hours and then type for five minutes. I agree. I have learned a lot of languages, but I haven’t gotten any smarter. I’m just learning how to map my thoughts into notation much more quickly.

I would love to have the time and a reason to do a project with F#. I think I’ll start by installing Mono and messing around.

I need to write faster tests

This is not ideal:

----------------------------------------------------------------------
Ran 84 tests in 370.741s

OK

My tests take so long for two reasons. First of all, most of them use twill to simulate a browser walking through a version of the web app running on localhost. Second, my test code reads like a novel. Here’s an example, slightly embellished to make a point:

setup: connect to the database and find or create a hospital and an employee named “Nurse Ratched.” Find or create a bunch of open shifts in the emergency department. Find or create another nurse named Lunchlady Doris*.

test: Nurse Ratched wants to see what shifts are available to be picked up. So she logs into the app. Then she navigates to the “open shifts” screen, and then filters down to shifts in the emergency department over the next seven days. Then she wants to sign up for the shift starting at midnight on Saturday night. So, she clicks the “sign up” icon. The system verifies that this shift + her already-scheduled hours won’t push her into overtime, and she has no other flags on her account, so she is automatically scheduled.

Then the system sends her a confirmation message, which according to her preferences, is sent to her email address. Then the system queues an SMS message to be delivered an hour before the shift starts in order to remind her (also according to her preferences).

Finally, the test verifies that the shift is now not listed as available by simulating Lunchlady Doris logging in and checking that same “open shifts” screen.

If everything checks out, print a dot, and move on to the next chapter.

teardown: Unassign Nurse Ratched from the shift she picked up.
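For the curious, here’s a stripped-down sketch of what one of those twill walk-throughs looks like. The URL, form field names, and page text are all made up, not from the real app:

from twill.commands import go, code, fv, submit, find

# Log in as Nurse Ratched (hypothetical URL and form fields).
go('http://localhost:8080/login')
fv('1', 'username', 'nurse.ratched')
fv('1', 'password', 'secret')
submit()
code(200)

# Walk to the open-shifts screen, filtered to the emergency department.
go('http://localhost:8080/shifts/open?dept=emergency&days=7')
find('Saturday')                 # the shift we want shows up

# Sign up for it and check for the confirmation text.
go('http://localhost:8080/shifts/1234/signup')
find('You are scheduled')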

I think twill in itself is fine. The problem is marching through a long series of pages just to set up conditions for the thing I actually want to test. As a side benefit, I do verify that everything checks out along the way.

On the plus side, I’m confident that all these components do in fact play nicely together. I don’t think it’s safe to abandon end-to-end testing like this, but I would like not to depend on it every time I want to make some slight change to a component. It would be nice to run these right before a commit, but only run some super-fast tests after each save.
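One approach I’m considering: tag the slow end-to-end tests with nose’s attrib plugin, so the everyday run skips them and the pre-commit run includes them. A sketch, with placeholder test names:

from nose.plugins.attrib import attr

@attr('slow')
def test_nurse_ratched_picks_up_a_shift():
    # the full twill walk-through would live here
    pass

def test_shift_overlap_math():
    # a fast, pure-logic check with no browser or database involved
    assert 1 + 1 == 2

# After each save:      nosetests -a '!slow'
# Right before commit:  nosetests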


[*]People that understand this reference should reevaluate their priorities in life.

I heart Python doctests

I wrote the doctests for the function below and then wrote the code to satisfy them in a total of about 30 seconds. As an extra plus, these doctests immediately clarify behavior in corner cases.

def has_no(s):
    """
    Return True if the string s contains 'no' as a standalone word.

    >>> has_no('no problem')
    True

    >>> has_no('not really')
    False

    >>> has_no('no')
    True

    >>> has_no('oh nothing')
    False
    """

    if s.lower() == 'no': return True
    if s.lower().startswith('no '): return True
    if s.lower().endswith(' no'): return True
    if ' no ' in s.lower(): return True

    return False
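To run them, the standard library’s doctest runner is enough. One minimal way to wire it up, assuming the function lives in its own module:

if __name__ == '__main__':
    # Run every doctest in this module; silent unless something fails.
    import doctest
    doctest.testmod()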

Writing tests in any other testing framework would have taken me much longer. Compared to writing these tests with nose, writing this:

assert not has_no('oh nothing')

wouldn’t take me any more time than

>>> has_no('oh nothing')
False

But that’s not all there is to it. With nose, I’d need to open a new test_blah.py file, then import my original blah.py module, then I would have to decide between putting each assert in a separate test function or just writing a single function with all my asserts.

That’s how a 30-second task turns into a 5-minute task.
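For contrast, here’s roughly what that separate nose file would look like; the test function name is arbitrary:

# test_blah.py -- the nose version of the same four checks
from blah import has_no

def test_has_no():
    assert has_no('no problem')
    assert not has_no('not really')
    assert has_no('no')
    assert not has_no('oh nothing')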

Anyhow, I’m surprised doctests don’t get a lot more attention. They’re beautiful. Adding tests to an existing code base couldn’t be any simpler: just load your functions into an interpreter, play around with them, and paste the session into a docstring (IPython has a %doctest_mode, by the way).

For a lot of simple functions (like the one above) it is easy to just write out the expected results manually rather than record from a session.

It is also possible to store doctests in external text files. The Django developers use this trick frequently.

Finally, I don’t try to solve every testing problem with doctests. I avoid doctests when I need elaborate test fixtures or mock objects. Most of my modules have a mix of functions with doctests and nose tests somewhere else to exercise the weird or composite stuff.

Incidentally, this post is where Tim Peters introduced the doctest module.

Supply-side economics explained

I wrote this post four years ago on kuro5hin. The picture has just gotten worse since then. I sure am glad I’m learning to grow food in the backyard.

George W. Bush’s economic policy is based on trickle-down economics, also known as supply-side stimulus. Reagan was a big fan of this idea also. Simply described, supply siders argue that the best way to stimulate the economy to grow is to cut taxes on the wealthy. When their tax rates fall, the rich will increase their investments. For example, a restaurant owner might decide to build a larger kitchen if she gets a big refund check. Then, she’ll have to hire more workers to staff that kitchen, and so employment goes up, indirectly because of that original tax cut.

It’s an appealing idea. Reagan argued that it even makes sense for the government to cut taxes to below current spending and take on debt, because in the long run the economy would grow enough that the tax cut would eventually pay for itself. This approach is called “supply-side” because the stimulus (the tax cut) is applied to the suppliers of goods and services (the business sector).

The common objection to supply-side economics is that there’s absolutely no guarantee that if you cut taxes on the wealthy, then they will use that money to invest in new business. In fact, since these tax cuts happen in bad economic times, investors might decide that their money is safer if they save it rather than invest it. Going back to the restaurant example, if the restaurant owner decides to just stuff that tax refund into a savings account, or just keep it in her mattress, then no job growth occurs.

Also, if the government did what Reagan (and George W. Bush) recommended and went into deficits to finance one of these tax cuts, and no economic growth occurs, then the government is in a really bad spot. They have to raise taxes back to sustainable levels, and then raise taxes again in order to get the money to pay for the debt, and then raise taxes even higher to pay for the interest on the debt. Or, they can do what Reagan did, and just roll the debt over by issuing more debt. This is sort of like paying off the Master Card bill with the Visa. It works great as long as you can always get another credit card to lend you more money. When the last credit card company decides not to give you a card, then you are in trouble.

George Herbert Walker Bush called supply-side economics “voodoo economics” because all of supply-side theory was based on a hope that the rich would invest those tax cuts and not just stick them in the bank. George W. Bush ignores his father’s opinions about the wisdom of his economic policy, however, and is a big supporter of supply-side economics.

Third-world countries do the Visa-MasterCard swap trick all the time. They run up huge debts by spending more than they tax, and keep borrowing money from private investors in their country and abroad. When it becomes obvious that the country is so far in debt that it will never be able to pay it back, investors start selling off its debt, even if they have to sell at steeply discounted prices. This is really, really bad for the country still trying to pay its bills by borrowing more. When investors start dumping your IOUs on the market, your country’s currency quickly loses value. This is called hyper-inflation.

In 1997, investors all around the world had lots of money invested in east Asia. Then, people lost confidence in certain countries, and so investors all started selling off like mad. The investors sold debt denominated in Asian currencies to buy dollars. This pushed down the value of Asian currencies relative to $US. In short, families in these countries found out that their life savings (which were stored in their home-country currency, like the Thai baht or the Indonesian rupiah, not in $US) lost all of their value because of inflation. It was as if these people woke up, went to the store, and discovered that all the prices had doubled, and were probably going to double every day after that. That’s when the riots broke out, which scared away more investors, and the downward spiral continued.

The same thing happened recently in Argentina. Investors all started selling off Argentinian debt, so the value of the Argentinian currency plummeted, and people were wiped out. Also, when you have high, high inflation, goods imported from other countries become much more expensive.

What happened in the 1980s is like a big Rorschach test. Some economists see all the signs that supply-side economics worked, and others see the same period as the beginning of severe fiscal irresponsibility (“fiscal” means how the government manages spending). There’s no doubt the economy grew after the Reagan tax cuts, but it never grew enough to pay back the debt Reagan racked up. We’re still paying interest today on that debt. We’re also now adding to it, because each year that the government spends more than it taxes, it creates a deficit, which gets added to the debt, and we’ve been in a deficit ever since the George W. Bush tax cuts. Also, in some other recessions, the government has chosen to just wait it out, and most recessions end in about 11 months. Based on previous experience, the recession probably would have taken care of itself eventually, and we wouldn’t have all this debt from twenty years ago still hanging over us today.

In 1991, part of the reason why George H. W. Bush had to break his “read my lips: no new taxes” pledge was because he was faced with the choice of either raising taxes or putting the country further into debt. He made the politically painful move in order to protect the long-term interests of the country, even though he knew he was just about guaranteeing he would lose the 1992 election.

Clinton saw an opportunity to steal an issue from the Republicans in 1992. Since they were no longer the party of fiscal responsibility, Clinton made that his mantra. He moved early to balance the budget, by cutting spending and raising taxes. Then of course, the public didn’t like that, so in 1994 the Democrats lost control of Congress. Still, thanks to Clinton, we got out of deficits by the end of the 1990s, and in 2000 Gore wanted to start paying down the debt, but then George W. Bush won the election, and instead of paying down the $7 trillion that we owe (about $24,000 per US citizen, and growing every day), he pushed through his tax cuts.

The US debt is at an all-time high, and the financial world is starting to worry about the long-term stability of the US economy. The International Monetary Fund warned in a release a few weeks ago that the US debt was growing to a size where it could threaten the world economy. The Bush administration almost entirely ignored the report, and the mainstream US media didn’t make it into a big story.

Meanwhile, the US dollar has lost about 30% of its value versus the EU Euro in the last 12 months. A weak currency in the short run may help our exports, but in the long run, it pushes up interest rates and frightens foreign investors. Since most of our debt is held by non-US investors, the US government’s ability to borrow depends on maintaining confidence that our currency will maintain value in the long-term.

One economist described debt as more like termites in the walls, rather than a tornado outside. Both will eventually destroy the house, but it is a lot easier to pretend that the termite problem isn’t so bad.

The Brookings Institution, a think tank in Washington, DC, just finished a paper that describes some long-term consequences of ignoring the budget deficits. Alice Rivlin, former Vice Chair of the Federal Reserve Board of Governors, co-authored the paper. It is written for the interested outsider rather than the professional economist. In short, allowing the government to run deficits indefinitely raises interest rates for all of us, risks inflation of the US currency, and limits long-term economic growth.

Total employment (the number of people with jobs) has fallen by about 3 million jobs since the economy peaked in March of 2001. George W. Bush promoted the tax cut as a tool to create jobs, and by that standard, it hasn’t worked at all.

Help improve my PyOhio talk

I ran through my PyOhio presentation at tonight’s Clepy meeting.

I think I’ll spend more time talking about the material in the slides, rather than pausing just long enough to scan each one and move on to the next. I’m anxious about boring people, so I think I go at a frenzied pace.

Also I need to learn how to tweak s5 (or at least rst2s5.py) so that I can have more control over how my content appears. A fair number of code samples had the last few lines truncated.

Anyway, I welcome comments on my presentation.

Don’t spend your termite poison money on insurance against Martian invasions.

This post wanders all over the place and I’m not sure I’m articulating my thoughts very well. Comments and criticism are welcome.

Fannie Mae and Freddie Mac (I don’t know why these companies have such ridiculous names either) are bound by regulations to hold enough capital (cash dollars) to remain solvent across some theoretical worst-case scenarios. The regulators dreamed up some really extreme situations that could plausibly bankrupt these companies, and insisted that the companies hold enough cash to survive them.

When I worked at Fannie Mae in the department that wrestled with the C++ model that calculated our reserve requirements for these 10-year stress tests, we used to joke around about how unlikely these stress tests really were. We would say that we might as well buy insurance against Martian invasions, or against all the animals teaming up together to attack humanity.

While Fannie Mae was legally complying with these unrealistic scenarios, the sub-prime crisis was a scenario that they were not prepared for, and it slaughtered them. The CEO had to step down. The price fell from around $80 a share when I left in 2001 to $18 today.

The sub-prime crisis at its core is very mundane. Lenders got sloppy and investors let their greed entice them into taking risks they shouldn’t have. That’s all there is to it. Local banks lent money to high-risk borrowers, then sold the loans to Fannie Mae, which sold them on to Wall Street. Investors preferred the high-return investments over the low-return boring crap.

No perfect storm was necessary to trigger this. It was just a whole lot of people getting sloppy and eventually enough straws accumulated to break the camel’s back. The same pattern played out in the seventeenth century and probably a hundred times since then.

Now I’m a workaday programmer, and I see the same dynamic in code. People write elaborate systems to protect against ridiculously unlikely scenarios but then skimp on the boring stuff. Maybe they get the hard parts done but never make sure their app’s internals are well documented, easy to maintain, and intuitively designed.

In my experience, it’s the mundane bugs, not the diabolically clever hackers, that cause me the most grief.

If I write some algorithm that costs O(n²), I will almost immediately start trying to tame it down. The voices in my brain scream about worst-case costs. Macho programmers write badass algorithms. However, I find that the really smart thing to do is to spend a few minutes thinking about the likely use cases. If I know that for the foreseeable future I’m never going to run this algorithm with n > 5, then I think the grown-up thing to do is to write a big fat docstring that reminds me later about this risk, and then move on to getting other stuff done.

The market rewards a good-enough and finished solution more than a potentially amazing but currently unfinished solution.

If Fannie Mae had focused on just making sure that they were vetting the loans better, things wouldn’t have been so bad. The theoretical worst case scenarios are not going to happen before the more likely stuff goes wrong. I worked at Fannie Mae preparing against Martian invaders. We ignored the termites in the walls, so to speak.