Obscure python syntax error

Been writing python for a long time. When I wrote this code, I could not figure out why I was getting a syntax error.


d1 = dict(
    user_id=99,
    display_name='Matt Wilson',)

d2 = dict(
    email_address='[email protected]',
    **d1,)

The problem is the trailing comma after **d1. It is not allowed, which is really weird, because the trailing comma after display_name='Matt Wilson' is just fine.
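For what it’s worth, dropping that last comma makes the error go away (and I believe Python 3.5 and later accept a trailing comma after ** unpacking anyway):

d2 = dict(
    email_address='[email protected]',
    **d1)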

Announcing carp!

Carp is for code templating.

You know, for stuff like starting HTML files with a bunch of predefined javascript libraries, or for starting a new python project without having to write a setup.py file from scratch.

You can use carp for a single file or for a tree of files and folders.

It makes it easy to take a folder of code files and subfolders, replace parts of text you want to be parameterized, and then store it. Later, you render that template by passing in the required parameters.

Get the code at github here.

python: allow only one running instance of a script

UPDATE: Thanks so much for all the feedback! I’m going to look at using flock as well, and I’ll write that up soon.

Imagine you have a script that archives a bunch of data by copying it to another box. You use cron to schedule that script to run every hour, because normally, the script finishes in about thirty (30) minutes or so.

But every so often, maybe when your application gets really popular, the cron job takes more than an hour. Maybe it takes three hours this one time.

And during that time, cron starts up two more copies of your script. That can cause all sorts of havoc, where two or more scripts each try to modify the same file, for example.

In this scenario, you need a way to prevent those second and third (and maybe fourth and fifth, etc) scripts from starting as long as one is already going.

It would be very helpful if, when the script started, it first checked whether another instance was already running. If one is already running, then this new script should just immediately exit. But if no other instance is running, then this script should get to work.

Here’s a simple method for doing that:

1. When the script starts, the first thing it does is look for a file in /tmp named something like /tmp/myscript.pid.

2. If that file exists, then the script reads that file. The file holds a process ID (pid). The script now checks whether any process with that pid is running.

3. If there is not a process running with this pid, then probably what happened was the old script crashed without cleaning up this pid file. So, this script should get to work. But if there is a process running with that pid, then there is already a running instance of this script, and so this script should just immediately exit. There’s a tiny risk with this approach that I’ll discuss at the end of this post.

4. Depending on what happened in step 3, the script should exit at this point, or it should get to work. Before the script gets to the real work though, it should write its own process ID into /tmp/myscript.pid.

That’s the pseudocode, and now here are two python functions to help make it happen:

import os


def pid_is_running(pid):
    """
    Return pid if pid is still going.

    >>> import os
    >>> mypid = os.getpid()
    >>> mypid == pid_is_running(mypid)
    True
    >>> pid_is_running(1000000) is None
    True
    """

    try:
        os.kill(pid, 0)

    except OSError:
        return

    else:
        return pid


def write_pidfile_or_die(path_to_pidfile):

    if os.path.exists(path_to_pidfile):
        pid = int(open(path_to_pidfile).read())

        if pid_is_running(pid):
            print("Sorry, found a pidfile! Process {0} is still running.".format(pid))
            raise SystemExit

        else:
            os.remove(path_to_pidfile)

    open(path_to_pidfile, 'w').write(str(os.getpid()))
    return path_to_pidfile

And here’s a trivial script that does nothing but check for a pidfile and then sleep for a few seconds:

import time

if __name__ == '__main__':

    write_pidfile_or_die('/tmp/pidfun.pid')
    time.sleep(5)  # placeholder for the real work
    print('process {0} finished work!'.format(os.getpid()))

Try running this in two different terminals, and you’ll see that the second process immediately exits as long as the first process is still running.

In the worst case, this isn’t perfect

Imagine that the first process started up and the operating system gave it process ID 99. Then imagine that the process crashed without cleaning up its pidfile. Now imagine that some completely different process started up, and the operating system happens to recycle that process ID 99 again and give that to the new process.

Now, when our cron job comes around, and starts up a new version of our script, then our script will read the pid file and check for a running process with process ID 99. And in this scenario, the script will be misled and will shut down.

So, what to do?

Well, first of all, understand this is an extremely unlikely scenario. But if you want to prevent this from happening, I suggest you make two tweaks:

1. Do your absolute best to clean up that pidfile. For example, use python’s sys.excepthook or atexit functions to make sure that the pid file is gone (there’s a small sketch of the atexit approach after this list).

2. Write more than just the process ID into the pid file. For example, you can use ps to look up the process name and write that into the pid file too. Then change how you check whether the process exists: in addition to checking for a running process with the same pid, check that ps reports the same data for that process.
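For the first tweak, here’s a minimal sketch (my own names, not a finished library) of using atexit to remove the pidfile on the way out:

import atexit
import os


def remove_pidfile(path_to_pidfile):
    # Runs at normal interpreter shutdown.  A hard crash (say, kill -9)
    # still skips this, which is why the pid_is_running check above
    # stays useful.
    if os.path.exists(path_to_pidfile):
        os.remove(path_to_pidfile)

# Hypothetical usage, right after write_pidfile_or_die succeeds:
# atexit.register(remove_pidfile, '/tmp/myscript.pid')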

Check back soon and I’ll likely whip up some kind of simple library that offers a context manager to handle the extreme case described above.

Python: log uncaught exceptions with sys.excepthook

You ever notice how when your script dies because of some uncaught error, you don’t get that error in your log files? This post walks through how to make sure that you log that uncaught exception.

This is a trivial script that will raise an uncaught exception (code available here):

$ cat rgl/kaboom1.py
# vim: set expandtab ts=4 sw=4 filetype=python:

import logging


def f():
    return g()


def g():
    return h()


def h():
    return i()


def i():
    1/0


if __name__ == '__main__':

    logging.basicConfig(
        level=logging.DEBUG,
        filename='/tmp/kaboom1.log',
        filemode='w')

    logging.debug('About to do f().')

    f()

Notice the helpful traceback:

$ python rgl/kaboom1.py
Traceback (most recent call last):
  File "rgl/kaboom1.py", line 28, in <module>
    f()
  File "rgl/kaboom1.py", line 9, in f
    return g()
  File "rgl/kaboom1.py", line 13, in g
    return h()
  File "rgl/kaboom1.py", line 17, in h
    return i()
  File "rgl/kaboom1.py", line 21, in i
    1/0
ZeroDivisionError: integer division or modulo by zero

Unfortunately, that helpful traceback does not show up in the output logs!

$ cat /tmp/kaboom1.log
DEBUG:root:About to do f().

You could wrap your code with a big try / except

This diaper pattern is a popular solution:

try:
    f()

except Exception as ex:
    logging.exception(ex)
    raise

Make sure you re-raise the exception, otherwise your program will end with a zero return code.

Sidenote: how to log an exception instance

If you do any of these, you probably won’t like what you get:

logging.error(ex)
logging.error(str(ex))

In both cases, you are just turning the exception into a string. You won’t see the traceback and you won’t see the exception type.

Instead of those, make sure you do one of these:

logging.exception(ex)

# this is exactly what logging.exception does inside
logging.error(ex, exc_info=1)

# sets a higher log level than error
logging.critical(ex, exc_info=1)

For the last two, without that exc_info=1 parameter, you won’t see the traceback in your logs. You’ll just see the message from the exception.

Or you can use sys.excepthook

Instead of nesting your code inside a try-except clause, you can customize the built-in sys.excepthook function.

The kaboom2.py script has this extra code:

def log_uncaught_exceptions(ex_cls, ex, tb):

    logging.critical(''.join(traceback.format_tb(tb)))
    logging.critical('{0}: {1}'.format(ex_cls, ex))

sys.excepthook = log_uncaught_exceptions

And here’s the results:

$ python rgl/kaboom2.py

$ cat /tmp/kaboom2.log
DEBUG:root:About to do f().
CRITICAL:root:  File "rgl/kaboom2.py", line 39, in <module>
    f()
  File "rgl/kaboom2.py", line 9, in f
    return g()
  File "rgl/kaboom2.py", line 13, in g
    return h()
  File "rgl/kaboom2.py", line 17, in h
    return i()
  File "rgl/kaboom2.py", line 21, in i
    1/0

CRITICAL:root:<type 'exceptions.ZeroDivisionError'>: integer division or modulo by zero

Incidentally, sys.excepthook preserves the non-zero return code.
Also incidentally, you can use sys.excepthook for all sorts of fun stuff. This shows how to make it fire off pdb when stuff blows up.
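For example, here’s a minimal sketch (my own version, not the code behind that link) of an excepthook that prints the traceback and then drops into a post-mortem pdb session:

import pdb
import sys
import traceback


def debug_uncaught_exceptions(ex_cls, ex, tb):
    # Print the traceback, then poke around the crashed frames in pdb.
    traceback.print_exception(ex_cls, ex, tb)
    pdb.post_mortem(tb)

sys.excepthook = debug_uncaught_exceptions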

Notes from Cleveland code retreat

I attended the day-long Code Retreat at Lean Dog’s floating office on Sunday.

We worked in pairs for 45-minute blocks of time, building Conway’s Game of Life. In each session, we started from the beginning. We didn’t build on previous code.

After each session, teams talked about what stuff they discovered. Then we reorganized into new pairs and started working from the beginning again.

First pairing

At first I paired with a C# developer. While other teams went right into writing test programs, we spent a fair amount of time talking and sketching out ideas about how to model the game.

Then we made a grid that knew how to generate the next state just by copying itself into a new grid.

We planned to add in code to support each of the four rules of the game of life one by one.

So, we wrote a test that created a grid with just a single living cell. Then we told that grid to make a grid for the next generation, and our test verified that the single cell survived the copy.

Then we started working on the first rule: if a cell is alive and has fewer than two living neighbors, it dies.

We burned through the remainder of our time sketching out ideas on my clipboard. By the end of our session, it was clear that we only needed to pay attention to living cells and the neighbors of living cells.

It was also clear that the time spent writing the first test and then writing the code to satisfy the first test did nothing to move us toward a real solution.

Second pairing

Next I worked with a Java guy. This time, we did something a little closer to TDD.

We decided to start from the point of view of an end-user firing up the game. The user would want to make a grid, add a few living cells, and then tell the grid to generate its next state.

The first code we wrote was a test, and we wrote it almost immediately. In the test, we instantiated a Grid class. That test blew up with an error, since no Grid class existed. So then we wrote a Grid class. It had no methods or attributes, but that was all we needed to write in order to satisfy our test.

Next we wrote a new test that instantiated a grid and then added a single cell by calling an (undefined) addCell(…) method.

After seeing that test crash with an error, we wrote an addCell(…) method that was a no-op, because that’s all our tests required us to do.

Then we wrote a test for rule one, where a living cell with fewer than two living neighbors dies.

In this test, we made a grid, added a single cell, and then told the grid to create a new grid that represented itself in the future. Then we used an assert to verify the new grid had no cells.

First we added a “cells” instance-level ArrayList attribute to our grid class. Then we wrote a generate_next_grid method on our grid class, and all it did was instantiate and return a new grid with an empty cells list.

And bingo, we had enough code again to pass our test.

At this point, I became convinced that the kind of testing we were doing was harming our productivity. We had fallen into some kind of Zeno’s paradox* where we would never actually get anything done, because we could always think of some new test that applied to a trivial subset of the project, and then work on writing code to satisfy that test.

*A dude shoots an arrow at a target. Before the arrow gets to the target, it has to cross half the distance to the target. Then it can cross the other half of the distance. But now it has to cross half of the remaining distance, so it does that.

Then it has to cross half of that remaining distance, and then half of the remaining distance after that, and so on for infinity.

Since the arrow has an infinite number of partial distances to cross, the arrow will never reach the target.

Read more about it here.

I was itching to solve a real problem, rather than just watch lights blink from red to green, so I cajoled my partner into writing a test for the blinker pattern, which is when a 3×1 rectangle of cells converts to a 1×3 rectangle of cells on the second turn, and then on the third turn, converts back to a 3×1 rectangle.

So, we started talking about how to implement the rules. We decided we needed to figure out, for every cell that was near a living cell, how many living neighbors it had. So we wrote a method to generate the coordinates of the eight cells neighboring a particular cell, and wrote a test for that.

Then we were going to make a neighbor_count hashmap attribute on our grid and we were going to use that to link cells to how many neighbors each had, and then the time ran out.

At this point, I had a good feeling about the algorithms involved in solving this project.

Third pairing

Right before the third pairing, a much older fellow raised his hand and asked if anyone wanted to work together on a solution that only tracked the living cells. That’s the approach I was using, so I paired with this guy. He said he didn’t care what language we used, but he was a lousy typist. So I said I could do the typing and we could use python.

At first we excitedly talked over each other and drew pictures until we confirmed we had the same solution in mind. It turned out my partner was an old common lisp hacker, and once he realized that I understood what he meant when he talked about stuff like &rest and cadr, we got along really well.

The first bit of code I wrote was a cell class with an x and a y attribute, and a method to generate a list of eight nearby cells.

I wrote a quick test for that in the same file as my code, to make sure the behavior was correct.

Of course, that test failed. I had lots of syntax errors because I was writing so fast and defending against good-natured jabs about how vim was so vastly inferior to emacs.

After a few cycles of fixing typos and rerunning the tests, we got the syntax errors out of the code. Then we attacked the project of generating the next state of the grid.

I had already seen how easy it was to write a test for the blinker pattern, so I wrote a quick test for that. The test made a grid with three cells in a vertical rectangle, then told the grid to generate the next state, then it tested that in the next state, the three cells were in a horizontal rectangle.

We wrote a grid class with a dictionary attribute of living cells in the current generation and a dictionary attribute of cells that would live in the next generation.

Then we wrote the function in our grid class to generate another grid. The function did an outer loop on all the living cells, then for each living cell, it looped through all the neighbors of that living cell, and then did an inner loop of all neighbors of that neighbor, and then counted up how many were alive.

Then we implemented the four rules of the game with two if-clauses.
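Reconstructing it from memory, the core of that approach looks roughly like this (a sketch with my own names, not the exact code we wrote that day):

# A rough sketch of the approach, not the code from the pairing.
# Cells are (x, y) tuples and a grid is just the set of living cells.
from collections import Counter


def neighbors(cell):
    """Return the coordinates of the eight cells around this one."""
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]


def next_generation(living):
    """Apply the rules, looking only at living cells and their neighbors."""
    # For every cell next to a living cell, count its living neighbors.
    counts = Counter(n for cell in living for n in neighbors(cell))

    new_living = set()
    for cell, count in counts.items():
        if cell in living and count in (2, 3):
            new_living.add(cell)   # survival
        elif cell not in living and count == 3:
            new_living.add(cell)   # birth
    return new_living

With this sketch, the blinker test boils down to checking that next_generation({(1, 0), (1, 1), (1, 2)}) comes back as {(0, 1), (1, 1), (2, 1)}, i.e. the vertical line of three cells flips to a horizontal one.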

We ran the test again, and of course, the code was again littered with indentation errors and mismatched parentheses because of how fast I had been typing and moving stuff around.

Then the timer went off. I stayed put while everybody was gathering in the middle to talk about how everything went. Within about 30 seconds I got the last few silly typing glitches fixed, and BAM. The test passed.

I walked over to my partner and told him we solved it.

Then we listened to other teams talk. The conversation centered around naming conventions that people were coming up with, like “sustainable neighborhood” and “four horsemen”.

I raised my hand and asked if anybody had written tests for any of the patterns in game of life beyond the blinker pattern, like tests for the glider pattern, or any of the other patterns shown on the wikipedia page.

Nobody replied to that. Instead the conversation shifted to a discussion of whether or not booleans hidden behind getters and setters were a sign of a bad design.

I think it was while somebody was talking about writing tests for their (nonexistent) display code that my partner leaned over and said “This is a drunkard’s search.” Of course, I didn’t know what that meant, so he explained it. Here’s the wikipedia entry:

Conducting a drunkard’s search is to look in the place that’s easiest, rather than in the place most likely to yield results. Taken from an old joke about a drunkard who loses his car keys while unlocking his car and is found looking under a streetlamp down the road because the light is better, it has been an object of consideration in the social sciences since at least 1964.

Fourth pairing

I asked if anybody wanted to see a python solution, and four of us worked together, with me doing the typing.

First I described how we would need to find the eight neighboring cells of a particular cell. So I wrote a test for that, and then wrote the code to satisfy that test.

Then once that was finished, I described what the blinker looked like, and I wrote a test for that.

Then I reused the approach that we came up with in the third pairing, and then got it working fairly quickly.

At one point I showed the code we had written to one of the people running the event, and he pointed out two things (this is roughly paraphrased, and I hope I am getting these ideas correct):

  1. I had four if-clauses in a single function, and each if-clause tested two independent variables. So that means there were dozens of paths through the function that would have to be tested.
  2. My blinker test could be satisfied by code that just hardcoded the expected results, rather than built an actual game-of-life system. My test didn’t force a complete solution.

Both points illustrate the biggest difference in the TDD mindset versus whatever it is called that I do. (design by intuition?)

The fact that I didn’t have a test for every path that I created didn’t really concern me, because I was focused on getting the blinker pattern to work. If that blinker pattern test failed, then I might go in to my code and make sure that what I wrote was really what I meant, or maybe I’d even throw it all out and start from scratch. But I wouldn’t worry about covering all corners of this version until I was confident I was on the right track.

As for the second point, I use tests to catch errors, not to tell me what to write. I wouldn’t write code that returns a hard-coded list of values just to pass a test, unless I actually thought I could solve the real problem by doing that.

Anyhow, we got everything working pretty quickly this second time. You can see the output from my fourth pairing here.

After you download that code, you can run the tests like this:

$ python life.py
..
--------------------------------------------------------------------
Ran 2 tests in 0.001s

OK

Those two dots mean that my two tests passed. I spent the remainder of the time showing how to use the interactive interpreter and how to write doctests. I’ve tested a few of the patterns from the wikipedia page, and my code seems to do the right thing, but I’m not absolutely convinced that this implementation is correct.

Fifth pairing

I worked with two guys and we used ruby and rspec. I described the algorithm that I had used in the previous two pairings, and we decided to work on building that.

We went with the approach of not writing any more code than was required to satisfy the tests.

So, first, we wrote some tests to instantiate a cell class. Then we wrote some tests that called a neighbors method on the cell class and verified that it returned a list of eight things, and then we verified that the first and last elements in the neighbors list were at the positions we expected them to be at.

Then we started work on the grid class. We wanted to make sure that two different objects with the same values for their x and y attributes would be considered equal. But none of us knew how to redefine the == operator in ruby, and while we were researching this, the time ran out.

Again, I felt like we had fallen into the trap of slicing off trivial subsets of the real problem, and solving them, rather than attacking the real problem.

Commentary

Writing tests to check for errors (which is what I do) is not test-driven development. I daydream about solutions until I find one that I like, and then I build it, and I see if it works by writing tests to cover all the use cases I can think of.

My design comes from idle thoughts and doodles on my notepad, not from the testing.

Final note: I’ve tried my best to keep any generalizations and extrapolations out of this post. I don’t mean for this to be seen as an attack on anyone’s style of work. I’m very, very grateful for everything I’ve learned about how to write tests for software from the TDD community.

how to restart a gunicorn server without leaving vim

I’ve been using the gunicorn WSGI server lately while I work on my next project. gunicorn is fantastic, except for one tiny nuisance: I have to restart it whenever I change my app code. At first, this meant I would switch over to the terminal where gunicorn was running, kill the parent process by hitting CTRL+C, and then start it up again by typing:

gunicorn crazyproject.webapp:make_application

Other frameworks often have a development mode that forks off a helper process that watches the source code folder. When some file changes, the helper tells the server to reload.

I’m not using any framework this time, so I don’t have that feature. And I never really liked having the server restart every time I saved a file anyway. Instead, I want to be able to restart gunicorn easily, but only when I want to, and without leaving my editor.

It turned out to be pretty easy to do.

First I read in the excellent gunicorn docs how to tell gunicorn to reload my app:

How do I reload my application in Gunicorn?

You can gracefully reload by sending HUP signal to gunicorn:

$ kill -HUP masterpid

Next I added a --pid /tmp/gunicorn.pid option to the command that I use to start gunicorn, so that gunicorn would write the parent’s process ID into /tmp/gunicorn.pid.
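So now the command I use to start gunicorn looks something like this:

gunicorn --pid /tmp/gunicorn.pid crazyproject.webapp:make_application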

Now any time I want gunicorn to reload, I can do this little command in a terminal window:

$ kill -HUP `cat /tmp/gunicorn.pid`

Those backticks around cat /tmp/gunicorn.pid tell the shell to do that part of the command first, and then feed the result into the rest of the command.

You can always use ! in vim to run some command-line program. If I had to explain the difference between vim and emacs, the one difference that is most interesting to me is that vim makes it super-easy to send buffer contents to other programs or read buffer contents from other programs, while the people behind emacs seem to think that any external dependency should ultimately be moved into emacs itself. I get the feeling that vim wants to be my text editor, but emacs wants to be my OS.

Anyhow, while I’m in vim, I run that command to restart like this:

:!kill -HUP `cat /tmp/gunicorn.pid`

After using that code for a while, and feeling confident it worked right, I mapped F12 in vim to do that action for me by adding this little thing to the end of my ~/.vimrc:

:map <F12> :!kill -HUP `cat /tmp/gunicorn.pid`<CR>

Now that means I hit F12 when I want to reload gunicorn.

Looking for a few CLE pythoners for a quick project

I can build it myself, but it’s going to take a lot longer. I’m looking for a few people to help me build a geographical search app.

If you’re smart, live near me, and have at least eight hours a week to work, let me know.

This will be for some combination of money up front and equity.

I expect to launch in anywhere from six weeks to three months after the start.

I have no patience for ideologues, jackanapes, or freeloaders.

A new pitz release (1.1.2)

I’m using semantic versioning now, so I bumped from 1.0.7 to 1.1.0 after adding a few tweaks to the command-line interface. Then I discovered a silly mistake in the setup.py file, and released 1.1.1. Then I realized I did the fix wrong, and then released 1.1.2 a few minutes later.

The --quick option

Normally, running pitz-add-task will prompt for a title and then open $EDITOR so I can write a description. After that, pitz will ask for choices for milestone, owner, status, estimate, and tags.

Sometimes I want to make a quick task without getting prompted for all this stuff.

I already had a --no-description option that would tell pitz-add-task to not open $EDITOR for a description. And I already had a --use-defaults option to just choose the default values for milestone, owner, status, estimate, and tags.

But when I just want to make a quick to-do task as a placeholder, writing out all this stuff:

$ pitz-add-task --no-description --use-defaults -t "Fix fibityfoo"

is kind of a drag. So I made a --quick option (also available as -q) that does the same thing as --no-description --use-defaults.
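So the placeholder task above can presumably now be created like this instead:

$ pitz-add-task -q -t "Fix fibityfoo"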

The -1 alias for --one-line-view

This is the typical way that a to-do list looks:

$ pitz-my-todo -n 3
==============================
slice from To-do list for matt
==============================

(3 task entities, ordered by ['milestone', 'status', 'pscore'])

Write new CLI scripts (witz!) to talk to pitz-webapp 9f1c76
matt | paused | difficult | 1.0 | 1
webapp, CLI
Starting up and loading all the pitz data at the beginning of ever...

Experiment with different task summarized views 0f6fee
matt | unstarted | straightforward | 1.0 | 1
CLI
Right now, the summarized view of a task looks a little like this:...

Add more supported URLs to pitz-webapp 295b5f
matt | unstarted | straightforward | 1.0 | 0
webapp
I want to allow these actions through the webapp: * Insert a ne...

Incidentally, notice the -n 3 option limits the output to the first three tasks.

Tasks also have a one-line view:

$ pitz-my-todo -n 3 --one-line-view
==============================
slice from To-do list for matt
==============================

(3 task entities, ordered by ['milestone', 'status', 'pscore'])

Write new CLI scripts (witz!) to talk to pitz-webapp 9f1c76
Experiment with different task summarized views 0f6fee
Add more supported URLs to pitz-webapp 295b5f

Typing out --one-line-view is tedious, so now -1 is an alias that works as well:

$ pitz-my-todo -n 3 -1
==============================
slice from To-do list for matt
==============================

(3 task entities, ordered by ['milestone', 'status', 'pscore'])

Write new CLI scripts (witz!) to talk to pitz-webapp 9f1c76
Experiment with different task summarized views 0f6fee
Add more supported URLs to pitz-webapp 295b5f

Build your own kind of dictionary

This is the video from my PyOhio talk on building objects that can act like dictionaries.

The text and programs from my talk on building your own kind of dictionary are available here on github. You can probably absorb the information most quickly by reading the talk.rst file.

Please let me know if you find spelling errors or goofy sentences that are hard to understand that I should rewrite.

This is the description for the talk:

My talk is based on a project that seemed very simple at first. I wanted an object like the regular python dictionary, but with a few small tweaks:

  • values for some keys should be restricted to elements of a set
  • values for some keys should be restricted to instances of a type

For example, pretend I want a dictionary called favorites, and I want the value for the “color” key to be any instance of my Color class. Meanwhile, for the “movie” key, I want to make sure that the value belongs to my set of movies.
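As a rough illustration of what I mean (a simplified sketch, not one of the implementations from the talk; RestrictedDict, allowed_sets, and allowed_types are names I made up here):

class RestrictedDict(dict):

    def __init__(self, allowed_sets=None, allowed_types=None):
        # allowed_sets maps a key to a set of permitted values.
        # allowed_types maps a key to a required type.
        self.allowed_sets = allowed_sets or {}
        self.allowed_types = allowed_types or {}
        super(RestrictedDict, self).__init__()

    def __setitem__(self, key, value):
        if key in self.allowed_sets and value not in self.allowed_sets[key]:
            raise ValueError(
                '{0!r} is not an allowed value for {1!r}'.format(value, key))
        if (key in self.allowed_types
                and not isinstance(value, self.allowed_types[key])):
            raise TypeError(
                'values for {0!r} must be instances of {1}'.format(
                    key, self.allowed_types[key]))
        super(RestrictedDict, self).__setitem__(key, value)


# Hypothetical usage, assuming a Color class and a set of movies:
# favorites = RestrictedDict(
#     allowed_sets={'movie': set_of_movies},
#     allowed_types={'color': Color})
# favorites['color'] = Color('blue')     # fine
# favorites['movie'] = 'not in the set'  # raises ValueError

One gotcha worth knowing: dict.__init__ and dict.update do not go through __setitem__, which is part of why the real implementations in the talk take more care than this sketch does.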

In the talk, I’ll walk through how I used tests to validate my different implementations until I came up with a winner.

Unlike my talk last year on metaclass tomfoolery, and the year before that on fun with decorators (and decorator factories), I’m hoping to make this talk straightforward and friendly to beginning programmers.

You’ll see:

  • how I use tests to solve a real-world problem
  • a few little gotchas with the super keyword
  • a little about how python works under the hood