About Matt

My name is Matt Wilson and I live in Cleveland Heights, Ohio. I love random emails from strangers, so get in touch! [email protected].

old-school code checklist

You can’t call it old-school code unless a majority of these are true:

  • global vars are all registered at the top of the file, and are used to track state
  • Comments contain author’s initials and a date
  • Last line of the file is just the number 1;
  • Uses LDAP
  • You recognize the dude that wrote it because you’ve seen his email address at the bottom of some man pages

Obscure python syntax error

Been writing python for a long time. When I wrote this code, I could not figure out why I was getting a syntax error.


d1 = dict(
    user_id=99,
    display_name='Matt Wilson',)

d2 = dict(
    email_address='[email protected]',
    **d1,)

It is the trailing comma after **d1. That comma is not OK, which is really weird, because the trailing comma after display_name='Matt Wilson' is just fine.
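For reference, here is the same d2 with just that one comma removed, which parses fine:

d2 = dict(
    email_address='[email protected]',
    **d1)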

We need to make our conference presentations more accessible

When I say “more accessible” I mean any of these:

  • useful for people with hearing / reading / seeing / cognitive / anything differences
  • approachable for people at many different skill levels
  • not intimidating. not insulting or offensive or exclusive.
  • useful for people that are studying the material afterward
  • Help me out here — what are some more examples?

I’m not an expert on how to do this, but I know this is a problem. I want help figuring out solutions, so please get in touch with me if you can help with that. I will ignore defenders of the status quo.

More detail on the problem

We pump a lot of energy into making really cool presentations for conferences, and then, when the conference is over, that great content usually just disappears.

Or if it doesn’t disappear, we don’t do a good job of getting it out where more people can benefit from our work. Maybe there’s a zipfile with our slides on a page for our talk on the conference website afterwards.

Maybe the slides (without most of the commentary) will show up online in one of those flash widgets.

Or if you’re really lucky, a video recording of the presentation will show up. And that’s great. For example, Next Day Video records and edits the PyOhio presentations, and does fantastic work, but just a video is not sufficient for all audiences.

A video recording is great for some things, but not for others. It isn't easy to copy a URL mentioned in a video, for example, or to copy-paste a block of code, or to bookmark something 25 minutes in.

Consider that for every person in your audience, over the next few years, there are probably at least ten, a hundred, or maybe even a thousand people that will be doing searches online for the facts you're covering right now.

A lot of those people might be brand new to the language or library. A lot of those people might not be native English speakers. And maybe they’re on slow internet connections too.

A few ideas to make this better

I have a few ideas for what to do, listed below, but I'm more interested in getting feedback from readers. So please, let me know what you think we should do.

Anyhow, my ideas:

  • Require presenters to submit something like a paper for their proposal, not just a stack of LOLCATS slides.
  • Bundle the materials from the presenters as soon as possible and get those out on the web. The SAS Global Forum does this, and it is great. I have read nearly all the papers presented at their conferences, because they make it so easy to get their material, and because everybody is writing actual papers, not doing stream of consciousness performance art.
  • Use open formats for text. Avoid PDFs, PowerPoint slide decks, and similar stuff like the plague. Take SEO to heart. Not because we want to sell advertising, but because we want to share our knowledge.
  • Encourage attendees to critically react to the presentations. Maybe even consider the presentation material as open-source. If a presentation contains a code sample that confuses people in the audience, then people should rewrite it as something more intuitive and more obvious.

Announcing carp!

Carp is for code templating.

You know, for stuff like starting HTML files with a bunch of predefined javascript libraries, or for starting a new python project without having to write a setup.py file from scratch.

You can use carp for a single file or for a tree of files and folders.

It makes it easy to take a folder of code files and subfolders, replace parts of text you want to be parameterized, and then store it. Later, you render that template by passing in the required parameters.
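I won't reproduce carp's actual commands here, but the general idea looks something like this sketch, which uses python's string.Template to render a stored template folder. The render_tree function, the file layout, and the parameter names are made up just for illustration:

import os
from string import Template

def render_tree(template_dir, output_dir, params):
    # Walk the stored template folder and write out a rendered copy,
    # substituting $placeholders in every file with the params dict.
    for dirpath, dirnames, filenames in os.walk(template_dir):
        rel = os.path.relpath(dirpath, template_dir)
        target = os.path.normpath(os.path.join(output_dir, rel))
        if not os.path.isdir(target):
            os.makedirs(target)
        for filename in filenames:
            text = open(os.path.join(dirpath, filename)).read()
            rendered = Template(text).substitute(params)
            open(os.path.join(target, filename), 'w').write(rendered)

# For example, stamp out a new python project from a stored template:
# render_tree('templates/python-project', 'mynewproject',
#             dict(project_name='mynewproject', author='Matt Wilson'))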

Get the code at github here.

Customize your psql terminal

Here’s the contents of my ~/.psqlrc file:

-- Report time used for each query.
\timing

-- Set a cute border around query results.
\pset border 2

-- Set pager to less, rather than more.
\setenv PAGER /usr/bin/less

-- Monkey up the prompt.

-- This one shows user@host:database
-- \set PROMPT1 '%n@%m:%/%# '

-- Same thing, but with pretty colors, across several lines.
\set PROMPT1 '\n%[%033[0;36;29m%]%n@%m:%/%[%033[0m%]\n%# '

Now I get a psql prompt that looks like this:

user@localhost:dbname
# select * from current_timestamp;
+-------------------------------+
|              now              |
+-------------------------------+
| 2012-10-23 12:22:38.263557-04 |
+-------------------------------+
(1 row)

Time: 1.907 ms

python: allow only one running instance of a script

UPDATE: Thanks so much for all the feedback! I’m going to look at using flock as well, and I’ll write that up soon.

Imagine you have a script that archives a bunch of data by copying it to another box. You use cron to schedule that script to run every hour, because normally, the script finishes in about thirty minutes or so.

But every so often, maybe when your application gets really popular, the cron job takes more than an hour. Maybe it takes three hours this one time.

And during that time, cron starts up two more copies of your script. That can cause all sorts of havoc, where two or more scripts each try to modify the same file, for example.

In this scenario, you need a way to prevent those second and third (and maybe fourth and fifth, etc) scripts from starting as long as one is already going.

It would be very helpful if, when the script started, it first checked whether another process was already running. If one is already running, then this new script should just immediately exit. But if no other script is running, then this script should get to work.

Here’s a simple method for doing that:

1. When the script starts, the first thing it does is look for a file in /tmp named something like /tmp/myscript.pid.

2. If that file exists, then the script reads that file. The file holds a process ID (pid). The script now checks whether any process with that pid is running.

3. If there is not a process running with this pid, then probably what happened was the old script crashed without cleaning up this pid file. So, this script should get to work. But if there is a process running with that pid, then there is already a running instance of this script, and so this script should just immediately exit. There’s a tiny risk with this approach that I’ll discuss at the end of this post.

4. Depending on what happened in step 3, the script should exit at this point, or it should get to work. Before the script gets to the real work though, it should write its own process ID into /tmp/myscript.pid.

That's the pseudocode, and now here are two python functions to help make it happen:

import os

def pid_is_running(pid):
    """
    Return pid if pid is still going.

    >>> import os
    >>> mypid = os.getpid()
    >>> mypid == pid_is_running(mypid)
    True
    >>> pid_is_running(1000000) is None
    True
    """

    try:
        os.kill(pid, 0)

    except OSError:
        return

    else:
        return pid

def write_pidfile_or_die(path_to_pidfile):

    if os.path.exists(path_to_pidfile):
        pid = int(open(path_to_pidfile).read())

        if pid_is_running(pid):
            print("Sorry, found a pidfile!  Process {0} is still running.".format(pid))
            raise SystemExit

        else:
            os.remove(path_to_pidfile)

    open(path_to_pidfile, 'w').write(str(os.getpid()))
    return path_to_pidfile

And here’s a trivial script that does nothing but check for a pidfile and then sleep for a few seconds:

import time

if __name__ == '__main__':

    write_pidfile_or_die('/tmp/pidfun.pid')
    time.sleep(5)  # placeholder for the real work
    print('process {0} finished work!'.format(os.getpid()))

Try running this in two different terminals, and you’ll see that the second process immediately exits as long as the first process is still running.

In the worst case, this isn’t perfect

Imagine that the first process started up and the operating system gave it process ID 99. Then imagine that the process crashed without cleaning up its pidfile. Now imagine that some completely different process started up, and the operating system happens to recycle that process ID 99 again and give that to the new process.

Now, when our cron job comes around, and starts up a new version of our script, then our script will read the pid file and check for a running process with process ID 99. And in this scenario, the script will be misled and will shut down.

So, what to do?

Well, first of all, understand this is an extremely unlikely scenario. But if you want to prevent this from happening, I suggest you make two tweaks:

1. Do your absolute best to clean up that pidfile. For example, use python’s sys.excepthook or atexit functions to make sure that the pid file is gone.

2. Write more than just the process ID into the pid file. For example, you can use ps and then write the process name to the pid file. Then change how you check if the process exists. In addition to checking for a running process with the same pid, check for the same pid and the same data returned from ps for that process.
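Here's a rough sketch of what both tweaks might look like together, bolted onto the write_pidfile_or_die function from above. The name_of_process helper and the write_pidfile_or_die_2 name are made up for this sketch; it shells out to ps to grab the command name, and keep in mind that atexit can't help if the process is killed hard, so the cleanup is still best-effort:

import atexit
import os
import subprocess

def name_of_process(pid):
    # Ask ps for the command name of this pid; empty string if the pid is gone.
    try:
        return subprocess.check_output(
            ['ps', '-p', str(pid), '-o', 'comm=']).decode().strip()
    except subprocess.CalledProcessError:
        return ''

def write_pidfile_or_die_2(path_to_pidfile):

    if os.path.exists(path_to_pidfile):
        contents = open(path_to_pidfile).read().split(None, 1)
        pid = int(contents[0])
        name = contents[1].strip() if len(contents) > 1 else ''

        # Tweak 2: only bail out if that pid is running AND it still has
        # the same command name we recorded in the pidfile.
        if pid_is_running(pid) and name_of_process(pid) == name:
            print("Sorry, found a pidfile!  Process {0} is still running.".format(pid))
            raise SystemExit

        else:
            os.remove(path_to_pidfile)

    me = os.getpid()
    open(path_to_pidfile, 'w').write('{0} {1}'.format(me, name_of_process(me)))

    # Tweak 1: do our best to remove the pidfile however the script exits.
    def cleanup():
        if os.path.exists(path_to_pidfile):
            os.remove(path_to_pidfile)
    atexit.register(cleanup)

    return path_to_pidfile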

Check back soon and I'll likely whip up some kind of simple library that offers a context manager that handles the extreme case described above.
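In the meantime, here's a very rough sketch of the shape that context manager might take, just reusing write_pidfile_or_die from above. The one_instance name is made up, and this version only really covers the cleanup side (tweak 1); a real library would want the stronger check from tweak 2 as well:

import contextlib
import os

@contextlib.contextmanager
def one_instance(path_to_pidfile):
    # Exits (via SystemExit inside write_pidfile_or_die) if another copy
    # is already running, and removes the pidfile on the way out, even
    # when the work inside the with-block raises an exception.
    write_pidfile_or_die(path_to_pidfile)
    try:
        yield
    finally:
        if os.path.exists(path_to_pidfile):
            os.remove(path_to_pidfile)

# Usage:
# with one_instance('/tmp/pidfun.pid'):
#     do_the_real_work()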