I just wrote this up on my biz site.
Upload files directly to Rackspace Cloudfiles from the browser with AJAX PUT
I hope it helps somebody out!
I’ve been writing Python for a long time, but when I wrote this code, I could not figure out why I was getting a syntax error.
d1 = dict(
    user_id=99,
    display_name='Matt Wilson',)

d2 = dict(
    email_address='[email protected]',
    **d1,)
It is the trailing comma after **d1. That one is not OK. Which is really weird, because the trailing comma after display_name='Matt Wilson' is just fine.
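The fix is simply to drop that comma:

d2 = dict(
    email_address='[email protected]',
    **d1)

(As far as I can tell, recent versions of Python 3 do accept a trailing comma after **d1 in a call, so whether you hit this depends on your interpreter.)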
When I say “more accessible” I mean any of these:
I’m not an expert on how to do this, but I know this is a problem. I want help figuring out solutions, so please get in touch with me if you can help with that. I will ignore defenders of the status quo.
We pump a lot of energy into making really cool presentations for conferences, and then, when the conference is over, that great content usually just disappears.
Or if it doesn’t disappear, we don’t do a good job of getting it out where more people can benefit from our work. Maybe there’s a zipfile with our slides on a page for our talk on the conference website afterwards.
Maybe the slides (without most of the commentary) will show up online in one of those flash widgets.
Or if you’re really lucky, a video recording of the presentation will show up. And that’s great. For example, Next Day Video records and edits the PyOhio presentations, and does fantastic work, but just a video is not sufficient for all audiences.
A video recording is great for some things, but not for others. It isn’t easy to copy a URL mentioned in a video, for example, or to copy-paste a block of code, or to bookmark something 25 minutes in.
Consider that for every person in your audience, over the next few years, there are probably ten, a hundred, or maybe even a thousand people who will be doing searches online for the facts you’re covering right now.
A lot of those people might be brand new to the language or library. A lot of those people might not be native English speakers. And maybe they’re on slow internet connections too.
I have a few ideas for what to do, listed below, but I’m more interested in getting feedback from readers. So please, let me know what you think we should do.
Anyhow, my ideas:
Carp is for code templating.
You know, for stuff like starting HTML files with a bunch of predefined javascript libraries, or for starting a new python project without having to write a setup.py file from scratch.
You can use carp for a single file or for a tree of files and folders.
It makes it easy to take a folder of code files and subfolders, replace parts of text you want to be parameterized, and then store it. Later, you render that template by passing in the required parameters.
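To make the idea concrete, here’s the same concept in miniature, using Python’s built-in string.Template rather than carp’s actual API (this is just an illustration, not how carp itself works):

from string import Template

# A tiny stand-in for a stored template file.
setup_py = Template("""\
from setuptools import setup

setup(
    name='$project_name',
    version='0.0.1',
)
""")

# Render the template by passing in the required parameter.
print(setup_py.substitute(project_name='myproject'))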
Get the code at github here.
UPDATE: Thanks so much for all the feedback! I’m going to look at using flock as well, and I’ll write that up soon.
Imagine you have a script that archives a bunch of data by copying it to another box. You use cron to schedule that script to run every hour, because normally, the script finishes in about thirty minutes or so.
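In crontab terms, that schedule might look something like this (the script path is made up):

0 * * * * /usr/local/bin/archive-data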
But every so often, maybe when your application gets really popular, the cron job takes more than an hour. Maybe it takes three hours this one time.
And during that time, cron starts up two more copies of your script. That can cause all sorts of havoc, where two or more scripts each try to modify the same file, for example.
In this scenario, you need a way to prevent those second and third (and maybe fourth and fifth, etc) scripts from starting as long as one is already going.
It would be very helpful if, when the script started, it first checked whether another instance was already running. If one is, then this new script should just immediately exit. But if no other instance is running, then this script should get to work.
Here’s a simple method for doing that:
1. When the script starts, the first thing it does is look for a file in /tmp named something like /tmp/myscript.pid.
2. If that file exists, then the script reads it. The file holds a process ID (pid). The script now checks whether any process with that pid is running.
3. If there is no process running with that pid, then probably the old script crashed without cleaning up its pid file, so this script should get to work. But if there is a process running with that pid, then an instance of this script is already running, and this new script should just immediately exit. There’s a tiny risk with this approach that I’ll discuss at the end of this post.
4. Depending on what happened in step 3, the script should exit at this point, or it should get to work. Before the script gets to the real work though, it should write its own process ID into /tmp/myscript.pid.
That’s the pseudocode; now here are two Python functions to help make it happen:
import os

def pid_is_running(pid):
    """
    Return pid if pid is still going.

    >>> import os
    >>> mypid = os.getpid()
    >>> mypid == pid_is_running(mypid)
    True
    >>> pid_is_running(1000000) is None
    True
    """

    try:
        # Signal 0 doesn't actually send a signal; it just checks
        # whether the process exists.
        os.kill(pid, 0)

    except OSError:
        return

    else:
        return pid

def write_pidfile_or_die(path_to_pidfile):

    if os.path.exists(path_to_pidfile):
        pid = int(open(path_to_pidfile).read())

        if pid_is_running(pid):
            print("Sorry, found a pidfile! Process {0} is still running.".format(pid))
            raise SystemExit

        else:
            # Stale pidfile left behind by a crashed run; clean it up.
            os.remove(path_to_pidfile)

    open(path_to_pidfile, 'w').write(str(os.getpid()))
    return path_to_pidfile
And here’s a trivial script that does nothing but check for a pidfile and then sleep for a few seconds:
import time

if __name__ == '__main__':

    write_pidfile_or_die('/tmp/pidfun.pid')
    time.sleep(5) # placeholder for the real work
    print('process {0} finished work!'.format(os.getpid()))
Try running this in two different terminals, and you’ll see that the second process immediately exits as long as the first process is still running.
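If the script above is saved as pidfun.py, a session might look something like this (the pid will be different on your machine):

$ python pidfun.py &
$ python pidfun.py
Sorry, found a pidfile! Process 12345 is still running.
$ # ...about five seconds later...
process 12345 finished work!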
Imagine that the first process started up and the operating system gave it process ID 99. Then imagine that the process crashed without cleaning up its pidfile. Now imagine that some completely different process started up, and the operating system happens to recycle that process ID 99 again and give that to the new process.
Now, when our cron job comes around, and starts up a new version of our script, then our script will read the pid file and check for a running process with process ID 99. And in this scenario, the script will be misled and will shut down.
So, what to do?
Well, first of all, understand this is an extremely unlikely scenario. But if you want to prevent this from happening, I suggest you make two tweaks:
1. Do your absolute best to clean up that pidfile. For example, use Python’s sys.excepthook or atexit functions to make sure that the pid file is gone (there’s a sketch of this right after this list).
2. Write more than just the process ID into the pid file. For example, you can use ps and then write the process name to the pid file. Then change how you check if the process exists. In addition to checking for a running process with the same pid, check for the same pid and the same data returned from ps for that process.
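Here’s a rough sketch of the first tweak, using atexit (and reusing the /tmp/pidfun.pid path from the example above):

import atexit
import os

def remove_pidfile(path_to_pidfile):
    # Best-effort cleanup; the file may already be gone.
    if os.path.exists(path_to_pidfile):
        os.remove(path_to_pidfile)

# Register the cleanup right after writing the pidfile. atexit handlers
# run on normal exits and after unhandled exceptions, but not if the
# process is killed outright (e.g. SIGKILL).
write_pidfile_or_die('/tmp/pidfun.pid')
atexit.register(remove_pidfile, '/tmp/pidfun.pid')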
Check back soon and I’ll likely whip up some kind of simple library that offers a context manager that handles the extreme case described above.
Ever notice how, when your script dies because of some uncaught error, you don’t get that error in your log files? This post walks through how to make sure that you log that uncaught exception.
This is a trivial script that will raise an uncaught exception (code available here):
$ cat rgl/kaboom1.py
# vim: set expandtab ts=4 sw=4 filetype=python:

import logging

def f():
    return g()

def g():
    return h()

def h():
    return i()

def i():
    1/0

if __name__ == '__main__':

    logging.basicConfig(
        level=logging.DEBUG,
        filename='/tmp/kaboom1.log',
        filemode='w')

    logging.debug('About to do f().')

    f()
Notice the helpful traceback:
$ python rgl/kaboom1.py
Traceback (most recent call last):
  File "rgl/kaboom1.py", line 28, in <module>
    f()
  File "rgl/kaboom1.py", line 9, in f
    return g()
  File "rgl/kaboom1.py", line 13, in g
    return h()
  File "rgl/kaboom1.py", line 17, in h
    return i()
  File "rgl/kaboom1.py", line 21, in i
    1/0
ZeroDivisionError: integer division or modulo by zero
Unfortunately, that helpful traceback does not show up in the output logs!
$ cat /tmp/kaboom1.log
DEBUG:root:About to do f().
This diaper pattern is a popular solution:
try:
    f()
except Exception as ex:
    logging.exception(ex)
    raise
Make sure you re-raise the exception, otherwise your program will end with a zero return code.
If you do any of these, you probably won’t like what you get:
logging.error(ex)
logging.error(str(ex))
In both cases, you are just turning the exception into a string. You won’t see the traceback and you won’t see the exception type.
Instead of those, make sure you do one of these:
logging.exception(ex)
# this is exactly what logging.exception does inside
logging.error(ex, exc_info=1)
# sets a higher log level than error
logging.critical(ex, exc_info=1)
For the last two, without that exc_info=1 parameter, you won’t see the traceback in your logs. You’ll just see the message from the exception.
Instead of nesting your code inside a try-except clause, you can replace sys.excepthook, the handler Python calls for any uncaught exception.
The kaboom2.py script has this extra code:
import sys
import traceback

def log_uncaught_exceptions(ex_cls, ex, tb):
    logging.critical(''.join(traceback.format_tb(tb)))
    logging.critical('{0}: {1}'.format(ex_cls, ex))

sys.excepthook = log_uncaught_exceptions
And here are the results:
$ python rgl/kaboom2.py
$ cat /tmp/kaboom2.log
DEBUG:root:About to do f().
CRITICAL:root:  File "rgl/kaboom2.py", line 39, in <module>
    f()
  File "rgl/kaboom2.py", line 9, in f
    return g()
  File "rgl/kaboom2.py", line 13, in g
    return h()
  File "rgl/kaboom2.py", line 17, in h
    return i()
  File "rgl/kaboom2.py", line 21, in i
    1/0
CRITICAL:root:<type 'exceptions.ZeroDivisionError'>: integer division or modulo by zero
Incidentally, sys.excepthook preserves the non-zero return code.
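You can check that from the shell:

$ python rgl/kaboom2.py; echo $?
1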
Also incidentally, you can use sys.excepthook for all sorts of fun stuff, like making it fire off pdb when stuff blows up.
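Here’s a minimal sketch of that trick:

import pdb
import sys
import traceback

def excepthook_with_pdb(ex_cls, ex, tb):
    # Print the traceback like normal, then drop into the debugger
    # at the frame where the exception was raised.
    traceback.print_exception(ex_cls, ex, tb)
    pdb.post_mortem(tb)

sys.excepthook = excepthook_with_pdb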
I just stumbled on an old email I wrote a while ago to a prospective employer. I’m posting it because I’m curious how much it matches other people’s preferences.
> We didn’t get a chance to talk much about culture, but there’s just a few questions that I’d love to hear more from you on:
> -What kind of environment do you want to work in? What kind of environment do you work best in?
I’m happiest when I understand the big picture and can participate in the conversation about what to build. I know some programmers hate vague requests, but I thrive on talking with clients, especially when they don’t know a damn thing about how computers work.
I start with some rough sketches and then iteratively, we design the product together.
> -How do you like to be managed? How do you manage others?
The best managers I’ve had made it seem effortless. I knew my work mattered, I got a chance to learn things and work on cool projects, and when we had meetings they felt more like kids designing secret hideouts rather than a trip to the principal’s office.
How did they do that? I don’t really know, but I think it was a function of what we were working on, who we worked with, and how we did it.
When I lead teams, before each day starts, I spend a good amount of time by myself, planning what I want each person to do.
Then I’ll tell each person what I expect them to get done. That’s a starting point though — I’m happy to have a conversation and then reassign and reorganize tasks.
After that, I try to handle as much of the tedious stuff as possible, so that my team can think deeply about big problems. That means I’m happy to handle tasks like changing labels on buttons, or debugging some non-core system, or replying to clueless customers.
I don’t think of management as a reward for paying dues. I think of running a team of developers as sort of like programming, but at a much higher, more abstract level.
The other thing I aim to do is make myself redundant. There’s nothing worse than feeling like you can never take a vacation because everything will fall apart. I encourage everyone else to share knowledge and cross-train each other, even if it means it costs us throughput in the short run.
> -Can you tell me a bit more about remote working in your experience? What’s worked well / not well?
Emailed screenshots can save a lot of time when there’s some layout bug that needs to be fixed. And I like IM for discrete questions, like “what is the URL to the testing box”.
But really, I’m a big fan of talking on the phone. I find that a 15-minute phone conversation where both people are totally focused is often way more efficient than a long exchange where each person is just barely paying attention.
Generally, I think regularly talking through stuff is key to keeping everyone invested and focused on the real goal.
That’s it!
A friend has a fork of my project and sent me a pull-request. Without doing any eyeballing at all, I did this:
$ git fetch XXX
$ git merge XXX/master
Git ran a fast-forward merge and now all those commits are in my code. Then I ran some tests and KABOOM.
After spending a while digging around and seeing just how much needed to be fixed, I decided it would be better to send an email with the test errors back to my friend rather than try to fix them myself. I’ve got my own work to do, after all.
So I sent the email, but now I had a git checkout with all those commits.
And since this was a fast-forward merge, git didn’t create a commit for the merge.
Incidentally, today I learned that you can force git to make an explicit commit for fast-forward merges like this:
$ git merge --no-ff XXX/master
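Relatedly, I could have eyeballed the incoming commits before merging, with something like this:

$ git fetch XXX
$ git log --oneline master..XXX/master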
I went to #git on irc.freenode.org, where I always go, and I explained the situation, and then was told to do these commands:
$ git reflog show master
$ git reset --keep master@{1}
And it worked! All the foreign commits are now gone.
But what just happened?
I’m not absolutely certain, but it seems like the git reflog show master command shows the history of what my local master branch has pointed to over time. This is what the top lines showed for me:
cce8252 master@{0}: merge XXX/master: Fast-forward
08c8f50 master@{1}: commit: Cleaning up the my-account pages
5526212 master@{2}: commit: Deleted a handler that was never getting used
This is different than git log. This is talking about the state of my local master branch over time.
Then, the next command, git reset --keep master@{1}, is telling git to reset the checkout to look like what master looked like one state in the past.
Like I said, I’m still not sure I understand this, but I plan to study it more.
I find code like this really confusing:
if a and b and c:
    if d or e:
        return True
    elif f and g:
        return True
    else:
        return False
Furthermore, updating those rules is nasty. I’m very likely to get something wrong, especially when “if a” is really some elaborate method call with several parameters.
It gets a little more obvious when I flatten it out like this:
if a and b and c and (d or e or (f and g)):
    return True
else:
    return False
That looks OK for this contrived example, but imagine I need to stick in another clause that says also return True when a and g are True. It ain’t so easy to arrange the code any more.
Over the years, I’ve run into this situation a lot. Usually, I’m doing something like registering a new user, and I want to rule out that this new user is already in the database. So I have to look up the user on a variety of fields, like maybe email address, their user name, their mobile number, etc.
When I can boil the action down into something like “do this, unless any of these scenarios apply” I write the code in this format:
# Beginning of the gauntlet

if not a:
    return False

if not b:
    return False

if not c:
    return False

if not (d or e):
    return False

# End of the gauntlet

return True
I put the fastest reasons to disqualify the action at the beginning. The really elaborate stuff belongs at the bottom.
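To make that concrete, here’s roughly what the new-user registration example looks like as a gauntlet (the lookup helpers here are made up):

def ok_to_register(email_address, username, mobile_number):

    # Cheapest disqualifiers first.
    if user_exists_with_email(email_address):
        return False

    if user_exists_with_username(username):
        return False

    # More expensive lookup goes last.
    if user_exists_with_mobile(mobile_number):
        return False

    # Survived the gauntlet.
    return True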