I just wrote this up on my biz site.
Upload files directly to Rackspace Cloudfiles from the browser with AJAX PUT
I hope it helps somebody out!
This is all over the intarweb right now.
I’m betting that this is an ad for the monitoring software she supposedly used to track her boss’s time on farmville.
I think we can all agree that invasions of privacy never looked cuter!
I wrote this post on the jQuery mailing list and nobody replied, so I’m pasting it here. I could really use some advice.
I’m using modal dialogs and I love them, but I haven’t found a really elegant way to handle actions in the dialog window that require changes to the parent page.
Here’s an example. I have a monthly calendar page that lists employee names on the days they are supposed to work. Clicking on an employee name opens a “shift detail” page in a modal dialog.
That shift detail page has more information like the specific tasks planned for the day, the start time and stop time of the shift, etc. From this shift detail screen, I can remove this particular employee from the schedule by submitting an AJAX POST from this popup.
After I remove the employee, I need to update both the popup window and the original page that hosted the link to the popup window. Right now I do this by adding a callback that fires when the AJAX POST succeeds. That callback then updates both pages. The callback is named “after_remove_employee”.
This system gets really nasty when I use the “shift detail” popup on different screens. For example, in addition to the monthly view, I also have a weekly view with more information. So after an employee is removed from the schedule on the weekly view, I need to do some different things in the callback.
Right now, the way I handle this is that I define the same callback twice. I define “var after_remove_employee = function (data) {…}” on the weekly view to do what it needs there, and then I define it differently on the monthly view.
I’ve simplified the problem to help explain it. In reality, I have lots of different popups on lots of different pages, and in each popup, there are many different possible actions.
I’m sure I’m not the only one that’s been in this scenario. What is an elegant solution?
I’m thinking about using custom events. So, the callback after a successful AJAX POST would just fire an “employee removed” event, and everybody subscribed would get a reference to the event object and do whatever they want.
However, I’ve never used JS events before, and I don’t know if this is even possible.
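For what it’s worth, here is roughly what I’m picturing. The event name, selectors, and URLs below are placeholders I made up just to illustrate the idea:

// Each page binds its own handler for a made-up "employee_removed" event.

// monthly view
$(document).bind('employee_removed', function (event, data) {
    // refresh only what the monthly calendar cares about
    $('#monthly-calendar').load('/calendar/month');
});

// weekly view
$(document).bind('employee_removed', function (event, data) {
    $('#weekly-calendar').load('/calendar/week');
});

// Inside the shift-detail popup, the AJAX success callback just
// announces what happened; it doesn't know or care who is listening.
$.post('/shifts/remove_employee', {shift_id: 123}, function (data) {
    $(document).trigger('employee_removed', [data]);
});

If something like that works, each view only has to define its own handler instead of redefining the same global callback everywhere.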
Please, any feedback is welcome.
So, I’ve been ignoring all the messages to upgrade from WordPress 2.5. And then I got some phishing PHP code embedded on my server.
Then I rebooted, and the whole filesystem was gone.
So I lost the last few posts. Maybe I can rebuild them. Maybe I can’t. In either scenario, I probably won’t.
I have a form on my site that lets people choose a start date and a stop date. Then I show statistics for that date range. I wrote a FormEncode schema to verify that the start date is before the stop date.
The documentation on schema validators is fairly sparse, so I’m publishing this because it might help somebody else out.
# This is in a file named formencodefun.py
from formencode import Schema
from formencode.validators import DateConverter, FancyValidator, Invalid

class DateCompare(FancyValidator):
    """Chained validator: make sure start_date comes before stop_date."""

    messages = dict(invalid="Start date must be before stop date")

    def validate_python(self, field_dict, state):
        start_date = field_dict['start_date']
        stop_date = field_dict['stop_date']
        if start_date > stop_date:
            msg = self.message('invalid', state)
            # Attach the error to the stop_date field so the form
            # framework knows where to display it.
            raise Invalid(msg, field_dict, state,
                          error_dict=dict(stop_date=msg))

class MySchema(Schema):
    start_date = DateConverter()
    stop_date = DateConverter()
    # Chained validators run after the individual field validators.
    chained_validators = [DateCompare()]
>>> from formencodefun import MySchema
>>> s = MySchema()
>>> d1 = {'start_date':'11-02-2008', 'stop_date':'11-15-2008'}
>>> d2 = {'start_date':'11-15-2008', 'stop_date':'11-02-2008'}
>>> s.to_python(d1)
{'stop_date': datetime.date(2008, 11, 15), 'start_date': datetime.date(2008, 11, 2)}
>>> s.to_python(d2)
------------------------------------------------------------
Traceback (most recent call last):
File "
File "/home/matt/virtualenvs/scratch/lib/python2.5/site-packages/FormEncode-1.1-py2.5.egg/formencode/api.py", line 400, in to_python
value = tp(value, state)
File "/home/matt/virtualenvs/scratch/lib/python2.5/site-packages/FormEncode-1.1-py2.5.egg/formencode/schema.py", line 200, in _to_python
new = validator.to_python(new, state)
File "/home/matt/virtualenvs/scratch/lib/python2.5/site-packages/FormEncode-1.1-py2.5.egg/formencode/api.py", line 403, in to_python
vp(value, state)
File "formencodefun.py", line 18, in validate_python
error_dict=dict(stop_date=msg))
Invalid: Start date must be before stop date
Notice that when I run s.to_python(d1), I get a dictionary back with the values for start_date and stop_date replaced with datetime.date objects.
Then when I run my schema on d2, where the start_date is after the stop_date, my schema raises an Invalid exception. In a web framework like TurboGears, an exception handler will catch that exception, take the error dictionary, redraw the form, and print my error message.
Notice that the DateConverters first take my strings and turn them into datetime.date objects before the test in DateCompare. FormEncode runs the chained validators after it runs the individual validators.
In this case, I just want to make sure that the start date precedes the stop date. I have written other validators that add extra keys into the field dict or change the values, but I want to keep this example simple.
If the first DateConverters fail, then the chained validators never run:
>>> s.to_python({'start_date':'UNPARSEABLE', 'stop_date':'11-20-2008'})
------------------------------------------------------------
Traceback (most recent call last):
File "
File "/home/matt/virtualenvs/scratch/lib/python2.5/site-packages/FormEncode-1.1-py2.5.egg/formencode/api.py", line 400, in to_python
value = tp(value, state)
File "/home/matt/virtualenvs/scratch/lib/python2.5/site-packages/FormEncode-1.1-py2.5.egg/formencode/schema.py", line 197, in _to_python
error_dict=errors)
Invalid: start_date: Please enter the date in the form mm/dd/yyyy
When my DateCompare validator is run, I can be confident that the objects with the keys start_date and stop_date in the field_dict have already been converted to datetime.date objects.
In summary, FormEncode is awesome, but I have spent a lot of time beating my head against the wall trying to learn how to use it.
Here’s the problem: when you meet somebody, you can exchange business cards, and maybe that business card has your Facebook or LinkedIn URL.
Then later, that person can send you an invite or a friend request based on the URL you gave them.
It would be nicer to close this gap, so that at the moment you meet somebody, you can instantly “friend” each other on the spot, maybe with a quick text message exchange.
This ain’t rocket science. Just a faster way to share contact information. Does this already exist?
Instead of using SMS, the system could also use HTTP posts, but not every phone has internet access. In addition to just using my phone number, maybe the other person and I could both agree to send in a short password, which must match before the system links the two accounts.
I’ve been daydreaming about this for a while. I took some time to write out my thoughts. They’re still half-baked.
Blogs and RSS feeds are pretty good. I don’t have to manually go to sites. My reader polls the sites I subscribe to and it pulls the feeds. But the situation could be a lot better.
Feed readers don’t work all that well offline. Sure, maybe the RSS feed itself is downloaded, but images won’t likely be pulled down.
Also, polling is kind of goofy. It would be nicer to use some kind of pub-sub framework where I get notified.
RSS feeds usually only store recent stories.
Very often I find a great blog that has dozens of stories. I would love to be able to download the entire blog for offline viewing.
What about something like Google Gears? I know of exactly one blog that actually uses it in this context. I would like to think there is a solution to this problem that doesn’t require building C++ extensions to the browser.
This section is based on my experiences with WordPress and Blogger. Obviously, publishing content on a remote site requires an internet connection to that remote site, but there is no real reason that I should need an internet connection to preview the rendering of my content.
Also, there’s no obvious way I can integrate my source control tools with my blog engine.
Several times I’ve started an article on my laptop, uploaded it as a draft to my server, worked on it on the server, then lost my internet connection and had to go back to an out-of-date draft on my laptop to continue working.
I can write an article much more quickly using simplified markup, and I can be pretty certain that it will render into valid HTML. There are a few plugins for WordPress that support writing with Markdown, but they require using the WordPress text editor. Sure, I could copy and paste from my real editor, but that’s less than ideal.
Take these ingredients: posts written as plain text files in a simplified markup like Markdown, a script that renders them into HTML and RSS, and a git repository to keep track of everything.
And optionally: a remote server running a webserver that serves the rendered output.
Here’s a simple example: I write and preview a post on my laptop with no internet connection, commit it, and push it to the remote repository whenever I’m back online.
On the remote git repository, all the rendered HTML, RSS, etc. would be available for cloning, and the webserver lets people read my blog the old-fashioned way.
WordPress has other features like being able to navigate through archives, or select stories by tags, or send updates to twitter, etc. I think all of these could be solved somehow during the publishing phase.
For example, navigation through archives doesn’t really require any scripting. I just need to generate indexes for every date range.
Tag-based navigation also doesn’t really require running:
SELECT POSTS.*
FROM POSTS, POST_TAGS, TAGS
WHERE POSTS.ID = POST_TAGS.POST_ID
AND POST_TAGS.TAG_ID = TAGS.ID
AND TAGS.NAME = 'some inoffensive tag name';
It would be sufficient to just regenerate indexes for every tag after each post during the publishing phase.
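The grouping step is trivial; here’s a sketch in JavaScript just to show the shape of it (the post objects and the output path are things I made up). Each entry in the result would get rendered to a static page like tags/<tag>/index.html during publishing:

// Sketch only: group posts by tag so each tag gets its own static index.
function buildTagIndexes(posts) {
    var indexes = {};   // tag name -> list of posts
    for (var i = 0; i < posts.length; i++) {
        var tags = posts[i].tags;
        for (var j = 0; j < tags.length; j++) {
            if (!indexes[tags[j]]) {
                indexes[tags[j]] = [];
            }
            indexes[tags[j]].push(posts[i]);
        }
    }
    return indexes;
}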
WordPress allows visitors to post comments on a blog, and it does a pretty good job filtering out spammers with the Akismet plugin. I see two solutions; one is straightforward and mediocre and one is preposterous.
The straightforward solution is to use a service like Disqus to track comments on an external server.
The rendered HTML pages would include a blob of JavaScript. That JavaScript makes a request to the comment service, pulls all the comments for the current URL, and appends the text to the DOM. Of course, people who download the material for offline viewing won’t see the comments when they don’t have an internet connection.
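Something like this, where the endpoint and markup are completely made up, and I’m assuming the comment service answers JSONP requests so the call can cross domains:

// Made-up endpoint and markup; assumes the comment service speaks JSONP.
$(function () {
    var url = 'http://comments.example.com/for?callback=?&url=' +
              encodeURIComponent(window.location.href);
    $.getJSON(url, function (comments) {
        $.each(comments, function (i, c) {
            $('#comments').append(
                $('<div class="comment"></div>').text(c.author + ': ' + c.body));
        });
    });
});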
Sure, it would be possible to regularly scrape the comments out of the remote server and rebuild all the files available for offline viewing, but that only solves the reading part.
Imagine I write a blog post with a mediocre code sample inside, and you think of a better way to write the same code.
You start writing a comment on my site (or on my Disqus section, it doesn’t matter) and you’re about to submit, when you see a little line that says all comments become my copyright, and you know you want to use this code in some GPL project.
Maybe you don’t see any lines at all that explain who owns blog comments, so then you’re uncertain about what applies.
Anyhow, there’s a deadweight loss here. You have something to say that would help me out, but you won’t say it. If I knew what you were going to say, I’d make a special exception just for this one comment.
By the way, if you want me to change my license so I don’t own the comments, then I’m faced with a bad situation where somebody can post a comment and then demand later that I take it down. This is a serious problem for “real” sites. Look at the terms of service on reddit. It insists on a perpetual non-exclusive right to any content posted there.
Just as it will be possible to clone my blog text, commenters should have their own repositories that I can clone their comments from.
So, when Lindsey comments on my (Matt’s) site, she really writes a post on her own site, and then sends my site a message that says:
Hi Matt,
I read your blog post [1] and I wrote a comment here on my site [2].
You can show my comment on your site as long as you agree with my comment license [3].
[1] http://matt.example.com/why-rinsing-is-as-good-as-washing
[2] http://lindsey.example.com/soap-is-not-optional
[3] http://lindsey.example.com/comment-license
Lindsey
This message could be an email, an HTTP post, whatever. I could manually process this message, or I could set up some handler that figures out what to do based on some rules ahead of time.
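If the message were an HTTP POST, the payload might be as simple as this. The inbox URL and field names are made up, and in practice Lindsey’s publishing script, not a browser, would send it:

// Made-up inbox URL and field names; only the shape of the data matters.
var notification = {
    in_reply_to: 'http://matt.example.com/why-rinsing-is-as-good-as-washing',
    comment_url: 'http://lindsey.example.com/soap-is-not-optional',
    license_url: 'http://lindsey.example.com/comment-license'
};
$.post('http://matt.example.com/incoming-comments', notification);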
So, we’ve changed the flow of comments from lots of people pushing text to me to a system where they just send me notifications and if I want to pull them, then I can.
This system allows more offline work to be done. Lindsey can clone my site and read it. Then she can write a comment. The next time she has an internet connection, she publishes her comment to her site, which triggers the message to be sent to my site.
So, pretend that I don’t show Lindsey’s comment on my site because I think her point makes me look stupid. Now how do third-parties get to see her remarks?
Well, here’s a solution that is at least better than the status quo. Imagine that when Lindsey sent me a message about her comment, she also sent a similar message to another server called a conversation hub.
She tells that hub that her post http://lindsey.example.com/soap-is-not-optional is a response to my post http://matt.example.com/why-rinsing-is-as-good-as-washing.
When somebody clones a feed from my site, they can also check a few of these conversation hubs and optionally clone any posts that have indicated they are relevant to that post.
We’d need better tools to assemble a conversation thread from all the different pieces. But that’s not really that hard.
A spammer could just send messages to the conversation hubs linking their posts to everything out there.
Well, the conversation hubs could insist on real authentication, and then allow feedback from people. Also, people that check for comments at a hub can request to only see comments that have received aggregate positive feedback.
Well, if I switch to this approach, and people start downloading my text files to read offline, they ain’t gonna see my AdSense ads, and I’ll be deprived of my $15/year revenue.
But for people who actually make real money off AdSense, the question is valid. Remember that we’re talking about helping people read your site offline. Those people who are mostly offline aren’t seeing the site now anyway.
The online visitors can still see them though. Also, people that view the HTML files after cloning my publish node may still see them if they have a working internet connection and they allow the embedded javascript to run.
Sure, there’s a risk that some online viewers will switch to the offline-views and then turn off javascript or their internet connection so that they can’t see the ads.
Publishers would need to weigh this risk. Maybe the solution could be to sell offline copies at a price equal to the expected lost revenue from the switchers.
For readers who never clone anything, it’s a non-issue; the HTML is available online just like it always was.
I’m team-teaching a course at Tri-C in the visual communications department this Fall. The class is for graphic designers, and we go through the experience of meeting with a client, building a prototype web site, revising it, then releasing it on the world, then going back and fixing any post-release issues.
The students all have excellent graphic design skills, but nothing in previous courses covers anything programming-related. The other instructor has the graphic design chops, and that’s really the meat of the course, but they do want to punch up the level of instruction beyond building static HTML into building simple web apps.
I need a short list of skills you wish all graphic designers had. Here’s what I plan to cover so far:
I think I’m going to use PHP rather than anything else, but I’d like to hear arguments against that.
Also, the class lab has a bunch of expensive Mac machines, so if there are really good tools out there, I’d love to hear about them.
I have some HTML links and I want to display them as buttons. I spent a few hours applying CSS styles, but never got something that looked identical to a real button.
So this is what I’m doing now. It’s easy to read, and it seems to work.
The W3C validator approved it as HTML 4 Strict. So all signs suggest that this is the way to go, but I don’t follow the semantic-web conversation very closely.
So, for those of you that do pay attention to all that stuff, what’s bad about this approach?