Random notes from mg

a blog by Marius Gedminas

Marius is a Python hacker. He works for Programmers of Vilnius, a small Python/Zope 3 startup. He has a personal home page at http://gedmin.as. His email is marius@gedmin.as. He does not like spam, but is not afraid of it.

Thu, 21 Nov 2013

Rackspace OSS

I now have a Jenkins server running tests for most of my open-source projects: https://jenkins.gedmin.as/. It was not too difficult to set up:

  1. Send an email to Jesse Noller about that free Rackspace Cloud hosting for OSS projects, expecting to be rebuffed because all my projects are small and insignificant.
  2. Receive a positive response with the instructions. Yay!
  3. Sign up for an account, type in all my credit card details and personal phone numbers, get it verified by a phone call (I hate phone calls, but apparently it prevents fraud or something). Send the account name/ID back to Jesse.
  4. Log in to the MyCloud account control panel/dashboard/thingie, create a server, ask for Ubuntu 12.04 LTS.
  5. Wait a couple of minutes, ssh as root, do the usual setup (locale-gen, adduser, ~/.ssh/authorized_keys, dotfiles, etckeeper, postfix, my PPA for sysadmin automation stuff).
  6. There was an amusing interlude where I tried to set up the Rackspace Cloud Monitoring agent according to their instructions, failed, had to open a support ticket, give their techs a login with root, then wait less than 24 hours until it was resolved. I still don't know what went wrong, but it works now.
  7. Get and install a free SSL certificate from StartSSL.
  8. Install and configure Jenkins (latest upstream version, just in case).
  9. Point my DNS to the server's IP.

It's been running this way for a month now, with no problems. So, thanks, Rackspace!

Grand plans for the future: have Jenkins do daily/weekly mirrors of all my GitHub repos, look for commits made since the last release tag, filter out boring ones ("bump version number"), and send me reminders that "project X needs a new release: you've new features sitting there unreleased for X days now". Or maybe just a web page with a table of "X commits since last release" linking to GitHub history.
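
The "commits since the last release tag" part is easy enough to script. Here's a rough sketch of the idea (a hypothetical helper, not part of any existing tool):

import subprocess

def unreleased_commits(repo_path):
    """Return interesting one-line commit summaries made after the newest tag."""
    def git(*args):
        return subprocess.check_output(('git',) + args, cwd=repo_path,
                                       universal_newlines=True)
    last_tag = git('describe', '--tags', '--abbrev=0').strip()
    log = git('log', '--oneline', '%s..HEAD' % last_tag)
    # filter out boring commits like "bump version number"
    return [line for line in log.splitlines()
            if 'bump version' not in line.lower()]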

P. S. I can't preview this since PyBlosxom fails to run on my laptop with a cryptic error. I'm debating debugging this versus migrating to a static blog compiler.

posted at 10:47 | tags: , | permanent link to this entry | 0 comments

Mon, 22 Jul 2013

Adding "Edit on GitHub" links to Sphinx pages

Documentation pages on ReadTheDocs have a nice sidebar with extra "Show on GitHub" and "Edit on GitHub" links. Here's how you can have those for your own Sphinx documentation:

  1. Create _ext and _templates subdirectories.
  2. Create a file _ext/edit_on_github.py that hooks into the html-page-context event to add a couple of template variables (sketched after this list).
  3. Create a file _templates/sourcelink.html that uses those two variables to insert "Show on GitHub" and "Edit on GitHub" links in the sidebar.
  4. Edit conf.py and add os.path.abspath('_ext') to sys.path.
  5. Add edit_on_github to the list of extensions.
  6. Use edit_on_github_project to specify your GitHub username and repository (separated by a slash).
  7. Optionally use edit_on_github_branch to specify the desired branch (it defaults to 'master').
  8. Make sure _templates is in the templates path.
  9. make html and enjoy
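
Here is roughly what the extension from step 2 looks like (a sketch, so details may differ; the context variable names are simply whatever your _templates/sourcelink.html expects):

# _ext/edit_on_github.py
import os

def get_github_url(app, view, path):
    return 'https://github.com/{project}/{view}/{branch}/{path}'.format(
        project=app.config.edit_on_github_project,
        view=view,
        branch=app.config.edit_on_github_branch,
        path=path)

def html_page_context(app, pagename, templatename, context, doctree):
    if (templatename != 'page.html' or doctree is None
            or not app.config.edit_on_github_project):
        return
    path = os.path.relpath(doctree.get('source'), app.builder.srcdir)
    context['show_on_github_url'] = get_github_url(app, 'blob', path)
    context['edit_on_github_url'] = get_github_url(app, 'edit', path)

def setup(app):
    app.add_config_value('edit_on_github_project', '', True)
    app.add_config_value('edit_on_github_branch', 'master', True)
    app.connect('html-page-context', html_page_context)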

screenshot

posted at 13:48 | tags: , , | permanent link to this entry | 0 comments

Thu, 12 Apr 2012

Logging levels and hierarchies

I remember when the logging package seemed big and complicated and forbidding. And then I remember when I finally "got" it, started using it, even liked it. And then I discovered that I didn't really understand the model after all.

Consider this: we have two loggers

root
 └── mylogger

configured as follows:

import logging
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(logging.FileHandler("info.log"))
mylogger = logging.getLogger('mylogger')
mylogger.setLevel(logging.DEBUG)
mylogger.addHandler(logging.FileHandler("debug.log"))

What happens when I do mylogger.debug('Hi')?

Answer: the message appears in both debug.log and info.log.

That was surprising to me. I'd always thought that a logger's level was a gatekeeper to all of that logger's handlers. In other words, I always thought that when a message was propagating from a logger to its parent (here from mylogger to root), it was also being filtered against the parent's log level. That turns out not to be the case.

What actually happens is that a message is tested against the level of the logger where it was initially logged, and if it passes the check, it gets passed to all the handlers of that logger and all its ancestors with no further checks. Unless the propagation is stopped somewhere by one of the loggers having propagate set to False. And of course each handler has its own level filtering. And I'm ignoring filters and the global level override.
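
So if you want info.log to stay free of DEBUG chatter, the filtering has to happen at the handler, not at the root logger. A minimal tweak of the example above:

import logging

root = logging.getLogger()
root.setLevel(logging.INFO)
info_handler = logging.FileHandler("info.log")
info_handler.setLevel(logging.INFO)   # handlers do their own level filtering
root.addHandler(info_handler)

mylogger = logging.getLogger('mylogger')
mylogger.setLevel(logging.DEBUG)
mylogger.addHandler(logging.FileHandler("debug.log"))

mylogger.debug('Hi')   # now ends up only in debug.log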

Part of the confusion was caused by my misunderstanding of logging.NOTSET. I assumed, incorrectly, that it was just a regular logging level, one even less severe than DEBUG. So when I wrote code like this:

import logging
root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(logging.FileHandler("info.log"))
mylogger = logging.getLogger('mylogger')
mylogger.setLevel(logging.NOTSET)  # <-- that's the default level, actually
mylogger.debug("debug message")

I saw the debug message being suppressed and assumed it was because of the root logger's level. Which is correct, in a way, just not the way I thought about it.

NOTSET does not mean "pass all messages through", it means "inherit the log level from the parent logger". The documentation actually describes this, although in a rather convoluted way. My own fault for misunderstanding it, I guess.
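
In other words, a logger left at NOTSET asks its ancestors for its effective level, which you can see directly with getEffectiveLevel():

import logging

root = logging.getLogger()
root.setLevel(logging.INFO)

mylogger = logging.getLogger('mylogger')              # level defaults to NOTSET
print(mylogger.getEffectiveLevel() == logging.INFO)   # True: inherited from root
mylogger.debug("debug message")                       # dropped: effective level is INFO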

posted at 18:01 | tags: | permanent link to this entry | 3 comments

Fri, 09 Mar 2012

logging.config.fileConfig gotcha

If you use logging.config.fileConfig (e.g. because you use paster serve something.ini to deploy your WSGI apps) you should know about this.

By default fileConfig disables all pre-existing loggers if they (or their parent loggers) are not explicitly mentioned in your .ini file.

This can result in unintuitive behaviour:

(if you don't see the embedded example, you can find it at https://gist.github.com/1642893).
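
The surprise looks roughly like this (a reconstruction for illustration, not the embedded gist itself):

import logging
import logging.config

log = logging.getLogger(__name__)   # module-level logger, created at import time

# 'something.ini' stands for whatever logging config file your app uses
logging.config.fileConfig('something.ini')

log.error("this message is silently dropped")   # the logger was disabled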

If you have Python 2.6 or later (and you should), you can turn this off by passing disable_existing_loggers=False to fileConfig(). But what if it's not you calling fileConfig() but your framework (e.g. the above-mentioned paster serve)?

Now usually paster serve tries to configure logging before importing any of your application modules, so there should be no pre-existing loggers to disable. Sometimes, however, this doesn't work for one reason or another, and you end up with your production server suppressing warnings and errors that should not be suppressed. I haven't actually figured out yet who's responsible for those early imports in the application I'm working on (until today I assumed, incorrectly, that paster imports the module containing your WSGI app before it calls fileConfig).

If you're not sure if this bug can bite you or not, check that you don't have any disabled loggers by doing something like

import logging
assert not any(getattr(logger, 'disabled', False)
               for logger in logging.getLogger().manager.loggerDict.values())

while your application is running.

posted at 23:57 | tags: | permanent link to this entry | 2 comments

Sun, 19 Dec 2010

Stuff I've been doing recently

objgraph got

zodbbrowser got

imgdiff got

posted at 02:35 | tags: | permanent link to this entry | 2 comments

Sat, 07 Aug 2010

Profiling with Dozer

Dozer is mostly known for its memory profiling capabilities, but the as-yet unreleased version has more. I've talked about log capturing, now it's time for

Profiling

This WSGI middleware profiles every request with the cProfile module. To see the profiles, visit a hidden URL /_profiler/showall:

List of profiles

What you see here is heavily tweaked in my fork branch of Dozer; the upstream version had no Cost column and didn't vary the background of the Time column by age (that last bit helps me see clumps of requests).

Here's what an individual profile looks like:

One profile

The call tree nodes can be expanded and collapsed by clicking on the function name. There's a hardcoded limit of 20 nesting levels (upstream had a limit of 15); sadly, that appears not to be enough for practical purposes, especially if you start profiling Zope 3 applications...

You can also take a look at the WSGI environment:

WSGI environment expanded

Sadly, nothing about the response is captured by Dozer. I'd've liked to show the Content-Type and perhaps Content-Length in the profile list.

The incantation in development.ini is

[filter-app:profile]
use = egg:Dozer#profile
profile_path = /tmp/profiles
next = main

Create an empty directory /tmp/profiles and make sure other users cannot write to it. Dozer stores captured profiles as Python pickles, which are insecure and allow arbitrary command execution.

To enable the profiler, run paster like this:

$ paster serve development.ini -n profile
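
Under the hood the technique is simple: wrap the WSGI app so that each request runs under cProfile and the stats get dumped into the profile directory. A bare-bones sketch (not Dozer's actual code; Dozer also records the WSGI environment and serves the HTML viewer):

import cProfile
import os
import time

def profiling_middleware(app, profile_dir='/tmp/profiles'):
    def middleware(environ, start_response):
        profiler = cProfile.Profile()
        response = profiler.runcall(app, environ, start_response)
        # one stats file per request; iteration over the response body is not profiled
        profiler.dump_stats(os.path.join(profile_dir, '%f.prof' % time.time()))
        return response
    return middleware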

Bonus feature: call graphs

Dozer also writes a call graph in Graphviz "dot" format in the profile directory. Here's the graph corresponding to the profile you saw earlier, as displayed by the excellent XDot:

Call graph

See the fork where the "hot" red path splits into two?

Call graph, zoomed in

On the left we have Routes deciding to spend 120 ms (70% total time) recompiling its route maps. On the right we have the actual request dispatch. The actual controller action is called a bit further down:

Call graph, zoomed in

Here it is, highlighted. 42 ms (24% total time), almost all of which is spent in SQLAlchemy, loading the model object (a 2515 byte image stored as a blob) from SQLite.

A mystery: pickle errors

When I first tried to play with the Dozer profiler, I was attacked by innumerable exceptions. Some of those were due to a lack of configuration (profile_path) or invalid configuration (directory not existing), or not knowing the right URL (going to /_profiler raised TypeError). I tried to make Dozer's profiler more forgiving or at least produce clearer error messages in my fork branch, e.g. going to /_profiler now displays the profile list.

However some errors were very mysterious: some pickles, written by Dozer itself, could not be unpickled. I added a try/except that put those at the end of the list, so you can see and delete them.

Pickle errors

Does anybody have any clues as to why profile.py might be writing out broken pickles?

Update: as Ben says in the comments, my changes have been accepted upstream. Yay!

posted at 06:06 | tags: | permanent link to this entry | 5 comments

Capturing logs with Dozer

Dozer is mostly known for its memory profiling capabilities, but the as-yet unreleased version has more:

Log capturing

This WSGI middleware intercepts logging calls for every request. Here we see a toy Pylons application I've been working on in my spare time. Dozer added an info bar at the top:

Dozer's infobar

When you click on it, you get to see all the log messages produced for this request. I've set SQLAlchemy's loglevel to INFO in my development.ini, which produces:

Dozer's log viewer

(Why on Earth SQLAlchemy thinks I want to see the memory address of the Engine object in my log files, I don't know. The parentheses contain argument values for parametrized queries, of which there are none on this page.)

The upstream version displays absolute timestamps (of the YYYY-MM-DD HH:MM:SS.ssssss variety) in the first column; my fork shows deltas in milliseconds. The incantation in development.ini is

[filter-app:logview]
use = egg:Dozer#logview
next = main

which makes it disabled by default. To enable, you run paster like this:

$ paster serve development.ini -n logview

(The upstream version lacks the paste entry point for logview; it's in my fork, for which I submitted a pull request weeks ago, like a good open-source citizen. Incidentally, patches for stuff I maintain have been known to languish for years in my inbox, so I'm not one to throw stones.)
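
The underlying idea is simple enough: attach a buffering logging handler to the root logger for the duration of a request. A bare-bones sketch (not Dozer's implementation, and not thread-safe; the 'dozer.logview.records' key is made up for illustration):

import logging

def logview_middleware(app):
    def middleware(environ, start_response):
        records = []
        handler = logging.Handler()
        handler.emit = records.append        # buffer records instead of writing them out
        root = logging.getLogger()
        root.addHandler(handler)
        try:
            return app(environ, start_response)
        finally:
            root.removeHandler(handler)
            environ['dozer.logview.records'] = records
    return middleware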

Next: profiling with Dozer.

Update: Tom Longson blogged about this back in 2008! And his CSS is prettier.

posted at 05:27 | tags: | permanent link to this entry | 5 comments

Fri, 06 Aug 2010

irclog2html is now on PyPI

irclog2html, the IRC log to HTML converter, is now (finally!) available from the Python Package Index.

In other news, logs2html now copies irclog.css to the destination directory (if it doesn't exist there already). I've been noticing logs produced with irclog2html in random places, and sometimes they were unstyled; hopefully this will become rare now.

posted at 04:32 | tags: | permanent link to this entry | 0 comments

Wed, 07 Jul 2010

ImportError: No module named _md5

If you're using virtualenv, and after a system upgrade you get errors like

...
  File "...", line ...
    from hashlib import md5
  File "/usr/lib/python2.6/hashlib.py", line 63, in __get_builtin_constructor
     import _md5
ImportError: No module named _md5

this means that the copy of the python executable in your virtualenv/bin directory is outdated and you should update it:

$ cp /usr/bin/python2.6 /path/to/venv/bin/python

or, better yet, recreate the virtualenv.

posted at 01:20 | tags: , | permanent link to this entry | 2 comments

Sun, 18 Apr 2010

Re: Web frameworks considered useful.

Martijn Faassen defends web frameworks in a rather longish post (you can tell it's 5 AM in the morning and I've nearly defeated the unread post queue in Google Reader). I'd like to propose a condensed version. Consider this slogan:

Simple things should be easy; complicated things should be possible.

Frameworks make simple things easy. Good frameworks keep complicated things possible; poorly designed frameworks make complicated things more difficult than necessary; bad frameworks make even simple things complicated.

Doing everything from scratch merely makes things possible, but rarely easy.

posted at 04:57 | tags: | permanent link to this entry | 3 comments

Sun, 04 Apr 2010

Review: Grok 1.0 Web Development

Disclaimer: I received a free review copy of this book. The book links are affiliate links; I get a small amount from any purchase you make through them.

Grok is a Python web framework, built on top of the Zope Toolkit, which is the core of what used to be called Zope 3 and is now rebranded as BlueBream. Confused yet? Get used to it: the small pluggable components are the heart and soul of ZTK, and the source of its flexibility. It's not surprising that people take the same approach on a larger scale: take Zope 3 apart into smaller packages and reassemble them into different frameworks such as Grok, BlueBream or repoze.bfg.

Grok 1.0 Web Development by Carlos de la Guardia

The Grok book by Carlos de la Guardia introduces the framework by demonstrating how to create a small but realistic To-do list manager. I like this technique, and it works pretty well. The author covers many topics:

Some important topics like internationalization, time zones, testing with Selenium, and (especially) database migration (which is pretty specific to ZODB) were not covered.

If you want to learn about Grok, this book will be useful, but there's a caveat: there's the usual slew of typographical mistakes and other errors I've come to expect from books published by Packt. It's the third book of theirs I've seen; all three had a surprisingly high number of errors. Some had more, others had fewer. The Grok book was on the high side and the first one where I was tempted to record a "WTFs per page" metric. The mistakes are easy to notice and correct, so they didn't impede my understanding of the book's content. Disclaimer: I've been working with Zope 3 for the last six-or-so years, so I was pretty familiar with the underlying technologies, just not the thin Grok convenience layer. If minor errors annoy you, stay away. I haven't noticed any major factual errors, although there were what I would consider some pretty important omissions:

Overall, Grok is pretty nice, especially compared to vanilla Zope 3. However, when compared to frameworks like Pylons or Django, Grok appears more complex and seemingly requires you to do additional work for unclear gain. For example, chapter 8 has you writing three components for every new form you add: one for the form itself, one for a pagelet wrapping the form, and one for a page containing the pagelet. Most of that code is very similar with only the names being different. I'm sure there are situations where this kind of extreme componentization pays off (e.g. it lets you override particular bits on particular pages to satisfy a particular client's requests, without affecting any other clients), but the book doesn't convincingly demonstrate those advantages. Again, I may be biased here since I've been enjoying those advantages for the past six years, without ever having felt the pain of doing similar customizations with a less flexible framework. (It's a gap in my professional experience that I'm itching to fill.)

Update: some other reviews on Planet Python.

Update 2: Another review (well, part 1 of one, but I got tired waiting for part 2).

posted at 22:30 | tags: , , | permanent link to this entry | 3 comments

Sat, 13 Mar 2010

Review: Python Testing: Beginner's Guide

I've been testing (as well as writing) Python code for the last eight years, so a book with the words Beginner's Guide prominently displayed on the cover isn't something I'd've decided to buy for myself. Nevertheless I jumped at the offer of a free e-copy in exchange for a review.

Python Testing: Beginner's Guide by Daniel Arbuckle

Short summary: it's a good book. I learned a thing or two from it. I don't know how well it would work as an introductory text for someone new to unit testing (or Python). Some of the bits seemed overcomplicated and underexplained, and parts of the example code/tests seemed to contain design decisions received from mysterious sources.

Incidentally, Packt uses a simple yet effective method for watermarking e-books: my name and street address are displayed in the footer of every page. What's funny is that the two non-ASCII characters in the street name are replaced with question marks. It's not a data entry problem: the website that let me download those books shows my address correctly, so it must be happening somewhere in the PDF production process. I didn't expect this kind of Unicode bugginess from a publisher. Then again there were occasional strange little typographical errors in the text, like not leaving a space in front of an opening parenthesis in an English sentence, or using a never-seen-before +q= operator in Python code. I was also left wondering how the following sentence (page 225) could slip past the editing process:

doctest ignores everything between the Traceback (most recent last call).

Thankfully those small mistakes did not detract from the overall message of the book.

I liked the author's technique of showing subtly incorrect code, letting the reader look at it and miss all the bugs, and then showing how unit or integration tests catch the bugs the reader missed. I'm pretty sure there's at least one remaining bug that the author missed in the example package (storing a schedule doesn't erase old data), which could serve as material for a new chapter on regression testing if there's a second edition.

Summary of topics covered:

I found the TDD cycle a bit larger than I generally like, but I believe it's a matter of taste, and perhaps a shorter cycle wouldn't work as well in a written medium.

I found it a bit jarring how the Twill chapter intrudes between the two chapters showing unit testing and integration testing of the same sample package. I think it would've been better to swap the order of chapters 8 and 9.

I liked the technique presented for picking subsets of the code for integration tests, although I wonder how well it would work on a larger project.

Topics not covered:

As you can see these holes are all rather small.

Probably the biggest weakness of the book is the complexity of some things shown:

Seeing the repetitive and redundant mock code in the first few doctest examples, I started asking "what's the point?", but the book failed to provide a compelling answer (the answer provided—it's easier to locate bugs—works just as well for integration tests that focus on individual classes). And there are good answers to that question, like instant feedback from your unit test suite. Are they worth the additional development effort? Maybe that depends on the developer. I don't think they would help me, so I tend to stick with low-level integration tests I call "unit tests" (as well as system tests; it's always a mistake to keep all your tests at a single level). I'm slightly worried that this book might give the wrong impression (that testing is hard) and turn beginning Python programmers away from writing tests altogether.

Overall I do not feel that I have wasted my time reading Python Testing. I look forward to reading the other reviews that showed up on Planet Python. I gathered that not all reviewers were happy with the book, but avoided reading their reviews in order not to influence my own.

Update: I especially liked this review by Brian Jones. The lack of awkward page breaks in code examples is something that I only noticed after reading a different book, which is full of such awkward breaks, sigh.

Update 2: The book links are now affiliate links; I get a small amount from any purchase you make through them.

posted at 21:54 | tags: , | permanent link to this entry | 3 comments

Sat, 06 Mar 2010

You've got to love profiling

Yesterday I slashed 50% of run time from our application's functional test suite by modifying a single function. I had no idea that function was responsible for 50% of the run time until I started profiling.

Profiling a Python program is getting easier and easier:

$ python -m cProfile -o prof.data bin/test -f

runs our test runner (which is a Python script) under the profiler and stores the results in prof.data.

$ runsnake prof.data

launches the RunSnakeRun profile viewer, which displays the results visually:

RunSnakeRun square map display
The square map display of RunSnakeRun, with the 'render_restructured_text' function highlighted.

Who knew that ReStructuredText rendering could be such a time waster? A short caching decorator and the test suite is twice as fast. The whole exercise took me less than an hour. I should've done it sooner.
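
A caching decorator like that is nothing fancy; something along these lines (an illustrative sketch, with a stand-in body for the real rendering function):

import functools

def memoize(fn):
    cache = {}
    @functools.wraps(fn)
    def wrapper(arg):
        if arg not in cache:
            cache[arg] = fn(arg)
        return cache[arg]
    return wrapper

@memoize
def render_restructured_text(text):
    return text.upper()   # stand-in for the expensive docutils call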

Other neat tools:

posted at 20:49 | tags: | permanent link to this entry | 6 comments

Fri, 05 Mar 2010

Bye, bye, free time!

Things I've taken up to do in the near future:

I really ought to read Getting Things Done. Reading it has been on my todo-list for years.

posted at 23:02 | tags: | permanent link to this entry | 4 comments

Wed, 03 Mar 2010

Weekly Zope developer IRC meetings

On Tuesday we started what will hopefully become a tradition: weekly IRC meetings for Zope developers. Topics covered include buildbot organization and maintenance, open issues with the ZTK development process, and the fate of Zope 3.5 (= BlueBream 1.0).

There are IRC logs of the meeting, and Christian Theune posted a summary to the mailing list.

My take on this can be summed up as: Zope ain't dead yet! The project has fragmented a bit (Zope 2, Zope Toolkit, Grok, BlueBream, Repoze), but we all share a set of core packages and we want to keep them healthy.

The next meeting is also happening on a Tuesday, at 15:00 UTC, on #zope on Freenode.

posted at 13:09 | tags: , | permanent link to this entry | 0 comments

Thu, 07 Jan 2010

Latin-1 or Windows-1252?

Michael Foord wrote about some Latin-1 control character fun in a blog that's hard to read (the RSS feed syndicated on Planet Python is truncated, grr!) and hard to reply to (I thought the blog had no comments, but it turned out my Chromium's AdBlock+ had hidden the comment link so I couldn't find it), but never mind that.

Unfortunately the data from the customers included some \x85 characters, which were breaking the CSV parsing.

0x85 is a control character (NEXT LINE or NEL) in Latin-1, but it's a printable character (HORIZONTAL ELLIPSIS) in Microsoft's code page 1252, which is often mistaken for Latin-1. I would venture a suggestion that the encoding of the customer data was not latin-1 but rather cp1252.

>>> '\x85'.decode('cp1252')
u'\u2026'
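
For contrast, the same byte decoded as Latin-1 stays a control character (Python 2 shown, to match the snippet above):

>>> '\x85'.decode('latin-1')
u'\x85'
>>> import unicodedata
>>> unicodedata.name(u'\u2026')
'HORIZONTAL ELLIPSIS'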
posted at 23:29 | tags: | permanent link to this entry | 3 comments

Fri, 18 Dec 2009

GTimeLog: not dead yet!

Back in 2004 I wrote a small Gtk+ app to help me keep track of my time, and called it GTimeLog. I shared it with my coworkers, put it on the web (on the general "release early, release often" principles), and it got sort-of popular before I found the time to polish it into a state where I wouldn't be ashamed to show it to other people.

Fast-forward to 2008: there are actual users out there (much to my surprise), I still haven't added the originally-envisioned spit and polish, haven't done anything to foster a development community, am wracked by guilt of not doing my maintainerly duties properly, which leads to depression and burnout. So I do the only thing I can think of: run away from the project and basically ignore its existence for a year. Unreviewed patches accumulate in my inbox.

It seems that the sabbatical helped: yesterday, triggered by a new Debian bug report, I sat down, fixed the bug, implemented a feature, applied a couple of patches languishing in the bug tracker, and released version 0.3 (which was totally broken thanks to setuptools magic that suddenly stopped working, so I released 0.3.1 just now). Then I went through my old unread email, created bugs in Launchpad and sent replies to everyone. Except Pierre-Luc Beaudoin, since his @collabora.co.uk email address bounced. If anyone knows how to contact him, I'd appreciate a note.

version is now shown in the about dialog

There are also some older changes that I made before I emerged out of the funk and so hadn't widely announced:

posted at 01:22 | tags: , | permanent link to this entry | 6 comments

Wed, 09 Dec 2009

Unix is an IDE, or my Vim plugins

Unix is an IDE. I do my development (Python web apps, mostly) in Vim with a bunch of custom plugins, plus a shell (in GNOME Terminal: tabs rule!), GNU make, ctags, find + grep, and svn/bzr/hg/git.

The current working directory is my project configuration/state. I run tests here (bin/test), I search for code here (vim -t TagName, find + grep), I run applications here (make run or bin/appname). I can multitask freely: for example, if I'm in the middle of typing an SVN commit message, I can hit Ctrl+Shift+T, get a new terminal tab in the same working directory, and look something up. No aliases/environment variables/symlinks. I can work on multiple projects at the same time. I can work remotely (over ssh).

Gary Bernhardt's screencasts on Vimeo show how productive you can get if you learn Vim and tailor it to your needs. I have Vim scripts that let me

Some of these come from www.vim.org, some I've written myself, some I've taken and modified a little bit to avoid an irritating quirk or add a missing feature. Some things I don't have (and envy Emacs or IDE users for having) -- like an integrated debugger for Python apps and, generally, integration with other tools running in the background.

It's been my plan for a long time to polish my plugins, release them somewhere (github? bitbucket? launchpad?) and upload to vim.org, but as it doesn't seem to be happening, I thought I'd at least put an svn export of my ~/.vim on the web.

posted at 01:23 | tags: , , | permanent link to this entry | 9 comments

Tue, 01 Dec 2009

Displaying multiline text in Zope 3

zope.schema has Text and TextLine. The former is for multiline text, the latter is for a single line, as the name suggests. Zope 3 forms will use a text area for Text fields and an input box for TextLine fields. Display widgets, however, apply no special formatting (other than HTML-quoting of characters like <, > and &), and since newlines are treated the same way as spaces in HTML, your multiline text gets collapsed into a single paragraph.

Here's a pattern I've been using in Zope 3 to display multiline user-entered text as several paragraphs:

import cgi

from zope.component import adapts
from zope.publisher.browser import BrowserView
from zope.publisher.interfaces import IRequest


class SplitToParagraphsView(BrowserView):
    """Splits a string into paragraphs via newlines."""

    adapts(None, IRequest)

    def paragraphs(self):
        if self.context is None:
            return []
        return filter(None, [s.strip() for s in self.context.splitlines()])

    def __call__(self):
        return "".join('<p>%s</p>\n' % cgi.escape(p)
                        for p in self.paragraphs())

View registration

<configure
    xmlns="http://namespaces.zope.org/zope">

  <view
      for="*"
      name="paragraphs"
      type="zope.publisher.interfaces.browser.IBrowserRequest"
      factory=".views.SplitToParagraphsView"
      permission="zope.Public"
      />

</configure>

and usage

<p tal:replace="structure object/attribute/@@paragraphs" />

Update: The view really ought to be registered twice: once for basestring and once for NoneType. I was too lazy to figure out the dotted names for those (or check if zope.interface has external interface declarations for them), so I registered it for "*". You should know that this makes the view available for arbitrary objects (but won't work for most of them, since they don't have a splitlines method), and that it is, sadly, accessible to users who may try to hack your system by typing things like @@paragraphs in the browser's address bar. Ignas Mikalajūnas offers an alternative solution using TALES path adapters.

posted at 20:52 | tags: , | permanent link to this entry | 1 comments

Mon, 21 Sep 2009

Pylons and SQL schema migration

I'm at the point in my hobby project where I'd like to be able to change my models without losing all my test data. And I'm too lazy to do manual dumps and edit the SQL in place before reimporting it.

I want a system

I've been glancing at SQLAlchemy-Migrate, since I've been brought up to believe NIHing is Bad. But Migrate is scary. I have to admit that the longer I stare at its documentation, the less I can describe why I think so. All those shell commands—but there's an API for invoking them from Python, so maybe I can achieve my goals. I'll have to try and see.

posted at 19:44 | tags: , , | permanent link to this entry | 9 comments