JavaScript has picked up lots of pythonisms over the last few years, which is obviously a Good Thing(tm). Aza Raskin of Mozilla has now created Pyscript, a version of JavaScript sans curly braces. As a fellow Pythonista I too find curly braces aesthetically unpleasant, but I don’t think they’re the pressing issue. At the end of his post, Aza asks “What other ways can we make Javascript syntax prettier and more readable?” Let me tell you by pointing to the elephant in the room.

Writing a class/object in JavaScript, especially “subclassing,” weirds me out.

I just can’t make my peace with the functions-implicitly-become-object-constructors idea. I can see how it might make sense in a prototype world. But once you use functions to define objects, you must use the new operator for instantiation, which means there are some functions you call directly and some you only ever invoke with new. I just don’t understand why it would be so bad to have a dedicated language construct for defining objects.

The lack of such a one-and-only language construct leads to a plethora of ways to define object methods. Some like doing it this way:

function MyObject() {
  /* constructor here */
  this.aMethod = function() {
    /* method here */
  };
}

while others like to monkey-patch them in, like so:

function MyObject() {
  /* constructor here */
}
MyObject.prototype.aMethod = function() {
  /* method here */
};

or even:

MyObject.prototype = {
  aMethod: function() {
    /* method here */
  }
};

I’m sorry, but that’s just too many ways to do something as common in object-oriented languages as defining objects. Not to mention the prototype-less one:

var MyObject = {
  aMethod: function() {
    /* method here */
  }
};

(Of course, when using Mozilla’s JavaScript engine you can also monkey-patch a prototype onto an object after the fact via MyObject.__proto__ = {...}. In fact, the Mozilla folks like using __proto__ all the time to specify an object’s base class, uh, I mean base prototype.)

I know what you’re going to say now: people could just settle on one way and impose it as a coding convention. But why hasn’t that happened? My suspicion is that it’s because none of the choices is truly great. The language itself doesn’t encourage any particular choice over the others, and that’s bad. Arbitrariness like that is just one step away from Perl.

Coming back to Aza’s Pyscript, I think it’s a useful exercise because it shows how much you can improve JavaScript with just a few lines of, uh, JavaScript. Perhaps I should give it a whirl and come up with a language construct for creating prototype-based objects, including a decent inheritance syntax. What do you think that should look like?

Update: I’ve written a follow-up post that contains a solution.

Attention, attention! This is a service announcement!

I haven’t been blogging much about Python and Zope lately. In fact, as some of you may have noticed, I’m no longer involved in Zope at all. I continue to use Python, though. To keep your feed aggregators, a.k.a. planets, topical, I suggest removing my blog feed from Zope-related planets and switching Python-related aggregators to my Python category feed.

If any of you readers found my blog through one of the Python and Zope planets and enjoyed my other posts as well, I suggest adding my general feed to your news reader so you continue to get updates. Thank you.

End of service announcement.

This is really just a “note to self” kind of post. I meant to write this down a while ago but I forgot. To prevent further forgetting, here it is:

I always compile Python myself. The Python that comes with OS X tends to be outdated, and so are the libraries in its site-packages directory. And beware of installing or updating anything in there: you might break core components of OS X, because they use this very Python rather than a private copy that isn’t the user’s to modify. I know that MacPorts has various Python versions too, but then again it applies various patches to them and builds them in ways I don’t understand (framework builds, etc.). So the safest bet for a clean and reliable Python installation is to compile it yourself (and then use virtualenv to keep it from being messed up).

As it happens, when I compiled Python 2.5 or higher on OS X, it linked against either the OS X readline library or the MacPorts one. Which one I don’t know, but it was definitely hosed: the interpreter itself worked fine, yet the interactive shell would crash with a Bus Error. So I compiled my own plain vanilla version of readline and installed it to /opt. Of course that didn’t work right away either, because readline wouldn’t build on OS X Leopard without a small patch to one of its build scripts.
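
For reference, once that patch is applied, building readline itself is the usual routine. A sketch (the prefix just has to match what you point Python’s configure at later):

./configure --prefix=/opt
make && make install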

After having installed readline, I configured Python with the (undocumented, but apparently existing) --with-readline-dir option:

./configure --prefix=/opt --with-readline-dir=/opt

and did the usual make && make install dance.
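
A quick way to confirm the fix is to fire up the freshly built interpreter and poke at the readline module directly. A minimal sketch (run inside /opt/bin/python, or wherever your --prefix put it):

import readline

# If the right readline got linked in, this runs without a Bus Error and
# tab completion works in the interactive shell.
readline.parse_and_bind("tab: complete")
print readline.get_history_length()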

If you’re a Python web developer and are interested in fast templating, Chameleon might be of interest to you. Brought to you by that crazy Austro-Danish duo Malthe Borch and Daniel Nouri, it’s a byte compiler that bakes HTML/XML templates into Python (byte) code. Currently it has support for Zope Page Templates and Genshi. And it’s fast. Fricken fast. If I remember correctly, it doesn’t achieve the same speeds as Google’s Spitfire in all benchmarks, but it’s in the same league.
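
To give you an idea of what using it looks like, here’s a minimal sketch (the exact import path has moved around between Chameleon releases, so take the details with a grain of salt):

from chameleon import PageTemplate  # older releases use a different module path

# The template text is compiled to Python (byte) code once; rendering is then
# just a function call with the template variables.
template = PageTemplate("<p>Hello, ${name}!</p>")
print template(name="world")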

Now Malthe and Daniel as well as some regular contributors such as Chris McDonough, Wichert Akkerman and Hanno Schlichting are perfecting ZPT and Genshi compatibility. With support for macros and i18n, it already looks like a serious contender to replace zope.pagetemplate in templating-heavy Zope apps such as Plone. In fact, Chameleon might be the perfect match for Alex Limi’s proclaimed faster and lighter Plone 4, next to ditching Archetypes, reducing the Component Architecture overdose and going more Pythonic.

That said, it’d be very interesting to see Chameleon tried by the non-Zope crowd. repoze.bfg has already adopted it as its de facto standard templating engine. Has anyone tried it with Django, Pylons, or TurboGears yet?

(This is a post from my old blog, which seems to go offline once in a while and may even disappear permanently at some point. The article is still very useful, so I’ve reposted it here for my own and other people’s reference.)

Thanks to Hanno Schlichting’s howto, I’ve figured out how to create Windows eggs of those packages that have C extensions. This approach doesn’t need Microsoft Visual Studio, nor does it require you to wade through a bunch of free Microsoft downloads that don’t really work in the end anyway.

Here’s what I did:

  1. Installed the standard Python 2.4 distribution from the MSI package.
  2. Installed the MinGW compiler (into the standard location C:\MingW).
  3. Created C:\Documents and Settings\Philipp\pydistutils.cfg and put the following text in it:
    [build]
    compiler=mingw32

    This tells distutils to always use the MinGW compiler whenever it has to compile something from C.

  4. Went to the Control Panel -> System -> Advanced tab and clicked on the Environment Variables button. There I appended the following text to the Path environment variable, adding the Python interpreter as well as MinGW’s programs to the search path:
    ;C:\Python24;C:\MingW\bin

    Then I added another environment variable called HOME with the following value:

    C:\Documents and Settings\Philipp

    This points distutils at the pydistutils.cfg file that I created earlier (you can put the pydistutils.cfg file anywhere you want, you just need to make sure that the HOME environment variable points to its directory).

  5. With that in place, I am able to take any tarball (e.g. zope.interface-3.4.0.tgz), unpack it and create a Windows egg from it like so:
    python setup.py bdist_egg

What’s more, with a setup like this, it is easily possible to install Zope 3 completely from eggs (e.g. using zc.buildout) even if there are no pre-built Windows eggs on the Cheeseshop. More specifically, with this setup, zopeproject (which is really just a convenience tool over zc.buildout) works like a charm on Windows now.
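
For what it’s worth, the kind of setup.py this applies to looks roughly like the following (a minimal sketch with made-up names):

from setuptools import setup, Extension

setup(
    name="example",
    version="0.1",
    # This C extension is what distutils hands to MinGW when you run
    # "python setup.py bdist_egg" with the pydistutils.cfg shown above.
    ext_modules=[Extension("example._speedups", ["src/_speedups.c"])],
)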

The release process that I wrote for the Zope subversion repository states that a library’s version number on the trunk or a release branch should always be the next release version number applicable to that branch. For instance, if zope.interface 3.4.1 were just released from the 3.4.x branch, the version number of zope.interface on that branch should read 3.4.2dev.

Let me explain why I suggested this practice and why, despite much critique, I still maintain it makes the most sense.

First of all, the setuptools documentation states:

Note: the project version number you specify in setup.py should always be the next version of your software, not the last released version.

So it’s a convention that seems to be generally suggested. That doesn’t necessarily mean it’s a good idea, though.

What makes it a good idea is the fact that when you get a checkout of the trunk or a development branch, the version number is actually meaningful, due to setuptools’ version semantics:

3.4.1 < 3.4.2dev < 3.4.2

So, a development egg of zope.interface 3.4.2dev will, for instance, satisfy a version requirement like “zope.interface > 3.4.1”. Say, for example, you wanted to temporarily deploy from a Subversion checkout. I had to do this to get PyFlakes running with Emacs: the latest release, PyFlakes 0.2.1, wasn’t good enough because something had been fixed on the trunk. So I got a trunk checkout and did a

python setup.py install

into a virtualenv. Sure enough, the trunk’s version number still said 0.2.1, so I ended up with pyflakes-0.2.1-py2.4.egg in my site-packages and no way to tell it apart from the actual 0.2.1 release. (Yes, I know there’s a setuptools option that lets you build versions like 0.2.1-r48292, and those would be fine if they were configured to kick in automatically on the trunk.) That’s why bumping the version number to the next release and adding the “dev” marker, which tells development eggs and snapshots apart from actual releases (and keeps people from releasing from the trunk!), is a good idea.
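
That ordering is easy to verify with setuptools itself; a minimal sketch:

from pkg_resources import parse_version

# "dev" counts as a pre-release tag, so a 3.4.2dev checkout sorts after the
# 3.4.1 release but before the final 3.4.2 release.
assert parse_version("3.4.1") < parse_version("3.4.2dev") < parse_version("3.4.2")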

To conclude, I think bumping the version number to the next (or at least the next anticipated) release and adding a dev marker is not only a good idea, it’s pretty much the only way to get the version number semantics of development eggs right (unless you use r34234 suffixes, which have other problems). And it’s ok if you don’t get the anticipated version number right the first time. Of course, I’m willing to be convinced otherwise if alternate solutions achieve the same semantics. Comment away!

So far I’ve found that distributing .egg files is mostly useless:

  • Source tarballs as created with the distutils/setuptools sdist command are not only equivalent to eggs, they often contain more information (such as top-level README.txt or INSTALL.txt files). Also, not everybody has embraced easy_install yet, and a tarball is the least surprising format for the old-school folks.
  • .egg files are marked with the Python version they were created with. So if you only upload an .egg file that was created with, say, Python 2.4, and you don’t provide a source tarball, Python 2.5 users will be out of luck trying to easy_install your package (even though it may work perfectly well on Python 2.5).
  • If the package contains C extensions, you pretty much can’t risk uploading an .egg file because it’ll contain binaries. On Linux and Mac OS X this is unacceptable due to the various ways Python itself can be built on these platforms (so your binaries will likely be incompatible with somebody else’s Python build). One notable exception is Windows, which (thanks to its homogeneity) makes it possible to distribute binaries without problems. Then again, because Windows users rarely have the right compiler installed, Windows pretty much requires you to distribute binaries.

So why are we still uploading .egg files to PyPI? Isn’t it enough and even better to just upload the source tarball? (And a Windows binary egg only if the package contains C extensions.)
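
If you follow that advice, the release dance (with the tools of the day) boils down to something like this; a sketch, assuming the package is already registered on PyPI:

python setup.py sdist upload        # everywhere
python setup.py bdist_egg upload    # on Windows only, for packages with C extensions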