September 30, 2008
September 17, 2008
Mark Ramm’s keynote at DjangoCon, A TurboGears guy on what Django could learn from Zope, has made the rounds in the Python blogosphere this week. His main point is that other projects will always have more talented people than your project. That’s why it’s always a good idea to be open, share technology and work with others. The real point, however, is that you can learn from each other’s mistakes (and he specifically advises Django to not repeat Zope’s mistakes).
This brings me to an interview with Scott Collins, published by Ars Technica in July 2004 (sic!). When asked about the mistakes Mozilla had made in the past, one of the things he mentions seems awfully familiar:
We made a version of COM, called XPCOM, a fundamental piece of every component of every part of the software. We took COM all the way down to the lowest levels of the system, and I think that XPCOM is a fantastic useful tool. We had great and important places to use it, but we used it too deeply, we allowed it to influence our memory model across the board, and there continue to be costs associated with that which I don’t think we need to be paying. I feel bad that I let it go that deep when I continually fought against it, but I am one of the perpetrators — I wrote some of the machinery that made it easy to use COM, and that allowed people to go deeper and deeper, which we knew was a mistake.
I’ve been saying this for years, and people tend to think that I’m damning COM unconditionally, and I’m not. COM is very important to us, and it’s a foundation of our scriptability layer, and scriptability is fundamental to how our application works. But I don’t think every object should be a COM object. We are finally moving away from using COM for everything, and maybe a synergy with Mono will help us get to the right level of differentiation between objects. This is a deep engineering thing, but I believe that fundamentally that we took the COM metaphor too deep and we paid a price for that: we could have more performance sooner earlier on if we had recognized and stuck to limits in our use of COM.
Most of this statement would hold true for Zope if you replaced “COM” with “Component Architecture”, even the bit about speed. I’m not saying that the CA is slow, but it makes us write things in complicated ways that seem to slow down the system as a whole, not to mention developer speed and developer learning speed. We had to build a whole framework on top of Zope just to hide the complexity of the CA from beginners.
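For readers unfamiliar with the Component Architecture: at its core, it is a registry that hands you a component when you ask for an interface. A toy sketch of that core idea in plain Python (hypothetical names throughout; Zope’s real machinery lives in zope.interface and zope.component and is considerably richer):

```python
# Toy registry illustrating the core idea behind Zope's Component
# Architecture: look up an implementation by the interface it serves.
# (Hypothetical sketch, not Zope's actual API.)

class Registry:
    def __init__(self):
        self._utilities = {}

    def register(self, interface, component):
        """Register a component as the utility for an interface."""
        self._utilities[interface] = component

    def get(self, interface):
        """Return the component registered for an interface."""
        return self._utilities[interface]


class IGreeter:  # stands in for a real interface declaration
    pass


class EnglishGreeter:
    def greet(self, name):
        return "Hello, %s!" % name


registry = Registry()
registry.register(IGreeter, EnglishGreeter())
print(registry.get(IGreeter).greet("Zope"))  # -> Hello, Zope!
```

The indirection is what makes components pluggable, and also what makes the code harder to follow when every object goes through a lookup like this.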
Mozilla has learnt its lesson and refactored its code, especially the Gecko rendering engine. I wouldn’t be surprised if the speed and memory improvements in Firefox versions 2 and 3 are in part due to reducing the overuse of XPCOM. If you now think Zope hasn’t learnt its lesson yet, it’s not quite true. The Repoze guys have taken great ideas from Zope and re-implemented them in a much simpler manner that doesn’t need the Component Architecture and works well with others (by [ab]using WSGI) and makes them scale incredibly well. Examples:
- repoze.tm2, a WSGI middleware that allows application-wide transactions (an alternative to having the transaction handling mangled into the object publisher)
- repoze.who, a WSGI middleware for authentication (an alternative to Zope’s built-in authentication machinery)
- repoze.urispace, an implementation of the W3C’s URISpace, which describes a method to define arbitrary application policy based on URL paths (an alternative to Zope 2’s acquisition and Zope 3’s local components)
- repoze.errorlog, repoze.profile, repoze.debug, etc., which are various WSGI middlewares that take responsibility out of the object publisher
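The pattern all of these packages share is simple: a WSGI middleware is just a callable that wraps another WSGI application and does its work before or after delegating. A minimal sketch (hypothetical and far simpler than the actual repoze packages, which do real work like transaction handling and authentication around the inner call):

```python
# Minimal WSGI middleware sketch: wrap an application and add a
# response header on the way out. The repoze packages follow the
# same wrap-and-delegate pattern.

class HeaderMiddleware:
    def __init__(self, app, name, value):
        self.app = app
        self.header = (name, value)

    def __call__(self, environ, start_response):
        # Intercept start_response to append our header, then delegate.
        def wrapped_start_response(status, headers, exc_info=None):
            return start_response(status, headers + [self.header], exc_info)
        return self.app(environ, wrapped_start_response)


def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, WSGI!"]


app = HeaderMiddleware(demo_app, "X-Powered-By", "middleware-sketch")
```

Because each concern lives in its own wrapper, you can stack them in any order, and none of them needs to know about the object publisher at all.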
This isn’t to say that the Component Architecture isn’t useful or elegant, because it is. But after having spent nearly two years helping create a framework that tries to hide the complexity of CA (over)use, I think it’s time to change gears a bit. The Repoze guys have pioneered it; now it’s time that Zope itself embraced it. If I had to pick, the publisher, its publication machinery and the traversal code would be the first to undergo this treatment, and Martijn seems to have those on his list as well. But Zope is bigger than just the two of us.
September 11, 2008
Yesterday’s launch of the Large Hadron Collider (LHC) spawned a series of reports and articles in the media, most of which unfortunately either miss the point or simply contain wrong information. Being a particle physicist myself (I’m currently writing my master’s thesis), allow me to explain what the LHC is about. Instead of the typical 30-second news report, I’ll try to go into a bit more detail. Don’t worry, it’s educational.
The great Richard Feynman once described the work of particle physicists as being spectators of a board game played by the gods. You can observe the board, see the pieces move and deduce the rules of the game that way. The longer you watch, the more you know about the game and the clearer the rules become. In particle physics, we strive for an ever smaller rule book by trying to look at Nature and figuring out symmetries and common elements.
For instance, just think of how many different kinds of materials there are in the world. Millions. How to structure them? Well, you can look at what the materials are made of and you find they’re made of many identical atoms. Once you’ve done this with all materials in the world, you discover there are only several dozen kinds of atoms. So the sheer amount of materials in the world has been reduced to a bunch of atoms and the rules by which they can be combined. So by figuring out the common elements and the rules, we’ve made understanding materials much simpler.
You can now go on and take apart the atoms and find that atoms are always made up of a nucleus and a number of electrons in outer shells. But the protons and neutrons that make up the nucleus aren’t elementary either, they’re actually made up of quarks. So you end up with
- quarks (which make up hadrons like protons and neutrons)
- electrons
These are the particles that make up matter and to our knowledge they’re elementary. That means we can’t take them apart any further. (The actual list is a bit longer, but that’s not important for now.)
Now, matter just doesn’t sit there. It interacts, just like the pieces on a board game move according to certain rules. For instance, the nucleus and the electrons in atoms are bound together. This means there must be a constant force that keeps them together. It’s called the electromagnetic force. It’s in fact the same force that the Sun exerts on our eyes (which causes the receptors in our eyes to report to the brain that we’ve seen light) or that makes the electrons go up and down in a radio antenna (thereby inducing an electric current that your radio transforms into sound).
All in all we know three elementary forces:
- electromagnetic force: it makes charged particles attract or repel each other
- strong force: it binds quarks into bunches of two or three called hadrons (which is what makes the proton a hadron)
- weak force: it allows some particles to transform into certain other ones, thereby allowing phenomena like the radioactive beta decay of atoms
(You may have noticed that gravity is missing from this list. Gravity is in fact so weak compared to these three forces that it makes little difference on a subatomic scale.)
When a particle exerts a force on another particle, it transfers energy to that particle. One curious property of Nature is that energy can only occur in multiples of a certain amount. That means forces can also only be exerted in discrete amounts. These discrete amounts are in fact particles as well! So while you may think of the Sun’s light as electromagnetic waves, it would also be appropriate to think of a series of small particles being emitted by the Sun and absorbed by our eyes.
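To put a number on that “certain amount” (the figures below are standard textbook values, not from the text): the energy of one such light particle, a photon, is Planck’s constant times the light’s frequency. For green sunlight that works out to a few electron volts:

```python
# Energy of a single photon via Planck's relation E = h * f.
h = 6.626e-34                # Planck's constant in J*s
f = 5.6e14                   # frequency of green light in Hz (approximate)
E_joule = h * f              # energy of one photon in joules
E_eV = E_joule / 1.602e-19   # the same energy in electron volts
print(round(E_eV, 1))        # roughly 2.3 eV
```

A tiny amount by everyday standards, which is why light looks perfectly continuous to us even though it arrives in these discrete packets.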
So now we have a more complete overview over elementary particles:
- matter particles: quarks, electrons, …
- force particles: photons (electromagnetic), weak bosons (weak) and gluons (strong)
Our rule book: The Standard Model
The whole point of particle physics is now to come up with the rule book by which those matter and force particles work together. The rule book that physicists have come up with over several decades is called the Standard Model. It describes how Nature combines matter and force particles to build atoms and many more fascinating processes. It’s actually quite an elegant theory.
The Standard Model is the work of many scientists who have observed Nature and tried to transform their observations into rules. Of course, you need appropriate experiments to make such observations. Not only that, you also need to verify the predictions the Standard Model makes about other processes. Unfortunately, if you want to make precision measurements, the reactions through which particles exchange forces typically require lots of energy. For instance, the weak force is easily observed in atoms through beta decay, but this process doesn’t allow you to get precise measurements of the weak force. For that you want to single out electrons and give them so much energy that the probability of a weak interaction is high enough for you to observe such interactions by the millions.
This is what colliders are for. They are long, empty (as in vacuum) tubes in which particles can be accelerated and brought to collision. When they collide, they undergo interactions such as the ones we’ve talked about above. By observing the outcome of these interactions (and making very precise measurements), we can verify or disprove the rule book. CERN had such a collider called LEP until a couple of years ago; now the LHC has been built in its place. So the LHC is far from being an all-new thing. Put simply, it’s a more powerful version of what was already there.
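As an aside on the numbers involved (the LHC design values below are standard figures, not from the text): colliding two beams head-on is enormously more effective than shooting one beam at a stationary target, because in a head-on collision of equal beams the available collision energy is simply the sum of the two beam energies, while a fixed-target shot wastes most of the energy on the motion of the collision products.

```python
import math

# Collision energy available in a proton collision (units: GeV).
E_beam = 7000.0   # LHC design beam energy: 7 TeV per proton beam
m_p = 0.938       # proton rest mass-energy in GeV

# Two equal beams colliding head-on: sqrt(s) = 2 * E_beam
sqrt_s_collider = 2 * E_beam                 # 14 TeV

# Same beam hitting a proton at rest: sqrt(s) = sqrt(2 * E_beam * m_p)
# (high-energy approximation, E_beam >> m_p)
sqrt_s_fixed = math.sqrt(2 * E_beam * m_p)   # only ~115 GeV

print(sqrt_s_collider, round(sqrt_s_fixed, 1))
```

That factor of over a hundred is why modern high-energy machines are colliders rather than fixed-target experiments.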
Why more powerful? So we can give particles even more energy to allow processes that we haven’t yet been able to observe. One of those processes involves the Higgs boson, another force particle. The force that the Higgs boson relates to is quite special because it is the answer to a question that the Standard Model without the Higgs mechanism can’t solve: How come some particles have mass?
The Higgs mechanism works a bit like friction in fluids. Imagine you’re pulling a little wagon with negligible mass around. It would require no effort. Now imagine you’re pulling it under water. The friction from the water that surrounds the wagon makes it harder to pull the wagon forward. It feels like the wagon now has considerable mass. And the greater the friction the water exerts on the wagon, the greater its perceived mass would be.
The existence of the Higgs boson is pretty much the only prediction of the Standard Model that hasn’t been verified yet — due to the lack of energy. If the LHC manages to verify its existence, it would be a great success for the Standard Model. In other words, the rules that we’ve deduced over the past decades would have proven to be accurate.
… and more!
Thanks to its potential high energy output, the LHC will hopefully be capable of doing more than just verifying existing theories such as the Higgs mechanism. We also hope that the LHC’s energy will allow interactions that the Standard Model doesn’t predict.
Wait, you may think, why would you want the LHC to contradict your rule book? Well, simple. While our current rule book, the Standard Model, is an enormously successful theory, it does lack explanations for some phenomena. So apparently it doesn’t describe the complete set of rules; there must be more rules that we just haven’t figured out yet. To find them, we hope that the LHC will give us clues about physics beyond the Standard Model.
My master’s thesis is somewhat related to this. It’s a theoretical topic, which means I don’t do experiments. What theorists do is look at alternative models and see if they fit observations better than the current model. So when the LHC finds processes that haven’t been observed yet, theorists may already have the appropriate rule book. And while there are some indicators for this or that new rule book, we won’t know for sure until the LHC tells us.
Needless to say, these are exciting times for particle physicists.