Remember how I was talking about serializing data in ZODB using Google Protocol Buffers instead of pickles? Well, Keas Inc. suggested the idea and asked me to work on it. The first release of a package combination that implements the idea is ready:
- keas.pbstate 0.1.1 (on PyPI), which helps you write classes that store all state in a Protocol Buffer message.
- A patch for ZODB (available at packages.willowrise.org) that makes it possible to plug in serializers other than ZODB’s standard pickle format.
- keas.pbpersist 0.1 (on PyPI), which registers a special serializer with ZODB so that ProtobufState objects get stored without any pickling.
This code is new, but the tests all pass and the coverage tests claim that every line of code is run by the tests at least once. I hope these packages get used so we can find out their flaws. I did my best to make keas.pbstate friendly, but there are some areas, object references in particular, where I don't like the current syntax. I don't know if this code is fast; optimization would be premature!
I should mention that while working with protobuf, I got the feeling that C++ is the native language and that Java and Python are second-class citizens. I wonder if I'd have the same feeling with other serialization formats.
This RelStorage release has two useful new features: one for performance, one for safety.
The performance feature is that if you use both the cache-servers and poll-interval options, RelStorage will use the cache to distribute basic change notifications. That means we get to lighten the load on the database using the poll-interval, yet changes should still be seen instantly on all clients. Yay! 🙂
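As a sketch, the two options might be combined in a zope.conf storage section like this (the adapter choice, addresses, and values are placeholders, not recommendations):

```
%import relstorage
<zodb main>
  <relstorage>
    # Poll the database at most once per interval (seconds)...
    poll-interval 60
    # ...and let memcached distribute change notifications in between.
    cache-servers localhost:11211
    <postgresql>
      dsn dbname='zodb'
    </postgresql>
  </relstorage>
</zodb>
```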
The only drawback I expect is that caching makes debugging more difficult. Still, this option should help people build enormous clusters, like the one my current customer was planning to build, although I got word today that they have changed their mind.
The new safety feature is the pack-dry-run option, which lets you run only the nondestructive pre_pack phase to get a list of everything that would be deleted by the pack phase. This should be particularly useful if you’re trying out packing for the first time on a big database. My current customer would have benefited from this too.
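To preview a pack, the new option goes in the same storage section; a sketch (adapter details are placeholders):

```
<relstorage>
  # Run only the nondestructive pre_pack phase and report what the
  # pack phase would delete, without deleting anything.
  pack-dry-run true
  <postgresql>
    dsn dbname='zodb'
  </postgresql>
</relstorage>
```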
I also fixed a bug that caused the pack code to remove less old stuff than it should. Finally, I started using PyPI instead of the wiki as the main web page. Using PyPI means I have to maintain only one README, which gets translated automatically into the PyPI page; until now I've had to maintain both the README and the wiki page.