Contemplating Integration of Protocol Buffers into ZODB

I am now looking at integrating Google’s Protocol Buffers into ZODB.  This means that wherever possible, ZODB should store a serialized protocol buffer rather than a pickle.  The main thing my customer hopes to gain is language independence, but storing protocol buffers could open other possibilities as well.  I think this is a very good idea and I’m going to pursue it.

Although ZODB has always clung tightly to Python pickles, I don’t think that moving to a different serialization format violates any basic ZODB design principle.  On the other hand, I have tried to change the serialization format before: the APE project tried to make the serialization format completely pluggable, and it turned out not to be viable.  This project must not repeat the mistake that broke APE.

Why did APE fail?  APE was not viable because it was too hard to debug.  It was too hard to debug because storage errors never occurred near their source, so tracebacks were never very helpful.  The only people who could solve such problems were those who were intimately familiar with the application, APE, and ZODB, all at the same time.  The storage errors indicated that some application code was trying to store something the serialization layer did not like.  The serialization layer had no ability to voice its objections until transaction commit, at which time the incompatible application code was no longer on the stack (and might even be on a different machine if ZEO was involved).

This turned out to be a much more serious problem than I anticipated.  It made me realize that one of the top design concerns for large applications is producing high quality stack tracebacks in error messages.  A quality traceback can pinpoint the cause of an error in seconds.  I am not aware of any substitute for quality tracebacks, so I am now willing to sacrifice a lot to get them.

So I am faced with a basic design choice.  Should protocol buffers be integrated into ZODB at the application layer, rather than behind the storage layer as APE did?  If I take that route, Persistent subclasses will need to store a protocol buffer explicitly.  Storage errors will then occur mostly at their source, since the protocol buffer classes will check validity immediately.

Now that I’ve written this out, the right choice seems obvious: the main integration should indeed be done at the application layer.  Until now, it was hard to distinguish this issue from other issues like persistent references and the format of database records.

Furthermore, the simplest thing to do at first is to store a protocol buffer object as an attribute of a normal persistent object, rather than my initial idea of creating classes that join Persistent with Google’s generated classes.  That means we will still store a pickle, but the pickle will contain a serialized protocol buffer.  Later on, I will figure out how to store a protocol buffer without a pickle surrounding it.  I will also provide a method of storing persistent references, though it might be different from the method ZODB users are accustomed to.
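Here is a minimal sketch of that first step.  Everything protocol-buffer-specific below is an assumption for illustration: ContactPB stands in for a class generated by the protocol buffer compiler from some hypothetical contact.proto, and the attribute names are mine.

    # A minimal sketch, assuming a generated class ContactPB from a
    # hypothetical contact.proto; attribute names are illustrative.
    from persistent import Persistent
    from contact_pb2 import ContactPB  # hypothetical generated module

    class Contact(Persistent):

        def __init__(self):
            self._pb_bytes = ContactPB().SerializeToString()

        def _load_pb(self):
            pb = ContactPB()
            pb.ParseFromString(self._pb_bytes)
            return pb

        def set_name(self, name):
            pb = self._load_pb()
            pb.name = name
            # Reassigning the attribute marks this object as changed, so
            # the next commit pickles the new serialized protocol buffer.
            self._pb_bytes = pb.SerializeToString()

The pickle ZODB stores is then little more than a wrapper around a string of protocol buffer bytes, which a non-Python program could extract and parse.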

RelStorage 1.1.1 Released

For those just tuning in: RelStorage is an adapter for ZODB (the database behind Zope) that stores data in a relational database such as MySQL, PostgreSQL, or Oracle.  It has advantages over FileStorage and ZEO, the default storage methods.  Read more in the ZODB wiki.

Download this release from PyPI.  As I mentioned in another post, this release works around MySQL performance bugs to make it possible to pack large databases in a reasonable amount of time.

If you are upgrading from previous versions, a simple schema migration is required.  See the migration instructions.

Packing With MySQL

I’ve been working on packing a 6.5 GB production database based on RelStorage and MySQL 5.0.  It contains 6.4 million objects, meaning “select count(1) from current_object” returns 6426879.  I would call this a moderately sized database.  When I first approached it, it had not been packed for at least 3 months.  At first, I could not pack it at all, because RelStorage was using statements that MySQL mis-optimizes.  Here is how I fixed the problem.

The first time I tried to pack this database, the pre_pack phase took several hours before I stopped it by killing the Python process.  I studied the problem using “show processlist” and the MySQL docs online.  I found a statement in RelStorage that MySQL mis-optimizes.  I tried again and it still took too long; this time I discovered that a workaround I had applied earlier for a mis-optimized query did not actually help this instance of MySQL.  I decided to make the workaround more robust by using an explicit temporary table.
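To illustrate the idea, here is roughly what the explicit temporary table workaround looks like.  The table and column names below only approximate the real RelStorage schema, and the queries are simplified.

    # A hedged sketch of the temporary-table workaround; the real
    # pre_pack statements are more involved.
    def mark_kept_objects(cursor, pack_tid):
        # Materialize the rows a subquery would have computed.
        cursor.execute("""
            CREATE TEMPORARY TABLE temp_keep (
                zoid BIGINT NOT NULL PRIMARY KEY
            )
        """)
        cursor.execute("""
            INSERT IGNORE INTO temp_keep
            SELECT DISTINCT to_zoid FROM object_ref WHERE tid <= %s
        """, (pack_tid,))
        # A plain join now replaces the subquery MySQL mis-optimizes.
        cursor.execute("""
            UPDATE pack_object
            JOIN temp_keep USING (zoid)
            SET pack_object.keep = TRUE
        """)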

Then, on Saturday, I started a pack again.  The pre_pack phase completed in less than an hour this time and the final phase began.  The final phase gets from the pre_pack phase a complete list of objects and states to delete; all it really has to do is delete them.  I let it run all weekend.  Today (Monday), using “show processlist” again, I measured 17 seconds per transaction, with 26,000 transactions to go.  With a 50% duty cycle, this would have taken about 10 days!  (The 50% duty cycle is a configurable parameter that causes RelStorage to periodically sleep during packing, allowing other clients to write to the database without significant interruption.)
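The throttle behind the duty cycle is conceptually simple: sleep in proportion to the time spent working.  A rough illustration, not RelStorage’s actual code:

    import time

    def duty_cycle_pause(work_seconds, duty_cycle=0.5):
        # Illustrative only.  With duty_cycle=0.5, each second of work
        # earns a second of sleep, leaving half the time to other
        # database clients.
        if 0.0 < duty_cycle < 1.0:
            time.sleep(work_seconds * (1.0 - duty_cycle) / duty_cycle)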

I read up on the delete statement.  It turns out that delete with a subquery is often very expensive in MySQL.  I converted the delete statements to use a table join instead of a subquery, and the time per transaction dropped from 17 seconds to about 3 milliseconds.  This time the pack finished in minutes.
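The rewrite looks roughly like this, with illustrative table names modeled on the RelStorage schema:

    # Before: DELETE with a subquery, which MySQL 5.0 may execute as a
    # separate lookup for every candidate row.
    slow_delete = """
    DELETE FROM object_state
    WHERE zoid IN (SELECT zoid FROM pack_object WHERE keep = FALSE)
    """

    # After: the same delete as MySQL's multi-table DELETE with a join.
    # This syntax is MySQL-specific, hence the portability cost
    # discussed below.
    fast_delete = """
    DELETE object_state
    FROM object_state
        JOIN pack_object USING (zoid)
    WHERE pack_object.keep = FALSE
    """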

Does this mean I should optimize every statement this way?  Certainly not.  The SQL syntax I had to use is not portable, so I sacrificed portability, readability, and maintainability in order to gain acceptable performance.  I will only make that sacrifice when necessary.

I am very happy that I did not try to optimize this code until I was in a position where I could properly measure the performance with millions of real objects.  If I had optimized prematurely, I would have focused on micro-optimizations that might have only doubled or tripled the performance.  By waiting until the time was ripe, I multiplied the pack performance by a factor of at least 5,000, with only a few days of work and only minor code changes.  Next time someone asks why premature optimization is so evil, I will try to remember this experience.  Most of the best software developers I know skip micro-optimizations and focus on readability instead, helping to ensure that much larger optimizations will be possible later.

Note that during my optimization work, I never once needed to interrupt the main site or restart MySQL.  I only restarted my own Python program repeatedly.  I doubt anyone else knew a pack was going on.

RelStorage 1.1.1 Coming Soon

After a lot of testing and performance profiling, on Friday I released RelStorage 1.1.  On Saturday, I was planning to blog about the good news and the bad news, but I was unhappy enough with the bad news that I decided to turn as much of the bad news as I could into good news.  So I fixed the performance regressions and will release RelStorage 1.1.1 next week.

The performance charts have changed since RelStorage 1.0.

Chart 1: Speed tests with 100 objects per transaction.

Chart 2: Speed tests with 10,000 objects per transaction.

I’m still using 32-bit Python 2.4 and the same hardware (AMD 5200+), but this time I ran the tests using ZODB 3.8.1.

The good news:

  • Compared with RelStorage 1.0, reads from MySQL and Oracle are definitely faster.  In some tests, the Oracle adapter now reads as quickly as MySQL.  To get the Oracle optimization, you need cx_Oracle 5, which was just released this week.  Thanks again to Helge Tesdal and Anthony Tuininga for making this happen.
  • Loading 100 objects or fewer per transaction via MySQL is now so fast that my speed test can no longer measure it.  The time per transaction is 2 ms whether I’m running one ZODB client or 15 concurrently.  (The bad news is that I now probably ought to refine the speed test.)

The bad news in version 1.1 that became good news in version 1.1.1:

  • The Oracle adapter took a hit in write speed in 1.1, but a simple optimization (using setinputsizes()) fixed that, and writes to Oracle are now slightly faster than they were in RelStorage 1.0; see the sketch after this list.
  • MySQL performance bugs continue to be a problem for packing.  I attempted to pack a 5 GB customer database, with garbage collection enabled, using RelStorage 1.1, but MySQL mis-optimized some queries and wanted to spend multiple days on an operation that should only take a few seconds.  I replaced two queries involving subqueries with temporary table manipulations.  Version 1.1.1 has the workarounds.
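The setinputsizes() fix mentioned above amounts to declaring the bind variable types before an executemany() call, so cx_Oracle does not have to rediscover them for every row.  A rough sketch; the table and column names are illustrative, not the exact RelStorage statements:

    import cx_Oracle

    def store_objects(cursor, rows):
        # rows: sequence of (zoid, tid, state) tuples.
        cursor.setinputsizes(cx_Oracle.NUMBER, cx_Oracle.NUMBER,
                             cx_Oracle.BLOB)
        cursor.executemany(
            "INSERT INTO object_state (zoid, tid, state)"
            " VALUES (:1, :2, :3)", rows)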

The bad news that I haven’t resolved:

  • Writing a lot of objects to MySQL is now apparently a little slower.  With 15 concurrent writers, it used to take 2.2 seconds for each of them to write 10,000 objects, but now it takes 2.9 seconds.  This puts MySQL in last place for write speed under pressure.  That could be a concern, but I don’t think it should hold up the release.

Of the three databases RelStorage supports, MySQL tends to be both the fastest and the slowest.  MySQL is fast when RelStorage executes simple statements that the optimizer can hardly get wrong, but MySQL is glacial when its query optimizer makes a bad choice.  PostgreSQL performance seems more predictable.  PostgreSQL’s optimizer seems to handle a subquery just as well as a table join, and sometimes SQL syntax rules make it hard to avoid a subquery.  So, to support MySQL, I have to convert moderately complex statements into a series of statements that force the database to make reasonable decisions.  MySQL is smart, but absent-minded.

The ZEO tests no longer segfault in ZODB 3.8.1, which is very good news.  However, ZEO still appears to be the slowest at reading a lot of objects at once.  While it would be easy to blame the global interpreter lock, the second chart doesn’t agree with that assumption, since it shows that ZEO reads are slow regardless of concurrency level.  ZEO clients write to a cache after every read and perhaps that is causing the overhead.

I recommend that all users of RelStorage upgrade to this version.  The packing improvements alone make it worth the effort.

RelStorage: New MySQL Adapter

I was interested in adding MySQL 5.0+ support to RelStorage, so I went ahead and did it. The code is in Subversion now.

The tests I’ve run suggest the MySQL adapter is already a lot faster than ZEO and both of the other adapters. The MySQL adapter has consistently finished the tests 40% faster than the PostgreSQL adapter. I haven’t tested PostgreSQL 8.3 yet; perhaps that will make a difference. I don’t know yet how the reliability of each adapter compares.

The port took all day. (My wife and I both have a cold, making it unwise to go to work today.) I didn’t think it would take that long, but I had to slow down to figure out the strange relationship between locks and transactions in MySQL. The LOCK TABLES statement is full of surprises! I ended up using GET_LOCK, RELEASE_LOCK, and row-level locks instead.
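For example, the commit lock can be held with an advisory lock rather than LOCK TABLES.  A minimal sketch; the lock name and timeout are illustrative, not necessarily what RelStorage uses:

    def acquire_commit_lock(cursor, timeout=30):
        # GET_LOCK returns 1 on success, 0 on timeout, NULL on error.
        cursor.execute("SELECT GET_LOCK('relstorage.commit', %s)",
                       (timeout,))
        (locked,) = cursor.fetchone()
        if not locked:
            raise RuntimeError('could not acquire the commit lock')

    def release_commit_lock(cursor):
        cursor.execute("SELECT RELEASE_LOCK('relstorage.commit')")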

So after editing queries to fit MySQL’s rules and fixing miscellaneous details, RelStorage now has a third database adapter. I’m excited to see what happens next. For one thing, MySQL support could have a major positive effect on Plone.

By the way, MySQLdb version 1.2.2 is required. Version 1.2.1 has a bug involving BLOBs; RelStorage depends heavily on BLOBs.

RelStorage: A New ZODB Storage

I’m writing RelStorage, a new storage implementation for ZODB / Zope / Plone. RelStorage replaces PGStorage. I’ve put up a RelStorage Wiki page and the zodb-dev mailing list has been discussing it. There is no stable release yet, but a stable release is planned for this month.

While performance is not the main goal (reliability and scalability are more important), I was pleasantly surprised to discover last week that creating a Plone 3 site in RelStorage on PostgreSQL 8.1 is a bit faster than doing the same thing in FileStorage, the default ZODB storage. Clearly, the PostgreSQL team is doing a great job!

Several years ago I put together an early prototype of PGStorage. I recall discovering that PostgreSQL was terribly slow at storing a lot of BLOBs. I read about the soon-to-come TOAST feature, but I wasn’t sure it would solve the problem, so I discarded the whole idea for years. Today, PostgreSQL seems to have no problem at all with this kind of work. It sure has come a long way.

RelStorage also connects to Oracle 10g. In my benchmarks, Oracle has a slight performance advantage, perhaps due to the “read only” transaction isolation mode that Oracle provides. It might be useful for PostgreSQL to gain that feature too.
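For the curious, Oracle enables that mode per transaction; a storage could issue something like the following on its load connection (a sketch, not necessarily what RelStorage does):

    def begin_read_only(cursor):
        # A read-only transaction sees one consistent snapshot and
        # rejects any write statements.
        cursor.execute("SET TRANSACTION READ ONLY")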

I’m considering writing a MySQL adapter for RelStorage as well. When the database is in MySQL and Zope is running under mod_wsgi, we could say that the “P” in LAMP stands for Plone!