NewEgg, You Amaze Me

Last week, an important desktop computer used by my family died. The connector between the power supply and the motherboard fried somehow. I’ve never seen a connector fail like this before: every pin attached to a red wire had burned the surrounding plastic inside the connector. It was difficult to detach the connector, and even after I cleaned up the motherboard and tried a new power supply, the motherboard refused to power on. Nothing else was damaged.

I suspect the motherboard was drawing too much current and the connector failed over time. Newer motherboards have separate 12V connectors which could solve the problem.

Anyway, I immediately found a better motherboard (PCCHIPS A15G (V1.0) AM2+ MCP61P) and a faster, lower-power CPU (AMD|A64 X2 5200 2.7G AM2 65N R) on NewEgg for a total of $108 including shipping. The computer will be better than ever. NewEgg sure knows how to keep me as a customer.

The Fastest WSGI Server for Zope

I have been planning to compare mod_wsgi with paste.httpserver, which Zope 3 uses by default.  I guessed the improvement would be small since parsing HTTP isn’t exactly computationally intensive.  Today I finally had a good chance to perform the test on a new Linode virtual host.

The difference blew me away.  I couldn’t believe it at first, so I double-checked everything.  The results came out about the same every time, though:

[Chart (wsgi-zope1): requests per second, Paste HTTP server vs. mod_wsgi]

I used the ab command to run this test, like so:

ab -n 1000 -c 8 http://localhost/

The requests are directed at a simple Zope page template with no dynamic content (yet), but the Zope publisher, security, and component architecture are all deeply involved.  The Paste HTTP server handles up to 276 requests per second, while a simple configuration of mod_wsgi handles up to 1476 per second.  Apparently, Graham’s beautiful Apache module is over 5 times as fast for this workload.  Amazing!

Well, admittedly, no… it’s not actually amazing. I ran this test on a Xen guest that has access to 4 cores.  I configured mod_wsgi to run 4 processes, each with 1 thread. This mod_wsgi configuration has no lock contention.  The Paste HTTP server lets you run multiple threads, but not multiple processes, leading to enormous contention for Python’s global interpreter lock.  The Paste HTTP server is easier to get running, but it’s clearly not intended to compete with the likes of mod_wsgi for production use.
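
Something like this in the Apache configuration gives four single-threaded daemon processes.  This is only a sketch: the process group name and the path to the WSGI script (the script itself appears later in this post) are placeholders for whatever you use.

WSGIDaemonProcess zope processes=4 threads=1
WSGIProcessGroup zope
WSGIScriptAlias / /opt/myapp/zope.wsgi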

I confirmed this explanation by running “ab -n 1000 -c 1 http://localhost/”; in this case, both servers handled just under 400 requests per second.  Clearly, running multiple processes is a much better idea than running multiple threads, and with mod_wsgi, running multiple processes is now easy.  My instance of Zope 3 is running RelStorage 1.1.3 on MySQL.  (This also confirms that the MySQL connector in RelStorage can poll the database at least 1476 times per second.  That’s good to know, although even higher speeds should be attainable by enabling the memcached integration.)

I mostly followed the “repoze.grok on mod_wsgi” tutorial, except that I used zopeproject instead of Repoze or Grok.  The key ingredient is the WSGI script that hands each request to my Zope application.  Here is my WSGI script (sanitized):

# set up sys.path.
code = open('/opt/myapp/bin/myapp-ctl').read()
exec code

# load the app
from paste.deploy import loadapp
zope_app = loadapp('config:/opt/myapp/deploy.ini')

def application(environ, start_response):
    # translate the path
    path = environ['PATH_INFO']
    host = environ['SERVER_NAME']
    port = environ['SERVER_PORT']
    scheme = environ['wsgi.url_scheme']
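    # Rewrite PATH_INFO using Zope's virtual host traversal syntax,
    # ++vh++scheme:host:port/++, so that the myapp folder is published
    # as the root of the site.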
    environ['PATH_INFO'] = (
        '/myapp/++vh++%s:%s:%s/++%s' % (scheme, host, port, path))
    # call Zope
    return zope_app(environ, start_response)

This script is mostly trivial, except that it modifies the PATH_INFO variable to map the root URL to a folder inside Zope. For example, a request for http://www.example.com/about enters the script with PATH_INFO set to “/about” and leaves with PATH_INFO set to “/myapp/++vh++http:www.example.com:80/++/about”, so Zope publishes the myapp folder as the site root and generates URLs without the extra path segment. I’m sure the same path translation is possible with Apache rewrite rules, but this way is easier, I think.

How to Fix the MySQL Write Speed

Last time I ran the RelStorage performance tests, the write speed to a MySQL database appeared to be slow and getting slower.  I suspected, however, that all I needed to do was tune the database.  Today I changed some InnoDB configuration parameters from the defaults.  The simple changes solved the MySQL performance problem completely.

The new 10K chart, using RelStorage 1.1.3 on Debian Sid with Python 2.4 and the same hardware as before:

I added the following lines to my.cnf to get this speed:

innodb_data_file_path = ibdata1:10M:autoextend
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table

This is similar to the configuration suggested by the InnoDB documentation for a 512 MB database server.  Even if you have a 16 GB server, I would suggest starting with the settings for a 512 MB server, then watching what happens to the RAM and CPU on the database server when you connect all of your client machines simultaneously.  You want to leave at least half the RAM available for disk cache and usage spikes.
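
On Linux, the standard free and vmstat tools are enough for that kind of monitoring (the 5-second interval below is arbitrary):

free -m     # current memory use, including how much goes to the disk cache
vmstat 5    # CPU, memory, swap, and I/O activity, sampled every 5 seconds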

Not all of these changes are related to speed.  The innodb_file_per_table option just seems like a good idea because it makes tables visible on the filesystem, which should improve manageability.  I think it might improve cache locality as well.

With these changes to my.cnf, ZEO, PostgreSQL, and MySQL all perform about the same for writes, with MySQL having a slight lead.  I suspect all three are hitting hardware and kernel limits.  I think the differences would be more pronounced on higher-end storage hardware.

A big caveat: It’s risky to change InnoDB settings unless you’re familiar with all the effects.  Some changes break compatibility with existing table data.  Get to know the InnoDB documentation very well before you change these settings, and make backups using mysqldump, as always.

Meanwhile, Oracle XE continues to write slowly and ZEO read performance is so bad that it’s off the chart.  I bet ZEO read performance could be improved with some simple optimizations somewhere, but I don’t have an incentive to fix that. 🙂  Perhaps it has been fixed in ZODB 3.9.

RelStorage Support

I am more than happy to support RelStorage as best I can by email.  Every time I do, however, I get a nagging feeling that I could help RelStorage users a lot better if we set up a short-term support contract.  I would very much appreciate a chance to optimize their systems by testing the performance of different configurations.  When the communication is limited to email, neither of us gets a chance to discover how we might help each other better.

So if you’re a RelStorage user and your database is growing by tens of gigabytes, please seriously consider a short-term support contract with my little company.  A little tuning or code revision in the right place could yield performance gains of orders of magnitude.  I really want to help you directly.  Contact me at shane (at) willowrise (dot) com.

Limits of zope.pipeline

I’m starting to get a sense of what publisher functionality I can put in a WSGI pipeline and what I shouldn’t.

The pipeline is very useful for specifying the order in which things should happen.  For example, the error handling should come as early in the pipeline as possible, so it can handle many kinds of errors, but it has to come after the pipeline element that opens and closes the root database connection.  Constraints like that have never been expressed clearly in the current publisher.
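
To make that ordering constraint concrete, here is a rough sketch of two pipeline elements written as plain WSGI middleware.  This is not actual zope.pipeline code: the class names, the zodb.connection environ key, and the db object are invented for illustration, and a real connection element would have to delay closing until the response body has been fully consumed.

import sys

class RootConnection(object):
    # Open the root database connection for the request and close it
    # afterward (simplified; see the caveat above about streamed bodies).
    def __init__(self, app, db):
        self.app = app
        self.db = db

    def __call__(self, environ, start_response):
        conn = self.db.open()
        environ['zodb.connection'] = conn
        try:
            return self.app(environ, start_response)
        finally:
            conn.close()

class ErrorHandler(object):
    # Turn unhandled exceptions into a minimal error response.
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        try:
            return self.app(environ, start_response)
        except Exception:
            start_response('500 Internal Server Error',
                           [('Content-Type', 'text/plain')],
                           sys.exc_info())
            return ['A system error occurred.\n']

# The connection element wraps the error handler, so the error handler can
# still use the open connection while rendering an error page:
#
#   pipeline = RootConnection(ErrorHandler(rest_of_pipeline), db)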

I was planning to encapsulate the <base> tag mangling logic in a simple pipeline step, but I’ve studied how it currently works and I realize now that WSGI doesn’t provide a good abstraction for the kind of heuristics Zope uses to make the <base> tag logic fast.  I am considering several choices:

  1. Split the base tag handling between a pipeline element and a new adapter.
  2. Add short-lived output filter hooks to the response, similar to the traversal_hooks I added to requests, which I think turned out quite nice.
  3. Stick to the original plan, which might cause performance problems since Zope would then have to buffer potentially large output streams.

I need to choose the pattern that maximizes clarity for readers.  #1 and #2 are very similar.  #1 is less direct and thus more ambiguous than #2, but #1 is used more often in Zope code.

Anecdotal Evidence

I like to believe that I am a competent software developer in both Python and Java.  As a competent developer, I find that certain things are generally much easier than other things.

For instance, I just spent a frustrating week working out how to install a Shibboleth identity provider, yet I never got it working quite satisfactorily. Then, after spending 30 minutes with python-openid, I had an identity provider server running and I already felt quite confident with it.

It’s as if Java and Python are not really in the same league.  There is no way to prove that, though.  Sometimes Java wins anyway.

java.lang.IllegalStateException: No match found

That error means a Java regular expression did not match the input text.

If you get this while trying to install Shibboleth, it’s because the installer requires a host name with three parts, such as “shib.example.com”.  “localhost” and “localhost.localdomain” will not suffice.

zope.pipeline

I’ve been working on a new revision of the Zope publishing framework.  The goal is to make the publisher comprehensible.  Since I helped design the current publisher, I don’t mind saying that the current design really stinks.  We made it extensible in a way that breeds ravioli code.  I find it difficult to follow the sequence of interactions in the code without concentrating hard and memorizing a lot.  That is not acceptable.

Now that I’m working with Zope most of the time, I can try to clean up the mess.  I decided to try out a Repoze-like design that builds on a WSGI pipeline.  (WSGI did not exist at the time we invented the current zope.publisher.)  I’m trying hard to maintain compatibility with the current Zope publisher, so I’m copying from Zope rather than Repoze, but taking ideas from Repoze as much as possible.

I’ve been pretty happy with the new code, but it wasn’t until last night that I felt a click that told me this new design can work well.  The key is to use the Zope component architecture to build the pipeline dynamically.  In order to cope with all the strange things the publisher has to do, the system varies the pipeline according to the request type.  Now any developer should be able to easily plug in pipeline elements with ZCML.  People can also build a static pipeline if they want, but I think that would involve a fair amount of work to maintain across software upgrades.  I expect the standard dynamically configured pipeline to have 12 stages or more.

The best part is the new design seems to have fewer concepts than the old design.  IPublication is gone, along with publication factories; now we just have request factories.  Dependencies should be under control as well.

I’ve been checking the sketched code into my sandbox on svn.zope.org.  I hope to get the code working soon so I can get feedback on the new design.

Redesign of zope.publisher

Here are my current thoughts on what we should do with zope.publisher.

  • Most packages that depend on zope.publisher only use its interfaces and two base classes (BrowserView and BrowserPage).  Those packages don’t care about anything else that zope.publisher offers.  Therefore, to reduce dependency burdens and to make the zope.publisher package easier to explain, I think zope.publisher should be reduced to providing only interfaces and those two base classes.
  • The IPublication interface and all its implementations should die, to be replaced with a WSGI pipeline.  This should make the publication process dramatically easier to understand and customize.
  • Request and response objects should just hold information; they should have no interesting behavior such as traverse().
  • We should create and use two new keys in the WSGI environment, “zope.request” and “zope.response”.  These will implement at least IRequest and IResponse, respectively.  (“transaction” and “zope.interaction” are also possibilities.)  A small sketch of a pipeline element using these keys follows this list.
  • The WSGI-compatible pipeline should be capable of doing everything zope.app.publication currently does.  We should use Repoze packages in that pipeline where possible.
  • Most of the code in zope.app.publisher should be moved to a better-named package like zope.fileresource or something.  IMHO, the name “zope.app.publisher” conveys an understanding that does not match the contents of the package at all.
  • I think all of zope.app.publication and zope.app.http should be moved out of zope.app and into WSGI pipeline elements.  (This is less clear to me than the rest of the plan.)
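
Here is the sketch promised in the fourth bullet: a pipeline element that picks the request and response out of the WSGI environment.  The element itself and its behavior are invented for illustration; only the environment keys come from the plan above.

class ExampleElement(object):
    # A hypothetical pipeline element; only the environ keys are proposed.
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        request = environ['zope.request']    # proposed key
        response = environ['zope.response']  # proposed key
        # ... inspect or annotate the request and response here ...
        return self.app(environ, start_response)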

How would others feel about that plan?  It’s a lot of changes, but it could be done in phases, and it seems like a big improvement.  I probably need to elaborate more.  It’s still crystallizing in my head.