Problem: running “python bootstrap.py” or “bin/buildout” often produces scripts with a mixed-up Python package search path, because some packages are installed system-wide. Version conflicts result.
Workaround: use “python -S bootstrap.py” and “python -S bin/buildout”. Magically, no more version conflicts.
I wish I had thought of that before. Duh!
Update: Another tip for new zc.buildout users I’ve been meaning to mention is that you should create a preferences file in your home directory so that downloads and eggs are cached centrally. This makes zc.buildout much friendlier. Do this:
mkdir -p ~/.buildout
echo "[buildout]" >> ~/.buildout/default.cfg
echo "eggs-directory = $HOME/.buildout/eggs" >> ~/.buildout/default.cfg
echo "download-cache = $HOME/.buildout/cache" >> ~/.buildout/default.cfg
It seems a bit silly that zc.buildout doesn’t have these settings by default. They make zc.buildout behave a lot like Apache Maven, which is what a lot of Java shops are using these days. Both zc.buildout and Maven are great tools once you get to know them, but both are a real pain to understand at first.
Martijn Faassen suggested this solution in a comment on my previous post, and I think it’s the best one. I created a new service:
I simply posted a patched ZODB3 source distribution on a virtual-hosted server. The first tarball, “ZODB3-3.8.1-polling-serial.tar.gz”, includes both the invalidation polling patch and the framework I created for plugging in data serialization formats other than pickles, but in the near future I plan to also post distributions with just the polling patch and some eggs for Windows users.
It would not make sense for me to post the patched tarballs and eggs on PyPI because I don’t want people to pull these patched versions accidentally. Pulling these needs to be an explicit step.
Thanks to setuptools and zc.buildout, it turns out that creating a Python code distribution server is a piece of cake. The buildout process scans the HTML pages on distribution servers for <a> links. Any link that points to a tarball or egg with a version number is considered a candidate. A static web site can easily fulfill these requirements. I imagine it gets deeper than that, but for now, that’s all I need.
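To illustrate the mechanism, here is a rough sketch of the kind of scan a tool like setuptools performs on a find-links page. This is not the real setuptools code; the function name and regular expressions are my own simplifications of the idea:

```python
import re

# Collect <a> links whose filenames look like versioned distributions,
# e.g. "name-version.tar.gz" or "name-version.egg".
LINK_RE = re.compile(r'<a\s+[^>]*href="([^"]+)"', re.IGNORECASE)
DIST_RE = re.compile(r'^([A-Za-z][\w.]*)-(\d[-\w.]*)\.(tar\.gz|egg|zip)$')

def find_candidates(html):
    """Return (name, version, filename) tuples for versioned dist links."""
    candidates = []
    for href in LINK_RE.findall(html):
        filename = href.rsplit('/', 1)[-1]
        m = DIST_RE.match(filename)
        if m:
            candidates.append((m.group(1), m.group(2), filename))
    return candidates

page = '<a href="ZODB3-3.8.1-polling-serial.tar.gz">ZODB3</a>'
print(find_candidates(page))
```

Since the scan keys off nothing but anchor tags and filename conventions, any static web server that lists the tarballs works as a distribution server.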
To use this tarball, buildout.cfg just needs to include lines something like these (the pin goes in a [versions] section that the versions option points at):

[buildout]
find-links = http://packages.willowrise.org
versions = versions

[versions]
ZODB3 = 3.8.1-polling-serial
zc.buildout does the rest.
It took a while to find this solution because, upon encountering the need to distribute patched eggs, I guessed it would be difficult to set up and maintain my own package distribution server. I also guessed setuptools had no support for patches in its versioning scheme. I’m glad I was completely wrong.
By the way, Ian suggested pip as a solution, but I don’t yet see how it helps. I am interested. I hope to see more of pip on Ian’s great blog.
I’ve been thinking more about patching Python eggs. All I really need is for buildout.cfg to use a patched egg. It doesn’t matter when the patching happens (although monkey patching is unacceptable; the changes I’m making are too complex for that.) So the buildout process should download an egg that has already been patched. That solution is probably less error-prone anyway.
So, I could create a “ZODB3-polling” egg that contains ZODB 3.8.1 with the invalidation polling patch, then upload that to PyPI. All I have to do is tell people how to change their buildout.cfg to use my egg in place of the ZODB3 egg.
Ah, but there’s trouble: the ZODB3 egg is pulled in automatically through egg dependencies. If people simply add my new egg to their buildout.cfg, they will end up with two ZODB versions in the Python path at once. Which one wins?!
Therefore, it seems like zc.buildout should have a way to express, in buildout.cfg, “any requirement for egg X should instead be satisfied by egg Y”. I am going to study how that might be done.
The term “egg” as used in the Python community seems so whimsical. It deserves lots of puns. A couple of weeks ago, I made a little utility for myself that takes all the eggs from an egg farm produced by zc.buildout and makes a single directory tree full of Python packages and modules. I called it Omelette. Get it? Ha! (I can hear chickens groaning already…) The surprising thing about Omelette is it typically finishes in less than 1 second, even with dozens of eggs and thousands of modules. It mostly produces symlinks, but it also unpacks zip files. I plan to share it, but I don’t know when I’ll get around to packaging it.
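A toy version of the Omelette idea looks something like the sketch below. The function name and layout are mine, and the real utility surely handles more cases (namespace packages, conflicting names, and so on); this just shows why it is fast, since it mostly creates symlinks:

```python
import os
import zipfile

def omelette(eggs_dir, out_dir):
    """Combine all eggs in eggs_dir into one tree of packages in out_dir.

    Unzipped eggs are symlinked (nearly instant); zipped eggs are
    extracted.  EGG-INFO metadata is skipped.
    """
    os.makedirs(out_dir, exist_ok=True)
    for egg in sorted(os.listdir(eggs_dir)):
        path = os.path.join(eggs_dir, egg)
        if zipfile.is_zipfile(path):
            zipfile.ZipFile(path).extractall(out_dir)   # unpack zipped egg
        elif os.path.isdir(path):
            for name in os.listdir(path):               # top-level packages
                if name == 'EGG-INFO':
                    continue
                target = os.path.join(out_dir, name)
                if not os.path.exists(target):
                    os.symlink(os.path.join(path, name), target)
```

Dozens of eggs means only dozens of directory listings plus a symlink per top-level package, which is why it finishes in under a second.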
Anyway, I want to talk about poaching patching eggs. As systems grow in complexity, patching becomes more important. Linux distributors, for example, solve a really complex problem, and their solution is to patch nearly every package. If they didn’t, installed systems would be an unstable and unglued mess. I imagine distributors’ patches usually reach upstream maintainers, but I also imagine it often takes months or years for those patches to trickle into a stable release of each package.
I really want to find a good way to integrate patching into the Python egg installation process. I want to be able to say, in package metadata, that my package requires ZODB with a certain patch. That patch would take the form of a diff file that might be downloaded from the Web. I also want to be able to say that another package requires ZODB with a different patch, and assuming those patches have no conflicts, I want the Python package installation system to install ZODB with both patches. Moreover, I want other buildouts to use ZODB without patches, even though I have a centralized cache of eggs in my ~/.buildout directory.
So let’s say my Python package installation system is zc.buildout, setuptools, and distutils. Which layer should be modified to support patching? I don’t think the need for automated patching arises until you’re combining a lot of packages, so it would seem most natural to put patching in zc.buildout. I can imagine a buildout.cfg like this:
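To be clear, nothing like the following exists today; it is purely imaginary syntax for attaching a downloadable diff to a version pin:

```
[buildout]
find-links = http://packages.willowrise.org
versions = versions

[versions]
ZODB3 = 3.8.1

[patches]
ZODB3 = http://packages.willowrise.org/invalidation-polling.diff
```

The installer would fetch the named diffs, apply them to the source distribution, and build the egg from the patched tree, refusing to proceed if any two requested patches conflict.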
I wonder how difficult it would be to achieve that; some modification of setuptools might be required. Alternatively, could Paver do it? I suspect Paver is not set up for patching eggs either.
Seeing this in a text editor makes me nervous:
That’s invalid code, but I didn’t write it: the IDE is displaying my file completely incorrectly. There are lines missing. There is some kind of repaint bug and it has something to do with scrolling. No matter how featureful an IDE might be, I can’t use it if it can’t show me a text file without jumbling the lines. When I once saw an open source text editor do the same kind of thing, I dropped that editor so fast that I no longer remember its name. 🙂
I have been trying out Wing IDE. It’s nice that it shows me instant documentation as I’m typing, but there’s still a lot I’d like to see. I have some feature requests:
- The file dialog in Wing IDE is a royal pain, just like most file dialogs. KDE is the only system I’ve seen with a consistently good file dialog, so please let me use that instead. Provide some configuration option that tells the IDE to use a shell command like “kdialog --getopenfilename /” whenever I want to open a file.
- NetBeans has the right idea for renaming symbols. It’s even better than Eclipse. In NetBeans, Ctrl-R doesn’t open a search/replace dialog, nor does it open a refactoring dialog if the symbol is private. NetBeans does something much more clever: it selects all instances of the symbol, then as you type, all instances of that symbol change simultaneously. No dialog is necessary. That feature alone tempts me to use NetBeans for Python code, even though NetBeans is as oversized as Eclipse.
- When I’m typing code, the main documentation I’m interested in is interface documentation, not implementation documentation. So Wing IDE really needs to support zope.interface.
- In both Eclipse and NetBeans, I can almost completely ignore import statements. Auto-completion adds the necessary import statements automatically. Eclipse goes even further and generates import statements when I paste code from another file, but that’s just icing on the cake.
If only Wing IDE supported these features, buying a license would be an easy decision. A promise from the developers that those features are coming soon would be very encouraging.
After I switched my RepRap to run on g-codes a few weeks ago, I started writing little Python scripts to do various things like lay down a raft, shut the extruder off and move the platform out of the way, extrude in preparation for a build, and send a g-code file to the controller. All of the scripts rely on a common module that sends a series of g-codes to the controller and waits for acknowledgements. The code has grown and become more interesting:
- I optimized the communication by having the host send enough commands to keep the controller’s 128 byte serial buffer full. The host does this by sending commands before an acknowledgement has been received for commands sent earlier. This works surprisingly well and helps the controller handle many commands per second, which is important for drawing short segments.
- I added code that automatically resumes a build where it left off if the controller restarts during the build. I implemented this because my controller spontaneously restarted several times this week in the middle of a build. I implemented this feature by creating a simple, fast machine simulator in the host. To resume a build, the code resets the controller, sends the g-codes necessary to put the machine back in the state recorded by the simulator, then resubmits the commands the controller never acknowledged. This also works surprisingly well.
- I created a g-code runner that can resume any stopped build. I implemented this because yesterday my controller not only spontaneously restarted, it actually lost its firmware in the process! (I then discovered and fixed a loosely connected ground wire which could have been the culprit all along.) Now I can stop a build at any time, turn off the power, disconnect the USB cable, clean the heater, restart the host computer, turn on the power and reconnect, upload a new firmware, run some g-code to test and prime the RepRap, and finally resume printing exactly where I left off. It’s kind of magical to see it actually work.
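The send-ahead flow control in the first item can be sketched roughly as below. The class name, the “ok” acknowledgement line, and the port interface are all invented for illustration; the real protocol details differ:

```python
class GcodeSender:
    """Keep a controller's 128-byte serial buffer full by sending
    commands before earlier ones are acknowledged.

    Assumes acks arrive in order and no single command exceeds the
    buffer size.
    """
    BUFFER_SIZE = 128

    def __init__(self, port):
        self.port = port
        self.pending = []      # (command, length) sent but not yet acked
        self.in_flight = 0     # bytes currently in the controller buffer

    def send(self, command):
        line = command + '\n'
        # Block until the controller has room for this command.
        while self.in_flight + len(line) > self.BUFFER_SIZE:
            self._read_ack()
        self.port.write(line)
        self.pending.append((command, len(line)))
        self.in_flight += len(line)

    def drain(self):
        """Wait for every outstanding command to be acknowledged."""
        while self.pending:
            self._read_ack()

    def _read_ack(self):
        self.port.readline()                  # e.g. an 'ok' line
        command, length = self.pending.pop(0)
        self.in_flight -= length
```

Because several short commands are always queued in the controller, the host never stalls the machine between segments, which is what makes high command rates possible.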
So now I have worked around the most common ways a print can fail partway through. This should make it much easier to work through problems with large builds. I can even stop a build with ctrl-C, shut everything off, go to bed, and resume the build another day! I definitely need to start doing that. 😉
By the way, I had to make a minor update to the g-code firmware to make it possible to restore all of the machine state: the G92 code needs to notice and use the X, Y, and Z parameters. The fact that no one has yet done this to the firmware in Subversion suggests that none of the other g-code runners can resume a build like mine can. So I guess this code might be useful to the community. Speak up if you’re interested.
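The host-side simulator plus G92-based restore can be sketched like this. The class and method names are mine and only a couple of codes are modeled; the point is that G92 with X, Y, and Z parameters sets the current coordinates without moving, which is exactly what a resume needs:

```python
class MachineSimulator:
    """Minimal host-side model of the machine state for resuming builds."""

    def __init__(self):
        self.pos = {'X': 0.0, 'Y': 0.0, 'Z': 0.0}

    def feed(self, line):
        """Track the effect of one g-code line on the position."""
        words = line.split()
        if not words:
            return
        params = {w[0]: float(w[1:]) for w in words[1:] if w[0] in 'XYZ'}
        if words[0] in ('G0', 'G1'):      # move: position becomes target
            self.pos.update(params)
        elif words[0] == 'G92':           # set current position, no motion
            self.pos.update(params)

    def restore_gcodes(self):
        """G-codes that put a freshly reset controller back in this state."""
        return ['G92 X%.2f Y%.2f Z%.2f' %
                (self.pos['X'], self.pos['Y'], self.pos['Z'])]
```

After a controller reset, the host sends restore_gcodes() and then resubmits whatever commands were never acknowledged.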
I’ve been writing code for the family history department of the Church of Jesus Christ of Latter-day Saints for the last 4+ years. It has been a good experience and I feel like I’ve contributed and learned a lot, but now I have a new opportunity with my family that I shouldn’t pass up. My family business is now doing software consulting! And guess what kind of consulting offers we’re getting? It’s all Python, Zope, and Plone! I’m really happy about that. I feel like working in Python gives us a good chance to advance the state of the art.
Our first full-time gig starts in December. It will involve a lot of work with RelStorage, which is another cool bonus. We expect the work to take six months, maybe a little more. Among other work, we are going to make RelStorage rock on MySQL (even more than it does already).
I also need to tie up some loose ends on RelStorage: I need to integrate the optimized Oracle queries, finish setting up the test environment, and release 1.1. (It’s about time, eh? Version 1.1c2 has been out there a long time. Blame Java… 😛 )
I’ve switched my RepStrap to use G-Codes, since that seems to be the direction the RepRap project is heading. I’m glad I did! The G-Code protocol consists of ASCII text I can read in a terminal, edit with a text editor, and generate with Python. Now I have several Python scripts for doing various things with the machine. Sometimes scripting is much more comfortable than a GUI interface.
OTOH, I’m also glad I started with the SNAP codes, since the code in Subversion that implements the new protocol isn’t stable yet. Specifically:
- At first, the G-Code firmware failed very badly. Some poking around revealed that the microcontroller was resetting on every strtod() call. This turned out to be a bug in Ubuntu 8.04; the solution was to just grab the latest avr-libc package from the 8.10 release and install it with “dpkg -i”.
- At that point, the compiled firmware was just a little too big to fit on the Arduino. I commented out the support for drill cycling (codes 81-83; do we really need those?), then it fit with room to spare.
- The RepRap host interface can generate G-Code scripts, but the scripts contain very long pauses. I discovered that the G4 codes it was generating were supposed to be in units of seconds, but the code was outputting milliseconds instead. I fixed that and now the generator works fine.
Incidentally, I just measured my extruder’s output rate, and this seems like a good place to keep a record. I set the motor speed to 170 (the range is 0-255 and it drives a 10.4V PWM transistor… my old power supply outputs 10.4V instead of 12V… I’m thinking of replacing it). It pulled in 100mm of 3mm filament in 261 seconds, outputting 2045mm of filament that fell on the floor.
Based on that measurement, I calculate that the extruder outputs 7.835mm/s (470.1mm per minute); I’m sure it outputs a little less when gravity is not involved. It pulls in 0.383mm/s. Since the output rate is 20.45 times the input rate, while the volume is obviously the same, the output diameter must be 1/sqrt(20.45) = 1/4.522 of the input diameter, so the output is about 0.663mm wide. Again, that’s with extra gravity involved; the real thing will be slightly fatter. I’ve been using 0.7mm width as an estimate and I guess that’s very close to correct.
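The arithmetic above is easy to reproduce; this just reruns my measurements through the conservation-of-volume reasoning:

```python
from math import sqrt

# Measured: 100mm of 3mm filament in, 2045mm out, over 261 seconds.
fil_in, fil_out, seconds, in_diameter = 100.0, 2045.0, 261.0, 3.0

out_rate = fil_out / seconds      # mm/s extruded
in_rate = fil_in / seconds        # mm/s of filament pulled in
ratio = fil_out / fil_in          # speed-up factor (20.45)
# Volume is conserved, so cross-section area shrinks by `ratio`
# and diameter by sqrt(ratio).
out_diameter = in_diameter / sqrt(ratio)

print('out: %.3f mm/s (%.1f mm/min)' % (out_rate, out_rate * 60))  # 7.835, 470.1
print('in:  %.3f mm/s' % in_rate)                                  # 0.383
print('output width: %.3f mm' % out_diameter)                      # 0.663
```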