Burt,
It is intriguing that multiprocessing isn't working for you; can you confirm that it doesn't raise an error, but just runs one process at a time? I use the multiprocessing library exclusively now, as PP can't seem to handle extensions, but I have never had any problems with it...
I don't know much about the specifics of memory use, but I have learned a few things (all with regard to the multiprocessing library, though they may shed some light on PP too). The child processes take a job from the job pool, complete it, take the next available job, and so on. However, they seem to keep hold of some (not all) of the memory they were using; if you watch the processes over time, the memory footprint of each will grow. Running some models involving Network Analyst with 4 child processes and a worker pool of initially 56 items, I received a MemoryError after the total footprint of all four processes had grown to about 4 GB, about halfway through the pool. This was nowhere near the limits of the computer I was using. My best guess was that all the memory was still owned by the main process, which was thus hitting its 32-bit limit, but the numbers don't really add up to support that theory...
The easy way around this isn't available until Python 2.7, where you can set a maxtasksperchild value; although in my testing it doesn't seem to work very well - it requires more complicated code, which either hangs or runs quite slowly.
The hard way around it is to recreate the pool after every n tasks per child, with n depending on how big the operations being performed by the workers are... Obviously it isn't pretty, but it works just fine. I have used a number of computers and a mix of Windows 7 and XP while testing all this out, and it seems that on some computers the first parallel processes through the pool are a lot slower than the later ones - by at least 4 to 7x. If this is happening on a computer, you want n as high as possible, to minimise the number of your tasks that are first through the pool. If not, you might as well recreate the pool after each parallel process has run once.
Let me know if you want more info, and I'll try to get an example up on my blog sooner rather than later!
Cheers.