[Sidefx-houdini-list] Python in Houdini 9

Lutz Paelike lutz_p at gmx.net
Tue Oct 31 13:31:22 EST 2006


> Have you tried stackless?

Yes, I know the creator of Stackless Python, Christian Tismer,
very well and did a major development project with him (non-3D).

> The performance is much better than regular python using threads?

The performance can be better if used wisely, especially for simulations.
There are some different paradigms involved with Stackless.
One core feature is support for effectively unlimited recursion.

Normally the execution context in Python (aka a frame object)
is pushed onto the C stack when you call another function and restored
when you leave it. Stackless Python's internal implementation, however,
does not follow that scheme: Python stack frames are managed internally,
more like a linked list. That means context switching is *extremely* fast,
which makes it perfect for simulations, and overall performance is better too.
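A quick way to see the limit that Stackless removes: regular CPython caps recursion depth because every Python-level call consumes C-stack space, so a deep recursion fails where Stackless would just keep going. A minimal sketch (plain CPython, illustrative function name):

```python
import sys

def depth(n):
    """Plain recursive descent -- each call consumes a C stack frame."""
    if n == 0:
        return 0
    return 1 + depth(n - 1)

# Regular CPython guards its C stack with a recursion limit.
print(sys.getrecursionlimit())   # typically around 1000

try:
    depth(10**6)
except RecursionError as e:
    print("regular Python gives up:", type(e).__name__)
```

In Stackless the same call pattern would not exhaust a C stack, because frames live on the heap instead.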

By some magic hackery, fast context switching also works with
Python modules written in C (it is less hackish when using the Stackless C API),
but you have less trouble if you stay in the Python world.
A problem arises with long-lasting calls (especially into C extensions).
You play well with Stackless Python if you do only simple operations and return
quickly; the same applies to standard Python generators (a sort of stream processing).
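That "do a little work, return quickly" style can be sketched with plain generators, which already give you cooperative stream processing in standard Python (the names below are illustrative, not a Stackless API):

```python
from collections import deque

def worker(name, steps):
    # Do a small amount of work, then yield control quickly.
    for i in range(steps):
        yield (name, i)

def run(tasks):
    """Round-robin over generators: each task runs until its next yield."""
    queue = deque(tasks)
    log = []
    while queue:
        task = queue.popleft()
        try:
            log.append(next(task))
            queue.append(task)   # cooperative: re-queue after each step
        except StopIteration:
            pass                 # task finished, drop it
    return log

print(run([worker("a", 2), worker("b", 2)]))
# interleaved: [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

A task that blocks inside `next()` (say, a long C call) would stall every other task in the queue, which is exactly the problem described above.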


Imagine you have a particle system with 100,000 particles.
It would be nice to have a thread per particle: put the simulation logic
in a class, keep an array of "agents", and just cycle through all agents to evaluate them.
With regular threads this would be impossible, since the memory consumption
would kill your machine, but Stackless has a solution: microthreads.

Microthreads are not proper system-level threads, but lightweight pseudo-threads
with a memory footprint of about 5 KB (depending on your class implementation, of course).
That means you can easily keep 100,000 simple microthreads in 512 MB of memory. (Very nice.)

The scheduling of the microthreads is up to you. Out of the box you get a
cooperative scheduling model, but you can easily implement more advanced
scheduling in Python. (You could, for example, decide not to evaluate
non-visible agents at all to speed up the simulation.)
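A rough sketch of that idea in plain Python, with regular generators standing in for Stackless microthreads (real Stackless code would use `stackless.tasklet` instead), including the "skip non-visible agents" optimization:

```python
class Particle:
    """One 'agent' per particle; run() is its microthread-style loop."""
    def __init__(self, x, visible=True):
        self.x = x
        self.visible = visible
        self.thread = self.run()

    def run(self):
        # Simulate one tick of logic, then yield control cooperatively.
        while True:
            self.x += 1          # stand-in for real simulation logic
            yield

def tick(particles):
    # Cooperative scheduler: advance every *visible* agent by one step.
    for p in particles:
        if p.visible:
            next(p.thread)

particles = [Particle(0, visible=(i % 2 == 0)) for i in range(4)]
for _ in range(3):
    tick(particles)

print([p.x for p in particles])   # only visible agents advanced: [3, 0, 3, 0]
```

Each generator holds only its frame, so the per-agent cost stays tiny -- the same property that lets Stackless keep 100,000 microthreads in memory.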

Another killer feature is that microthreads can be pickled!
(Pickling is the Python term for conveniently serializing (saving to disk) almost anything.)
That means you can stop a running simulation at any time, dump it to disk, load it back later,
and continue simulating. The computer that loads the pickled microthreads does not have to
be the same one where you saved them, since pickled objects are platform independent.
Aside from just loading and saving to disk, you could also choose to send a pickled microthread
over the network to another machine and implement some kind of load balancing.
So you could build a really nifty simulation farm!
Don't forget that you are still not bound to the Python world but can implement
heavy computing tasks in C extensions. As long as the extension is present on
every machine in the pool, all is good.
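Plain CPython cannot pickle a running generator or thread, so the sketch below only pickles the agents' *state* objects; the point of Stackless is that it can additionally pickle the running tasklets themselves, execution position and all. The `Agent` class here is purely illustrative:

```python
import pickle

class Agent:
    """Plain-object agent state. Stackless can pickle the running tasklet
    itself; regular Python can at least pickle the agent's data."""
    def __init__(self):
        self.step = 0

    def advance(self):
        self.step += 1

sim = [Agent() for _ in range(3)]
for a in sim:
    a.advance()
sim[0].advance()

blob = pickle.dumps(sim)            # dump the whole simulation ...
restored = pickle.loads(blob)       # ... load it back (possibly elsewhere) ...
restored[0].advance()               # ... and continue simulating

print([a.step for a in restored])   # [3, 1, 1]
```

The same `blob` could just as well be sent over a socket to another machine in the pool instead of being written to disk.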



> It will be great if SESI cretes a solid core base for Houdini 9 ready
> for concurrent programming, and transparent for developers.
> For example, providing in the HDK functions that runs in parallel when
> houdini cooks,or like in Maya where you can ensure that your
> operations ar concurrent safe so Maya try to parallel these
> operations.

Exactly. If the feature list for Houdini 9 is not yet carved in stone,
then I would really encourage SESI to provide some API that makes it possible
to hook into the core Houdini event loop, or at least to register callbacks for specific events.
I don't know the existing Houdini API very well yet, so maybe it is already there.
That would make it possible to integrate various custom libraries and GUI toolkits
(like wx and Qt) much more cleanly.

I have been looking at this specific problem for some time now, and I can say it is a burden
to integrate deeply with commercial software when you have access to neither its core event
loop nor its source. Trolltech provides support for hooking the Qt event loop into X-based
applications (in the commercial version only, not the GPL one), but it would be *much* better
to have official support for this in the hosting application.
*sigh*


> Multiprocessing is the future and is almost a comodity feature in
> computers today, so I think that is very important that Houdini 9 can
> respond to these new times.

Don't forget that Python still has the Global Interpreter Lock (GIL).
You *can* have multiple system-level threads (or Stackless microthreads),
but as long as you are evaluating Python bytecode, only *one* thread per process can be active.
That means execution of Python code is serialized by the GIL.
It also means that threads don't scale your Python program as well as in
other environments, although they are somewhat simpler to handle.
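A minimal illustration of what the GIL does and does not give you, using the standard `threading` module (function and variable names are illustrative):

```python
import threading

def count(n, out, i):
    # Pure-bytecode loop: the GIL lets only one such thread execute
    # bytecode at a time, so two threads give no CPU speedup.
    total = 0
    for _ in range(n):
        total += 1
    out[i] = total

out = [0, 0]
threads = [threading.Thread(target=count, args=(100000, out, i))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(out))   # 200000 -- correct result, but computed serially
```

The answer comes out right and the code is simple to write, yet the wall-clock time is roughly the same as running both loops on one thread -- the serialization the paragraph above describes.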

From time to time people demand to get rid of the GIL, but this is not going to happen
(at least not in the current implementation of Python, aka CPython).
There are other implementations out there that try to solve the problem differently,
but that's another story...


A saner and more stable approach (for the host environment, Houdini in this case)
would be to run your custom simulation in a separate process
on another processor (core or computer) and bridge it with a network connection.
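With a current Python, that separation can be sketched using the standard `multiprocessing` module, its `Pipe` standing in for a real network socket (the `bridge`/`simulate` names and the doubling "simulation" are illustrative):

```python
from multiprocessing import Process, Pipe

def simulate(conn):
    # Runs in its own process (own interpreter, own GIL, own core);
    # the pipe stands in for the network link to the host application.
    while True:
        msg = conn.recv()
        if msg is None:          # None is our shutdown signal
            break
        conn.send(msg * 2)       # stand-in for a real simulation step
    conn.close()

def bridge(value):
    host, sim = Pipe()
    p = Process(target=simulate, args=(sim,))
    p.start()
    host.send(value)             # hand work to the simulation process
    result = host.recv()         # get the result back over the "network"
    host.send(None)              # tell it to shut down
    p.join()
    return result

if __name__ == "__main__":
    print(bridge(21))            # 42, computed in the other process
```

Since each process has its own GIL, the host application stays responsive while the simulation burns a whole core on its own.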


I hope this gives you a brief overview of Stackless Python for the moment.
By the way, the massively multiplayer online game "EVE Online"
runs on Stackless Python, on the server side, to keep a persistent simulation for
the online gamers.

The creator of EVE, CCP Games, is a major contributor to Stackless.

see 7.4 at http://www.eve-online.com/faq/faq_07.asp

Cheers,

Lutz



More information about the Sidefx-houdini-list mailing list