[Sidefx-houdini-list] HDK: op:/ magic

sgustafso at gmail.com sgustafso at gmail.com
Mon Jan 24 02:20:12 EST 2011

Definitely throw a lock around any write access to the thread counter; otherwise you're bound to run into race conditions.
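A minimal sketch of what I mean (std::mutex standing in for the HDK's UT_Lock; the function names here are illustrative, not actual HDK API):

```cpp
#include <mutex>

// Hypothetical guard around the shared thread counter. In the HDK you'd
// likely reach for UT_Lock, but std::mutex shows the same pattern.
static std::mutex g_counterLock;
static int g_threads = 0;

// Called from each vexop init(): returns true only for the first
// initializer, which is the one that should load the geometry.
bool incrementThreads()
{
    std::lock_guard<std::mutex> guard(g_counterLock);
    return g_threads++ == 0;
}

// Called from cleanup(): returns true only for the last thread out,
// which is the one that should free shared state.
bool decrementThreads()
{
    std::lock_guard<std::mutex> guard(g_counterLock);
    return --g_threads == 0;
}
```

Both the increment/decrement and the comparison have to happen under the same lock, or two threads can both see zero and both think they're first.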
Another thing that might trip you up: a vexop's init() happens not just once for each separate thread, but also once for each separate use of the function in vex. This would be incredibly useful if you could, say, access constant arguments passed to the vex function (string paths!!!) from within the init() call. Alas...

Something else that you should consider is that this gets immensely more complicated if you want to access multiple gdps from the same vexop. The reference counting trick to track first initialization fails, because you never know which instance of the function you're dealing with. That is, you can use reference counting to get first initialization and last cleanup, but of which instance? I'd kill for a vexop_instanceid(), or equivalent argument.

What I've found to be the best option for flexible gu_detail access in vexops is a handle-based interface (like pcopen and friends), a hash map, and a little bit of reference counting. E.g., in vex this looks something like:

int mygdh = gdopen("op:/path/to/some/op");
gdfoo(mygdh, ...);

Since we don't know what that path is until we're in eval(), and there might be multiple calls to gdopen for different cached gdps, I just do a lookup into a hash map for the path to see if there's already a gdp for that path. If the path is found in the hash map, the associated value from the map is an index into a contiguous vector storing the allocated GU_Detail pointers; otherwise, a thread lock is obtained and the hash map and vector are modified. From that point on, any functions that need access to the gdp just access it directly from the vector (in constant time) using their integer handle.
I use a single reference counter to track only how many instances of 'gdopen' have been initialized, and by how many threads; when the counter reaches zero, obtain a thread lock and clear out the vector/hash maps.
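A rough, self-contained sketch of that handle table (std::mutex and a fake detail struct stand in for UT_Lock and GU_Detail; all names here are illustrative, not my actual implementation):

```cpp
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative stand-in for GU_Detail; the real code stores GU_Detail
// pointers loaded from the op: path.
struct FakeDetail { std::string path; };

class DetailTable
{
public:
    // gdopen(): look up the path, allocating a new entry only on the
    // first use. Returns an integer handle into the detail vector.
    // (For simplicity this locks on every call; the real thing only
    // needs the lock when the map/vector are modified.)
    int open(const std::string &path)
    {
        std::lock_guard<std::mutex> guard(myLock);
        auto it = myHandles.find(path);
        if (it != myHandles.end())
            return it->second;
        int handle = (int)myDetails.size();
        myDetails.push_back(new FakeDetail{path});
        myHandles[path] = handle;
        return handle;
    }

    // Constant-time access for the gdfoo()-style functions.
    FakeDetail *get(int handle) { return myDetails[handle]; }

    // Last-reference cleanup: free everything and reset the table.
    void clear()
    {
        std::lock_guard<std::mutex> guard(myLock);
        for (FakeDetail *d : myDetails) delete d;
        myDetails.clear();
        myHandles.clear();
    }

private:
    std::mutex myLock;
    std::unordered_map<std::string, int> myHandles;
    std::vector<FakeDetail *> myDetails;
};
```

The point of the indirection is that only gdopen() pays for the string hash; every downstream function works with a plain integer index.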

This does mean that each call to gdopen() has to do a hash map lookup. But honestly, it's not that bad: the only argument to gdopen is a string, which is /very likely/ to be constant. And since it's constant, we can rely heavily on vex to optimize away excess calls. E.g., in a test case with 800,000 points and an optimize level of VEX_OPTIMIZE_2, the number of eval() calls that I'm currently seeing for gdopen is about 3125 (how well it optimizes really depends on the rest of the vex code. It can get as low as numthreads); perfectly acceptable imo.
This same optimization gain can't necessarily be had if you're doing everything all in one call. E.g.,
gdfoo("op:/bar", ptnum); // last arg is varying, so eval() will be called for every point
Of course if you know that you are only ever going to read from one gdp, then this isn't an issue: you only need to do a lock one time per full vex process. But if you need access to multiple gdps, I don't see any way around either using a hash map of some kind, or just doing full gdp initialization per-instance-per-thread.
The extra bonus to using a handle-based interface is that once you have a functional gdopen() equivalent, adding in new vexops that need GU_Detail access is about as easy as it gets.

At Mon, 24 Jan 2011 01:21:06 +0100,
Szymon Kapeniak wrote:
> I've tried it before, and nope, this doesn't work either, perhaps
> because they (threads) operate on a virtual machine. Once you start to
> update geometry on every frame, the reference counter will fail - not
> sure why. There is progress though, I managed to avoid crashes
> completely with the help of the UT_Lock class. I still haven't found a
> way to update the gdp on time, so one of the threads is sending back
> garbage. But I see the light of hope...
> 2011/1/23 Olex P <hoknamahn at gmail.com>:
> > Heh, sleepy, last bit of code has to be
> >
> > if(!threads)
> > {
> >   threads = 1;
> >   if(geo) geo->load();
> >   else something_unexplainable_happened();
> > }
> > else threads++;
> > _______________________________________________
> > Sidefx-houdini-list mailing list
> > Sidefx-houdini-list at sidefx.com
> > https://lists.sidefx.com:443/mailman/listinfo/sidefx-houdini-list
> >
