[Sidefx-houdini-list] deep primid aov?

Rangi Sutton rsutton at cuttingedge.com.au
Wed Aug 13 23:26:40 EDT 2014

Hey Matt,

I don't think it can be tackled like you're describing.

If you say the value identifying the object is "5", how does the deep map
say it's 40% of 5 without that becoming "2"? It needs two pieces of info:
the fact that it's "5", and the fact that it's "40%".
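A quick sketch of that point in Python (purely illustrative, not any real EXR or Nuke API): a deep ID sample has to carry the ID and the coverage as two separate values, because premultiplying one by the other collapses them into a single ambiguous number.

```python
# Hypothetical sketch: a deep sample that keeps ID and coverage separate.
# The names here are illustrative, not part of any actual deep-EXR API.
from dataclasses import dataclass

@dataclass
class DeepSample:
    prim_id: int      # "the fact it's 5"
    coverage: float   # "the fact it's 40%"

sample = DeepSample(prim_id=5, coverage=0.4)

# Premultiplying the ID by coverage destroys the information:
# is 2.0 object 2 at full coverage, or object 5 at 40%? You can't tell.
premultiplied = sample.prim_id * sample.coverage
print(premultiplied)  # 2.0
```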

You'll need a separate channel for each component you want to be able to
separate, so you're back to multiple AOVs.

Or you embrace that you're working in deep land and render a separate pass
for every object and deep merge the lot.

ID passes where values rather than channels are used to separate the
objects are a hack that falls over wherever any sort of transparency,
including anti-aliasing or motion blur, comes into play. Making it deep
just amplifies the problem.
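To make the failure concrete, here's a small Python sketch (illustration only, no Houdini or Nuke API involved) of what happens when ID values get filtered the way colour does: averaging sub-pixel samples blends the IDs into fractions that correspond to no object at all, which is exactly the sort of nonsense values described below.

```python
# Illustration only: filtering sub-pixel ID samples the way a renderer
# filters colour produces meaningless fractional "IDs".

# Four sub-pixel samples on an anti-aliased edge: three hit object 5,
# one hits the background (id 0).
samples = [5, 5, 5, 0]
filtered_id = sum(samples) / len(samples)
print(filtered_id)  # 3.75 -- not a valid object ID

# The same pixel at 50% coverage of object 8 over object 1:
samples2 = [8, 8, 1, 1]
print(sum(samples2) / len(samples2))  # 4.5 -- "between" two unrelated IDs
```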


Rangi Sutton
VFX Supervisor
Cutting Edge

On 13 August 2014 12:38, Matt Estela <matt.estela at gmail.com> wrote:

> Short version:
> Outputting primid as a deep aov appears to be filtered to nonsensical
> values; is that expected?
> Long version:
> Say we had a complex spindly object like this motorcycle sculpture created
> from wires:
> http://cdnl.complex.com/mp/620/400/80/0/bb/1/ffffff/dad2da95038d2c19ee6f7207eacf5e0c/images_/assets/CHANNEL_IMAGES/RIDES/2012/04/wiresculpturerides01.jpg
> Comp would like to have control over grading each sub-object of this bike,
> but outputting each part (wheel, engine, seat etc) as a separate pass is
> too much work, even outputting rgb mattes would mean at least 7 or 8 AOVs.
> Add to that the problems of the wires being thinner than a pixel, so
> standard rgb mattes get filtered away by opacity, not ideal.
> Each part is a single curve, so in theory we'd output the primitive id as a
> deep aov. Hmmm....
> Tested this; created a few poly grids, created a shader that passes
> getprimid -> parameter, write that out as an aov, and enable deep camera
> map output as an exr.
> In nuke, I can get the deep aov, and use a deepsample node to query the
> values. To my surprise the primid isn't clean; in its default state there are
> multiple samples, the topmost sample is correct (eg, 5), and the values behind
> are nonsense fractions (3.2, 1.2, 0.7, 0.1 etc).
> If I change the main sample filter on the rop to 'closest surface', I get a
> single sample per pixel which makes more sense, and sampling in the middle
> of the grids I get correct values. But if I look at anti-aliased edges, the
> values are still fractional.
> What am I missing? My naive understanding of deep is that it stores the samples
> prior to filtering; as such, the deepsample picker should return correct
> primids without them being filtered down by opacity or antialiasing.
> Stranger still, if I use a deepToPoints, the pointcloud looks correct, but
> I'm not sure I trust the way it visualises the samples.
> Anyone tried this? I read an article recently where Weta were talking about
> using deep IDs to isolate bits of chimps; seems like a useful thing that
> we should be able to do in mantra.
> -matt
> _______________________________________________
> Sidefx-houdini-list mailing list
> Sidefx-houdini-list at sidefx.com
> https://lists.sidefx.com:443/mailman/listinfo/sidefx-houdini-list
