[Sidefx-houdini-list] deep primid aov?
matt.estela at gmail.com
Thu Aug 14 05:28:57 EDT 2014
We had some progress on this; it seems you have to either use micropoly
mode or turn off stochastic transparency, otherwise you get render
glitches. That way we could render a solid green plane with a transparent
red plane on top, and isolate them by depth. Pretty cool.
We then tried the same with id mattes; it mostly worked. We'll try a few
more things tomorrow, then collect the results and post to odforce once we
have a good suite of examples.
Btw, the deep exr output is very twitchy; it's easy to create a combo of
options that shuffles channels into invalid states: the red channel gets
blanked, other channels go strange, etc.
On 14/08/2014 6:56 PM, <eetu at undo.fi> wrote:
> Hey Rangi,
> That's where the beauty of deep data comes in! The aov values, as well as
> color and opacity, are stored _per_sample_, not per pixel, and that's why
> it works.
> In nuke you can just select all samples that have a given id value
> (plus/minus a float epsilon..), and remove them or create a mask channel.
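The per-sample selection eetu describes can be sketched in plain Python. This is a hedged illustration, not Nuke's actual API: each deep pixel is modelled as a hypothetical list of (depth, primid, alpha) tuples, and samples whose id sits within an epsilon of the target are kept and composited into a mask value:

```python
# Sketch of selecting deep samples by id, as eetu describes doing in Nuke.
# The data model here is hypothetical: each deep pixel is a list of
# (depth, primid, alpha) tuples, not Nuke's actual API.
EPSILON = 1e-4  # float tolerance when comparing stored ids

def select_by_id(samples, target_id, epsilon=EPSILON):
    """Keep samples whose id matches target_id within epsilon, and
    composite their alphas front-to-back into a mask value."""
    kept = [s for s in samples if abs(s[1] - target_id) < epsilon]
    mask = 0.0
    for _depth, _primid, alpha in sorted(kept):  # nearest sample first
        mask += (1.0 - mask) * alpha  # standard "over" accumulation
    return kept, mask

# One pixel: a transparent red plane (id 2) over a solid green plane (id 5).
pixel = [(1.0, 2.0, 0.5), (2.0, 5.0, 1.0)]
kept, mask = select_by_id(pixel, 5)  # kept holds only the id-5 sample
```

Because the match is per sample rather than per pixel, a fully occluding foreground sample with a different id doesn't corrupt the selection.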
> Matt: Apologies if the example scenes don't work at the moment, they were
> created with then-current versions of Houdini and Nuke (Dec 2013), and the
> deep handling has been changing a bit in both, I guess.
> (apologies if this shows up twice, I didn't see my first try coming
> through, now trying with different from:address..)
> On 2014-08-14 06:26, Rangi Sutton wrote:
>> Hey Matt,
>> I don't think it can be tackled like you're describing.
>> If you say the value identifying the object is "5", how does the deep map
>> say a pixel is 40% covered by it without the stored value becoming "2"
>> (0.4 x 5)? It needs two bits of info: the fact that it's "5", and the
>> fact that it's "40%".
>> You'll need a separate channel for each component you want to be able to
>> separate, so you're back to multiple AOVs.
>> Or you embrace that you're working in deep land and render a separate pass
>> for every object and deep merge the lot.
>> ID passes where values rather than channels are used to separate the
>> objects are a hack that falls over wherever any sort of transparency,
>> including anti-aliasing or motion blur, comes into play. Making it deep
>> just amplifies the problem.
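Rangi's point can be shown with a toy calculation (purely illustrative numbers, not renderer output): at an anti-aliased edge the filter averages ids weighted by coverage, and the result identifies neither object:

```python
# Toy anti-aliased edge pixel: 40% covered by object id 5, 60% by id 2.
# Purely illustrative numbers, not renderer output.
coverage = [(5.0, 0.4), (2.0, 0.6)]  # (object_id, pixel_coverage)

# A filtered id matte averages the ids by coverage:
filtered_id = sum(obj_id * w for obj_id, w in coverage)  # about 3.2

# 3.2 is no object's id -- a single channel can't carry both
# "which object" and "how much coverage" at once.
```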
>> Rangi Sutton
>> VFX Supervisor
>> Cutting Edge
>> On 13 August 2014 12:38, Matt Estela <matt.estela at gmail.com> wrote:
>> Short version:
>>> Outputting primid as a deep aov appears to be filtered to nonsensical
>>> values; is that expected?
>>> Long version:
>>> Say we had a complex spindly object like this motorcycle sculpture
>>> from wires:
>>> Comp would like control over grading each sub-object of this, but
>>> outputting each part (wheel, engine, seat etc) as a separate pass is
>>> too much work; even outputting rgb mattes would mean at least 7 or 8
>>> passes. Add to that the problem of the wires being thinner than a pixel,
>>> so standard rgb mattes get filtered away by opacity. Not ideal.
>>> Each part is a single curve, so in theory we'd output the primitive id
>>> as a deep aov. Hmmm....
>>> Tested this: created a few poly grids, made a shader that passes
>>> getprimid into a parameter, wrote that out as an aov, and enabled deep
>>> camera map output as an exr.
>>> In nuke, I can read the deep aov and use a deepsample node to query the
>>> values. To my surprise the primid isn't clean; in its default state I
>>> get multiple samples per pixel, and while the topmost sample is correct
>>> (eg, 5), the remaining values are nonsense fractions (3.2, 1.2, 0.7,
>>> 0.1 etc).
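The two behaviours described here can be mimicked in a few lines of Python (illustrative only; this is not Mantra's actual filtering code): default-style filtering blends the aov across a pixel's samples, while a 'closest surface' style filter keeps only the nearest sample's value:

```python
# Toy deep pixel: (depth, primid, filter_weight) tuples. Purely
# illustrative; not Mantra's actual sample data or filtering code.
samples = [(0.5, 5.0, 0.25), (1.0, 3.0, 0.25), (1.5, 1.0, 0.5)]

# Default-style filtering: the aov is blended across samples by weight,
# yielding a fraction that is no object's id.
averaged = sum(pid * w for _, pid, w in samples)  # 2.5

# 'Closest surface'-style filtering: keep only the nearest sample's id.
closest = min(samples, key=lambda s: s[0])[1]  # 5.0
```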
>>> If I change the main sample filter on the rop to 'closest surface', I
>>> get a single sample per pixel, which makes more sense, and sampling in
>>> the middle of the grids I get correct values. But if I look at
>>> anti-aliased edges, the values are still fractional.
>>> What am I missing? My naive understanding of deep is that it stores the
>>> samples prior to filtering; as such, the deepsample picker should return
>>> correct primids without being filtered down by opacity or anti-aliasing.
>>> Stranger still, if I use a deepToPoints, the pointcloud looks correct,
>>> though I'm not sure I trust the way it visualises the samples.
>>> Anyone tried this? I read an article recently where Weta were talking
>>> about using deep ids to isolate bits of chimps; seems like a useful
>>> thing that we should be able to do in mantra.
>>> Sidefx-houdini-list mailing list
>>> Sidefx-houdini-list at sidefx.com