[Sidefx-houdini-list] deep primid aov?

Rangi Sutton rsutton at cuttingedge.com.au
Thu Aug 14 05:17:46 EDT 2014


Hey Eetu.

Ah, yes, you're right. I've seen a demo showing that the colour is deep.
Most of the deep stuff I've mucked with (not much) has been rendering a
beauty pass and deepifying/DeepRecoloring it, which wouldn't work for
Matt's use case. But a proper full-colour deep pass would.

Will check out your setup when I get a chance.

Cheers,
r.


Rangi Sutton
VFX Supervisor
Cutting Edge


On 14 August 2014 18:56, <eetu at undo.fi> wrote:

> Hey Rangi,
>
> That's where the beauty of deep data comes in! The aov values, as well as
> color and opacity, are stored _per_sample_, not per pixel, and that's why
> it works.
>
> In Nuke you can just select all samples that have a given id value
> (plus or minus a float epsilon) and remove them or create a mask channel.
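> Logically it's just this per-sample test (a VEX-flavoured sketch only;
> target_id and eps are made-up names, and in Nuke you'd express the
> equivalent with a deep node such as DeepExpression):
>
>     // Per-sample matte: 1 where a sample's id matches the target within
>     // a small epsilon, 0 everywhere else.
>     float id_mask(float sample_id; float target_id; float eps)
>     {
>         return abs(sample_id - target_id) < eps ? 1.0 : 0.0;
>     }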
>
> Matt: Apologies if the example scenes don't work at the moment; they were
> created with then-current versions of Houdini and Nuke (Dec 2013), and the
> deep handling has been changing a bit in both, I guess.
>
> eetu.
>
>
> (apologies if this shows up twice, I didn't see my first try coming
> through, now trying with a different from: address..)
>
>
> On 2014-08-14 06:26, Rangi Sutton wrote:
>
>> Hey Matt,
>>
>> I don't think it can be tackled like you're describing.
>>
>> If the value identifying the object is "5", how does the deep map say a
>> sample is 40% covered by object 5 without the stored value becoming "2"?
>> It needs two pieces of info: the fact that it's "5", and the fact that
>> it's "40%".
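>> Spelled out as arithmetic (a throwaway sketch, nothing more):
>>
>>     // A single filtered id value can't tell these two cases apart:
>>     float filtered_a = 5.0 * 0.4;  // 40% coverage of object 5  -> 2.0
>>     float filtered_b = 2.0 * 1.0;  // full coverage of object 2 -> 2.0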
>>
>> You'll need a separate channel for each component you want to be able to
>> separate, so you're back to multiple AOVs.
>>
>> Or you embrace that you're working in deep land and render a separate pass
>> for every object and deep merge the lot.
>>
>> ID passes where values rather than channels are used to separate the
>> objects are a hack that falls over wherever any sort of transparency,
>> including anti-aliasing or motion blur, comes into play. Making it deep
>> just amplifies the problem.
>>
>> Cheers,
>> r.
>>
>>
>> Rangi Sutton
>> VFX Supervisor
>> Cutting Edge
>>
>>
>> On 13 August 2014 12:38, Matt Estela <matt.estela at gmail.com> wrote:
>>
>>> Short version:
>>> Outputting primid as a deep aov appears to be filtered to nonsensical
>>> values; is that expected?
>>>
>>> Long version:
>>> Say we had a complex spindly object like this motorcycle sculpture
>>> created
>>> from wires:
>>>
>>>
>>> http://cdnl.complex.com/mp/620/400/80/0/bb/1/ffffff/dad2da95038d2c19ee6f7207eacf5e0c/images_/assets/CHANNEL_IMAGES/RIDES/2012/04/wiresculpturerides01.jpg
>>>
>>> Comp would like to have control over grading each sub-object of this
>>> bike, but outputting each part (wheel, engine, seat etc.) as a separate
>>> pass is too much work; even outputting rgb mattes would mean at least 7
>>> or 8 aovs. Add to that the problem that the wires are thinner than a
>>> pixel, so standard rgb mattes get filtered away by opacity. Not ideal.
>>>
>>> Each part is a single curve, so in theory we'd output the primitive id
>>> as a
>>> deep aov. Hmmm....
>>>
>>> Tested this: created a few poly grids, made a shader that passes
>>> getprimid -> parameter, wrote that out as an aov, and enabled deep
>>> camera map output as an exr.
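>>> The shader side is roughly this (a minimal VEX sketch; the shader name
>>> and the "primid" export name are just what I picked for the test):
>>>
>>>     // Minimal Mantra surface shader: export the primitive id so it can
>>>     // be bound to an extra image plane and written into the deep output.
>>>     surface primid_aov(export float primid = 0.0)
>>>     {
>>>         // getprimid() returns the id of the primitive being shaded.
>>>         primid = getprimid();
>>>     }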
>>>
>>> In Nuke I can get the deep aov and use a DeepSample node to query the
>>> values. To my surprise the primid isn't clean; in its default state
>>> there are multiple samples, the topmost sample is correct (e.g. 5), and
>>> the values behind are nonsense fractions (3.2, 1.2, 0.7, 0.1 etc.).
>>>
>>> If I change the main sample filter on the ROP to 'closest surface', I
>>> get a single sample per pixel, which makes more sense, and sampling in
>>> the middle of the grids gives correct values. But if I look at
>>> anti-aliased edges, the values are still fractional.
>>>
>>> What am I missing? My naive understanding of deep is that it stores the
>>> samples prior to filtering; as such the DeepSample picker values returned
>>> should be correct primids, not filtered down by opacity or anti-aliasing.
>>>
>>> Stranger still, if I use a DeepToPoints the point cloud looks correct,
>>> but I'm not sure I trust the way it visualises the samples.
>>>
>>> Anyone tried this? I read an article recently where Weta were talking
>>> about using deep ids to isolate bits of chimps; seems like a useful
>>> thing that we should be able to do in Mantra.
>>>
>>>
>>> -matt
>
> _______________________________________________
> Sidefx-houdini-list mailing list
> Sidefx-houdini-list at sidefx.com
> https://lists.sidefx.com:443/mailman/listinfo/sidefx-houdini-list
>


