Tuesday, March 26, 2013

RtIN - Shot #4 and a headless mouse

Here's a still frame from the shot that takes place directly after the walking dynein. The fluorescent dye is transported along the axons towards the brain.


And I've started modelling a mouse, which will be in quite a few shots, so I'm hoping it's going to be a decent-looking mouse.


I've also been working hard on my music visualization project. I've managed to output cubes to represent Pachelbel's Canon. You can see the repeating structure in the bottom third which represents the repetitive cello line. Baaa bummm deee dummm. That one.
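If you're curious what I mean by cubes, the mapping is roughly along these lines. This is purely an illustrative sketch in Maya Python with made-up note data (the real project parses the actual score and does nicer scaling):

```python
# Illustrative only: map (start beat, MIDI pitch, length in beats) to cubes.
# Assumes Maya's Python module (maya.cmds); the note list is made up.
import maya.cmds as cmds

notes = [(0.0, 50, 1.0), (1.0, 45, 1.0), (2.0, 47, 1.0), (3.0, 43, 1.0)]

for start, pitch, length in notes:
    cube, _ = cmds.polyCube(width=length, height=0.5, depth=0.5)
    # x = time, y = pitch, so a repeated phrase shows up as a repeating row
    cmds.xform(cube, translation=(start + length / 2.0, pitch * 0.5, 0))
```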

Busy times.

Later,
Stuart

Monday, March 18, 2013

RtIN - Shot #3 video

Here's the animated version of the shot from my previous post. Hope you like it.


Later,
Stuart

Sunday, March 17, 2013

RtIN - Shot #3

Here's a still frame from my third shot, now animated. It's partly an homage to "The Inner Life of the Cell", but my version is about how scientists can get fluorescent dye into specific neurons. Here a dynein molecule carries a vesicle loaded with fluorescent dye up an axon.



I'll probably upload this shot to Vimeo, since I think it's pretty neat. Stay tuned.
Later,
Stuart

Saturday, March 16, 2013

Data Vis - Sneak Peek

I'm rather excited about the data visualization project I've taken on. I'm just a bit worried it won't come together in the next couple of weeks. Here's a sneak peek:

For those typographical fiends out there... fonts are just placeholders so far.
Later,
Stuart

Thursday, March 7, 2013

RtIN - Second shot

The second shot I've rendered out for my neurophotonics animation shows a neuron firing as the probe enters the environment. A still image of the electrical impulse isn't very interesting on its own; it needs animation (and sound) to be believable, which is why I'm only showing one still without the impulse.


I've gotten some good feedback on this shot from my professor, so I'll be making changes to the look and lighting for the final. But for now I'm working on another shot that takes place at the molecular scale.

I also recently scripted a very simple image plane creation tool, because I don't like using image planes attached to cameras, and it gets tedious to make a poly plane, scale it to the proper width and height, create new UVs, create a new material, add a file node into the color, and browse for the image file. This tool does most of that for you; you just enter the pixel width and height of your reference image and, after the plane is created, browse for the image file. If I were going to add more features, I would open a file browser from the get-go and try to read the pixel dimensions from the image itself. But anyway, the script is here: http://bmc.erin.utoronto.ca/~stuart/resources.html (at the bottom of the page).
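For the curious, the core of it boils down to something like this. This is a stripped-down sketch of the steps the tool automates, not the actual script from the link above:

```python
# Stripped-down sketch: build a poly plane at the image's aspect ratio and wire
# a file texture into a fresh material. Uses maya.cmds.
import maya.cmds as cmds

def make_image_plane(pixel_width, pixel_height, scale=10.0):
    aspect = float(pixel_width) / float(pixel_height)
    plane = cmds.polyPlane(width=scale * aspect, height=scale,
                           subdivisionsX=1, subdivisionsY=1)[0]

    # New material and shading group so the reference image stays self-contained
    shader = cmds.shadingNode('lambert', asShader=True, name='imagePlane_mat')
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                   name=shader + 'SG')
    cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')

    # File texture feeding the colour; browse to the image on this node afterwards
    file_node = cmds.shadingNode('file', asTexture=True, name='imagePlane_file')
    cmds.connectAttr(file_node + '.outColor', shader + '.color')

    cmds.sets(plane, edit=True, forceElement=sg)
    return plane, file_node
```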

Later,
Stuart

Saturday, March 2, 2013

Review of "The dark art of mental ray" Tutorial

The dark art of mental ray from SimplyMaya provides workflows intended to harness the power of render layers and passes to gain minute control over your renders. This frees you from the time-consuming process of tweaking your lighting and materials to absolute perfection within Maya.

The instructor starts by briefly covering the practical implications of linear and non-linear workflows. These topics overlap with the beginning of “Physically Accurate Lighting in mental ray”. Linear lighting is an abstract concept and difficult to get your head around. This is compounded by the fact that there are numerous ways to tackle linear lighting (as with all things) in Maya. The methods given are decent, but I would have liked a bit more explanation of them, since I had previously learned yet another method that makes more sense, to me at least.
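For context, one common approach (not necessarily the tutorial's, and only a rough sketch) is to counter the sRGB gamma on colour textures so mental ray computes the lighting in linear space, then apply roughly 2.2 gamma when viewing or compositing:

```python
# One common linear-workflow pattern (illustrative only, not the tutorial's
# method): insert a gammaCorrect node at ~0.4545 between an sRGB file texture
# and whatever its outColor currently drives, so rendering happens in linear.
import maya.cmds as cmds

def linearize_file_texture(file_node):
    gamma = cmds.shadingNode('gammaCorrect', asUtility=True,
                             name=file_node + '_degamma')
    for axis in 'XYZ':
        cmds.setAttr('%s.gamma%s' % (gamma, axis), 0.4545)

    # Re-route the texture's existing outColor connections through the gamma node
    targets = cmds.listConnections(file_node + '.outColor',
                                   plugs=True, source=False) or []
    cmds.connectAttr(file_node + '.outColor', gamma + '.value', force=True)
    for target in targets:
        cmds.connectAttr(gamma + '.outValue', target, force=True)
```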

The explanations of render layers and render passes are excellent. It can get very confusing knowing when and where to use render layers versus passes, and the instructor provides helpful examples of getting results from passes, layers, and combinations of the two. Case in point: you can get ambient occlusion from a render pass or a render layer, but one of them gives you much more control. Also discussed here are layer and material overrides, accessing passes from the render viewer, selecting passes to rebuild a beauty pass, and a few pitfalls. This is important information for any serious Maya user to know.

As an aside, I find that rendering tutorials often use uninspiring geometry and scenes *cough* torus knot *cough*, and while that does get the information across, it becomes easier to imagine how principles can apply to your own projects when interesting and attractive examples are used, as in this tutorial.

I have never used Nuke before; this was my first look at it. I have quite a bit of experience learning complicated software, so it wasn't too difficult for me to keep up in the compositing sections. However, the tutorial does assume some knowledge of Nuke (and Maya), so I would not recommend this for beginners in either program. Linking the separate passes in Nuke to rebuild the rendered image shows how you can gain control over your render in post, especially with higher bit-depth files. This is the main focus of the Nuke sections of the tutorial, i.e. linking nodes together to recompose your rendered image while gaining more and more granular control over each aspect of the image. The instructor doesn't go into depth on what attributes you can or should tweak in terms of color correction etc., but this isn't the aim of the tutorial, for better or worse.
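To give a flavour of the idea (with placeholder pass names and file paths, not the tutorial's actual graph), rebuilding a beauty from its passes boils down to summing them with plus merges; in Nuke's Python API it looks something like this:

```python
# Placeholder sketch: sum the separated render passes back together with 'plus'
# merges to recover the beauty. Pass names and file paths are made up.
import nuke

pass_names = ['diffuse', 'indirect', 'specular', 'reflection', 'refraction']
reads = [nuke.nodes.Read(file='render_%s.####.exr' % name) for name in pass_names]

comp = reads[0]
for read in reads[1:]:
    comp = nuke.nodes.Merge2(operation='plus', inputs=[comp, read])
```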

In the same vein, creating separate render layers for each light (with associated passes) really gets to the heart of controlling your output. The Nuke setup gets more involved, but it is obviously worth the effort when you see the control you have. It doesn't stop here, as contribution maps can be built to break down the scene in terms of geometry and materials. More control... is the theme becoming clear? As contribution maps and light layers are dealt with separately, I admit I am a bit unclear on how they can be used in conjunction, for the simple reason that different types of merge nodes would have to be used in a specific order in certain groupings. Just a quick look at a network with both light layers and contribution maps would clarify this for me.

The need for even more granular control leads into outputting custom render passes, a very helpful technique. Learning how to properly add ambient occlusion passes was another eye-opener. This is all about using mental ray the right way and to maximum effect. Finally we get to see how to create material ID passes; I'm not sure how the method described compares to another method I've used with surface shaders as material overrides... perhaps just passes vs. layers?
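Without reproducing the tutorial's exact network, the usual idea with ambient occlusion is that it gets multiplied into the passes it should attenuate (diffuse/indirect) rather than simply layered over the finished comp. A placeholder sketch, continuing the Nuke Python example above:

```python
# Placeholder sketch (not the tutorial's network): multiply the AO pass into
# the diffuse contribution before summing with the other passes.
import nuke

diffuse = nuke.nodes.Read(file='render_diffuse.####.exr')  # made-up paths
ao = nuke.nodes.Read(file='render_ao.####.exr')

occluded_diffuse = nuke.nodes.Merge2(operation='multiply', inputs=[diffuse, ao])
# ...then plus-merge occluded_diffuse with the remaining passes as before.
```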

This tutorial is all about workflows for getting more out of your renders without endless re-rendering. It is abundantly clear that hitting batch render does not need to be the final step before delivery. Granular control without additional render time is a wonderful thing. But what is the cost? The instructor did not mention it, but I suspect it is disk space. Adding dozens and dozens of passes and layers adds up to eyebrow-raising file sizes. Each frame can quickly become very hefty, and even a short animation will be massive. But space is cheap and time is expensive; you can do the math.
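Some very rough numbers to back up that suspicion. These assume uncompressed, full 32-bit float RGBA at 1080p; half-float output and EXR compression will bring them down, so treat this as an upper bound:

```python
# Back-of-envelope disk usage: 1080p, 4 channels of 32-bit float per pass,
# 20 passes, a 10-second shot at 24 fps. Uncompressed upper bound.
width, height, channels, bytes_per_channel = 1920, 1080, 4, 4
passes, fps, seconds = 20, 24, 10

per_pass_mb = width * height * channels * bytes_per_channel / 1e6  # ~33 MB
per_frame_gb = per_pass_mb * passes / 1e3                          # ~0.66 GB
per_shot_gb = per_frame_gb * fps * seconds                         # ~160 GB
print(per_pass_mb, per_frame_gb, per_shot_gb)
```

Roughly 33 MB per full-float pass, two-thirds of a gigabyte per frame, and on the order of 160 GB for a ten-second shot.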

One final thought: I am fairly confident all of the techniques shown in Nuke could be applied to After Effects, though it may be less elegant or quick to do. As I suspect more people are familiar with After Effects, I would be interested to see an addendum or short tutorial on connecting up a couple of light layers each with the required render passes. Plus ambient occlusion with the proper math. Regardless, this is an excellent tutorial that will make you think about rendering differently. I know I will be adopting many of these workflows in my current project, now that I've bought a couple more hard drives.