Dan Roarty's introduction to the Wikihuman project
Hi everyone! I would like to start off by introducing myself. My name is Dan Roarty and I am currently a Lead Character Artist at Microsoft's Black Tusk Studios in Vancouver, working on the 'Gears of War' franchise. Prior to that, I worked on the Tomb Raider and Star Wars franchises, and I have several personal projects aimed at creating realistic 3D portraits. I had worked with Chaos Group on a few small projects in the past and was invited to help out on an interesting project for The New Yorker magazine.
Chaos Group Creative Director Chris Nichols explained a little more about the project, and it seemed like a good opportunity to work with Paul Debevec and Jay Busch at ICT on creating a realistic human face render for The New Yorker. The idea was to showcase how far digital scanning and rendering technology have come, and how far we could push photorealism in facial rendering. The plan was to collaborate on a photorealistic portrait in V-Ray using scan data of Paul's face from ICT's light stage. I was excited, to say the least, to work with such an incredible team for such a prestigious magazine.
Working with ICT, I knew we were going to be provided with incredible scan data and maps. ICT captured the data with their light stage, a spherical structure of LED lights that photographs the face and produces an accurate real-world representation of a person. For this project the data provided by ICT was incredibly detailed and accurate. Paul and his team gave us a low-res mesh, a hi-res mesh, a 32-bit raw displacement map, as well as full albedo and specular maps.
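The 32-bit displacement map is worth dwelling on: pore-level skin detail lives in tiny height variations that an 8-bit map would quantize away. As a rough illustration (not part of the actual pipeline, and with made-up amplitudes), this Python sketch compares a fine displacement signal stored as 32-bit float against the same signal squeezed into 8 bits:

```python
import numpy as np

# Hypothetical displacement signal: broad facial shape plus much smaller,
# high-frequency pore-level detail (units are arbitrary).
x = np.linspace(0.0, 1.0, 2048)
broad = 2.0 * np.sin(2 * np.pi * x)            # coarse shape of the surface
pores = 0.002 * np.sin(2 * np.pi * 400 * x)    # fine pore-scale detail
disp = (broad + pores).astype(np.float32)      # what a 32-bit map stores

# Simulate storing the same map in 8 bits: normalize to [0, 255] and round.
lo, hi = float(disp.min()), float(disp.max())
disp8 = np.round((disp - lo) / (hi - lo) * 255.0)
restored = disp8 / 255.0 * (hi - lo) + lo

# The 8-bit quantization step exceeds the pore amplitude, so that detail
# is effectively destroyed; the float map keeps it intact.
step = (hi - lo) / 255.0
print(f"8-bit step size: {step:.4f}, pore amplitude: 0.0020")
```

Here the quantization step works out to roughly 0.016, about eight times the pore amplitude, which is why raw float displacement data matters for close-up skin renders.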
The timeframe we had for creating the piece was incredibly tight. I started with a quick lighting scene I had used on previous personal projects and was able to bring in the provided mesh right away. Our idea was to plug the raw data straight into the V-Ray shaders. The results demonstrated how little work was needed on the asset in order to achieve a realistic render.
We did encounter a few challenges along the way. One was that the data did not have eye sockets, since it was a scan of a real head. I quickly adjusted and sculpted the hi-res and low-res meshes so we could place proper eyes in the newly created sockets. We also had to create a new skin shader that would show off the details of the scan and sell the realism of the render. To show the detail of the scan as best we could, good lighting was essential. We used a V-Ray dome light for reflection, a simple key light for shadow, and a rim light to help frame the facial structure a little better. We then set up a basic camera for a ¾ shot; because we hadn't focused on the scan data for the ears, it wouldn't have made sense to showcase them from a front neutral position. We found that less was more while creating the render: we wanted to tune the shader so the reflections read realistically and the subsurface scattering showed through without being overpowering.
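To make the light placement above a little more concrete, here is a small sketch in plain vector math (not V-Ray API code, and the angles are illustrative guesses rather than the values used for the actual render) of how a key and rim light might be positioned around a subject at the origin for a ¾ portrait:

```python
import math

def light_position(azimuth_deg, elevation_deg, distance):
    """Convert spherical angles around a subject at the origin
    into an (x, y, z) position; y is the up axis."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.sin(az)
    y = distance * math.sin(el)
    z = distance * math.cos(el) * math.cos(az)
    return (x, y, z)

# A rough three-quarter portrait rig (all values hypothetical):
camera = light_position(35, 0, 300)     # 3/4 camera at eye level
key    = light_position(70, 30, 250)    # key light: camera side, raised for shadow shaping
rim    = light_position(-140, 20, 250)  # rim light: behind and opposite, to frame the face
# The dome light surrounds the subject, so it has no single position;
# it contributes the broad environment reflection on the skin.
```

The key sits on the same side as the camera and above eye level so the face reads with soft shadow, while the rim comes from behind the opposite shoulder to separate the head from the background.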
All in all, we wanted to treat the data as simply and honestly as possible, and to present it with powerful yet appealing shaders and lighting. There was more we could have done with additional time, such as building the rest of the head, the hair, and the shoulders, but the purpose was to demonstrate how accurate and effective raw scan data can be when coupled with V-Ray for a realistic render.