How did you get started in motion capture?

Francesco Chiarini: We had a production company, SuperFluo, in Bologna, and in the mid-80s we started to develop a complete CGI system, from modeling to rendering: back then there was no commercial software yet. While developing this system we found out, via our friend Roberto Maiocchi, that the Polytechnic was developing a real-time image processor (called ELITE) that used an image correlation algorithm to extract the signature of retro-reflective markers, and that a startup company called BTS was developing applications for this device in the biomechanical / medical field. With SuperFluo, we joined forces to develop applications for the entertainment industry. After a few experimental mocap animations (shown at SIGGRAPH 1990 in Dallas, TX), a couple of years later, in summer 1990, the CG animated station break "Rai Ciao GO" for the World Soccer Championship "Italia '90" became the very first commercial use of motion capture. It was broadcast worldwide and was our best credential for starting to work in Hollywood in 1991 on The Lawnmower Man (1992), the first movie with mocap animation.

Can you talk about how SuperFluo worked? What components made up both the mocap gear worn by the performer, and the hardware and software?

Francesco Chiarini: The ELITE processor streamed 2D coordinates (corresponding to the markers, for each frame, for each camera) to a PC for recording. The post-processing on the PC included a calibration step to obtain the cameras' 3D positions and orientations, then the labeling and tracking of the performer's markers, followed by 3D reconstruction by means of triangulation (photogrammetry).
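Neither the ELITE hardware nor SuperFluo's PC software is publicly documented, so the following is only a generic illustration of the final reconstruction step: a minimal linear (DLT) triangulation sketch in Python with NumPy. It assumes the calibration step has already produced a 3x4 projection matrix for each camera; the function name and the toy cameras are illustrative, not SuperFluo's actual code.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker seen by two cameras.

    P1, P2   : 3x4 camera projection matrices (from the calibration step)
    uv1, uv2 : the marker's 2D image coordinates in each camera
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each camera view contributes two linear constraints on the
    # homogeneous 3D point.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Least-squares solution: the right singular vector for the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy check: two unit-focal cameras, the second shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 4.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))  # ~ [0.2, 0.1, 4.0]
```

This also makes the two-camera limitation Francesco mentions below concrete: the system of equations can only be solved for a marker that is visible in both views at the same time.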
For body mocap, an additional software layer converted the plain 3D coordinates into a hierarchical data structure defined by a "skeleton" with joints. For each joint, 2 or 3 (as needed) angles of rotation were computed, and each joint depended on the previous one (the wrist depending on the elbow, in turn depending on the shoulder, and so forth) all the way to a root point (say, the belly button) that defined the CG character's position in space.
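The joint chain Francesco describes is what is now called forward kinematics: each joint's world position follows from composing its ancestors' rotations down to the root. A minimal sketch in Python/NumPy, where the joint names, offsets, and Euler rotation order are illustrative assumptions rather than SuperFluo's actual data format:

```python
import numpy as np

def euler_to_matrix(rx, ry, rz):
    """Rotation matrix from Euler angles in radians (z @ y @ x order)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

class Joint:
    """One skeleton node: a rotation relative to its parent joint."""
    def __init__(self, name, offset, parent=None):
        self.name = name
        self.offset = np.asarray(offset, float)  # bone vector in the parent's frame
        self.parent = parent
        self.angles = np.zeros(3)                # the captured rotation channels

    def world_rotation(self):
        R = euler_to_matrix(*self.angles)
        return R if self.parent is None else self.parent.world_rotation() @ R

    def world_position(self):
        if self.parent is None:
            return self.offset                   # the root places the character in space
        return self.parent.world_position() + self.parent.world_rotation() @ self.offset

# A tiny chain hanging off the root: shoulder -> elbow -> wrist.
root = Joint("root", [0.0, 1.0, 0.0])
shoulder = Joint("shoulder", [0.2, 0.5, 0.0], parent=root)
elbow = Joint("elbow", [0.3, 0.0, 0.0], parent=shoulder)
wrist = Joint("wrist", [0.25, 0.0, 0.0], parent=elbow)

shoulder.angles = np.radians([0.0, 0.0, -30.0])  # raise the arm
elbow.angles = np.radians([0.0, 0.0, 45.0])      # bend the elbow
print(wrist.world_position())                    # the wrist follows the whole chain
```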
Back then we did not use a mocap suit: we attached the markers directly to the skin, as was done in the medical field, with hypoallergenic double-sided sticky tape.

At the beginning we had the most basic system, with only 2 cameras. This implied extreme limitations: the performer could not rotate much, or one or more markers would not be visible to both cameras at the same time, which was necessary for 3D triangulation. When we got a 4-camera system things got only slightly better, and we understood the hard way that a huge number of cameras was necessary: a few years later, dozens of cameras were being used for the job by the industry.

Umberto Lazzari: We also had tools for filtering noise and for filling occlusion gaps. In addition to the hierarchical format, we could export the data as global 3D positions or as interpolation values (for driving facial blend shapes).
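SuperFluo's actual tools are not documented, but at their simplest, filling occlusion gaps means interpolating a marker's trajectory across the frames where no camera pair could see it, and noise filtering can be as plain as a moving average. A generic sketch of both ideas in Python/NumPy:

```python
import numpy as np

def fill_gaps(track):
    """Fill occluded frames (NaN) in one marker track by linear interpolation.

    track : (n_frames, 3) array of marker positions, NaN where the
            marker dropped out of view.
    """
    track = track.copy()
    frames = np.arange(len(track))
    for axis in range(track.shape[1]):
        col = track[:, axis]
        missing = np.isnan(col)
        if missing.any() and not missing.all():
            col[missing] = np.interp(frames[missing], frames[~missing], col[~missing])
    return track

def smooth(track, window=5):
    """Simple moving-average noise filter, applied per coordinate."""
    kernel = np.ones(window) / window
    return np.column_stack(
        [np.convolve(track[:, axis], kernel, mode="same")
         for axis in range(track.shape[1])]
    )

# A marker occluded for two frames, then filled and filtered.
track = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0],
                  [np.nan] * 3, [np.nan] * 3, [5, 0, 0]], float)
print(smooth(fill_gaps(track), window=3))
```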
What are your memories of the motion capture sessions for Peter Gabriel's Steam?

Francesco Chiarini: We recorded motion capture for 2 days; the shooting plan was minimal and the director, Stephen R. Johnson, was having new ideas all the time. We ended up with almost 200 takes, but many of them were not usable because of data loss due to system limitations.

My main memory is of dealing physically with Peter Gabriel, as I was the one sticking markers on his bare skin. He was extremely friendly and easygoing, and when I warned him that I was going to hurt him when removing the markers, as body hairs would be pulled out with the double-sided sticky tape, he smiled at me and replied: "pain is not always a bad thing …". Peter Gabriel seemed very happy to try the motion capture technology and was enthusiastic about CG. He was very nice and humble, every time sitting with us and the rest of the crew on the stage floor eating take-out food: not exactly your standard Rock Star!

We delivered the data in the hierarchical data structure mentioned above by setting up a corresponding CG character rig in the animation environment of choice, as Umberto can further explain.

Umberto Lazzari: Both Peter Gabriel and the African dancer were simply amazing. For both scenes we had to capture Peter and the dancer separately. There was a special energy during the session, as we knew that we were pioneering a new technology with lots of potential but still many limitations. We also had to break the sequences into short segments so we could get manageable and accurate data. Also, Peter asked that there be no difference between performers and crew, so everybody had to dance to the music while we were capturing them.