2 Mar 2019
Gordon Moore first forecasted what would popularly come to be known as “Moore’s Law,” since adopted as the industry standard for microchip and technology developers. By extension, software belongs to that same era, most significantly with the progression of the internet. It’s often comically suggested that we learn from the products of the entertainment industry: cell phones from Star Trek, self-driving cars from Total Recall, and touch- and motion-based interfaces from Minority Report. While this is incredibly ironic, since the entertainment industry is not filled with innovative engineers, it does indicate the potential of what imagination can create.
I grew up in what could be described as the third era of the film generation. By the time I became cognizant of the films playing in theaters, computer-generated techniques were being employed in production, creating the kind of other-worldly experiences one might find in a science-fiction or fantasy novel. It was once said that Tolkien’s Lord of the Rings trilogy could never be made into a movie due to its highly imaginative nature. Even before conventionally filmed movies were pushing the envelope, Disney was developing the world of animated reality. As improved hardware has become available, sharper software has followed. The long-developing medical field has also brought us technologically to where we are now: our highly detailed understanding of human anatomy, physiology, and even psychological associations has permitted software developers to build highly immersive human character models. Take a look here.
But one significant component is still missing from these CG ‘people’ – intelligence. While they may look lifelike, they are nonetheless merely glorified corpses. Of course, they will do fine for still photographs, but films still need the backbone of real human people. Yet modern technological innovation has brought us to the quantum age. Innovators like Nvidia, Intel, Google, and numerous others acknowledge the potential of artificial intelligence. While we recognize that computers are still not capable of doing what humans can do, movies (and, more convincingly, video games) are able to create plausible synthetic characters that we can relate to. Artificial intelligence pushes the previously drone-like CG characters of entertainments past into new depths, allowing hardware-driven software to respond to actions in real time. Characters look over when approached, and react when other AI characters interact with them.
Now imagine CG characters becoming more substantial: pore-level texture detail, full rigging for a complete range of joint and muscle motion, AI-assisted language comprehension, and a voice synthesizer. This is the potential making of a trainable actor for the next blockbuster. I was privileged to see Alita: Battle Angel, the first film (to my knowledge) where the main character was completely CG. The voice is still that of a voice-over actor, but it would be easy to create a unique voice with the proper software. Here’s the question I will pose: at what point will these fully synthetic ‘CG people’ become a popularized alternative to contemporary actors? Of course, more than just ease of film adaptation will need to take place before this could become normal. Namely, film audiences need to be able to relate to the characters on the screen. There is a theory that the closer a synthetic character comes to reality, the more eerily strange it becomes. The Uncanny Valley was first posed in the robotics industry, but plausible robots are much further off than what is already showing up on screens in multiple formats.
However, before glorying in the prospect of mankind having seemingly initiated the era of virtual synthetic humans, I pause for caution. First, these characters are not simple creations; they can take thousands of hours to perfect. They involve numerous stages in the CG pipeline, including modeling, texturing, shading, rigging, animating, and presumably integrating physics into whatever environment they are introduced to. Much like a human birth, they require care, devotion, and research to arrive in their final form. Second, their ‘final form’ is merely a lifeless corpse awaiting in-depth programming input. At this point, they are glorified calculators, outputting what they are told to do, nothing more and nothing less. Sophisticated AI software enables them to respond intelligently to various situations with varying degrees of normality (or extremity). Their responses must be realistic, just as good CG artists affirm that a true work of digital art isn’t complete until defects like dust, scrapes, scars, dents, and scowls are present. In mirroring reality, are these synthetically derived characters really superior to the sin-rich world the entertainment-craving public already desires? In the intent to immerse in a fantastical world, is more of the same really the goal?
Finally, the gravest question must come to the fore: can these synthetic ‘people’ be like us in every way – can we create the soul that directs them, entirely independent of any outside force? One vital variable of humanity is the free will of each individual. At some point, these creations are bound by their creator’s programming, despite what Isaac Asimov articulated as the Three Laws of Robotics. If no soul can be authentically created for these synthetic people, are they really more than animals? Some would argue animals are capable of emotion; others would conclude they are bound by survival instinct (however complicated and drawn out that may be). Either way, the ethical considerations should be dealt with as we approach this new era of ‘CG people’. Perhaps of more value will be the resulting understanding of who we are as soul-bearing creatures ourselves, and of our own Creator.