Pre - Post, Digital Art, & AI, Part 4

One of the things that became evident from my work with AI, looking at what it could be as a real agentic partner in the making of a film, was that it turned out to be much harder than I expected. There's a lot of confusion created by the various party tricks seen on the internet, where people make amazing, crazy videos of babies doing things that would normally be impossible, or clips so convincing you can no longer tell whether they're real, even though something about them still isn't quite right - yet. You need to use your own expertise and experience to guide these junior AI agents, or else you will simply be creating empty calories: work devoid of any true meaning or story, producing nothing of substance.

We know this is going to get better, but at the same time, producing something that's a real story, real content that can last for many minutes or hours and entertain and inform takes a lot of work. In my search to see if I could use these tools to help me create my animations, I needed to know whether I could actually create a performance, lip-sync, and motion, and be able to guide it, and that was what I was going to look at next. After many failed attempts, I started getting closer to something that seemed like a performance and could show emotional context in the face. This one was based on a redesign of one of our characters, and I thought it showed promise.

Some of the subtleties were quite impressive, but still, could I have a character actually walk through an environment and lip-sync? My next test was this.

Very far from perfect: there's artifacting around the ground, and the floor isn't very realistic. But the fact that he can walk through an environment and lip-sync, however imperfectly, was interesting.

As you can tell, the dialogue also expresses some of my frustration with this seemingly ongoing struggle to get something I can actually use and to work on a real project. But I felt I was maybe getting closer.

While my focus has been on animated characters so far, it's clear I need to cover the full range of potential outputs, including realistic characters, voice acting, and the use of voices across both real and animated characters. I also need to be able to create realistic environments with our characters for purposes such as education or corporate training.

So I started testing out generated voice acting with some of the great tools out there, like ElevenLabs, and playing with the idea of using those voices across various realistic and animated characters. This is what I got.

I did enjoy using the British accent across our live-action character, set somewhere in the 1950s or 60s, and our animated redesign, FranknSon, as well. Surprisingly, the voice matched both characters, and the animation and motion fit well. This was promising. Of course, I also needed to get a sense of how an American female voice would sound, so I did one additional test in a classroom setting.

But in my case, I'm also looking closely at how this could be used to create 3D animated characters, since I think this will be the more immediate and acceptable application of generative AI for creating compelling animations. We've already seen how animated films have become a major driver of Hollywood's income, and I think it makes sense that animators who have always wanted to make long-form films get the tools to do so. So here are a couple more 3D animation styles we're working on for a potential IP educational animation about STEM and exploration for pre-teens.

One more area I wanted to tackle was multilingual content. I did one more test, having Terry, my space-pioneer character, speak in both English and Spanish. It did a really good job. Take a listen.

As a bilingual speaker, I appreciate how well this actually worked. The multilingual handling I've tested so far has been quite impressive as well.

These tools are getting better, and some of the new AI tools are just amazing. I'll be returning to our digital-characters topic in the future, but the other area where I've been doing a lot of work is integrating AI tools such as workflows and chatbots, and the bigger impact may be vibe coding. With the proper expertise and domain knowledge, you will be able to increase productivity massively when it comes to creating new applications. While testing is more important than ever, vibe coding will affect what gets developed by small and large groups alike. I'll be looking at it in the next blog, and I'm excited to share my findings.
