Pre - Post, Digital Art, & AI, Part 4
One thing that became evident from my work with AI, as I looked at what it could be as a real agentic partner in the making of a film, was that this turned out to be much harder than I expected. There's a lot of confusion created by the various party tricks seen on the internet, where people make amazing, crazy videos of babies doing things that would normally be impossible, or clips so convincing you can no longer tell whether they're real, even though something about them still isn't quite right, yet. You need to use your own expertise and experience to guide these junior AI agents, or else you will simply be creating empty calories, devoid of any true meaning or story, producing nothing of substance.
We know this is going to get better, but at the same time, producing something that's a real story, real content that can last for many minutes or hours and entertain and inform, takes a lot of work. In my search to see if I could use these tools to help me create my animations, I needed to know whether I could actually create a performance, with lip-sync and motion, and be able to guide it; that was what I was going to look at next. After many failed attempts, I started getting closer to something that seemed like a performance and could show emotional context in the face. This one was based on a redesign of one of our characters, and I thought it showed promise.
Some of the subtleties were quite impressive, but still, could I have a character actually walk through an environment and lip-sync? My next test was this.
Very far from perfect, with artifacting around the ground and a not-very-realistic floor, but the fact that he can walk through an environment and lip-sync, however imperfectly, was interesting.
As you can tell, the dialogue also expresses some of my frustration with this seemingly ongoing struggle to get something I can actually use and to work on a real project. But I felt I was maybe getting closer.
While my focus has been on animated characters so far, it's clear I need to cover the full range of potential outputs, including realistic characters and voice acting, used across both real and animated characters. I also want to be able to create realistic environments with our characters, for uses such as education or corporate training.
So I started testing out generated voice acting with some of the great tools out there, like ElevenLabs, and playing with the idea of using those voices across various realistic and animated characters. This is what I got.
I did enjoy using the British accent across our live-action character, set somewhere in the 1950s or 60s, and our animated redesign of FranknSon as well. Surprisingly, the voice matched both characters, and the animation and motion fit well. This was promising. Of course, I needed to make sure I could also get a sense of how an American female voice would sound, so I did one additional test in a classroom setting.
But in my case, I'm also looking closely at how this could be used to create 3D animated characters, since I think this will be the more immediate and acceptable application of generative AI for creating compelling animations. We've already seen how animated films have become a major driver of Hollywood's income, and I think it makes sense that animators who have always wanted to make long-form films get the tools to do so. So here are a couple more 3D animation styles we're working on for a potential IP educational animation about STEM and exploration for pre-teens.
One more area I wanted to tackle was handling multilingual content. I did one more test, having my space-pioneer character, Terry, speak in both English and Spanish. It did a really good job. Take a listen.
As a bilingual speaker, I appreciate how well this actually worked. The multilingual handling I've tested so far has been quite impressive as well.
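For readers curious about the mechanics, the kind of voice generation described above can be sketched against ElevenLabs' public REST API roughly like this. Treat it as an illustrative sketch, not the exact workflow I used: the voice ID, API key, and file name are placeholders, and I'm assuming the standard `text-to-speech` endpoint with the multilingual model.

```python
# Illustrative sketch of a text-to-speech call against the ElevenLabs REST API,
# using only the Python standard library. The voice ID, API key, and output
# path are placeholders -- substitute your own.
import json
import urllib.request

API_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_tts_request(voice_id: str, text: str, api_key: str) -> urllib.request.Request:
    """Assemble the POST request for one line of dialogue."""
    body = json.dumps({
        "text": text,
        # The multilingual model is what makes English/Spanish tests possible.
        "model_id": "eleven_multilingual_v2",
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL.format(voice_id=voice_id),
        data=body,
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def synthesize(voice_id: str, text: str, api_key: str, out_path: str = "line.mp3") -> None:
    """Send the request and save the returned MP3 audio bytes."""
    req = build_tts_request(voice_id, text, api_key)
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
```

The nice part of this setup is that the same line of dialogue can be regenerated against different voice IDs, which is exactly the kind of experiment described above: one voice across a live-action character and an animated one.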
These tools are getting better, and some of the new AI tools are just amazing. I'll be returning to our digital characters topic in the future, but the other areas where I've been doing a lot of work are integrating AI tools into workflows and chatbots, and the bigger impact may come from vibe coding. With the proper expertise and domain knowledge, you will be able to increase productivity in a massive way when it comes to creating new applications. Testing is more important than ever, and this shift will affect what gets developed by small and large groups alike. I'll be looking at this in the next blog, and I'm excited to share my findings on vibe coding.
Pre - Post, Digital Art, & AI, Part 3
I will admit that at this point, based on what I just finished talking about in my last blog, I was getting a little excited about the possibilities. The idea that I could actually start creating assets that would be more than just supporting roles, but could be fully realized, animated characters with lip-sync, was so tantalizing. I needed to see how far I could take this.
I decided to take some original artwork I created, along with some early 3D art my brother David made, and see what these tools could do with it. The first image I used was of a group of monsters I had created for the FranknSon project, and I simply plugged it in to see what would happen. I wanted to see what it would do with the characters themselves with very little prompting. This is what occurred.
I was surprised by what happened. I didn't expect it to take the entire team and have a sort of walk-dance toward the camera. This was very cool, and I decided to test an original watercolor artwork I'd done a million years ago to see what would happen.
I love the stylized animation. It almost works. Next, I took an artwork by my brother, David Rivera, of a Roman army, and this is what happened.
I kept getting surprised by how good it was. It had some things that needed to be corrected, but I just kept going. I decided to try a digital painting I created in Photoshop to see what it would do. This is the outcome.
All this was intriguing, and while less than perfect, it showed the possibilities. But I needed to know: could I actually animate a character just focusing on having it do something interesting? So, for the next test, I took my FranknSon image and tried to make it perform some interesting movements, like running in place. This was the output.
Far from perfect, but it almost worked. And now some of the animations are getting their own sound effects, like his feet as he runs in place. This was a lot of fun and also a lot of work, because you always need to regenerate when it misses the mark, which could often happen. There were many failed attempts. But this was getting intriguing, and I needed to know more. Could I actually control the animation? Could I have actual lip-sync and animation together? The idea of being able to pull these basic animation elements together was going to be the next big test. You'll read more about it in the next blog: can I animate and lip-sync these characters?
Pre - Post, Digital Art, & AI, Part 2
In my first post, I saw the possibility of using generative AI tools to help with design and pre-production, likely expanding on some concepts and ideas, and potentially creating new assets or at least drafts. The potential look and feel of my story might be very different if I were able to use something to help with this development process, but then I started to take a closer look to see where else I could take this. To frame this, I've always seen writers and painters as great “solo” creators, but film and animation have always been collaborative arts. The idea of creating an animation, or even a film, as a solo or small-team project has always been problematic and very expensive. Beyond the traditional workflow of making a live-action film or an animation, one of the main problems in animation is rendering time. So I initially started looking at how to use game engines to solve this major problem, and it looked very promising. Unfortunately, it didn't solve the entire problem, as we still needed to set up the rest of the production. To make a film or animation would still take a ton of people, money, and time, and so I kept looking.
I had some old concept art and designs that I'd done in the past, and I decided to experiment with them. What could AI do with these? And that's when some of the interesting things started happening.
In my early days, I used many different 3D tools to create my 3D environments: Maya, Vue, Daz, Poser, and a suite of compositing tools, many of which no longer exist. One particular piece of software called Vue (used on “Pirates of the Caribbean”) could create some beautiful renderings that I loved, but it would take hours and hours to render a single image on the PC I had back in 2011; this one in particular took most of the day. I decided to see what would happen if I had a generative tool take that render as the initial starting point and let me move the camera into the scene, something that would have been unrealizable with the tools I had at the time. And then this happened.
The image came to life as the camera moved through a forest that had never existed before, except as a still.
It suddenly became clear that the worlds I saw in still images, and only imagined in motion, could really move. The tool was creating additional details, extrapolating my image into a fully formed world you could move a camera through. It was amazing, and I decided I had to explore this further. At the time I was using Runway AI, but there were so many more tools to come, and more possibilities. Was this just luck, or had I found a tool that could take me beyond pre-production? I decided to take one more image, of 3D characters I had rendered in Vue alongside my 3D Maya characters, and feed it into the same AI tool. The idea of moving through them into the background forest seemed unfeasible, but then it happened again.
The characters disappeared as the camera moved to a forest that was never realized in the original still. It was beautiful and surprising.
The forest came alive as the camera moved by the characters, and they dissolved away. The camera moved through this lush, deep, dark forest, bathed in light, and it became clear that this was much more than a design tool; it would affect my production. I started to realize that this could become a central tool in creating my animation. Could I create an entire animated film with a small team or just myself, without a super expensive budget? I was going to explore this further, and there were more surprises to come.
Pre - Post, Digital Art, & AI in the New Age.
Hi, welcome to my blog. I'm thrilled to have you here. I'm going to be talking about all things technology, including AI, real-time video, and related technologies for storytelling and storymaking. We'll dive deep into these fascinating subjects. I'm also going to explore some of the ups and downs of what AI brings to the table as a creative and what the dangers are as well.
It's a complex landscape. To begin, we're going to explore pre-AI, digital art, and post-AI. I want to share how my views on AI have evolved. Let's take a look at some compelling examples and see the evolution firsthand.
As far back as I can remember, I’ve always been drawn to creating big visions. For a writer or a painter, this comes naturally: working as a solo artist makes it possible to craft expansive stories and bold, imaginative worlds on your own. But if you are a filmmaker, this can become almost impossible. Film, by its very nature, is a collaborative effort, requiring many talents to bring these visions to life. Still, at the end of the day, we have the idea of the Auteur Theory: the director as the primary creative force behind a film, like Kubrick, Welles, and Hitchcock. Auteur or not, the problem becomes even bigger in animation, since unless you're Disney or Pixar, you can't afford the huge capital and talent required to create these animation masterpieces.
I have long wanted to find a way to solve this problem. Can a small team or a solo artist create a full-length animated film? We've seen live-action feature films produced by small teams, but animation remains a significant obstacle. I was inspired by Brian Taylor's work in the early 2000s. What made his project so interesting was that Brian was trying to do it himself as a solo artist, with no other help. The film was called “RustBoy,” and it was beautiful and inspiring. The possibility that this could be done by a solo artist was motivating.
In the end, this beautiful short masterpiece was never completed, but it became the inspiration for what I hoped to do in the future.
I myself headed to Hollywood in 2010 to try to sell the feature animation I wanted to make: FranknSon. It became clear that I could sell it, but I would never be allowed to make it myself. But that's a story for another blog.
As I worked on my Master's thesis, one of the tools I explored in depth was game engines, as a way to address some of these problems. As a filmmaker, VFX artist, mobile developer, and game designer, this was a natural approach for me. It didn't solve all the problems, but it became another tool that a small team or solo artist could use. So my search continued as I explored how to create my animations, particularly while working on the film FranknSon. I had an early piece of concept art that I made using various tools from the old days, including Poser, and I wanted to redesign it.
That art was from the early 2000s, and around 2023 I needed to refresh and rethink how I might approach the animation's style and look. The explosion of generative AI models happened in 2020–2022, and they started appearing in many web applications in 2023. One web application I came across was Veed.io; it was really more of a video-editing web app, but it had some early AI features. I took my early FranknSon image of my character emerging from a castle and generated a few new images based on the original with this tool, which got me thinking about these new possibilities.
These early AI models were strongly influenced by the Universal Frankenstein movies, which was not my intention, but it still gave me a place to start redeveloping FranknSon. In the image below, I asked the AI model to expand the image to show the surrounding landscape. I was surprised by the results. I knew then that the possibilities offered by these evolving tools would take me to new horizons.
This was something I needed to keep exploring. More to come in my next blog.