Orlando Rivera

Pre - Post, Digital Art, & AI, Part 3

I will admit that at this point, based on what I had just covered in my last blog, I was getting a little excited about the possibilities. The idea that I could actually start creating assets that would be more than just supporting roles, that could be fully realized, animated characters with lip-sync, was tantalizing. I needed to see how far I could take this.

I decided to take some original artwork I created, along with some early 3D art my brother David made, and see what the AI could do with them. The first image I tried was of a group of the monsters I had created for the FranknSon project; I simply plugged it in to see what would happen. I wanted to see what it would do with the characters themselves with very little prompting. This is what occurred.

I was surprised by what happened. I didn't expect it to have the entire team do a sort of walk-dance toward the camera. This was very cool, and I decided to test an original watercolor artwork I'd done a million years ago to see what would happen.

I love the stylized animation. It almost works. Now, I took an artwork by my brother, David Rivera, of a Roman army, and this is what happened next.

I kept getting surprised by how good it was. It had some things that needed to be corrected, but I just kept going. I decided to try a digital painting I created in Photoshop to see what it would do. This is the outcome.

All this was intriguing, and while less than perfect, it showed the possibilities. But I needed to know: could I actually animate a character just focusing on having it do something interesting? So, for the next test, I took my FranknSon image and tried to make it perform some interesting movements, like running in place. This was the output.

Far from perfect, but it almost worked. And now some of the animations were getting their own sound effects, like his feet as he runs in place. This was a lot of fun and also a lot of work, because you always need to regenerate when it misses the mark, which happened often. There were many failed attempts. But this was getting intriguing, and I needed to know more. Could I actually control the animation? Could I have actual lip sync and animation together? The idea of being able to pull these basic animation elements together was going to be the next big test. We'll read more about it in the next blog. Can I animate and lip sync these characters?

Orlando Rivera

Pre - Post, Digital Art, & AI, Part 2


Using pre-production seemed to be the place where all of this would work. But I didn't think it'd go much further than that. Then I started experimenting with some old pre-production pieces, and that's when the surprises started.

In my first post, I saw the possibility of using generative AI tools to help with design and pre-production: expanding on some concepts and ideas, and creating new assets, or at least drafts, of what the potential look and feel of my story might be. But then I started to take a closer look to see where else I could take this.

To frame this, I've always seen writers and painters as great “solo” creators, but film and animation have always been collaborative arts. The idea of creating an animation, or even a film, as a solo or small-team project has always been problematic and very expensive. Beyond the traditional workflow of a live-action film, one of the main problems in animation is rendering time. So I initially looked at using game engines to solve this major problem, and it looked very promising. Unfortunately, it didn't solve the entire problem, as we still needed to set up the rest of the production. Making a film or animation would still take a ton of people, money, and time, and so I kept looking.

I had some old concept art and designs that I'd done in the past, and I decided to experiment with them. What could AI do with this? That's when some interesting things started happening.

A still image from 2011 was about to come back to life.

In my early days, I used many different 3D tools to create my 3D environments: Maya, Vue, Daz, Poser, and a suite of compositing tools, many of which no longer exist. One in particular, Vue (used on “Pirates of the Caribbean”), could create some beautiful renderings that I loved, but it would take hours and hours to render a single image on the PC I had back in 2011; this one in particular took most of the day. I decided to see what would happen if I gave a generative tool this still as the starting point and let it move the camera into the scene, something that would have been unrealizable with the tools I had at the time. Then this happened.

The image came to life as the camera moved through a forest that had never existed except as a still.

It suddenly became clear that the world I had seen in still images, and imagined could move, was really possible. The tool was creating additional details, extrapolating my image into a fully formed world that you could move a camera through. It was amazing, and I decided I had to explore this further. At the time I was using Runway AI, but there were many more tools, and more possibilities, to come. Was this just luck, or had I found a tool that could take me beyond pre-production? I decided to take one more image, of 3D characters I had rendered in Vue alongside my 3D Maya characters, and feed it to the same AI tool. The idea of moving through them into the background forest seemed unfeasible, but then it happened again.

The characters disappeared as the camera moved to a forest that was never realized in the original still. It was beautiful and surprising.

The forest came alive as the camera moved past the characters, and they dissolved away. The camera moved through this lush, deep, dark forest, bathed in light, and it became clear that this was much more than a design tool; it would affect my production. I started to realize that this could become a central tool in creating my animation. Could I create an entire animated film with a small team, or just myself, without a super expensive budget? I was going to explore this further, and there were more surprises to come.

Orlando Rivera

Pre - Post, Digital Art, & AI in the New Age.

It all begins with an idea.

Hi, welcome to my blog. I'm thrilled to have you here. I'm going to be talking about all things technology, including AI, real-time video, and related technologies for storytelling and storymaking. We'll dive deep into these fascinating subjects. I'm also going to explore some of the ups and downs of what AI brings to the table as a creative, and what the dangers are as well.

It's a complex landscape. To begin, we're going to explore pre-AI, digital art, and post-AI. I want to share how my views on AI have evolved. Let's take a look at some compelling examples and see the evolution firsthand.

 

As far back as I can remember, I’ve always been drawn to creating big visions. For a writer or a painter, this comes naturally: working as a solo artist makes it possible to craft expansive stories and bold, imaginative worlds on your own. But if you are a filmmaker, this can become almost impossible. Film, by its very nature, is a collaborative effort, requiring many talents to bring these visions to life. At the end of the day, we still have the idea of the Auteur Theory: the director as the primary creative force behind a film, like Kubrick, Welles, and Hitchcock. Putting this aside, it unfortunately becomes a bigger problem in animation, since unless you're Disney or Pixar, you can't afford the huge capital and talent required to create these animated masterpieces.

 

I have long wanted to find a way to solve this problem. Can a small team or a solo artist create a full-length animated film? We've seen live-action feature films produced by small teams, but animation remains a significant obstacle. In the 2000s, I was inspired by Brian Taylor. What made his project so interesting was that Brian was trying to do it himself as a solo artist, with no other help. The film was called “RustBoy,” and it was beautiful and inspiring. The possibility that this could be done by a solo artist was motivating.

At the end of the day, this beautiful short masterpiece was never completed, but it became the inspiration for what I hoped to do in the future.

I myself headed to Hollywood in 2010 to try to sell the feature animation I wanted to make, FranknSon. It became clear that I could sell it, but I'd never be allowed to make it myself. But that's a story for another blog.

 

As I worked on my Master's thesis, one of the tools I explored in depth was game engines as a way to address some of these problems. As a filmmaker, VFX artist, mobile developer, and game designer, this was a natural approach for me. It didn't solve all the problems, but it became another tool that a small team or solo artist could use. So my search continued as I explored how to create my animations, particularly while working on the film FranknSon. I had early concept art that I made using various tools from the old days, including Poser, and I wanted to redesign it.

The original art was from the early 2000s, and around 2023 I needed to refresh and rethink how I might approach the animation's style and look. The explosion of generative AI models happened in 2020–2022, and they started appearing in many web applications in 2023. One web application I came across was called Veed.io. It was primarily a video-editing web application with some early AI features. I took my early FranknSon image of my character emerging from a castle and generated a few new images based on the original with this tool, which got me thinking about these new possibilities.

These early AI models were strongly influenced by Universal's Frankenstein movies, which was not my intention, but it still gave me a place to start redeveloping FranknSon. In the image below, I asked the AI model to expand the image to show the surrounding landscape. I was surprised by the results. I knew then that these evolving tools could take me to new horizons, and I needed to keep exploring the possibilities.

This was something I needed to keep exploring. More to come in my next blog.
