Pre to Post, Digital Art, & AI, Part 2
In my first post, I explored the possibility of using generative AI tools to help with design and pre-production: expanding on concepts and ideas, and potentially creating new assets, or at least drafts. The look and feel of my story might be very different if I could use something to help with this development process, but then I took a closer look to see where else I could take this. To frame this, I've always seen writers and painters as great “solo” creators, while film and animation have always been collaborative arts. The idea of creating an animation, or even a film, as a solo or small-team project has always been problematic and very expensive. Setting aside the traditional workflow of a live-action film, one of the main problems in animation is rendering time. I initially looked at using game engines to solve that problem, and it seemed very promising. Unfortunately, it didn't solve everything; we would still need to set up the rest of the production. Making a film or animation would still take a ton of people, money, and time, so I kept looking.
I had some old concept art and designs that I'd done in the past, and I decided to experiment with them. I wondered what AI could do with this, and that's when the interesting things started happening.
In my early days, I used many different 3D tools to create my environments: Maya, Vue, Daz, Poser, and a suite of compositing tools, many of which no longer exist. One program in particular, Vue (used on “Pirates of the Caribbean”), could produce beautiful renderings that I loved, but it would take hours and hours to render a single image on the PC I had back in 2011. This one in particular took most of the day. I decided to see what would happen if I gave a generative tool that render as a starting point and let it move the camera into the scene, something that would have been unrealizable with the tools I had at the time. And then this happened.
The image came to life as a camera moved through a forest that had never existed before, except as a still.
It suddenly became clear that the moving world I had only imagined behind my still images was really possible. The tool was creating additional details, extrapolating my image into a fully formed world you could move a camera through. It was amazing, and I knew I had to explore this further. At the time I was using Runway AI, but there were many more tools to come, and more possibilities. Was this just luck, or had I found a tool that could take me beyond pre-production? I took one more image, of 3D characters I had rendered in Vue alongside my 3D Maya characters, and fed it to the same AI tool. The idea of moving through them into the background forest seemed unfeasible, but then it happened again.
The characters disappeared as the camera moved into a forest that was never realized in the original still. It was beautiful and surprising.
The forest came alive as the camera moved past the characters, and they dissolved away. The camera traveled through this lush, deep, dark forest bathed in light, and it became clear that this was much more than a design tool; it would affect my production. I started to realize that this could become a central tool in creating my animation. Could I create an entire animated film with a small team, or even just myself, without a super expensive budget? I was going to explore this further, and there were more surprises to come.