How I learned to stop worrying and like the AI.
I've been a programmer, systems engineer, and architect for more years than I want to admit. When the idea of AI helping with coding started to surface a couple of years ago, I was a little suspicious, but I'm also very curious, so I started looking into it. My first attempt was a Unity coding project with a specific problem to solve. I started reviewing the code and realized it would take a little time. I decided to ask ChatGPT for a potential solution, so I explained the problem and what I wanted to do, then pressed “Enter”.
I waited a minute or two as ChatGPT thought; then it explained its research, walked through potential solutions, and gave me the code. I reviewed it, and it had understood the problem, including the specific APIs and frameworks I was dealing with. It explained what I was trying to do and how to get around some potential issues, and at the end of the day it handed me code that looked like a plausible solution.
I copied the code, opened my Unity environment, found the spot where I thought I needed to insert it (ChatGPT gave me some clues), and pasted it in. I double-checked a few things, saved it, and then ran it.
To my surprise, it worked. That has not always been the case since, but this first experience impressed me with both the solution and how well it ran. That's when I realized this was not just a simple assistant but something I could partner with to solve problems that would otherwise take me too long to handle as a solo developer.
I worked on very large teams throughout my career at IBM, JPMorgan Chase, AT&T, ADP, and QVC, so being able to work this fast was intriguing, though I was still not quite convinced. To give you a big-picture view of where AI is in this coding world, there are quite a few frameworks and platforms evolving. Some of the initial approaches simply used something like ChatGPT or Claude to review your question or problem: it would analyze it and give you general solutions that might work, not unlike what I did when I first started playing with it.
The problem with these approaches is that the AI doesn't really understand the context of your existing code; it would be far more useful if it could see the code itself rather than relying only on what I explain to it.
The next step in the evolution was integrating AI coding assistants into the tools developers already use, whether Visual Studio Code or other IDEs. Tools like Cursor, Copilot, and Windsurf started to appear. Suddenly the AI had a sense of the code's context, so it could help you develop better solutions. This was still an add-on, though, not an initiator of the programming experience: it took existing code or templates you would develop and helped you build specific functions, algorithms, or solutions, getting you closer to the goal. Since it knew the existing code, it could potentially give you better solutions.
Today, most of us who need to create a website can use one of many platforms to develop it, and we simply answer questions about the look, the style, and whether it's e-commerce, training, and so on. We can use software like Squarespace or Wix, or any of the many other platforms that can develop websites for you. This hides many of the traditional, hardcore pieces of technology we normally use, from HTML to CSS to various embedded libraries, to help us manage the look, the template, and the responsive nature of good web design.
While many of these tools let you get behind the scenes and do some custom work, their main function is to minimize that so that anyone non-technical can use the product. So, the next step up in the AI coding evolution came when we started hearing the word 'agentic'. These are tools like Claude Code, Codex, and Devin. They can do much more of the development than the prior versions: planning tasks, editing multiple files, running tests, and iterating autonomously. The problem was that these were still centered on the idea that a software engineer, someone with training in programming and computer science, would use them. Many were terminal-based or required elaborate setup that assumed a certain sophistication with IDEs and custom tooling before you could take the next step.
The emergence of tools like n8n, Zapier, and workflow platforms from both OpenAI and Anthropic, along with new protocols that enable them to interconnect, is once again accelerating what can be done with AI. These tools, connectors, and other protocols, like MCP, give software agents the ability not only to do the work but also to initiate it based on their understanding of the issues they need to solve for you.
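To make that idea of agents initiating work a bit more concrete, here is a minimal sketch of what exposing a capability through MCP can look like, using Anthropic's reference Python SDK (the `mcp` package). The server name and the `word_count` tool are my own illustrative inventions, not part of any product mentioned above.

```python
# A minimal MCP server sketch using Anthropic's reference Python SDK
# (pip install "mcp[cli]"). Server name and tool are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("blog-demo-tools")  # hypothetical server name

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text the agent passes in."""
    return len(text.split())

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable client (an agent) can
    # discover this tool and decide on its own when to call it.
    mcp.run()
```

Once a client connects, the agent discovers the tool and chooses when calling it helps solve your problem; that ability to act unprompted is exactly what separates these agentic setups from the copy-and-paste chat sessions I started with.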
Many of these emerging tools and platforms were still geared toward those with a computer science or technical background. The promise of AI was that you could simply explain, talk, or type what you need and participate in a dialogue without being technical. This began to change with what many people call 'vibe programming'.
These new vibe coding or programming tools have started to resemble the kinds of tools people use to create automated websites; they hold your hand, guiding you through by asking questions. You get to simply talk to the AI, explain what you want to do, and it will go ahead and do it, though you are responsible for guiding it so it doesn't go off the rails. Like any good AI tool, it isn't a one-prompt deal; it is a conversation. You have to understand what it's doing and test what it produces to make sure it's doing what you expect. While it may dramatically increase your productivity when creating code, there is a downside: you now need to do much more testing to confirm that what was created is actually what you asked for.
As a creator, you need to understand the problem domain. A solid background in computers is helpful, but what you really need is a deep understanding of the subject and product you are trying to create. Some of these new AI tools, like Lovable, Bubble, and Replit, fall into this category: they use one or more AI models to understand your project, code it, and create something pretty amazing. You need to stay very much in the loop about what is being created. Many of these tools are considered prototype creators, but some take it much further. I've used both Lovable and Bubble, and they have grown a lot since then, but initially they would get stuck in loops, run into errors, and take forever to create something useful.
Then I switched to Replit, which had some meaningful differences. I went through the entire process, which helped me create and develop code, move it to GitHub, deploy to production, and manage the entire scope of a project. Rolling something out into production is a significant undertaking. All these tools have issues, and they will run into errors. They can forget things they did earlier and undo work that was already done; memory becomes an issue, especially as your code gets more complex. You need to be clear and exact in how you phrase what you need. If you read it back and don't understand it, or realize there are gaps in your explanation, reword it; you don't want the AI making assumptions on your behalf. Make sure you review the plan before you give the go-ahead, and challenge the analysis. Check whether it's relying on an outdated library or workflow that may not be optimal; that's where external information is helpful. The AI will consider what you say, and there are times it will respond, "You're right, this is a better way to go," or point out dependencies that will hurt you long term. That's why a conversation isn't simply saying "Okay"; it's about challenging the analysis and making sure nothing is missed.
You need to protect the progression of your code as you move forward, really understand the production flow from development, potentially through staging, to production, and be able to test these systems realistically in a production-like environment. Moving code into a production environment is still a pretty old-school discipline. Many people doing vibe programming today simply don't understand what it means to actually put something in production; it is very different from testing in a safe sandbox. A system that helps you move into production and understands how to secure and ensure a reliable, scalable website is a really big deal.
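As one small, hypothetical illustration of what understanding that flow means in practice, here is a sketch of keeping development, staging, and production configuration separate with environment variables; the variable names and connection strings are assumptions for the example, not taken from any specific platform.

```python
# A hypothetical sketch of separating dev/staging/production settings
# with environment variables, so experiments never touch live data.
import os

ENV = os.environ.get("APP_ENV", "development")  # assumed variable name

_DATABASE_URLS = {
    "development": "sqlite:///dev.db",                      # safe local sandbox
    "staging": os.environ.get("STAGING_DATABASE_URL", ""),  # production-like testing
    "production": os.environ.get("DATABASE_URL", ""),       # real users, real data
}

if ENV not in _DATABASE_URLS:
    raise RuntimeError(f"Unknown APP_ENV: {ENV!r}")

DATABASE_URL = _DATABASE_URLS[ENV]
DEBUG = ENV != "production"  # never show debug tracebacks to real users
```

Platforms like Replit handle much of this scaffolding for you, but you still have to verify that the boundaries actually hold before you point anything at production.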
One of the biggest advantages I found in using Replit is the ability to leverage Anthropic's models, among the top coding LLMs, for the project.
Another advantage is that there are times when I need to scope out a potential production rollout or feature, and I can ask questions about ROI or do the research I might need for a pitch deck or a three-year rollout plan. I can talk to the agents within Replit that know the code and can do the analysis I would otherwise do with an external AI like Claude or OpenAI, except Replit's AI knows my product and code. You can reasonably trust it to do a better job with the analysis because it's grounded in your actual project and knows the code, rather than having just a general idea of it. Context is a big deal, and it makes the tool feel like a real partner rather than a piece of advice from an AI agent that doesn't really know what you're trying to develop. This is a quickly evolving area that will need to be revisited, as there's a lot more to learn about using these tools effectively. There's going to be a major impact in the near future, and we must understand how to work with these tools so we won't be left behind and can compete in the future economy.
Pre - Post, Digital Art, & AI, Part 4
One of the things that became evident from my work with AI, looking at what it could potentially be as a real agentic partner in the making of a film, was that this turned out to be much harder than I expected. There's a lot of confusion created by the various party tricks seen on the internet, where people create amazing, crazy videos of babies doing things that would normally be impossible, or clips you can no longer tell are real, even though something about them isn't quite right - yet. You need to use your own expertise and experience to guide these junior AI agents, or else you will simply be creating empty calories, devoid of any true meaning or story, producing nothing of substance.
We know this is going to get better, but producing a real story, real content that can last for many minutes or hours and entertain and inform, takes a lot of work. In my search to see whether these tools could help me create my animations, I needed to know if I could actually create a performance, with lip-sync and motion, and be able to guide it; that was what I was going to look at next. After many failed attempts, I started getting closer to something that seemed like a performance and could show emotional context in the face. This one was based on a redesign of one of our characters, and I thought it showed promise.
Some of the subtleties were quite impressive, but still, could I have a character actually walk through an environment and lip-sync? My next test was this.
Very far from perfect, with artifacting around the ground and a not-very-realistic floor, but the fact that he could walk through an environment and lip-sync, however imperfectly, was interesting.
As you can tell, the dialogue also expresses some of my frustration with this seemingly ongoing struggle to get something I can actually use and to work on a real project. But I felt I was maybe getting closer.
While my focus has been on animated characters so far, it's clear I need to cover the full range of potential outputs, including realistic characters, voice acting, and their use across both real and animated characters, as well as creating realistic environments with our characters for education or corporate training.
So I started testing out generated voice acting with some of the great tools out there, like ElevenLabs, and playing with the idea of using those voices across various realistic and animated characters. This is what I got.
I did enjoy using the British accent across our live-action character, set somewhere in the 1950s or '60s, and across our animated redesign, FranknSon, as well. Surprisingly, the voice matched both characters, and the animation and motion fit well. This was promising. Of course, I needed to get a sense of how an American female voice would sound, so I did one additional test in a classroom setting.
But in my case, I'm also looking closely at how this could be used to create 3D animated characters, since I think this will be the more immediate and acceptable application of generative AI for creating compelling animations. We've already seen how animated films have become a major driver of Hollywood's income, and I think it makes sense that animators who have always wanted to make long-form films get the tools to do so. So here are a couple more 3D animation styles we're working on for a potential IP educational animation about STEM and exploration for pre-teens.
One more area I wanted to tackle was multilingual content. I did one more test, having Terry, my space pioneer character, speak in both English and Spanish. It did a really good job. Take a listen.
As a bilingual speaker, I appreciate how well this actually worked. The other languages I've tested so far have been quite impressive as well.
These tools are getting better, and some of the new AI tools are just amazing. I'll be returning to our digital characters topic in the future, but the other areas where I've been doing a lot of work are integrating AI tools, such as workflows and chatbots, and the bigger impact may be vibe coding. With the proper expertise and domain knowledge, you can increase productivity massively when creating new applications. Testing is more important than ever, and this will affect what gets developed by small and large groups alike. I'll be looking at this in the next blog, and I'm excited to share my findings on vibe coding.
Pre - Post, Digital Art, & AI, Part 3
I will admit that at this point, based on what I just finished talking about in my last blog, I was getting a little excited about the possibilities. The idea that I could actually start creating assets that would be more than just supporting roles, but could be fully realized, animated characters with lip-sync, was so tantalizing. I needed to see how far I could take this.
I decided to take some original artwork I had created, along with some early 3D art my brother David made, and see what it could do with them. The first image I used was of a group of monsters I had created for the FranknSon project, and I simply plugged it in to see what it would do with the characters themselves with very little prompting. This is what occurred.
I was surprised by what happened. I didn't expect it to take the entire team and have a sort of walk-dance toward the camera. This was very cool, and I decided to test an original watercolor artwork I'd done a million years ago to see what would happen.
I love the stylized animation; it almost works. Next, I took an artwork by my brother, David Rivera, of a Roman army, and this is what happened.
I kept getting surprised by how good it was. It had some things that needed to be corrected, but I just kept going. I decided to try a digital painting I created in Photoshop to see what it would do. This is the outcome.
All this was intriguing, and while less than perfect, it showed the possibilities. But I needed to know: could I actually animate a character just focusing on having it do something interesting? So, for the next test, I took my FranknSon image and tried to make it perform some interesting movements, like running in place. This was the output.
Far from perfect, but it almost worked. And now some of the animations are getting their own sound effects, like his feet as he runs in place. This was a lot of fun and also a lot of work, because you always need to regenerate when it misses the mark, which happened often. There were many failed attempts. But this was getting intriguing, and I needed to know more. Could I actually control the animation? Could I have actual lip-sync and animation together? Pulling these basic animation elements together was going to be the next big test. More about it in the next blog: can I animate and lip-sync these characters?
Pre - Post, Digital Art, & AI, Part 2
Using pre-production seemed to be the place where all of this would work. But I didn't think it'd go much further than that. Then I started experimenting with some old pre-production pieces, and that's when the surprises started.
In my first post, I saw the possibility of using generative AI tools to help with design and pre-production, expanding on concepts and ideas and potentially creating new assets, or at least drafts. The potential look and feel of my story might be very different if I could use something to help with that development process, but then I took a closer look to see where else I could take this. To frame it: I've always seen writers and painters as great “solo” creators, but film and animation have always been collaborative arts. Creating an animation, or even a film, as a solo or small-team project has always been problematic and very expensive. Beyond the traditional workflow of a live-action film, one of the main problems in animation is rendering time, so I initially looked at using game engines to solve that problem, and it seemed very promising. Unfortunately, it didn't solve everything, as we still needed to set up the rest of the production. Making a film or animation would still take a ton of people, money, and time, so I kept looking.
I had some old concept art and designs that I'd done in the past, and I decided to experiment with them. What could AI do with these? That's when the interesting things started happening.
In my early days, I used many different 3D tools to create my environments: Maya, Vue, Daz, Poser, and a suite of compositing tools, many of which no longer exist. One particular piece of software called Vue (used on “Pirates of the Caribbean”) could create beautiful renderings that I loved, but back in 2011 it would take hours and hours to render a single image on my PC; this one in particular took most of the day. I decided to see what would happen if I had a generative tool take that render as the starting point and let me move the camera into the scene, something that would have been unrealizable with the tools I had at the time. Then this happened.
The image came to life as the camera moved through a forest that had never existed before, except as a still.
It suddenly became clear that the world I had seen in still images, and only imagined in motion, was really possible. The tool was creating additional details, extrapolating my image into a fully formed world you could move a camera through. It was amazing, and I decided I had to explore it further. At the time I was using Runway AI, but there were so many more tools to come and more possibilities. Was this just luck, or had I found a tool that could take me beyond pre-production? I took one more image of 3D characters I had rendered in Vue, alongside my 3D Maya characters, and gave it to the same AI tool. The idea of moving through them into the background forest seemed unfeasible, but then it happened again.
The characters disappeared as the camera moved to a forest that was never realized in the original still. It was beautiful and surprising.
The forest came alive as the camera moved by the characters, and they dissolved away. The camera moved through this lush, deep, dark forest, bathed in light, and it became clear that this was much more than a design tool; it would affect my production. I started to realize that this could become a central tool in creating my animation. Could I create an entire animated film with a small team or just myself, without a super expensive budget? I was going to explore this further, and there were more surprises to come.
Pre - Post, Digital Art, & AI in the New Age.
Hi, welcome to my blog. I'm thrilled to have you here. I'm going to be talking about all things technology, including AI, real-time video, and related technologies for storytelling and storymaking. We'll dive deep into these fascinating subjects. I'm also going to explore some of the ups and downs of what AI brings to the table as a creative and what the dangers are as well.
It's a complex landscape. To begin, we're going to explore pre-AI, digital art, and post-AI. I want to share how my views on AI have evolved. Let's take a look at some compelling examples and see the evolution firsthand.
As far back as I can remember, I’ve always been drawn to creating big visions. For a writer or a painter, this comes naturally: working as a solo artist makes it possible to craft expansive stories and bold, imaginative worlds on your own. But if you are a filmmaker, this can become almost impossible. Film, by its very nature, is a collaborative effort, requiring many talents to bring these visions to life. Yet at the end of the day, we still have the idea of the Auteur Theory: the director as the primary creative force behind a film, like Kubrick, Welles, and Hitchcock. Putting this aside, it unfortunately becomes an even bigger problem in animation, since unless you're Disney or Pixar, you can't afford the huge capital and talent required to create these animated masterpieces.
I have long wanted to find a way to solve this problem: can a small team or a solo artist create a full-length animated film? We've seen live-action feature films produced by small teams, but animation remains a significant obstacle. I was inspired by Brian Taylor in the early 2000s. What made his project so interesting was that Brian was trying to do it all himself as a solo artist, with no other help. The film was called “RustBoy,” and it was beautiful and inspiring. The possibility that this could be done by a solo artist was motivating.
At the end of the day, this beautiful short masterpiece never made it, but it became the inspiration for what I hoped to do in the future.
I myself headed to Hollywood in 2010 to try to sell the feature animation I wanted to make, FranknSon. It became clear that I could sell it, but that I would never be allowed to make it myself. That's a story for another blog.
As I worked on my Master's thesis, one of the tools I explored in depth was game engines as a way to address some of these problems. As a filmmaker, VFX artist, mobile developer, and game designer, this was a natural approach for me. It didn't solve all the problems, but it became another tool a small team or solo artist could use. So my search continued as I explored how to create my animations, particularly while working on the film FranknSon. I had early concept art that I'd made using various tools from the old days, including Poser, and I wanted to redesign it.
That concept art was from the early 2000s, and around 2023 I needed to refresh and rethink how I might approach the animation's style and look. The explosion of generative AI models happened in 2020–2022, and they started appearing in many web applications in 2023. One web application I came across was Veed.io, which was really more of a video-editing web application with some early AI features. I took my early FranknSon image of my character emerging from a castle and generated a few new images based on the original with this tool, which got me thinking about these new possibilities.
These early AI models were strongly influenced by Universal's Frankenstein movies, which was not my intention, but it still gave me a place to start redeveloping FranknSon. In the image below, I asked the AI model to expand the image to show the surrounding landscape, and I was surprised by the results. I knew then that these evolving tools could take me to new horizons.
This was something I needed to keep exploring. More to come in my next blog.