Overview

I explored whether AI could transform a flat 2D character into a fully realized 3D animated version — while preserving its motion and personality.

Workflow

Starting Point

I revisited one of my older animations created in After Effects using 2D shapes. To make sure the motion would still read clearly in 3D, I simplified the animation so each movement and part of the character remained recognizable. This became the foundation for the entire experiment.

Exploration

Using a static frame from the original animation, I generated multiple 3D variations of the character with different shapes, materials, and textures using Nanobanana Pro. Some felt soft and playful, others looked metallic, glass-like, or completely unexpected.

Once the character variations were created, I took the simplified motion from the original animation and applied it back onto the 3D versions using Kling Motion in Higgsfield.

The process involved heavy prompting, testing, refining, and rerunning of outputs. While the motion translated surprisingly well across most versions, the AI still needed constant direction and adjustment. The results may look effortless, but plenty of experimentation happened behind the scenes.

Final

The final outcome proved that motion and personality can carry across completely different visual styles when guided intentionally. While the results were not always perfect, the process opened up a faster and more flexible way to explore animation and character design.

This experiment reinforced something important for me:

AI does not replace the creative process. It reshapes how we create.
