Web3 in 60s / Generative AI in Motion Project

Date

2024

Service

Motion Graphics, Animation, AI-assisted, 2D

Client

Fintopio

🎬 Project Overview

This case study dives into the early phase of the FinLearn* project — a 22-episode motion series published weekly on TikTok, Instagram, and YouTube between October 4, 2024, and March 21, 2025. The goal: distill complex Web3 and blockchain concepts into engaging, 60-second animated episodes.

In the first three episodes, I led an internal experiment integrating generative AI tools (e.g. Runway ML, Midjourney) to test whether they could accelerate production without compromising visual quality or brand alignment.

This documentation outlines how we tested, implemented, and ultimately pivoted away from generative AI for this specific production pipeline.

🔗 Watch the 22 episodes

👩‍💻 My Role

As the project’s sole Motion Designer and Creative Lead, I was responsible for the entire production process — from concept to delivery. My focus during the AI-testing phase included:

  • Scripting: Translating technical topics into accessible narratives

  • AI Tool Research & Testing: Evaluating tools like Runway ML and Midjourney for speed vs. visual control

  • Visual Development: Defining art direction and consistency across episodes

  • Compositing & Branding: Integrating brand elements where AI tools lacked control

  • Animation: Assembling the final videos from a mix of AI-generated and handcrafted assets

  • Creative Direction: Maintaining tone and narrative cohesion


⚙️ Process Breakdown: AI-Assisted vs. Handcrafted

To assess efficiency and quality, I compared two workflows:

🧠 AI-Assisted Workflow (Episodes 1–3)

Goal: Test whether generative tools could reduce design time while maintaining consistent, high-quality output.

Workflow:

  1. Style Training (Runway ML Gen-1): Train the model to follow a specific visual look

  2. Script to Image: Generate images via Midjourney using a pre-optimized prompt (see the prompt sketch after this list)

  3. Image to Video: Use Runway ML to animate static frames

  4. Compositing: Add branding manually using AE + Mocha (to overcome hallucinations)

  5. Template Integration: Assemble sequences in AE

  6. Voiceover: Generate dynamic VO with ElevenLabs

  7. Subtitles: Add in Premiere Pro

  8. Delivery
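
To keep Midjourney's output visually consistent from scene to scene, step 2 relied on a fixed prompt structure. Below is a minimal Python sketch of how such a template could be assembled per script beat; the style keywords and parameter values are hypothetical placeholders, not the exact prompts used in production.

```python
# Minimal sketch of a reusable Midjourney prompt template.
# All style keywords and parameter values are hypothetical
# placeholders, not the production prompts.

STYLE = "flat 2D vector illustration, fintech palette, clean linework"
PARAMS = "--ar 9:16 --seed 1234"  # vertical format for TikTok/Reels/Shorts

def build_prompt(scene_description: str) -> str:
    """Combine a per-scene description with the fixed style block
    so every generated frame shares the same look."""
    return f"{scene_description}, {STYLE} {PARAMS}"

# One prompt per script beat keeps the episode visually coherent.
beats = [
    "a glowing blockchain ledger passed between two hands",
    "a coin splitting into dozens of smaller tokens",
]
for beat in beats:
    print(build_prompt(beat))
```

Locking the style block and seed narrows, but never fully eliminates, the visual drift described in the bottleneck note below.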


Main Bottleneck: Branding. AI tools introduced frequent visual hallucinations (e.g. random symbols, UI glitches), making it hard to control branding elements. Manual compositing was required to fix this, negating the intended speed benefits.

✍️ Handcrafted Workflow (Episodes 4–22)

Goal: Achieve greater customization and quicker revisions with pre-built assets.

Workflow:

  1. Script

  2. Asset curation via Envato Elements (2D illustration packs)

  3. Stylization and adaptation in Illustrator/AE

  4. Storyboarding in Figma

  5. Voiceover + AI subtitles (see the SRT sketch after this list)

  6. Animation

  7. Delivery
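
For step 5, the AI-generated transcript still has to land in the edit as timed captions. Here is a minimal Python sketch of turning timed lines into an SRT file that Premiere Pro can import; the cue timings and helper names are illustrative, not the actual production script.

```python
# Minimal sketch: write timed caption lines to an .srt file
# that Premiere Pro can import. Cue timings are illustrative.

def fmt(seconds: float) -> str:
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(cues: list[tuple[float, float, str]], path: str) -> None:
    """Each cue is (start_seconds, end_seconds, text)."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(cues, 1):
            f.write(f"{i}\n{fmt(start)} --> {fmt(end)}\n{text}\n\n")

write_srt(
    [
        (0.0, 2.5, "What is a blockchain?"),
        (2.5, 6.0, "Think of it as a shared ledger no one can quietly edit."),
    ],
    "episode.srt",
)
```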

Result: Faster turnaround and more brand-friendly scenes, in a pipeline that was easy to adapt and scale for weekly production and far less dependent on unpredictable AI outputs.

✅ Outcome

Although the AI-assisted workflow produced some compelling visuals (Ep. 1–3), it was ultimately phased out. The time spent fixing AI-generated inconsistencies — particularly for branding — offset any initial time savings. As a result, the team shifted to a handcrafted, minimalistic style supported by illustration libraries, which allowed me to meet weekly deadlines while maintaining creative quality.

Notably, Episode 2, made using the AI-assisted pipeline, became the most viewed in the series — with 28.3K views, 1,813 likes, and 108 saves. While visually striking, its success also stemmed from a strong script and shareable storytelling.

⚡ Challenges

  • Managing AI hallucinations (e.g. broken logos, random artifacts)

  • Balancing experimentation with ongoing weekly deadlines

  • Meeting expectations of fast delivery without a support team

  • Integrating branded elements in a non-promotional way (to preserve organic reach)

✨ Highlight

Successfully delivered a 22-episode TikTok-first motion series, with early episodes serving as a live experiment in AI-assisted production. This allowed us to make informed decisions about scalability, consistency, and brand alignment — ultimately crafting a pipeline that balanced creativity and feasibility.

🧰 Tools

  • Runway ML (AI video generation)

  • Midjourney (Style references, image prompts)

  • Figma (Storyboarding)

  • After Effects (Motion & Compositing)

  • Premiere Pro (Subtitles)

  • Envato Elements (Asset sourcing)

  • ElevenLabs (Voiceover generation)
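
The voiceovers came from ElevenLabs' web app, but the same step can be scripted. Below is a minimal sketch using the ElevenLabs Python SDK's text-to-speech endpoint, assuming a v1-style client (method names have shifted across SDK versions); the API key, voice ID, and model name are placeholders.

```python
# Minimal sketch: generate an episode voiceover via the ElevenLabs
# Python SDK (pip install elevenlabs). Assumes the v1-style client;
# the key, voice_id, and model_id below are placeholders.

from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key="YOUR_API_KEY")  # placeholder key

audio = client.text_to_speech.convert(
    voice_id="YOUR_VOICE_ID",           # placeholder voice
    model_id="eleven_multilingual_v2",  # a commonly available model
    text="Welcome back to Web3 in 60 seconds.",
)

# convert() streams the audio as byte chunks; write them to disk.
with open("episode_vo.mp3", "wb") as f:
    for chunk in audio:
        f.write(chunk)
```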

