Introduction

Long-form inspirational videos have become a powerful medium on social media, captivating audiences with their uplifting messages, compelling storytelling, and visually engaging content. These videos combine motivational speeches, beautiful imagery, and stirring music to create an emotional impact that resonates deeply with viewers. Using AI automation, you can efficiently produce these high-quality videos through a simple no-code workflow.

Overview of the automation

This tutorial outlines an automated workflow using n8n and JSON2Video to create long-form inspirational videos. The workflow follows these steps:

  1. Read the next pending video project (Status "Todo") from your Airtable base.
  2. Generate the video title, scene voiceover texts, and image prompts with OpenAI.
  3. Submit a rendering job to the JSON2Video API using a pre-designed movie template.
  4. Wait and check the rendering status until the video is done.
  5. Update the Airtable row with the final video URL and set its Status to "Done".

N8N workflow for inspirational videos

Prerequisites

To follow this tutorial, you will need the following accounts and API keys:

  - An n8n instance (cloud or self-hosted).
  - An Airtable account and a Personal Access Token.
  - An OpenAI account and API key.
  - A JSON2Video account and API key.

Build the automation

Let's dive into building this powerful automation. We'll start by preparing our data source in Airtable, then gather the necessary API keys, and finally, set up the n8n workflow to tie everything together.

Setting the Airtable base

Clone the Airtable base

To get started, you'll need a structured Airtable base to manage your video projects. Follow these steps to clone our pre-configured template:

  1. Open the Airtable template.
  2. Click on the "Copy base" button next to the base name (usually in the top left corner). A new window will open.
  3. Select the destination workspace in your Airtable account where you'd like to save the copied base.

Your cloned base, likely named "Entertainment" or similar, will contain a table called "Inspirational videos" with the following fields:

  - ID: Auto-generated unique identifier for each video project.
  - Topic: The central theme or subject of the inspirational video (e.g., "overcoming self-doubt").
  - Language: The target language for the video's voiceover and subtitles (e.g., "English", "Spanish", "Korean").
  - Voice Name: The specific voice to be used for the AI-generated narration (e.g., "en-US-EmmaMultilingualNeural", "Jenny", "Daniel").
  - Voice Model: The AI model provider for the voiceover (e.g., "azure", "elevenlabs").
  - Title Font: The font family to be used for the video's title.
  - Image Model: The AI model provider for generating scene images (e.g., "flux-schnell", "flux-pro", "freepik-classic").
  - Subtitles Model: The AI model used for transcribing audio into subtitles (e.g., "default", "whisper").
  - Subtitles Font: The font family for the automatically generated subtitles.
  - Music URL: An optional URL to background music for the video.
  - Status: The current status of the video generation: "Todo", "In progress", or "Done".
  - Result: The URL to the final rendered video once the process is complete.
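
For reference, a single row in this table carries values like the following, shown here as JSON for readability. The values are illustrative, and the exact shape of the data depends on the Airtable node's output in n8n:

```json
{
  "Topic": "overcoming self-doubt",
  "Language": "English",
  "Voice Name": "en-US-EmmaMultilingualNeural",
  "Voice Model": "azure",
  "Title Font": "Oswald",
  "Image Model": "flux-schnell",
  "Subtitles Model": "default",
  "Subtitles Font": "Oswald Bold",
  "Music URL": "",
  "Status": "Todo",
  "Result": ""
}
```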

Get your Airtable personal access token

To allow n8n to connect with your Airtable base, you'll need a Personal Access Token (PAT). Follow these steps to obtain it:

  1. Go to the Airtable token creation page (airtable.com/create/tokens) and log in.
  2. Create a new token and give it a descriptive name (e.g., "n8n Inspirational Videos").
  3. Add the scopes data.records:read and data.records:write, plus schema.bases:read so n8n can list your bases and tables.
  4. Under "Access," grant the token access to your cloned base.
  5. Create the token and copy it. Store it securely, as it will only be shown once.

Getting your API keys

Get your OpenAI API key

To use OpenAI's powerful language models, you'll need an API key:

  1. Go to the OpenAI API Keys page and log in.
  2. Click on "Create new secret key."
  3. Give your key a name (e.g., "n8n Inspirational Videos").
  4. Copy the generated API key. Make sure to save it somewhere secure, as you won't be able to view it again after closing the window.

Get your JSON2Video API key

To allow n8n to communicate with the JSON2Video API and generate videos, you'll need an API key:

  1. Log in to your JSON2Video dashboard.
  2. Navigate to the API Keys section.
  3. While you can use your Primary API key, it is recommended to create a Secondary API key for specific integrations like n8n for better security and access control. Click "Create new API key."
  4. Give your key a descriptive name (e.g., "n8n Workflow").
  5. For permissions, "Render" should be sufficient for this tutorial.
  6. Copy the generated API key. Store it securely, as it will only be shown once (it is sent with every JSON2Video request, as shown below).
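
In the two JSON2Video HTTP Request nodes configured later in this tutorial, this key travels in the x-api-key request header. Conceptually, the header block looks like this (the placeholder value is yours to fill in):

```json
{
  "x-api-key": "YOUR_JSON2VIDEO_API_KEY"
}
```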

Create the workflow

Import the workflow

To quickly set up the n8n workflow, you can import our pre-built template:

  1. Download the workflow definition file: workflow.json.
  2. In your n8n dashboard, go to "Workflows" in the left sidebar.
  3. Click "New" or the "+" icon to create a new workflow.
  4. In the workflow editor, click the three dots menu (Workflow Settings) in the top right.
  5. Select "Import from File..."
  6. Browse to the downloaded workflow.json file and open it.
  7. The complete workflow will now be imported into your n8n instance.

Update the node settings

After importing the workflow, you need to configure the credentials for each service. Double-click on each of the following nodes and update their settings:

Update the Airtable nodes

There are two Airtable nodes: "Airtable - Read" and "Airtable - Update". Both need to be configured with your Personal Access Token (PAT):

  1. Double-click on the "Airtable - Read" node (and then repeat for "Airtable - Update").
  2. Under the "Credential to connect with" dropdown, click "+ Create new credential."
  3. Choose "Access Token" as the authentication method.
  4. Paste your Personal Access Token you obtained in the "Get your Airtable personal access token" section.
  5. Give the credential a name (e.g., "Airtable Personal Access Token account") and click "Save."
  6. Ensure the correct base ("Entertainment") and table ("Inspirational videos") are selected.

Update the OpenAI node

Configure the OpenAI node with your API key:

  1. Double-click on the "OpenAI" node.
  2. Under the "Credential to connect with" dropdown, click "+ Create new credential."
  3. Choose "API Key" as the authentication method.
  4. Paste your OpenAI API key you obtained in the "Get your OpenAI API key" section.
  5. Give the credential a name (e.g., "OpenAI account") and click "Save."

Update the JSON2Video nodes

There are two JSON2Video HTTP Request nodes: "Submit a new job" and "Check status". Both need your JSON2Video API key:

  1. Double-click on the "Submit a new job" node.
  2. Under "Headers," locate the "x-api-key" parameter.
  3. Replace the placeholder -- YOUR API KEY HERE -- with your JSON2Video API key.
  4. Repeat the same steps for the "Check status" node.

The JSON payload passed to the JSON2Video API by the "Submit a new job" node references a pre-designed JSON2Video template with ID fOnm0pvJFwKBtwgcCDTk. It passes dynamic content as variables, including the voice name and model, image model, fonts, music URL, and the title, voiceover texts, and image prompts drawn from your Airtable data and OpenAI's output. The scene layout and color scheme are set statically in the template for a consistent inspirational aesthetic.
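
For reference, the request body submitted to JSON2Video has roughly the following shape. The variable names below are illustrative, not authoritative; the template defines the exact names, so check the node's "JSON Body" field for the real list:

```json
{
  "template": "fOnm0pvJFwKBtwgcCDTk",
  "variables": {
    "topic": "Mindfulness",
    "voiceName": "en-US-JennyMultilingualNeural",
    "voiceModel": "azure",
    "titleFont": "Oswald",
    "imageModel": "flux-schnell",
    "subtitlesModel": "default",
    "subtitlesFont": "Oswald Bold",
    "musicUrl": "",
    "title": "Find Calm in the Present Moment",
    "scenes": [
      {
        "voiceover_text": "Pause for a moment and simply breathe...",
        "image_prompt": "A serene sunrise over a misty mountain lake, photorealistic, warm golden light"
      }
    ]
  }
}
```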

Run your first automated video creation

Once all credentials are configured:

  1. In your Airtable base, go to the "Inspirational videos" table.
  2. Create a new row and enter values for each column (e.g., Topic: "Mindfulness", Language: "English", Voice Name: "en-US-JennyMultilingualNeural", Voice Model: "azure", Title Font: "Oswald", Image Model: "flux-schnell", Subtitles Model: "default", Subtitles Font: "Oswald Bold", Music URL: leave blank for default, Status: "Todo").
  3. In your n8n workflow, ensure the workflow is active.
  4. Click on the "Test workflow" button in the bottom-center.
  5. The workflow will run. You can observe the progress as nodes light up green. The "Wait for 15 seconds" node introduces a pause while the video renders, and the "Check status" node then asks the JSON2Video API whether rendering has finished (a sample status response is sketched after this list).
  6. Once the workflow completes (all nodes turn green), check the Airtable base. The "Status" for your row should be "Done" and the "Result" field should be populated with the URL to your new video.
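
The "Check status" call returns the render state and, once finished, the URL of the final video. The response has roughly this shape; treat the field names as illustrative and refer to the JSON2Video API documentation for the authoritative schema:

```json
{
  "success": true,
  "movie": {
    "status": "done",
    "url": "https://assets.json2video.com/.../your-video.mp4"
  }
}
```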

Localizing your videos into other languages

One of the powerful features of this workflow is the ability to easily localize your inspirational videos into multiple languages. This involves selecting the target language, choosing a compatible font that supports that language's characters, and picking an appropriate AI voice to match.

Example: creating a video in Korean

Let's walk through creating a video in Korean, which uses a non-Latin script, to demonstrate how to ensure font compatibility (the resulting row values are sketched after the steps):

  1. Set the target language in Airtable: In your "Inspirational videos" Airtable base, create a new row or edit an existing one. Set the Language column to Korean.
  2. Choose a compatible font: For Korean characters, you'll need a font that supports the Korean script. A popular choice from Google Fonts is Noto Sans KR. In your Airtable row, set both the Title Font and the Subtitles Font columns to Noto Sans KR. You can find more supported font families in the JSON2Video documentation.
  3. Select a matching voice: For a Korean voiceover, you'll need a voice trained for Korean. For Azure, you can use a voice like ko-KR-HyunsuNeural. In your Airtable row, set the Voice Model to azure and the Voice Name to ko-KR-HyunsuNeural. You can find the full list of supported Azure voices in the JSON2Video documentation.
  4. Run the workflow: Trigger the n8n workflow as you did before. JSON2Video will use the specified Korean font and voice to create your localized video.
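
Putting it all together, the Airtable row for this Korean example would carry values like these (shown as JSON for readability; the Topic value is just an example):

```json
{
  "Topic": "perseverance",
  "Language": "Korean",
  "Voice Name": "ko-KR-HyunsuNeural",
  "Voice Model": "azure",
  "Title Font": "Noto Sans KR",
  "Subtitles Font": "Noto Sans KR",
  "Image Model": "flux-schnell",
  "Subtitles Model": "default",
  "Status": "Todo"
}
```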

Using alternative AI models

The workflow is configured to use Azure for voiceovers and Flux Schnell for image generation by default. While Azure voiceovers are included in all JSON2Video plans without consuming extra credits, other models like ElevenLabs for voice and Flux Pro for images consume additional credits. You can learn more about credit consumption in the JSON2Video documentation.

Using ElevenLabs

If you prefer the voice quality or specific voices offered by ElevenLabs, you can easily switch:

To use "ElevenLabs", simply change the "Voice Model" column in your Airtable row to 'elevenlabs' and choose a supported voice in the "Voice Name" column (e.g., 'Adam', 'Rachel', 'Bella').

Using Flux Pro

For higher-quality, realistic AI-generated images, you can switch to Flux Pro:

To use "Flux Pro", simply change the "Image Model" column in your Airtable row to 'flux-pro'.

Customizing your videos

Beyond the core content, you can deeply customize your videos by manipulating the template variables, refining AI-generated content, or even editing the underlying movie template itself.

Using template variables

The JSON2Video movie template (ID fOnm0pvJFwKBtwgcCDTk) defines multiple variables that let you make simple customizations without altering the JSON structure directly. These variables are populated with the data from your Airtable base and OpenAI's output.
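
Inside the template, variables are referenced with JSON2Video's {{variable}} placeholder syntax. As a schematic example (the variable names and the element's other properties are illustrative, not taken from the actual template), a title text element could pull its content and font from variables like this:

```json
{
  "type": "text",
  "text": "{{title}}",
  "settings": {
    "font-family": "{{titleFont}}"
  }
}
```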

Refining the AI-Generated content

The core content (voiceover text, image prompts, and video title) is generated by OpenAI based on a "system prompt" and your input. You can modify this prompt to customize the resulting videos and guide the AI's creativity.

The system prompt used in the "OpenAI" node is:

You are an expert motivational copy-writer and visual-storyboard artist.

**Goal**  
Produce a ~2-minute motivational speech (≈ 220–260 words) divided into coherent “scenes”. 

**Scene Structure**  
- **Scene 1 — Hook:** Immediately engage the viewer with the central <TOPIC>.  
- **Scenes 2 – 8/9 — Development (4–5 scenes):** Deeply explore the theme, evoke emotion, and build momentum.  
- **Final Scene — Uplift:** Leave the viewer with a clear, energizing call to improve their life.  
(=> total 9-10 scenes.)


Each scene must contain:  
1. **voiceover_text** – the narration for that scene, written in <LANGUAGE>.  
2. **image_prompt** – a richly detailed, *photorealistic* English prompt that visually captures the scene’s message.  
   • Maintain a consistent color palette, lighting style and overall aesthetic across every image to ensure harmony.  
   • Avoid any mention or depiction of violence, gore, nudity, or other potentially NSFW elements.

**Input placeholders**  
- `<TOPIC>` – central theme of the speech (e.g., “overcoming self-doubt”).  
- `<LANGUAGE>` – language for the narration (e.g., “Spanish”).  

**Output format** – return pure JSON, no explanatory text:  
```json
{
  "title": "<Concise inspiring title>",
  "scenes": [
    {
      "voiceover_text": "<Scene 1 narration in <LANGUAGE>>",
      "image_prompt": "<Scene 1 photorealistic prompt in English>"
    },
    ...
  ]
}
```

Editing the movie template

For advanced users who want to make deep changes to the structure, timing, animations, or visual design beyond what variables allow, you can edit the JSON2Video movie template directly. This requires a solid working knowledge of the JSON2Video API's Movie object, Scene object, and various Element types (Image, Video, Text, Voice, Audiogram, Subtitles, Component).

Follow these steps to customize the template:

  1. Open the provided movie template in the JSON2Video Visual Editor.
  2. From the top bar "Template" menu, click "Save template as..." to create your own editable copy.
  3. Make your desired edits to the template's structure, elements, or animations.
  4. Once you're satisfied, from the "Template" menu, click "Show Template ID" to get the new unique ID for your customized template.
  5. In your n8n workflow, double-click the "Submit a new job" node.
  6. In the "JSON Body" field, locate the "template": "fOnm0pvJFwKBtwgcCDTk" line and replace fOnm0pvJFwKBtwgcCDTk with your new template ID.

Conclusion and next steps

Congratulations! You have successfully built an automated workflow using n8n and JSON2Video to create dynamic, long-form inspirational videos. You've learned how to integrate Airtable as a data source, leverage OpenAI for content generation, and utilize JSON2Video's powerful API to turn structured data into compelling video narratives. You also explored how to localize videos, switch between different AI models, and customize your video output through template variables and direct template editing. This workflow empowers you to scale your content production, delivering personalized and impactful videos with minimal manual effort. Consider exploring other JSON2Video tutorials to discover more ways to automate video creation for various use cases.

Published on July 28th, 2025

Author

Joaquim Cardona
Senior Internet business executive with more than 20 years of broad experience in Internet business, the media sector, digital marketing, online video and mobile technologies.