Introduction
Long-form inspirational videos have become a powerful medium on social media, captivating audiences with their uplifting messages, compelling storytelling, and visually engaging content. These videos combine motivational speeches, beautiful imagery, and stirring music to create an emotional impact that resonates deeply with viewers. Using AI automation, you can efficiently produce these high-quality videos through a simple no-code workflow.
Overview of the automation
This tutorial outlines an automated workflow using n8n and JSON2Video to create long-form inspirational videos. The workflow follows these steps:
- n8n fetches video details (like topic, language, and voice preferences) from an Airtable base
- This information is sent to OpenAI, which generates a motivational speech divided into distinct scenes with accompanying image prompts and voiceover text
- JSON2Video takes this structured content and dynamically renders the video, complete with AI-generated visuals, voiceovers, and subtitles
- Finally, the Airtable base is updated with the final video URL

Prerequisites
To follow this tutorial, you will need the following accounts and API keys:
- An n8n.io account. (A free tier is available, or you can host it yourself.)
- An Airtable account. (A generous free tier is available, which we chose over Google Sheets for its easier integration with no-code tools like n8n and Make.com.)
- An OpenAI account with an API key.
- A JSON2Video account with an API key.
Build the automation
Let's dive into building this powerful automation. We'll start by preparing our data source in Airtable, then gather the necessary API keys, and finally, set up the n8n workflow to tie everything together.
Setting up the Airtable base
Clone the Airtable base
To get started, you'll need a structured Airtable base to manage your video projects. Follow these steps to clone our pre-configured template:
- Open the Airtable template.
- Click on the "Copy base" button next to the base name (usually in the top left corner). A new window will open.
- Select the destination workspace in your Airtable account where you'd like to save the copied base.
Your cloned base, likely named "Entertainment" or similar, will contain a table called "Inspirational videos" with the following fields (a sample row is sketched after the table):
| Field name | Description |
|---|---|
| ID | Auto-generated unique identifier for each video project. |
| Topic | The central theme or subject of the inspirational video (e.g., "overcoming self-doubt"). |
| Language | The target language for the video's voiceover and subtitles (e.g., "English", "Spanish", "Korean"). |
| Voice Name | The specific voice to be used for the AI-generated narration (e.g., "en-US-EmmaMultilingualNeural", "Jenny", "Daniel"). |
| Voice Model | The AI model provider for the voiceover (e.g., "azure", "elevenlabs"). |
| Title Font | The font family to be used for the video's title. |
| Image Model | The AI model provider for generating scene images (e.g., "flux-schnell", "flux-pro", "freepik-classic"). |
| Subtitles Model | The AI model to use for transcribing audio into subtitles (e.g., "default", "whisper"). |
| Subtitles Font | The font family for the automatically generated subtitles. |
| Music URL | An optional URL to background music for the video. |
| Status | The current status of the video generation: "Todo", "In progress", or "Done". |
| Result | The URL to the final rendered video once the process is complete. |
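To make the expected data concrete, here is one filled-in row expressed as a Python dictionary. The values are illustrative, not required:
```python
# One "Inspirational videos" row as key/value pairs (illustrative values).
example_row = {
    "Topic": "overcoming self-doubt",
    "Language": "English",
    "Voice Name": "en-US-EmmaMultilingualNeural",
    "Voice Model": "azure",
    "Title Font": "Oswald",
    "Image Model": "flux-schnell",
    "Subtitles Model": "default",
    "Subtitles Font": "Oswald Bold",
    "Music URL": "",   # optional; leave blank for the template default
    "Status": "Todo",  # the workflow moves this to "In progress", then "Done"
    "Result": "",      # filled with the rendered video URL when done
}
```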
Get your Airtable personal access token
To allow n8n to connect with your Airtable base, you'll need a Personal Access Token (PAT). Follow these steps to obtain it:
- Go to your Airtable developer hub.
- Click "Create new token."
- Give your token a name (e.g., "n8n JSON2Video demos").
- Under "Scopes," add the following permissions:
data.records:read
data.records:write
schema.bases:read
- Under "Access," select "Add a base" and choose the "Entertainment" base or the name you gave to the base when you cloned it for this tutorial.
- Click "Create token" and copy the generated token. Keep it safe, as you won't be able to see it again.
Getting your API keys
Get your OpenAI API key
To use OpenAI's powerful language models, you'll need an API key:
- Go to the OpenAI API Keys page and log in.
- Click on "Create new secret key."
- Give your key a name (e.g., "n8n Inspirational Videos").
- Copy the generated API key. Make sure to save it somewhere secure, as you won't be able to view it again after closing the window. A quick key check is sketched below.
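As a quick sanity check before using the key in n8n, you can make a minimal request. This sketch assumes the official `openai` Python package; the model name is illustrative:
```python
from openai import OpenAI

client = OpenAI(api_key="sk-XXXX")  # assumption: replace with your real key

# A tiny round trip proves the key (and billing) are active.
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; pick any model your account can use
    messages=[{"role": "user", "content": "Say OK"}],
)
print(reply.choices[0].message.content)
```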
Get your JSON2Video API key
To allow n8n to communicate with the JSON2Video API and generate videos, you'll need an API key:
- Log in to your JSON2Video dashboard.
- Navigate to the API Keys section.
- While you can use your Primary API key, it is recommended to create a Secondary API key for specific integrations like n8n for better security and access control. Click "Create new API key."
- Give your key a descriptive name (e.g., "n8n Workflow").
- For permissions, "Render" should be sufficient for this tutorial.
- Copy the generated API key. Store it securely, as it will only be shown once.
Create the workflow
Import the workflow
To quickly set up the n8n workflow, you can import our pre-built template:
- Download the workflow definition file: workflow.json.
- In your n8n dashboard, go to "Workflows" in the left sidebar.
- Click "New" or the "+" icon to create a new workflow.
- In the workflow editor, click the three dots menu (Workflow Settings) in the top right.
- Select "Import from File..."
- Browse to the downloaded `workflow.json` file and open it.
- The complete workflow will now be imported into your n8n instance.
Update the node settings
After importing the workflow, you need to configure the credentials for each service. Double-click on each of the following nodes and update their settings:
Update the Airtable nodes
There are two Airtable nodes: "Airtable - Read" and "Airtable - Update". Both need to be configured with your Personal Access Token (PAT):
- Double-click on the "Airtable - Read" node (and then repeat for "Airtable - Update").
- Under the "Credential to connect with" dropdown, click "+ Create new credential."
- Choose "Access Token" as the authentication method.
- Paste the Personal Access Token you obtained in the "Get your Airtable personal access token" section.
- Give the credential a name (e.g., "Airtable Personal Access Token account") and click "Save."
- Ensure the correct base ("Entertainment") and table ("Inspirational videos") are selected.
Update the OpenAI node
Configure the OpenAI node with your API key:
- Double-click on the "OpenAI" node.
- Under the "Credential to connect with" dropdown, click "+ Create new credential."
- Choose "API Key" as the authentication method.
- Paste the OpenAI API key you obtained in the "Get your OpenAI API key" section.
- Give the credential a name (e.g., "OpenAI account") and click "Save."
Update the JSON2Video nodes
There are two JSON2Video HTTP Request nodes: "Submit a new job" and "Check status". Both need your JSON2Video API key:
- Double-click on the "Submit a new job" node.
- Under "Headers," locate the "x-api-key" parameter.
- Replace the placeholder `-- YOUR API KEY HERE --` with your JSON2Video API key.
- Repeat the same steps for the "Check status" node.
The JSON payload passed to the JSON2Video API by the "Submit a new job" node uses a pre-designed JSON2Video template with ID fOnm0pvJFwKBtwgcCDTk. It passes dynamic content as variables, including the voice name and model, image model, subtitle settings, music URL, title, fonts, and scene list, drawn from your Airtable data and OpenAI's output. The background video and color scheme are set statically in this template for a consistent inspirational aesthetic.
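For reference, the equivalent HTTP call outside n8n looks roughly like this. It is a sketch, assuming the v2 `/movies` endpoint and the template-plus-variables payload shape used by the imported workflow; only some variables are shown, and all values are illustrative:
```python
import requests

J2V_API_KEY = "XXXX"  # assumption: your JSON2Video API key

payload = {
    "template": "fOnm0pvJFwKBtwgcCDTk",  # the pre-designed template ID
    "variables": {
        # Values below are illustrative; in n8n they come from Airtable and OpenAI.
        "voice_name": "en-US-JennyMultilingualNeural",
        "voice_model": "azure",
        "image_model": "flux-schnell",
        "subtitles_model": "default",
        "subtitles_font": "Oswald Bold",
        "title": "Rise Above Doubt",
        "title_font": "Oswald",
        "scene_list": [
            {"voiceover_text": "Every journey begins with one brave step.",
             "image_prompt": "Photorealistic sunrise over a mountain trail"},
        ],
    },
}

resp = requests.post(
    "https://api.json2video.com/v2/movies",
    headers={"x-api-key": J2V_API_KEY},
    json=payload,
)
print(resp.json())  # assumption: the response carries a project ID for status checks
```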
Run your first automated video creation
Once all credentials are configured:
- In your Airtable base, open the "Inspirational videos" table.
- Create a new row and enter values for each column (e.g., Topic: "Mindfulness", Language: "English", Voice Name: "en-US-JennyMultilingualNeural", Voice Model: "azure", Title Font: "Oswald", Image Model: "flux-schnell", Subtitles Model: "default", Subtitles Font: "Oswald Bold", Music URL: leave blank for default, Status: "Todo").
- In your n8n workflow, ensure the workflow is active.
- Click on the "Test workflow" button in the bottom-center.
- The workflow will run. You can observe the progress as nodes light up green. The "Wait for 15 seconds" node introduces a pause while the video renders; this wait-and-check loop is sketched after this list.
- Once the workflow completes (all nodes turn green), check the Airtable base. The "Status" for your row should be "Done" and the "Result" field should be populated with the URL to your new video.
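The "Wait for 15 seconds" and "Check status" nodes form a simple polling loop. The same loop outside n8n might look like the sketch below, assuming the v2 `/movies` status endpoint and a project ID returned by the submit call; the exact response fields are assumptions:
```python
import time
import requests

J2V_API_KEY = "XXXX"     # assumption: your JSON2Video API key
project_id = "AbCdEfGh"  # assumption: the project ID from the submit response

while True:
    status_resp = requests.get(
        "https://api.json2video.com/v2/movies",
        headers={"x-api-key": J2V_API_KEY},
        params={"project": project_id},
    ).json()
    status = status_resp["movie"]["status"]  # assumption: e.g. "running" or "done"
    if status == "done":
        print("Video ready:", status_resp["movie"]["url"])
        break
    if status == "error":
        raise RuntimeError(f"Render failed: {status_resp}")
    time.sleep(15)  # mirrors the "Wait for 15 seconds" node
```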
Localizing your videos into other languages
One of the powerful features of this workflow is the ability to easily localize your inspirational videos into multiple languages. This involves selecting the target language, choosing a compatible font that supports that language's characters, and picking an appropriate AI voice to match.
Example: creating a video in Korean
Let's walk through creating a video in Korean, which uses a non-Western character set, to demonstrate how to ensure font compatibility:
- Set the target language in Airtable: In your "Inspirational videos" Airtable base, create a new row or edit an existing one. Set the `Language` column to `Korean`.
- Choose a compatible font: For Korean characters, you'll need a font that supports the Korean script. A popular choice from Google Fonts is Noto Sans KR. In your Airtable row, set the `Title Font` column to `Noto Sans KR` and the `Subtitles Font` column to `Noto Sans KR`. You can find more supported font families here.
- Select a matching voice: For a Korean voiceover, you'll need a voice trained for Korean. For Azure, you can use a voice like `ko-KR-HyunsuNeural`. In your Airtable row, set the `Voice Model` to `azure` and the `Voice Name` to `ko-KR-HyunsuNeural`. You can find the full list of supported Azure voices.
- Run the workflow: Trigger the n8n workflow as you did before. JSON2Video will use the specified Korean font and voice to create your localized video. The complete set of row values is sketched after this list.
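Putting the Korean settings together, the relevant row values look like this (a sketch with illustrative values):
```python
# Airtable row values for a Korean video (illustrative).
korean_row = {
    "Topic": "perseverance",       # OpenAI writes the speech itself in Korean
    "Language": "Korean",
    "Title Font": "Noto Sans KR",  # a family that covers the Korean script
    "Subtitles Font": "Noto Sans KR",
    "Voice Model": "azure",
    "Voice Name": "ko-KR-HyunsuNeural",
    "Status": "Todo",
}
```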
Using alternative AI models
The workflow is configured to use Azure for voiceovers and Flux Schnell for image generation by default. While Azure voiceovers are included in all JSON2Video plans without consuming extra credits, other models like ElevenLabs for voice and Flux Pro for images consume additional credits. You can learn more about credit consumption in the JSON2Video documentation.
Using ElevenLabs
If you prefer the voice quality or specific voices offered by ElevenLabs, you can easily switch:
To use "ElevenLabs", simply change the "Voice Model" column in your Airtable row to 'elevenlabs' and choose a supported voice in the "Voice Name" column (e.g., 'Adam', 'Rachel', 'Bella').
Using Flux Pro
For higher-quality, realistic AI-generated images, you can switch to Flux Pro:
To use "Flux Pro", simply change the "Image Model" column in your Airtable row to 'flux-pro'.
Customizing your videos
Beyond the core content, you can deeply customize your videos by manipulating the template variables, refining AI-generated content, or even editing the underlying movie template itself.
Using template variables
The JSON2Video movie template (ID fOnm0pvJFwKBtwgcCDTk) defines multiple variables that allow simple customizations without altering the JSON structure directly. These variables are populated by the data from your Airtable base and OpenAI's output; an example payload follows the list.
- `voice_name`: (String) The specific name of the AI voice to be used for the narration (e.g., "en-US-JennyMultilingualNeural" for Azure, "Adam" for ElevenLabs).
- `voice_model`: (String) The AI model provider for the voiceover, either "azure" or "elevenlabs".
- `image_model`: (String) The AI model provider for generating scene images, such as "flux-schnell", "flux-pro", or "freepik-classic".
- `subtitles_model`: (String) The AI transcription model for subtitles, typically "default" or "whisper".
- `subtitles_font`: (String) The font family used for the automatically generated subtitles.
- `music_url`: (String, URL) An optional URL to an MP3 audio file that will play as background music throughout the video.
- `title_video`: (String, URL) The URL to a background video used for the introductory title scene.
- `title`: (String) The main title text for the video, generated by OpenAI.
- `title_font`: (String) The font family used for the main title text.
- `scene_list`: (Array of Objects) A list of scene objects, each containing `voiceover_text` and `image_prompt`, generated by OpenAI. This enables a dynamic number of scenes.
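Assembled into a render request, the variables might look like the following sketch; all values are illustrative and the scene list is abbreviated:
```python
# The "variables" object the workflow fills in (illustrative values).
variables = {
    "voice_name": "en-US-JennyMultilingualNeural",
    "voice_model": "azure",
    "image_model": "flux-schnell",
    "subtitles_model": "default",
    "subtitles_font": "Oswald Bold",
    "music_url": "",  # optional MP3 URL; empty means no custom track
    "title_video": "https://example.com/title-background.mp4",  # hypothetical URL
    "title": "The Courage to Begin",
    "title_font": "Oswald",
    "scene_list": [  # produced by OpenAI; length varies per video
        {"voiceover_text": "Every journey starts with a single step.",
         "image_prompt": "Photorealistic sunrise over a misty mountain trail"},
        {"voiceover_text": "Doubt fades the moment you keep moving.",
         "image_prompt": "Photorealistic lone runner on an empty road at dawn"},
    ],
}
```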
Refining the AI-generated content
The core content (voiceover text, image prompts, and video title) is generated by OpenAI based on a "system prompt" and your input. You can modify this prompt to customize the resulting videos and guide the AI's creativity.
The system prompt used in the "OpenAI" node is:
You are an expert motivational copy-writer and visual-storyboard artist.
**Goal**
Produce a ~2-minute motivational speech (≈ 220–260 words) divided into coherent “scenes”.
**Scene Structure**
- **Scene 1 — Hook:** Immediately engage the viewer with the central <TOPIC>.
- **Scenes 2 – 8/9 — Development (4–5 scenes):** Deeply explore the theme, evoke emotion, and build momentum.
- **Final Scene — Uplift:** Leave the viewer with a clear, energizing call to improve their life.
(=> total 9-10 scenes.)
Each scene must contain:
1. **voiceover_text** – the narration for that scene, written in <LANGUAGE>.
2. **image_prompt** – a richly detailed, *photorealistic* English prompt that visually captures the scene’s message.
• Maintain a consistent color palette, lighting style and overall aesthetic across every image to ensure harmony.
• Avoid any mention or depiction of violence, gore, nudity, or other potentially NSFW elements.
**Input placeholders**
- `<TOPIC>` – central theme of the speech (e.g., “overcoming self-doubt”).
- `<LANGUAGE>` – language for the narration (e.g., “Spanish”).
**Output format** – return pure JSON, no explanatory text:
```json
{
"title": "<Concise inspiring title>",
"scenes": [
{
"voiceover_text": "<Scene 1 narration in <LANGUAGE>>",
"image_prompt": "<Scene 1 photorealistic prompt in English>"
},
...
]
}
```
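Because the model is instructed to return pure JSON, downstream steps can parse its output directly. Here is a minimal validation sketch, assuming the raw response text is in `raw`:
```python
import json

raw = '{"title": "Rise Above Doubt", "scenes": []}'  # illustrative model output

data = json.loads(raw)  # fails loudly if the model added stray text
assert isinstance(data.get("title"), str)
for scene in data.get("scenes", []):
    # every scene must carry narration plus an image prompt
    assert scene.keys() >= {"voiceover_text", "image_prompt"}
```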
Editing the movie template
For advanced users who want to make deep changes to the structure, timing, animations, or visual design beyond what variables allow, you can edit the JSON2Video movie template directly. This requires a solid understanding of the JSON2Video API's Movie object, Scene object, and the various Element types (Image, Video, Text, Voice, Audiogram, Subtitles, Component).
Follow these steps to customize the template:
- Open the provided movie template in the JSON2Video Visual Editor.
- From the top bar "Template" menu, click "Save template as..." to create your own editable copy.
- Make your desired edits to the template's structure, elements, or animations.
- Once you're satisfied, from the "Template" menu, click "Show Template ID" to get the new unique ID for your customized template.
- In your n8n workflow, double-click the "Submit a new job" node.
- In the "JSON Body" field, locate the
"template": "fOnm0pvJFwKBtwgcCDTk"
line and replacefOnm0pvJFwKBtwgcCDTk
with your new template ID.
Conclusion and next steps
Congratulations! You have successfully built an automated workflow using n8n and JSON2Video to create dynamic, long-form inspirational videos. You've learned how to integrate Airtable as a data source, leverage OpenAI for content generation, and utilize JSON2Video's powerful API to turn structured data into compelling video narratives. You also explored how to localize videos, switch between different AI models, and customize your video output through template variables and direct template editing. This workflow empowers you to scale your content production, delivering personalized and impactful videos with minimal manual effort. Consider exploring other JSON2Video tutorials to discover more ways to automate video creation for various use cases.
Published on July 28th, 2025
