Introduction
Inspirational long-form videos have become a powerful medium on social media, captivating audiences with their uplifting messages, compelling storytelling, and visually engaging content. These videos often combine motivational speeches, beautiful imagery, and stirring music to create an emotional impact, encouraging viewers to pursue their goals, overcome challenges, and find inner strength. They thrive on platforms like YouTube, Facebook, and Instagram, where longer formats allow for deeper dives into personal growth, resilience, and positive thinking. The popularity of such content stems from our innate human desire for encouragement and a sense of shared experience in navigating life's complexities.
Overview of the automation
This automation streamlines the creation of inspirational long-form videos through the following steps:
- Define your video's topic, language, and aesthetic preferences in an Airtable base
- Make.com triggers OpenAI to generate a motivational script and corresponding image prompts based on your input
- The AI-generated content is passed to JSON2Video, which uses a pre-designed template to automatically create a high-quality video with:
  - Voiceovers
  - Dynamic visuals
  - Background music
  - Subtitles
- The completed video's URL is automatically updated back into your Airtable base, ready for sharing
Prerequisites
To run this automation, you will need accounts for the following services:
- Airtable: We chose Airtable for this tutorial due to its seamless integration with no-code tools like Make.com. Airtable offers a generous free tier that is more than sufficient for this project, eliminating the need for a paid subscription.
- Make.com: The central automation platform that connects all services. Make.com also has a free tier available for getting started.
- OpenAI: To generate the motivational speech text and image prompts. You will need an API key.
- JSON2Video: To transform the generated content into a video. You will need an API key.
Build the automation
Let's set up the automation step-by-step to generate your inspirational videos.
Setting up the Airtable base
Clone the Airtable base
First, you'll need to clone the pre-configured Airtable base that will serve as our content hub:
- Open the Airtable template in your browser.
- Click on the "Copy base" button next to the base name (usually in the top left corner). A new window will open.
- Select the destination workspace in your Airtable account where you want to copy the base.
Your cloned base will contain a table named "Inspirational videos" with the following fields:
| Field name | Description |
|---|---|
| ID | Auto-numbered unique identifier for each video entry. |
| Topic | The main theme or subject of the inspirational video (e.g., "overcoming fear", "the power of consistency"). |
| Language | The target language for the voiceover and subtitles (e.g., "English", "Spanish"). |
| Voice Name | The specific voice to be used for the AI-generated voiceover (e.g., "en-US-JennyMultilingualNeural"). |
| Voice Model | The AI model to use for voice generation (e.g., "azure", "elevenlabs"). |
| Title Font | The font family to use for the video title (e.g., "Oswald Bold"). |
| Image Model | The AI model to use for generating background images (e.g., "flux-schnell", "flux-pro"). |
| Subtitles Model | The transcription model for subtitles (e.g., "default", "whisper"). |
| Subtitles Font | The font family for the subtitles (e.g., "Noto Sans KR"). |
| Music URL | A URL to a background music track for the video. |
| Status | Tracks the status of the video generation (e.g., "Todo", "In progress", "Done"). |
| Result | Will contain the URL to the generated video once complete. |
Get your Airtable personal access token
To allow Make.com to connect with your Airtable base, you'll need a Personal Access Token (PAT). Follow these steps to obtain it:
- Go to your Airtable developer hub.
- Click "Create new token."
- Give your token a name (e.g., "Make.com JSON2Video demos").
- Under "Scopes," add the following permissions:
data.records:read
data.records:write
schema.bases:read
- Under "Access," select "Add a base" and choose the "Entertainment" base or the name you gave to the base when you cloned it for this tutorial.
- Click "Create token" and copy the generated token. Keep it safe, as you won't be able to see it again.
Getting your API keys
Get your OpenAI API key
To use OpenAI for content generation, you'll need an API key:
- Go to the OpenAI API keys page.
- Click on "Create new secret key".
- Give your key a name (e.g., "Make.com Video Generator").
- Copy the generated key. Remember to save it immediately, as you won't be able to view it again after closing the window.
Get your JSON2Video API key
JSON2Video requires an API key for authenticating requests and accessing its video generation services. While a primary API key works, we recommend creating a secondary API key for specific integrations like Make.com:
- Log in to your JSON2Video dashboard. If you don't have an account, you can get a free API key.
- Navigate to the API Keys page.
- Click "Create new secondary API key" (if available for your plan). If you're on a free plan, you can use your primary API key.
- Give the key a descriptive name (e.g., "Make.com Integration").
- Ensure the "Render" permission is enabled for this key.
- Copy the generated API key. Keep it secure and private.
Create the workflow
Import the workflow
Now, let's import the pre-built Make.com workflow (also known as a "scenario") that automates the video creation process:
- Log in to your Make.com account.
- On your dashboard, click "Scenarios" in the left sidebar, then click "Create a new scenario" (or "Import a new scenario", if available, to import directly).
- If you created a new scenario, click the three dots (...) icon near the top right and select "Import from File...".
- Download the workflow definition file and upload it to Make.com.
- The scenario will load, but its modules will need to be configured.
Update the module settings
Each module in the imported workflow needs to be connected to your accounts using the API keys and tokens you obtained earlier.
Update the Airtable modules
There are two Airtable modules in the workflow: "Search Records" (module 16) and "Update a Record" (module 17). Both need to be connected to your Airtable account:
- Double-click the "Search Records" module (16).
- Under the "Connection" field, click "+ Add" to create a new connection.
- Select "Access Token" as the connection type.
- Paste your Airtable Personal Access Token (PAT) that you obtained in the "Get your Airtable personal access token" section.
- Give the connection a name (e.g., "My Airtable PAT") and click "Save".
- For the "Base" field, select the Airtable base you cloned earlier (e.g., "Entertainment").
- For the "Table" field, select "Inspirational videos".
- Repeat these steps for the "Update a Record" module (17), using the same Airtable connection.
Update the OpenAI modules
The "Create a Completion" module (4) for OpenAI needs your API key:
- Double-click the "Create a Completion" module (4).
- Under the "Connection" field, click "+ Add" to create a new connection.
- Paste the OpenAI API key you obtained earlier.
- Give the connection a name (e.g., "My OpenAI API Key") and click "Save".
Update the JSON2Video modules
The workflow uses two JSON2Video modules: "Create a Movie from a Template ID" (module 23) and "Wait for a Movie to Render" (module 9). Both will use the same JSON2Video connection:
- Double-click the "Create a Movie from a Template ID" module (23).
- Under the "Connection" field, click "+ Add" to create a new connection.
- Paste the JSON2Video API key you obtained earlier.
- Give the connection a name (e.g., "My JSON2Video API Key") and click "Save".
- The "Template ID" field is already pre-filled with the ID
fOnm0pvJFwKBtwgcCDTk
, which corresponds to the template used in this tutorial. - Repeat these steps for the "Wait for a Movie to Render" module (9), using the same JSON2Video connection.
The JSON payload passed to the JSON2Video API references the pre-designed template with ID `fOnm0pvJFwKBtwgcCDTk`. It passes dynamic content as variables, including the voice name, voice model, title, fonts, music URL, and the scene list generated from your Airtable data and OpenAI's output. The title scene's background video and the color scheme are set statically in the template for a consistent inspirational aesthetic.
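For reference, the request the Make.com module sends is roughly equivalent to the sketch below. The endpoint and payload shape follow JSON2Video's v2 template-rendering API as documented; the variable values here are illustrative placeholders, not the exact output of the workflow:

```python
import requests

JSON2VIDEO_API_KEY = "your_json2video_api_key"

payload = {
    "template": "fOnm0pvJFwKBtwgcCDTk",  # the pre-designed template from this tutorial
    "variables": {
        "voice_model": "azure",
        "voice_name": "en-US-JennyMultilingualNeural",
        "image_model": "flux-schnell",
        "subtitles_model": "default",
        "subtitles_font": "Oswald Bold",
        "music_url": "https://cdn.json2video.com/assets/audios/inspirational-03.mp3",
        "title": "The importance of believing in yourself",
        "title_font": "Oswald Bold",
        # Normally produced by the OpenAI step; truncated here for brevity
        "scene_list": [
            {"voiceover_text": "...", "image_prompt": "..."},
        ],
    },
}

resp = requests.post(
    "https://api.json2video.com/v2/movies",
    headers={"x-api-key": JSON2VIDEO_API_KEY},
    json=payload,
)
resp.raise_for_status()
print("Render started, project:", resp.json()["project"])
```

The returned project ID is what the "Wait for a Movie to Render" module uses to track the render.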
Run your first automated video creation
Now that all credentials are configured, you're ready to create your first video:
- Go to your Airtable base.
- In the "Inspirational videos" table, create a new record.
- For the "Topic" field, enter a motivational topic, for example: "The importance of believing in yourself".
- For the "Language" field, enter "English".
- For the "Voice Name" field, enter "en-US-JennyMultilingualNeural".
- For the "Voice Model" field, enter "azure".
- For the "Title Font" field, enter "Oswald Bold".
- For the "Image Model" field, enter "flux-schnell".
- For the "Subtitles Model" field, enter "default".
- For the "Subtitles Font" field, enter "Oswald Bold".
- For the "Music URL" field, enter "https://cdn.json2video.com/assets/audios/inspirational-03.mp3".
- Ensure the "Status" field is set to "Todo".
- Save the record.
- Go back to your Make.com scenario.
- Click on the "Run once" button (usually at the bottom-center or top-right of the scenario builder).
- The workflow will start processing. First, it will fetch the "Todo" record from Airtable. Then, OpenAI will generate the content. Next, JSON2Video will create the video. Finally, the "Wait for a Movie to Render" module will wait for the video to be ready.
- Once the scenario finishes running, go back to your Airtable base.
- The "Status" field for your record should now be "Done", and the "Result" field will be populated with the URL to your newly generated video. Click on the URL to watch your inspirational video!
Localizing your videos into other languages
One of the powerful aspects of this automation is its ability to localize your inspirational videos into various languages. This means you can reach a global audience with content tailored to their native tongue, enhancing relatability and impact. The process involves adjusting a few key fields in your Airtable record.
Example: creating a video in Arabic
Let's create an inspirational video with an Arabic voiceover and subtitles, demonstrating how to handle non-Western character sets:
- In your Airtable "Inspirational videos" table, create a new record.
- For the "Topic" field, enter something inspiring, like: "Mental Toughness - Training your mind for adversity.".
- For the "Language" field, enter "Arabic".
- For the "Voice Name" field, enter "en-US-OnyxTurboMultilingualNeural". This voice is multilingual, so it can speak in Arabic as well. You can find a full list of supported Azure voices by language here.
- For the "Voice Model" field, ensure it is set to "azure".
- For the "Title Font" field, enter "Noto Sans Arabic". This font supports the Arabic character set. Remember that the font you choose must support the target language's characters. You can explore Google Fonts for suitable options.
- For the "Image Model" field, enter "flux-schnell".
- For the "Subtitles Model" field, enter "whisper". Arabic is not supported by the default transcription model, so we use Whisper instead.
- For the "Subtitles Font" field, enter "Arial". This is a simple font that supports the Arabic character set.
- For the "Music URL" field, you can use the same URL as before: "https://cdn.json2video.com/assets/audios/inspirational-05.mp3".
- Set the "Status" field to "Todo".
- Run the Make.com scenario as described in the previous section.
- Once complete, check the "Result" URL in Airtable. Your video will now feature an Arabic voiceover and Arabic subtitles, demonstrating effective localization!
This is a video with an Arabic voiceover and Arabic subtitles based on the previous example.
Using alternative AI models
The current workflow uses Azure for voiceovers and Flux Schnell for images by default. While these are efficient and cost-effective, JSON2Video supports other powerful AI models that offer different characteristics, often at an additional credit cost. You can learn more about JSON2Video's Credit consumption.
Using ElevenLabs
If you prefer a different voice quality or specific voices, you can switch to ElevenLabs for your voiceovers. Keep in mind that ElevenLabs consumes extra credits from your JSON2Video account.
- In your Airtable "Inspirational videos" table, create a new record or edit an existing one.
- For the "Voice Model" column, change the value from "azure" to "elevenlabs".
- For the "Voice Name" column, choose a supported ElevenLabs voice (e.g., "Daniel", "Serena", "Adam").
- Ensure the "Status" is "Todo" and run the Make.com scenario. Your video will now feature an ElevenLabs voiceover.
Using Flux Pro
For higher quality, more realistic image generation, you can switch to Flux Pro. Using Flux Pro will consume extra credits.
- In your Airtable "Inspirational videos" table, create a new record or edit an existing one.
- For the "Image Model" column, change the value from "flux-schnell" to "flux-pro".
- Ensure the "Status" is "Todo" and run the Make.com scenario. Your video will now feature images generated with Flux Pro.
Customizing your videos
The provided JSON2Video movie template offers several variables to easily customize your videos without needing to modify the core template structure. Beyond these, you can refine the AI-generated content by adjusting the prompt given to OpenAI, or for advanced customization, you can directly edit the movie template itself.
Using template variables
The JSON2Video movie template (ID `fOnm0pvJFwKBtwgcCDTk`) defines multiple variables that allow for easy customization from your Airtable input. These variables are mapped directly in the "Create a Movie from a Template ID" module in Make.com.
- `voice_model`: Specifies the AI model for text-to-speech (e.g., 'azure', 'elevenlabs').
- `voice_name`: The name of the voice to be used for the voiceover (e.g., 'en-US-JennyMultilingualNeural').
- `image_model`: The AI model for generating background images (e.g., 'flux-schnell', 'flux-pro').
- `subtitles_model`: The transcription model used for automatically generated subtitles (e.g., 'default', 'whisper').
- `subtitles_font`: The font family for the subtitles (e.g., 'Oswald Bold', 'Noto Sans KR').
- `music_url`: A URL to an audio file used as background music for the entire video.
- `title_video`: A URL to a video clip used as the background for the intro title scene.
- `title`: The main title text that appears in the intro scene of the video.
- `title_font`: The font family for the main title text.
- `scene_list`: An array of objects, where each object defines a scene with `voiceover_text` and an `image_prompt`. This allows for a dynamic number of scenes generated by OpenAI.
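To make the `scene_list` structure concrete, here is an illustrative value such as the OpenAI step might produce (the wording is made up for the example):

```python
# Each scene pairs the narration with a matching image prompt
scene_list = [
    {
        "voiceover_text": "Every great journey begins with a single decision: to believe in yourself.",
        "image_prompt": "Photorealistic sunrise over a mountain ridge, lone hiker silhouetted against warm golden light, cinematic",
    },
    {
        "voiceover_text": "Doubt will whisper that you are not ready. Courage answers by taking the next step anyway.",
        "image_prompt": "Photorealistic close-up of determined eyes reflecting golden morning light, shallow depth of field, consistent warm palette",
    },
]
```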
Refining the AI-generated content
The motivational speech and image prompts are generated by OpenAI based on a "system prompt" and your specified topic and language. You can modify this system prompt to influence the style, tone, and content of the AI-generated output. The current system prompt is:
You are an expert motivational copy-writer and visual-storyboard artist.
**Goal**
Produce a ~2-minute motivational speech (≈ 220–260 words) divided into coherent “scenes”.
**Scene Structure**
- **Scene 1 — Hook:** Immediately engage the viewer with the central `<topic>`.
- **Scenes 2 – 8/9 — Development (7–8 scenes):** Deeply explore the theme, evoke emotion, and build momentum.
- **Final Scene — Uplift:** Leave the viewer with a clear, energizing call to improve their life.
(=> total 9–10 scenes.)
Each scene must contain:
1. **voiceover_text** – the narration for that scene, written in `<language>`.
2. **image_prompt** – a richly detailed, *photorealistic* English prompt that visually captures the scene’s message.
• Maintain a consistent color palette, lighting style and overall aesthetic across every image to ensure harmony.
• Avoid any mention or depiction of violence, gore, nudity, or other potentially NSFW elements.
**Input placeholders**
- `<topic>` – central theme of the speech (e.g., “overcoming self-doubt”).
- `<language>` – language for the narration (e.g., “Spanish”).
**Output format** – return pure JSON, no explanatory text:
```json
{
  "title": "<title>",
  "scenes": [
    {
      "voiceover_text": "<voiceover text>",
      "image_prompt": "<image prompt>"
    },
    ...
  ]
}
```
To modify this prompt:
- In Make.com, double-click the "Create a Completion" module (4) for OpenAI.
- Locate the "Messages" section, specifically the "System" role.
- Edit the "Content" field to adjust the prompt as desired. For example, you could instruct it to make the tone more dramatic, or request a different number of scenes.
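To prototype prompt changes before editing the module, you can reproduce the call with the official `openai` Python package. This is a sketch; the model name is an assumption, so match it to whatever the module is configured to use:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "..."  # paste the system prompt shown above

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed; match the model selected in the module
    response_format={"type": "json_object"},  # enforce pure JSON output
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Topic: overcoming self-doubt\nLanguage: English"},
    ],
)
data = json.loads(resp.choices[0].message.content)
print(data["title"], "-", len(data["scenes"]), "scenes")
```

Iterating this way lets you check the tone, scene count, and JSON shape of the output before committing the new prompt to the Make.com module.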
Editing the movie template
For advanced customization of the video's structure, timing, animations, or specific visual elements, you can directly edit the JSON2Video movie template. This requires a higher level of understanding of the JSON2Video API documentation.
Here's how to customize the template:
- Open the movie template in the JSON2Video Visual Editor.
- From the top bar "Template" menu, click "Save template as..." to create your own editable copy.
- Make your desired changes to the template (e.g., adjust scene durations, change text styles beyond variables, add new elements). The editor auto-saves.
- Once you're satisfied with your edits, from the "Template" menu, click "Show Template ID" to get the unique ID of your new, customized template.
- Go back to your Make.com scenario and double-click the "Create a Movie from a Template ID" module (23).
- Replace the existing "Template ID" with your new template ID.
Conclusion and next steps
Congratulations! You have successfully learned how to leverage the power of Make.com, OpenAI, Airtable, and JSON2Video to automatically generate compelling, long-form inspirational videos. You've mastered setting up data sources, integrating AI for content creation, and customizing video elements to suit your specific needs, even for different languages and AI models.
This tutorial provides a solid foundation for automated video production. Feel free to explore other tutorials to discover more advanced video creation possibilities and further automate your content workflows!
Published on July 7th, 2025
