Introduction
Imagine effortlessly transforming your ideas or data into polished, ready-to-share social media videos without ever touching complex video editing software. This tutorial will guide you through automating the creation of dynamic social media reels using a powerful combination of n8n, Airtable, OpenAI, and JSON2Video. Get ready to streamline your content creation process and boost your online presence!
Overview of the automation
This automation workflow simplifies social media reel creation into a few clicks. It starts by monitoring an Airtable base for new topics you want to turn into a video. Once a new topic is identified, n8n triggers OpenAI to generate a script with voiceover text and image prompts for each scene of your reel. This AI-generated content is then sent to JSON2Video, which uses a pre-designed template to automatically render your video, complete with AI-generated images, voiceovers, and dynamic subtitles. Finally, the generated video URL is written back to your Airtable base, ready for you to share.

Prerequisites
To follow this tutorial and build your own social media reel automation, you will need accounts and API keys for the following services:
- n8n Account: You can use a self-hosted n8n instance or subscribe to their cloud service.
- Airtable Account: We've chosen Airtable for this tutorial because its integration with no-code tools like n8n is incredibly straightforward. Airtable also offers a generous free tier, making it accessible without requiring a paid subscription.
- OpenAI API Key: Required to generate the video scripts and image prompts using AI.
- JSON2Video API Key: Necessary for rendering your videos programmatically.
Build the automation
Let's dive into setting up the automation that will bring your social media reels to life. We'll start by preparing your Airtable base, then gather the necessary API keys, and finally configure the n8n workflow.
Setting up the Airtable base
Clone the Airtable base
To get started, clone our pre-configured Airtable base. This base contains all the necessary fields to manage your social media reel projects efficiently.
- Open the Airtable template in your browser.
- Click on the "Copy base" button located beside the base name at the top of the page. A new window will open.
- Select the destination workspace in your Airtable account where you want to clone the base.

Your cloned base, named "Entertainment" (or whatever you renamed it to), will contain a table called "Social media reels" with the following fields:
| Field name | Description |
|---|---|
| ID | An auto-generated unique identifier for each reel. |
| Topic | The main subject or theme of your social media reel. This will be used by OpenAI to generate the script. |
| Language | The target language for the voiceover and subtitles (e.g., "English", "Spanish", "Arabic"). |
| Voice Name | The specific voice to be used for the AI-generated voiceover (e.g., "en-US-BrianMultilingualNeural", "Daniel"). |
| Voice Model | The AI model for voice generation (e.g., "azure", "elevenlabs"). Defaults to "azure" (free to use). |
| Image Model | The AI model for image generation (e.g., "freepik-classic", "flux-schnell", "flux-pro"). Defaults to "freepik-classic" (free to use). |
| Subtitles Model | The AI model for transcribing audio into subtitles (e.g., "default", "whisper", "none"). Defaults to "default". |
| Subtitles Font | The font family for the subtitles (e.g., "Oswald Bold", "Noto Sans Arabic"). |
| Status | Tracks the progress of the video creation: "Todo", "In progress", "Done". |
| Result | The URL of the final rendered social media reel once it's complete. |
Get your Airtable personal access token
To allow n8n to connect with your Airtable base, you'll need a Personal Access Token (PAT). Follow these steps to obtain it:
- Go to your Airtable developer hub.
- Click "Create new token."
- Give your token a name (e.g., "n8n JSON2Video demos").
- Under "Scopes," add the following permissions:
data.records:read
data.records:write
schema.bases:read
- Under "Access," select "Add a base" and choose the "Entertainment" base or the name you gave to the base when you cloned it for this tutorial.
- Click "Create token" and copy the generated token. Keep it safe, as you won't be able to see it again.
Getting your API keys
Get your OpenAI API key
You'll need an OpenAI API key to allow n8n to connect with the OpenAI models for generating content. Follow these steps to obtain it:
- Log in to your OpenAI platform account.
- Navigate to the API keys section.
- Click on "Create new secret key."
- Give your key a name (e.g., "n8n Social Reels").
- Copy the generated key. Keep it secure, as you won't be able to view it again after closing the window.
Get your JSON2Video API key
To connect n8n to JSON2Video and enable video rendering, you'll need a JSON2Video API key. While a Primary API key works, we recommend creating a Secondary API key with "Render" permissions for better security practices.
- Log in to your JSON2Video dashboard.
- Go to the API Keys page.
- If you're using a free account, your Primary API key is already generated. Copy it.
- (Optional, for paid plans) To create a Secondary API key, click "Create new API key." Give it a descriptive name (e.g., "n8n Social Reels"), select "Render" as the permission role, and save. Copy the generated key.
Create the workflow
Now that you have your Airtable base ready and all necessary API keys, let's set up the n8n workflow.
Import the workflow
To quickly set up the n8n workflow, you can import our pre-built template:
- Download the workflow definition file.
- In your n8n instance, click on "Workflows" in the left sidebar.
- Click the "New" button in the top right, then select "Import from File..."
- Select the `workflow.json` file you just downloaded.
- The workflow will appear on your canvas.
Update the node settings
After importing the workflow, you need to configure the credentials for the Airtable, OpenAI, and JSON2Video nodes.
Update the Airtable nodes
The workflow contains two Airtable nodes: "Airtable - Read" and "Airtable - Update". Both need to be configured with your Personal Access Token (PAT).
- Double-click the "Airtable - Read" node (or "Airtable - Update").
- Under the "Credential to connect with" field, click "+ Create new credential".
- Choose "Access Token" as the credential type.
- Paste the Airtable Personal Access Token you obtained in the Get your Airtable personal access token section.
- Give the credential a descriptive name (e.g., "My Airtable PAT").
- Click "Create" to save the credential.
- Repeat these steps for the other Airtable node.
Update the OpenAI nodes
The workflow uses one OpenAI node ("OpenAI") to generate the script and image prompts.
- Double-click the "OpenAI" node.
- Under the "Credential to connect with" field, click "+ Create new credential".
- Choose "OpenAI API" as the credential type.
- Paste the OpenAI API key you obtained in the Get your OpenAI API key section.
- Give the credential a descriptive name (e.g., "My OpenAI API Key").
- Click "Create" to save the credential.
Update the JSON2Video nodes
The workflow uses two HTTP Request nodes to interact with JSON2Video: "Submit a new job" and "Check status". Both require your JSON2Video API key.
- Double-click the "Submit a new job" node.
- Under "Headers", locate the "x-api-key" parameter.
- Replace the "
-- YOUR API KEY HERE --
" with your JSON2Video API key. - Click "Done" to save the changes.
- Repeat these steps for the "Check status" node.
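The "Check status" node polls the JSON2Video API until the render finishes. As a rough illustration (field names based on the JSON2Video API; the exact response shape may differ, and the URL below is a placeholder), a completed job's status response looks something like:

```json
{
  "success": true,
  "movie": {
    "status": "done",
    "url": "https://assets.json2video.com/clients/abc123/renders/movie.mp4",
    "duration": 38.6
  }
}
```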
The JSON payload passed to the JSON2Video API in the "Submit a new job" node uses a pre-designed JSON2Video template with ID `hShBhvAYM4Xd9mq5pceu` (you can explore it in the JSON2Video Visual Editor). It passes dynamic content as variables, including the voice name, voice model, image model, subtitles model, and font family from your Airtable data, plus the generated scenes (voiceover text and image prompts) from OpenAI's output, as sketched below. The background video and color scheme are set statically in this template for a consistent social media reel aesthetic, but they can be customized.
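Putting that together, the request body assembled by the "Submit a new job" node is shaped roughly like the sketch below. In the actual workflow, the literal values are filled in by n8n expressions referencing the Airtable and OpenAI nodes, and the exact body may differ from this sketch:

```json
{
  "template": "hShBhvAYM4Xd9mq5pceu",
  "variables": {
    "voice": "en-US-BrianMultilingualNeural",
    "voiceModel": "azure",
    "imageModel": "freepik-classic",
    "subtitlesModel": "default",
    "fontFamily": "Oswald Bold",
    "scenes": [
      {
        "voiceOverText": "Remote work gives teams back their commute time and lets them focus where they do their best thinking.",
        "imagePrompt": "A tidy home office bathed in morning light, laptop open on a wooden desk, photo realistic."
      }
    ]
  }
}
```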
Run your first automated video creation
Once all credentials are configured and saved, you're ready to run your first automated video creation!
- In your Airtable table ("Social media reels" in the "Entertainment" base), create a new row.
- Enter a descriptive topic in the `Topic` column (e.g., "The benefits of remote work").
- Choose a `Language` (e.g., "English").
- Select a `Voice Name` (e.g., "en-US-BrianMultilingualNeural").
- Select "azure" for `Voice Model` and "freepik-classic" for `Image Model` (these are free models).
- Set the `Status` to "Todo".
- Go back to your n8n workflow.
- Click on the "Test workflow" button at the bottom-center of the n8n interface.
- The workflow will begin executing. You'll see green indicators moving between nodes as each step completes. This process can take a few minutes as OpenAI generates the content and JSON2Video renders the video.
- Once the workflow finishes, check your Airtable base. The `Status` column for your row should change to "Done", and the `Result` column should be populated with the URL of your newly created social media reel.
Localizing your videos into other languages
One of the powerful features of this automation is its ability to localize your social media reels into different languages. This is crucial for reaching a global audience and making your content more accessible. The key to localization lies in correctly setting the language, choosing a compatible font, and selecting a matching voice in your Airtable base.
Example: creating a video in Arabic
Let's create a social media reel in Arabic to demonstrate the localization process. Arabic uses a non-Western script, which requires careful font selection.
- In your Airtable base, create a new row.
- In the `Topic` column, enter a topic (e.g., "Exploring the wonders of ancient Egypt").
- For the `Language` column, select "Arabic".
- For the `Voice Name`, choose an Arabic voice. For Azure, a good option is `ar-EG-SalmaNeural`. You can find a full list of supported Azure voices by language.
- For the `Voice Model`, keep "azure" (it's free!).
- For the `Image Model`, keep "flux-schnell" (also free!).
- For the `Subtitles Model`, set it to "whisper", as it's the only model that supports Arabic.
- For the `Subtitles Font`, select `Arial`. This font supports the Arabic script, ensuring your subtitles render correctly.
- Set the `Status` to "Todo".
- Run the n8n workflow by clicking "Test workflow".
Once the workflow completes, you'll find a link to your new social media reel in Arabic in the `Result` column of your Airtable row!
Using alternative AI models
The default setup uses Azure for voiceovers and Freepik Classic/Flux Schnell for images, both of which are free to use within JSON2Video. However, JSON2Video supports other powerful AI models like ElevenLabs for voices and Flux Pro for images, which offer enhanced quality but consume extra credits. You can learn more about credit consumption in the JSON2Video documentation.
Using ElevenLabs
If you want to leverage the high-quality voices offered by ElevenLabs, you can easily switch the voice model in your Airtable base:
- In your Airtable row, change the `Voice Model` column to `elevenlabs`.
- In the `Voice Name` column, enter a supported ElevenLabs voice (e.g., "Daniel", "Serena"). You can find a list of available voices in the ElevenLabs Voice Library (requires login).
When you run the workflow, JSON2Video will use the specified ElevenLabs voice, and the corresponding credits will be deducted from your JSON2Video account for the voiceover generation.
Using Flux Pro
For more detailed and realistic AI-generated images, you can opt for the Flux Pro model:
- In your Airtable row, change the `Image Model` column to `flux-pro`.
Running the workflow now will instruct JSON2Video to use Flux Pro for image generation, consuming additional credits from your JSON2Video account.
Customizing your videos
The provided JSON2Video template is designed for versatility, allowing you to customize various aspects of your social media reels without deep knowledge of the JSON2Video API. You can achieve this by manipulating template variables, refining AI-generated content, or even directly editing the movie template for more advanced changes.
Using template variables
The JSON2Video movie template (ID `hShBhvAYM4Xd9mq5pceu`) defines multiple variables that allow for easy customization. These variables are passed from your n8n workflow to the template (see the example after this list):

- `scenes`: An array of objects, where each object defines a scene with `imagePrompt` (text for AI image generation) and `voiceOverText` (text for AI voiceover). This is dynamically generated by OpenAI in the workflow.
- `musicURL`: (Optional) A URL to a background music track. It will be trimmed to the duration of the video.
- `musicVolume`: The volume of the background music (e.g., `0.2` for low volume). If set too high, it can interfere with voiceover transcription.
- `logoURL`: (Optional) A URL to an image file for a logo or watermark.
- `logoPosition`: The position of the logo (e.g., `top-left`, `bottom-right`, or `none` to hide it).
- `voiceModel`: The AI model for voice generation (`azure`, `elevenlabs`, `elevenlabs-flash-v2-5`).
- `voiceConnectionID`: Your JSON2Video Connection ID, if you want to use your own API key for the voice model.
- `voice`: The specific voice name (e.g., `en-US-BrianMultilingualNeural` for Azure, `Daniel` for ElevenLabs).
- `imageAspectRatio`: The aspect ratio of the AI-generated images (`vertical`, `horizontal`, `squared`).
- `imageModel`: The AI model for image generation (`freepik-classic`, `flux-schnell`, `flux-pro`).
- `subtitlesModel`: The transcription model for subtitles (`default`, `whisper`, or `none` to disable subtitles).
- `fontFamily`: The font family for the subtitles (e.g., `Oswald Bold`).
- `fontURL`: (Optional) A URL to a custom TTF font file for subtitles. Leave blank to use a pre-defined font.
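For instance, to add background music and a watermark, you could populate the optional variables along these lines (the asset URLs below are placeholders for your own files, not assets shipped with the template):

```json
{
  "musicURL": "https://example.com/music/chill-beat.mp3",
  "musicVolume": 0.2,
  "logoURL": "https://example.com/branding/logo.png",
  "logoPosition": "bottom-right",
  "imageAspectRatio": "vertical",
  "fontFamily": "Oswald Bold"
}
```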
Refining the AI-generated content
The core content of your reels (voiceovers and image prompts) is generated by OpenAI based on a "system prompt" within the n8n workflow. You can modify this prompt to customize the output and achieve different styles or tones for your videos.
The system prompt used in the OpenAI node is:

```
=Create a script of a social media video about the topic included below.
The video will be organized in scenes. Each scene has a voice over and an image.
The voice over text must be at least 20 words.
There should be no more than 4 scenes.
Your response must be in JSON format following this schema:
{
  "scenes": [{
    "voiceOverText": "",
    "imagePrompt": ""
  }]
}
The image prompt must be written in ENGLISH, being detailed and photo realistic. In the image prompt, you MUST AVOID describing any situation in the image that can be considered inappropriate (violence, disgusting, gore, sex, nudity, NSFW, etc.) as it may be rejected by the AI service.
```
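A response that follows this schema might look like the following (illustrative content only):

```json
{
  "scenes": [
    {
      "voiceOverText": "Remote work lets teams skip the commute, protect their focus time, and collaborate across time zones without losing momentum or morale.",
      "imagePrompt": "A sunlit home office with a laptop, notebook, and coffee mug on a wooden desk, photo realistic, shallow depth of field."
    },
    {
      "voiceOverText": "Companies that embrace flexible work report happier employees, access to wider talent pools, and lower overhead costs on office space.",
      "imagePrompt": "A diverse video call grid on a large monitor in a bright, modern workspace, photo realistic."
    }
  ]
}
```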
By editing this prompt in the "OpenAI" node's settings (under "Messages > System message"), you can influence:
- The length and style of the voiceover text.
- The number of scenes.
- The detail and specific content of the image prompts.
- The overall tone of the video script.
For example, you could add instructions for a more humorous tone, request specific types of imagery, or adjust the minimum word count for voiceovers.
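As a concrete illustration, appending a line like this to the system message steers the script toward a lighter register (a hypothetical tweak, not part of the shipped workflow):

```
Write the voice over text in a light, humorous tone, and end the last scene with a question that invites viewers to comment.
```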
Editing the movie template
For advanced customization beyond variable adjustments, you can duplicate and modify the underlying JSON2Video movie template itself. This allows deep changes to the structure, timing, animations, and visual design of your videos, but it requires a deeper understanding of the JSON2Video API schema.
Follow these steps to edit the template:
- Open the provided movie template in the JSON2Video Visual Editor.
- From the top bar "Template" menu, click "Save template as..." to create a duplicate in your own account.
- Edit the template using the visual editor or directly by editing the JSON (accessible via "Template > Edit JSON"). You can change scene transitions, add new elements (like a different background video, text overlays, or components), adjust element timings, and much more. Refer to the JSON2Video documentation on customizing templates for detailed guidance.
- Once you've made your desired changes, from the "Template" menu, click "Show Template ID" to get the ID of your new, customized template.
- In your n8n workflow, double-click the "Submit a new job" node.
- In the "JSON Body" field, locate the
"template": "hShBhvAYM4Xd9mq5pceu"
line and replacehShBhvAYM4Xd9mq5pceu
with your new template ID. - Click "Done" to save the changes in n8n.
Now, every time you run the workflow, it will use your personalized template to render the social media reels.
Conclusion and next steps
Congratulations! You've successfully built an automated workflow to create dynamic social media reels using n8n, Airtable, OpenAI, and JSON2Video. You've learned how to:
- Set up and manage your video content ideas in Airtable.
- Automate script and image prompt generation with OpenAI.
- Programmatically render engaging videos using JSON2Video.
- Customize your videos through template variables and even by modifying the underlying JSON2Video template.
- Localize your video content for broader audiences.
This tutorial provides a solid foundation for your video automation journey. From here, you can further enhance your workflow by exploring more advanced JSON2Video features, integrating with other platforms (like social media schedulers or analytics tools), or experimenting with different AI models and prompt engineering techniques. The possibilities for automated content creation are vast, and you're now equipped to explore them!
Published on July 7th, 2025
