Introduction

Imagine effortlessly transforming your ideas or data into polished, ready-to-share social media videos without ever touching complex video editing software. This tutorial will guide you through automating the creation of dynamic social media reels using a powerful combination of n8n, Airtable, OpenAI, and JSON2Video. Get ready to streamline your content creation process and boost your online presence!

Overview of the automation

This automation workflow simplifies social media reel creation into a few clicks. It starts by monitoring an Airtable base for new topics you want to turn into a video. Once a new topic is identified, n8n triggers OpenAI to generate a script with voiceover text and image prompts for each scene of your reel. This AI-generated content is then sent to JSON2Video, which uses a pre-designed template to automatically render your video, complete with AI-generated images, voiceovers, and dynamic subtitles. Finally, the generated video URL is written back to your Airtable base, ready for you to share.

n8n workflow for social media reels

Prerequisites

To follow this tutorial and build your own social media reel automation, you will need accounts and API keys for the following services:

    • n8n (self-hosted or cloud)
    • Airtable (for a Personal Access Token)
    • OpenAI (for an API key)
    • JSON2Video (for an API key)

Build the automation

Let's dive into setting up the automation that will bring your social media reels to life. We'll start by preparing your Airtable base, then gather the necessary API keys, and finally configure the n8n workflow.

Setting the Airtable base

Clone the Airtable base

To get started, clone our pre-configured Airtable base. This base contains all the necessary fields to manage your social media reel projects efficiently.

  1. Open the Airtable template in your browser.
  2. Click on the "Copy base" button located beside the base name at the top of the page. A new window will open.
  3. Select the destination workspace in your Airtable account where you want to clone the base.

Airtable base for social media reels

Your cloned base, named "Entertainment" (or whatever you renamed it to), will contain a table called "Social media reels" with the following fields:

    • ID: An auto-generated unique identifier for each reel.
    • Topic: The main subject or theme of your social media reel. This will be used by OpenAI to generate the script.
    • Language: The target language for the voiceover and subtitles (e.g., "English", "Spanish", "Arabic").
    • Voice Name: The specific voice to be used for the AI-generated voiceover (e.g., "en-US-BrianMultilingualNeural", "Daniel").
    • Voice Model: The AI model for voice generation (e.g., "azure", "elevenlabs"). Defaults to "azure" (free to use).
    • Image Model: The AI model for image generation (e.g., "freepik-classic", "flux-schnell", "flux-pro"). Defaults to "freepik-classic" (free to use).
    • Subtitles Model: The AI model for transcribing audio into subtitles (e.g., "default", "whisper", "none"). Defaults to "default".
    • Subtitles Font: The font family for the subtitles (e.g., "Oswald Bold", "Noto Sans Arabic").
    • Status: Tracks the progress of the video creation: "Todo", "In progress", "Done".
    • Result: The URL of the final rendered social media reel once it's complete.
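Putting these fields together, here is a sketch of what one row looks like when read through Airtable's REST API, and the simple check the workflow performs before rendering. The record shape follows Airtable's standard API format; the values and the helper function are illustrative, not output from a real base:

```python
# Illustrative Airtable record from the "Social media reels" table, in the
# standard shape Airtable's REST API returns. All values are example data.
example_record = {
    "id": "recXXXXXXXXXXXXXX",  # Airtable record ID (placeholder)
    "fields": {
        "ID": 1,
        "Topic": "The benefits of remote work",
        "Language": "English",
        "Voice Name": "en-US-BrianMultilingualNeural",
        "Voice Model": "azure",
        "Image Model": "freepik-classic",
        "Subtitles Model": "default",
        "Subtitles Font": "Oswald Bold",
        "Status": "Todo",
        # "Result" stays empty until the workflow writes the video URL back
    },
}

def is_ready_to_render(record: dict) -> bool:
    """A row qualifies for rendering when it has a Topic and Status is 'Todo'."""
    fields = record.get("fields", {})
    return bool(fields.get("Topic")) and fields.get("Status") == "Todo"
```

This mirrors what the workflow does at trigger time: only rows marked "Todo" (with a topic filled in) are picked up.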

Get your Airtable personal access token

To allow n8n to connect with your Airtable base, you'll need a Personal Access Token (PAT). Follow these steps to obtain it:

  1. Go to your Airtable developer hub.
  2. Click "Create new token."
  3. Give your token a name (e.g., "n8n JSON2Video demos").
  4. Under "Scopes," add the following permissions:
    • data.records:read
    • data.records:write
    • schema.bases:read
  5. Under "Access," select "Add a base" and choose the "Entertainment" base (or whatever you named your cloned base).
  6. Click "Create token" and copy the generated token. Keep it safe, as you won't be able to see it again.
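Under the hood, n8n uses this token as a Bearer credential against Airtable's list-records endpoint. If you want to sanity-check your token outside n8n, this sketch builds the same query the "Airtable - Read" node performs (the base ID, table name, and token shown are placeholders):

```python
from urllib.parse import quote

AIRTABLE_API = "https://api.airtable.com/v0"

def build_todo_query(base_id: str, table_name: str, pat: str):
    """Return (url, headers, params) for Airtable's list-records endpoint,
    filtered to rows where Status = 'Todo'."""
    url = f"{AIRTABLE_API}/{base_id}/{quote(table_name)}"
    headers = {"Authorization": f"Bearer {pat}"}
    params = {"filterByFormula": "{Status} = 'Todo'"}
    return url, headers, params

# Placeholders: substitute your real base ID and Personal Access Token.
url, headers, params = build_todo_query(
    "appXXXXXXXXXXXXXX", "Social media reels", "patXXXXXXXXXXXXXX"
)
# To actually fetch the rows (requires the `requests` package and a real token):
# records = requests.get(url, headers=headers, params=params).json()["records"]
```

You can find your base ID (the `app...` string) in the URL of your Airtable base.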

Getting your API keys

Get your OpenAI API key

You'll need an OpenAI API key to allow n8n to connect with the OpenAI models for generating content. Follow these steps to obtain it:

  1. Log in to your OpenAI platform account.
  2. Navigate to the API keys section.
  3. Click on "Create new secret key."
  4. Give your key a name (e.g., "n8n Social Reels").
  5. Copy the generated key. Keep it secure, as you won't be able to view it again after closing the window.

Get your JSON2Video API key

To connect n8n to JSON2Video and enable video rendering, you'll need a JSON2Video API key. While a Primary API key works, we recommend creating a Secondary API key with "Render" permissions for better security practices.

  1. Log in to your JSON2Video dashboard.
  2. Go to the API Keys page.
  3. If you're using a free account, your Primary API key is already generated. Copy it.
  4. (Optional, for paid plans) To create a Secondary API key, click "Create new API key." Give it a descriptive name (e.g., "n8n Social Reels"), select "Render" as the permission role, and save. Copy the generated key.

Create the workflow

Now that you have your Airtable base ready and all necessary API keys, let's set up the n8n workflow.

Import the workflow

To quickly set up the n8n workflow, you can import our pre-built template:

  1. Download the workflow definition file.
  2. In your n8n instance, click on "Workflows" in the left sidebar.
  3. Click the "New" button in the top right, then select "Import from File..."
  4. Select the workflow.json file you just downloaded.
  5. The workflow will appear on your canvas.

Update the node settings

After importing the workflow, you need to configure the credentials for the Airtable, OpenAI, and JSON2Video nodes.

Update the Airtable nodes

The workflow contains two Airtable nodes: "Airtable - Read" and "Airtable - Update". Both need to be configured with your Personal Access Token (PAT).

  1. Double-click the "Airtable - Read" node (or "Airtable - Update").
  2. Under the "Credential to connect with" field, click "+ Create new credential".
  3. Choose "Access Token" as the credential type.
  4. Paste the Airtable Personal Access Token you obtained in the Get your Airtable personal access token section.
  5. Give the credential a descriptive name (e.g., "My Airtable PAT").
  6. Click "Create" to save the credential.
  7. Repeat these steps for the other Airtable node.

Update the OpenAI nodes

The workflow uses one OpenAI node ("OpenAI") to generate the script and image prompts.

  1. Double-click the "OpenAI" node.
  2. Under the "Credential to connect with" field, click "+ Create new credential".
  3. Choose "OpenAI API" as the credential type.
  4. Paste the OpenAI API key you obtained in the Get your OpenAI API key section.
  5. Give the credential a descriptive name (e.g., "My OpenAI API Key").
  6. Click "Create" to save the credential.

Update the JSON2Video nodes

The workflow uses two HTTP Request nodes to interact with JSON2Video: "Submit a new job" and "Check status". Both require your JSON2Video API key.

  1. Double-click the "Submit a new job" node.
  2. Under "Headers", locate the "x-api-key" parameter.
  3. Replace the "-- YOUR API KEY HERE --" placeholder with your JSON2Video API key.
  4. Click "Done" to save the changes.
  5. Repeat these steps for the "Check status" node.

The JSON payload passed to the JSON2Video API in the "Submit a new job" node uses a pre-designed JSON2Video template with ID hShBhvAYM4Xd9mq5pceu (you can explore it in the JSON2Video Visual Editor). It passes dynamic content as variables, including the voice name, voice model, image model, subtitles model, and font family from your Airtable data, and the generated scenes (voiceover text and image prompts) from OpenAI's output. The background video and color scheme are set statically in this template for a consistent social media reel aesthetic, but can be customized.
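As a sketch, the body that "Submit a new job" POSTs to the JSON2Video API looks roughly like this. The template ID comes from the tutorial, but the variable names below are assumptions; open the template in the JSON2Video Visual Editor to confirm the exact variable names it defines:

```python
JSON2VIDEO_TEMPLATE_ID = "hShBhvAYM4Xd9mq5pceu"  # template ID used in this tutorial

def build_render_payload(row: dict, scenes: list) -> dict:
    """Assemble the JSON body posted to https://api.json2video.com/v2/movies
    (with your key in the x-api-key header). `row` holds the Airtable fields;
    `scenes` is OpenAI's output: [{"voiceOverText": ..., "imagePrompt": ...}].
    Variable names here are assumptions -- check the template for the real ones."""
    return {
        "template": JSON2VIDEO_TEMPLATE_ID,
        "variables": {
            "voiceName": row["Voice Name"],
            "voiceModel": row["Voice Model"],
            "imageModel": row["Image Model"],
            "subtitlesModel": row["Subtitles Model"],
            "subtitlesFont": row["Subtitles Font"],
            "scenes": scenes,
        },
    }
```

The background video and color scheme need no variables because, as noted above, they are set statically in the template.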

Run your first automated video creation

Once all credentials are configured and saved, you're ready to run your first automated video creation!

  1. In your Airtable table ("Social media reels" in the "Entertainment" base), create a new row.
  2. Enter a descriptive topic in the Topic column (e.g., "The benefits of remote work").
  3. Choose a Language (e.g., "English").
  4. Select a Voice Name (e.g., "en-US-BrianMultilingualNeural").
  5. Select "azure" for Voice Model and "freepik-classic" for Image Model (these are free models).
  6. Set the Status to "Todo".
  7. Go back to your n8n workflow.
  8. Click on the "Test workflow" button in the bottom-center of the n8n interface.
  9. The workflow will begin executing. You'll see green indicators moving between nodes as each step completes. This process can take a few minutes as OpenAI generates the content and JSON2Video renders the video.
  10. Once the workflow finishes, check your Airtable base. The Status column for your row should change to "Done", and the Result column should be populated with the URL to your newly created social media reel.
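The wait in step 9 is what the "Check status" node handles: it polls JSON2Video until the render finishes. Here is a minimal polling sketch; the endpoint and the response shape (`{"movie": {"status": ..., "url": ...}}`) are assumptions drawn from the JSON2Video v2 API, so verify them against the API documentation:

```python
import json
import time
from urllib.request import Request, urlopen

STATUS_URL = "https://api.json2video.com/v2/movies"

def is_terminal(movie: dict) -> bool:
    """A render job is finished once its status is 'done' or 'error'."""
    return movie.get("status") in ("done", "error")

def wait_for_render(project_id: str, api_key: str, interval: int = 10) -> dict:
    """Poll JSON2Video for a submitted project until it finishes rendering.
    Returns the movie object, whose 'url' field holds the final video URL."""
    while True:
        req = Request(
            f"{STATUS_URL}?project={project_id}",
            headers={"x-api-key": api_key},
        )
        with urlopen(req) as resp:
            movie = json.load(resp).get("movie", {})
        if is_terminal(movie):
            return movie
        time.sleep(interval)
```

In the workflow, the URL returned here is what the "Airtable - Update" node writes into the Result column.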

Localizing your videos into other languages

One of the powerful features of this automation is its ability to localize your social media reels into different languages. This is crucial for reaching a global audience and making your content more accessible. The key to localization lies in correctly setting the language, choosing a compatible font, and selecting a matching voice in your Airtable base.

Example: creating a video in Arabic

Let's create a social media reel in Arabic to demonstrate the localization process. Arabic uses a non-Western script, which requires careful font selection.

  1. In your Airtable base, create a new row.
  2. In the Topic column, enter a topic (e.g., "Exploring the wonders of ancient Egypt").
  3. For the Language column, select "Arabic".
  4. For the Voice Name, choose an Arabic voice. For Azure, a good option is ar-EG-SalmaNeural. You can find a full list of supported Azure voices by language.
  5. For the Voice Model, keep "azure" (it's free!).
  6. For the Image Model, you can use "flux-schnell" (also free!).
  7. For the Subtitles Model, set it to "whisper" as it's the only model that supports Arabic.
  8. For the Subtitles Font, select Arial. This font supports the Arabic script, ensuring your subtitles render correctly.
  9. Set the Status to "Todo".
  10. Run the n8n workflow by clicking "Test workflow".

Once the workflow completes, you'll find a link to your new social media reel in Arabic in the Result column of your Airtable row!

Using alternative AI models

The default setup uses Azure for voiceovers and Freepik Classic/Flux Schnell for images, both of which are free to use within JSON2Video. However, JSON2Video supports other powerful AI models like ElevenLabs for voices and Flux Pro for images, which offer enhanced quality but consume extra credits. You can learn more about credit consumption in the JSON2Video documentation.

Using ElevenLabs

If you want to leverage the high-quality voices offered by ElevenLabs, you can easily switch the voice model in your Airtable base:

  1. In your Airtable row, change the Voice Model column to elevenlabs.
  2. In the Voice Name column, enter a supported ElevenLabs voice (e.g., "Daniel", "Serena"). You can find a list of available voices in the ElevenLabs Voice Library (requires login).

When you run the workflow, JSON2Video will use the specified ElevenLabs voice, and relevant credits will be deducted from your JSON2Video account for the voiceover generation.

Using Flux Pro

For more detailed and realistic AI-generated images, you can opt for the Flux Pro model:

  1. In your Airtable row, change the Image Model column to flux-pro.

Running the workflow now will instruct JSON2Video to use Flux Pro for image generation, consuming additional credits from your JSON2Video account.

Customizing your videos

The provided JSON2Video template is designed for versatility, allowing you to customize various aspects of your social media reels without deep knowledge of the JSON2Video API. You can achieve this by manipulating template variables, refining AI-generated content, or even directly editing the movie template for more advanced changes.

Using template variables

The JSON2Video movie template (ID hShBhvAYM4Xd9mq5pceu) defines multiple variables that allow for easy customization. These variables are passed from your n8n workflow to the template.

Refining the AI-Generated content

The core content of your reels (voiceovers and image prompts) is generated by OpenAI based on a "system prompt" within the n8n workflow. You can modify this prompt to customize the output and achieve different styles or tones for your videos.

The system prompt used in the OpenAI node is:

=Create a script of a social media video about the topic included below.

The video will be organized in scenes. Each scene has a voice over and an image.
The voice over text must be at least 20 words.
There should be not more than 4 scenes.
Your response must be in JSON format following this schema:
{
   "scenes": [{
      "voiceOverText": "",
      "imagePrompt": ""
    }]
}

The image prompt must be written in ENGLISH, being detailed and photo realistic. In the image prompt, you MUST AVOID describing any situation in the image that can be considered unappropriate (violence, disgusting, gore, sex, nudity, NSFW, etc) as it may be rejected by the AI service.

By editing this prompt in the "OpenAI" node's settings (under "Messages > System message"), you can influence the tone of the script, the number and length of the scenes, and the style of the image prompts.

For example, you could add instructions for a more humorous tone, request specific types of imagery, or adjust the minimum word count for voiceovers.
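Because the downstream JSON2Video template expects scenes in exactly the schema above, it is worth validating the model's reply before submitting a render job. A small validation sketch of that schema (the limits mirror the prompt's own rules):

```python
import json

def parse_scenes(raw: str) -> list:
    """Validate OpenAI's reply against the schema in the system prompt:
    a JSON object with a 'scenes' list of {voiceOverText, imagePrompt},
    and no more than 4 scenes."""
    data = json.loads(raw)
    scenes = data["scenes"]
    if not 1 <= len(scenes) <= 4:
        raise ValueError("expected between 1 and 4 scenes")
    for scene in scenes:
        if not scene.get("voiceOverText") or not scene.get("imagePrompt"):
            raise ValueError("each scene needs voiceOverText and imagePrompt")
    return scenes
```

If you loosen the prompt (say, allowing more scenes), adjust the checks to match so valid replies are not rejected.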

Editing the movie template

For advanced customization that goes beyond variable adjustments, you can duplicate and modify the underlying JSON2Video movie template itself. This allows for deep changes to the structure, timing, animations, and visual design of your videos. This requires a higher understanding of the JSON2Video API schema.

Follow these steps to edit the template:

  1. Open the provided movie template in the JSON2Video Visual Editor.
  2. From the top bar "Template" menu, click "Save template as..." to create a duplicate in your own account.
  3. Edit the template using the visual editor or directly by editing the JSON (accessible via "Template > Edit JSON"). You can change scene transitions, add new elements (like a different background video, text overlays, or components), adjust element timings, and much more. Refer to the JSON2Video documentation on customizing templates for detailed guidance.
  4. Once you've made your desired changes, from the "Template" menu, click "Show Template ID" to get the ID of your new, customized template.
  5. In your n8n workflow, double-click the "Submit a new job" node.
  6. In the "JSON Body" field, locate the "template": "hShBhvAYM4Xd9mq5pceu" line and replace hShBhvAYM4Xd9mq5pceu with your new template ID.
  7. Click "Done" to save the changes in n8n.

Now, every time you run the workflow, it will use your personalized template to render the social media reels.

Conclusion and next steps

Congratulations! You've successfully built an automated workflow to create dynamic social media reels using n8n, Airtable, OpenAI, and JSON2Video. You've learned how to:

    • Set up an Airtable base to manage your reel projects
    • Obtain and configure Airtable, OpenAI, and JSON2Video credentials in n8n
    • Run the workflow to turn a single topic row into a rendered video
    • Localize reels into other languages with matching voices and fonts
    • Switch to alternative AI models and customize the movie template

This tutorial provides a solid foundation for your video automation journey. From here, you can further enhance your workflow by exploring more advanced JSON2Video features, integrating with other platforms (like social media schedulers or analytics tools), or experimenting with different AI models and prompt engineering techniques. The possibilities for automated content creation are vast, and you're now equipped to explore them!

Published on July 7th, 2025

Author
Joaquim Cardona
Senior Internet business executive with more than 20 years of broad experience in Internet business, the media sector, digital marketing, online video and mobile technologies.