Introduction
Creating a constant stream of high-quality, unique reels can be time-consuming and resource-intensive. This is where automation comes into play, enabling you to generate a multitude of compelling videos with minimal manual effort, keeping your content fresh and your audience hooked.
Overview of the automation
This tutorial will guide you through building an automation workflow using Make.com, Airtable, OpenAI, and JSON2Video to automatically generate social media reels.
- You'll start by defining your video topics and preferences in Airtable.
- Make.com will then trigger a scenario, pulling this data and sending the topic to OpenAI to generate scene-by-scene voiceover text and image prompts.
- Finally, Make.com will pass this generated content to JSON2Video, which will create the video reel, complete with AI-generated images, voiceovers, and subtitles, before updating Airtable with the direct link to your finished video.
Prerequisites
To follow this tutorial, you will need accounts and API access for the following tools:
- Airtable: A flexible database platform that excels at organizing information. We've chosen Airtable for this tutorial due to its seamless integration with no-code tools like Make.com, offering a much smoother experience compared to Google Sheets. Airtable also provides a generous free tier, making it accessible without requiring a paid subscription.
- Make.com: A powerful no-code automation platform (formerly Integromat) that connects various apps and services to automate workflows.
- OpenAI: An artificial intelligence company that provides advanced language models for text generation.
- JSON2Video: An API for programmatic video creation and customization.
Build the automation
Let's dive into building your automated social media reel generator. We'll start by setting up your Airtable base, then gather the necessary API keys, and finally, assemble the Make.com workflow that brings everything together.
Setting up the Airtable base
Clone the Airtable base
We've prepared an Airtable template to get you started quickly. Follow these steps to clone it:
- Open the Airtable template.
- Click on the "Copy base" button located next to the base name at the top of the page. A new window will open.
- Select the destination workspace within your Airtable account where you want to add the base.
The Airtable base contains a table named "Social media reels" with the following fields:
| Field name | Description |
|---|---|
| ID | An auto-numbered ID for each reel. |
| Topic | The topic for your social media reel. This will be sent to OpenAI to generate content. |
| Language | The target language for the voiceover and subtitles (e.g., "English", "Korean"). |
| Voice Name | The specific voice to use for the AI-generated voiceover (e.g., "en-US-BrianMultilingualNeural", "Jenny"). |
| Voice Model | The AI model for voice generation (`azure` or `elevenlabs`). |
| Image Model | The AI model for image generation (`freepik-classic`, `flux-schnell`, or `flux-pro`). |
| Subtitles Model | The AI model for subtitle transcription (`default` or `whisper`). |
| Subtitles Font | The font family for the subtitles (e.g., "Oswald Bold", "Noto Sans KR"). |
| Status | Tracks the processing status of the reel (`Todo`, `In progress`, `Done`). |
| Result | The URL of the generated video reel once it's complete. |
Get your Airtable personal access token
To allow Make.com to connect with your Airtable base, you'll need a Personal Access Token (PAT). Follow these steps to obtain it:
- Go to your Airtable developer hub.
- Click "Create new token."
- Give your token a name (e.g., "Make.com JSON2Video demos").
- Under "Scopes," add the following permissions:
data.records:read
data.records:write
schema.bases:read
- Under "Access," select "Add a base" and choose the "Entertainment" base (or whatever you named it when you cloned it for this tutorial).
- Click "Create token" and copy the generated token. Keep it safe, as you won't be able to see it again.
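For reference, Make.com presents this token to the Airtable REST API as a Bearer credential. The sketch below builds the equivalent "list records" request in Python; the base ID and token values are obvious placeholders, not real credentials:

```python
import urllib.parse

def build_search_request(base_id: str, table: str, token: str, formula: str):
    """Build the Airtable 'list records' request that the Search Records
    module performs, authenticating with a Personal Access Token."""
    url = (
        f"https://api.airtable.com/v0/{base_id}/"
        f"{urllib.parse.quote(table)}"
        f"?filterByFormula={urllib.parse.quote(formula)}"
    )
    headers = {"Authorization": f"Bearer {token}"}  # the PAT goes here
    return url, headers

# Placeholder base ID and token -- substitute your own values.
url, headers = build_search_request(
    "appXXXXXXXXXXXXXX", "Social media reels", "patXXXX", "{Status}='Todo'"
)
```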
Getting your API keys
Get your OpenAI API key
To allow Make.com to interact with OpenAI's models, you'll need an OpenAI API key:
- Go to the OpenAI API keys page.
- Click on "Create new secret key".
- Give your key a name (e.g., "JSON2Video Reel Generator").
- Copy the generated key and keep it safe. You won't be able to see it again after this step.
Get your JSON2Video API key
You'll need a JSON2Video API key to authenticate your requests and generate videos. We recommend creating a "Secondary API key" for this purpose, but using your "Primary API key" is also fine.
- Go to your JSON2Video get API key page. If you don't have an account, you'll need to sign up first.
- Once logged in, navigate to the API Keys dashboard page.
- To create a secondary API key (recommended for better security and management), click "Create new API key." For the purposes of this tutorial, ensure it has at least "Render" permissions.
- Copy your API key. Keep it secure, as you won't be able to retrieve it again.
Create the workflow
Import the workflow
To streamline the setup, you can import the pre-built Make.com workflow:
- Log in to your Make.com account.
- In the left sidebar, click on "Scenarios".
- At the top right, click on the "Create a new scenario" button.
- In the new scenario editor, click on the "More" (three dots) menu at the bottom-center of the canvas.
- Select "Import from File..."
- Upload the workflow.json file that was provided to you.
- The workflow will appear on your canvas. It consists of several modules that need to be configured.
Update the node settings
Now, let's configure each module in the imported workflow with your API keys and connections.
Update the Airtable modules
The workflow contains two Airtable modules: "Search Records" (the trigger) and "Update Records" (the final step). Both need to be connected to your Airtable account.
- Double-click the "Airtable - Search Records" module (the first module).
- Under the "Connection" field, click "+ Add" to create a new connection.
- In the connection pop-up, give your connection a descriptive name (e.g., "My Airtable Connection").
- In the "Personal Access Token" field, paste your Personal Access Token (PAT) you obtained in the "Get your Airtable personal access token" section.
- Click "Save".
- For the "Base", select the "Entertainment" base (or whatever you named your cloned base).
- For the "Table", select "Social media reels".
- Ensure the "Formula" is set to `{Status}='Todo'` to only process new requests.
- Click "OK".
- Repeat these steps for the "Airtable - Update Records" module, using the same Airtable connection you just created. For this module, ensure the "Table" is "Social media reels" and map the "Record ID" to the `id` from the "Search Records" module (`16.id`).
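Conceptually, the formula is just a per-record filter. A tiny Python sketch of what `{Status}='Todo'` selects from the records the module fetches (record shapes follow Airtable's usual `id`/`fields` layout):

```python
def is_pending(record: dict) -> bool:
    """Local equivalent of the Airtable formula {Status}='Todo':
    keep only records whose Status field equals 'Todo'."""
    return record.get("fields", {}).get("Status") == "Todo"

records = [
    {"id": "rec1", "fields": {"Topic": "The history of coffee", "Status": "Todo"}},
    {"id": "rec2", "fields": {"Topic": "Old reel", "Status": "Done"}},
]
pending = [r for r in records if is_pending(r)]
```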
Update the OpenAI modules
The "OpenAI (GPT-3) - Create a Completion" module needs your OpenAI API key.
- Double-click the "OpenAI (GPT-3) - Create a Completion" module.
- Under the "Connection" field, click "+ Add" to create a new connection.
- Give your connection a name (e.g., "My OpenAI Connection").
- Paste your OpenAI API key into the "API Key" field.
- Click "Save".
- Ensure the "Model" is set to `gpt-4o` (or a similar capable model if `gpt-4o` is not available or preferred).
- Verify that the "Messages" section is correctly configured to generate the video script from your Airtable topic.
- Click "OK".
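To make the module's role concrete, the sketch below approximates the Chat Completions request it issues. Make.com resolves the `{{16.Language}}` placeholder from the Airtable record; here that is mimicked with a simple string replace, and the abbreviated system prompt and use of `response_format` are assumptions for illustration:

```python
SYSTEM_PROMPT = (
    "Create a script of a social media video about the topic included below. "
    "The voiceover_text must be in {{16.Language}}."
)  # abbreviated version of the tutorial's full system prompt

def build_completion_payload(topic: str, language: str) -> dict:
    """Approximate shape of the Chat Completions request the Make.com
    OpenAI module sends for a given Airtable topic and language."""
    return {
        "model": "gpt-4o",
        "response_format": {"type": "json_object"},  # ask for strict JSON
        "messages": [
            {"role": "system",
             "content": SYSTEM_PROMPT.replace("{{16.Language}}", language)},
            {"role": "user", "content": topic},
        ],
    }

payload = build_completion_payload("The history of coffee", "English")
```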
Update the JSON2Video modules
The workflow uses two JSON2Video modules: "Social Media Reel" (within a Toolbox module) and "Wait for a Movie to Render". Both need your JSON2Video API key.
- Double-click the "JSON2Video - Social Media Reel" module.
- Under the "Connection" field, click "+ Add" to create a new connection.
- Give your connection a name (e.g., "My JSON2Video Connection").
- Paste your JSON2Video API key into the "API Key" field.
- Click "Save".
- Confirm that the "Template ID" is set to `hShBhvAYM4Xd9mq5pceu`. This ID points to a pre-designed JSON2Video template for social media reels (view in Visual Editor).
- This module dynamically passes content as variables to the template, including:
  - `scenes`: The array of image prompts and voiceover texts from OpenAI's output.
  - `voiceModel`: The voice model (Azure or ElevenLabs) from your Airtable data.
  - `voice`: The voice name from your Airtable data.
  - `imageAspectRatio`: Set statically to `vertical` for social media reels.
  - `imageModel`: The image generation model (Freepik Classic, Flux Schnell, or Flux Pro) from your Airtable data.
  - `subtitlesModel`: The subtitles transcription model (Default or Whisper) from your Airtable data.
  - `fontFamily`: The font for subtitles from your Airtable data.
- Click "OK".
- Repeat these steps for the "JSON2Video - Wait for a Movie to Render" module, using the same JSON2Video connection you just created. Ensure the "Project ID" is mapped to the output of the "Social Media Reel" module (`23.project`).
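For context, the underlying JSON2Video call pairs the template ID with the variables the module maps from Airtable and OpenAI. The sketch below assembles such a request; the endpoint, header name, and exact field names are assumptions based on this tutorial, not a literal dump of what Make.com sends:

```python
def build_render_request(api_key: str, scenes: list, record: dict):
    """Sketch of the movie-render request behind the 'Social Media Reel'
    module: template ID plus the variables mapped from an Airtable record."""
    body = {
        "template": "hShBhvAYM4Xd9mq5pceu",  # template ID from this tutorial
        "variables": {
            "scenes": scenes,                 # from OpenAI's output
            "voiceModel": record["Voice Model"],
            "voice": record["Voice Name"],
            "imageAspectRatio": "vertical",   # static, per the workflow
            "imageModel": record["Image Model"],
            "subtitlesModel": record["Subtitles Model"],
            "fontFamily": record["Subtitles Font"],
        },
    }
    headers = {"x-api-key": api_key, "Content-Type": "application/json"}
    return "https://api.json2video.com/v2/movies", headers, body

url, headers, body = build_render_request(
    "YOUR-API-KEY",
    [{"voiceOverText": "sample voiceover", "imagePrompt": "sample prompt"}],
    {"Voice Model": "azure", "Voice Name": "en-US-BrianMultilingualNeural",
     "Image Model": "flux-schnell", "Subtitles Model": "default",
     "Subtitles Font": "Oswald Bold"},
)
```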
Run your first automated video creation
Once all connections and module settings are configured, you're ready to create your first automated social media reel:
- In your Airtable base, go to the "Social media reels" table.
- Add a new record (row).
- In the "Topic" column, enter a topic for your reel, for example: `The history of coffee`.
- In the "Language" column, enter `English`.
- For "Voice Name", enter `en-US-BrianMultilingualNeural`.
- For "Voice Model", enter `Azure`.
- For "Image Model", enter `Flux Schnell`.
- For "Subtitles Model", enter `Default`.
- For "Subtitles Font", enter `Oswald Bold`.
- Ensure the "Status" column is set to `Todo`.
- Return to your scenario in Make.com.
- At the bottom-left of the canvas, click the "Run once" button.
- The workflow will now execute. Observe the modules light up as data flows through them.
- Once the execution is complete (which may take a few minutes as JSON2Video renders the video), check your Airtable base.
- The "Status" for your record should change to "Done", and the "Result" column will be populated with a URL to your newly generated social media reel.
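The waiting step of the workflow behaves like a simple polling loop: check the render status, sleep, and repeat until the movie is done. The sketch below models that logic with an injectable status callable (the response shape is an assumption, and a fake status source stands in for the real JSON2Video endpoint):

```python
import time

def wait_for_render(check_status, poll_seconds=5.0, max_polls=120):
    """Conceptual stand-in for the 'Wait for a Movie to Render' module:
    poll a status callable until the render finishes, then return the URL.
    `check_status` returns dicts like {"status": "running"} or
    {"status": "done", "url": "..."}."""
    for _ in range(max_polls):
        result = check_status()
        if result["status"] == "done":
            return result["url"]
        time.sleep(poll_seconds)
    raise TimeoutError("movie did not finish rendering in time")

# Fake status source standing in for the real status endpoint.
responses = iter([
    {"status": "running"},
    {"status": "done", "url": "https://example.com/reel.mp4"},
])
video_url = wait_for_render(lambda: next(responses), poll_seconds=0)
```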
Localizing your videos into other languages
One of the powerful features of this automation is its ability to localize videos into different languages. This involves ensuring that the AI-generated text, voiceover, and subtitles are all in sync with your target language.
To localize your videos, you'll need to:
- Set the target language in Airtable: The "Language" field in your Airtable base is crucial. This value is passed to the OpenAI prompt, instructing it to generate voiceover text in the specified language.
- Choose a compatible font: If your chosen language uses a non-Latin script (e.g., Korean, Japanese, Arabic), you must select a "Subtitles Font" that supports that character set. Generic fonts like "Arial" may not display correctly.
- Select a matching voice: The "Voice Name" and "Voice Model" in Airtable need to correspond to a voice available for your target language in JSON2Video's AI voice catalog.
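If you localize often, it can help to keep the voice/font pairings for each language in one place. The lookup below is purely hypothetical scaffolding, seeded with the two combinations used in this tutorial:

```python
# Hypothetical lookup pairing each target language with a matching Azure
# voice and a subtitle font that supports its script (values taken from
# the examples in this tutorial; extend with your own languages).
LOCALE_SETTINGS = {
    "English": {"voice": "en-US-BrianMultilingualNeural", "font": "Oswald Bold"},
    "Korean": {"voice": "ko-KR-HyunsuNeural", "font": "Noto Sans KR"},
}

def settings_for(language: str) -> dict:
    """Return the voice/font pair for a language, failing loudly if
    no localization has been configured for it."""
    try:
        return LOCALE_SETTINGS[language]
    except KeyError:
        raise ValueError(f"No voice/font configured for {language!r}")
```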
Example: creating a video in Korean
Let's create a social media reel in Korean. This will highlight the importance of selecting the correct font for non-Western character sets:
- In your Airtable base, add a new record.
- In the "Topic" column, enter something like: `The US Declaration of Independence`.
- In the "Language" column, enter `Korean`.
- For "Voice Name", you'll need a Korean voice. A suitable Azure voice would be `ko-KR-HyunsuNeural`. (You can find more Korean voices in the Azure voice catalog.)
- For "Voice Model", enter `Azure`.
- For "Image Model", enter `Flux Schnell`.
- For "Subtitles Model", enter `Default`.
- For "Subtitles Font", it's crucial to select a font that supports Korean characters, such as `Noto Sans KR`. (You can explore more supported fonts for subtitles in the JSON2Video documentation.)
- Set the "Status" to `Todo`.
- Run your Make.com scenario.
After the workflow completes, check the "Result" URL in Airtable. You should now have a social media reel about the US Declaration of Independence, with Korean voiceover and Korean subtitles correctly displayed.
Using alternative AI models
The workflow is configured to use default AI models that are either free or cost-effective. However, JSON2Video supports a variety of AI models for voiceovers and image generation, some of which may consume extra credits but offer different qualities or features. You can check the Credit consumption page for more details on pricing.
Using ElevenLabs
If you prefer using ElevenLabs for voiceovers due to their high-quality, natural-sounding voices, you can easily switch. The workflow is already set up to allow you to specify the "Voice Model" in Airtable:
- In your Airtable base, in the "Voice Model" column, change the value from `Azure` to `ElevenLabs` (or `ElevenLabs Flash v2.5` if you prefer the faster model).
- In the "Voice Name" column, provide a supported ElevenLabs voice name (e.g., `Adam`, `Rachel`, `Daniel`). You can find a list of voices in the ElevenLabs Voices Library (you need to be logged in).
- Set the "Status" to `Todo` and run your Make.com scenario.
JSON2Video will now use ElevenLabs to generate the voiceover, consuming extra credits per minute as detailed in the credit consumption documentation.
Using Flux Pro
Similarly, for image generation, the default "Flux Schnell" model is free to use. If you need higher quality, more realistic images, you can switch to "Flux Pro".
- In your Airtable base, in the "Image Model" column, change the value from `Flux Schnell` to `Flux Pro`.
- Set the "Status" to `Todo` and run your Make.com scenario.
Using "Flux Pro" will consume extra credits per image, as outlined in the credit consumption documentation.
Customizing your videos
The provided JSON2Video template is designed for quick setup, but you can deeply customize your videos to match your specific branding and content needs. JSON2Video offers several ways to achieve this, from simple variable changes to editing the underlying template structure.
Using template variables
The JSON2Video movie template used in this tutorial defines multiple variables that allow for easy customization without needing to delve into the complex JSON structure. These variables are controlled through the Make.com module that calls the template.
Here are the available variables and their descriptions:
- `scenes`: An array of objects, where each object defines a scene with an `imagePrompt` and `voiceOverText`. This is dynamically generated by OpenAI.
- `musicURL`: An optional URL to a background music audio file. It will be trimmed to the video's duration.
- `musicVolume`: The volume of the background music track. Options range from Silent (0) to Louder than original (1.5). Default is 0.2 (Low).
- `logoURL`: An optional URL to your logo image.
- `logoPosition`: The position of the logo on the video. Options include Hidden, Top left, Top right, Bottom left, and Bottom right. Default is Hidden.
- `voiceModel`: The AI model to use for generating voiceovers (Azure, ElevenLabs, ElevenLabs Flash v2.5).
- `voiceConnectionID`: Your JSON2Video Connection ID if you want to use your own API key for the voice model.
- `voice`: The specific voice name to use for the voiceover, depending on the chosen Voice Model.
- `imageAspectRatio`: The aspect ratio of the AI-generated image (Vertical, Horizontal, or Squared). Default is Vertical.
- `imageModel`: The AI model to use for generating images (Freepik Classic, Flux Schnell, Flux Pro).
- `subtitlesModel`: The transcription model for subtitles (Default, Whisper, or No subtitles).
- `fontFamily`: The font family for the subtitles. Default is Oswald Bold.
- `fontURL`: An optional URL to your custom TTF font file for subtitles.
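Several of these variables have documented defaults (music volume 0.2, logo hidden, vertical aspect ratio, Oswald Bold subtitles). If you script against the template yourself, a merge like the following sketch applies those defaults to a partial variable set:

```python
# Defaults taken from the variable descriptions above; anything you pass
# explicitly overrides the default.
TEMPLATE_DEFAULTS = {
    "musicVolume": 0.2,
    "logoPosition": "Hidden",
    "imageAspectRatio": "Vertical",
    "fontFamily": "Oswald Bold",
}

def with_defaults(variables: dict) -> dict:
    """Merge caller-supplied template variables over the defaults."""
    return {**TEMPLATE_DEFAULTS, **variables}

resolved = with_defaults({"fontFamily": "Noto Sans KR"})
```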
Refining the AI-Generated content
The core content (voiceover text and image prompts) in this workflow is generated by OpenAI's AI model based on the "Topic" you provide in Airtable. You can influence the AI's output by modifying the system prompt within the OpenAI module in Make.com.
Here is the "system prompt" used in the OpenAI module as a reference:
    Create a script of a social media video about the topic included below.
    The video will be organized in scenes. Each scene has a voice over and an image.
    The voice over text must be at least 20 words.
    There should be not more than 4 scenes.
    Your response must be in JSON format following this schema:
    {
      "scenes": [{
        "voiceOverText": "",
        "imagePrompt": ""
      }]
    }
    The image prompt must be written in ENGLISH, being detailed and photo realistic. In the image prompt, you MUST AVOID describing any situation in the image that can be considered unappropriate (violence, disgusting, gore, sex, nudity, NSFW, etc) as it may be rejected by the AI service.
    The voiceover_text must be in {{16.Language}}.
By editing this prompt, you can instruct the AI to generate more specific types of content, set different length requirements, or alter the tone and style of the voiceover and image prompts. For example, you could ask for more scenes, a different image style (e.g., "cartoonish" instead of "photo realistic"), or a specific narrative arc.
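Because the prompt pins down a strict output contract (JSON, at most 4 scenes, at least 20 words of voiceover per scene), you can sanity-check the model's response before sending it on to JSON2Video. A minimal Python validator, sketched under those assumptions:

```python
import json

def validate_script(raw: str) -> list:
    """Check OpenAI's response against the constraints the system prompt
    asks for: a 'scenes' array of at most 4 scenes, each with a non-empty
    imagePrompt and a voiceOverText of at least 20 words."""
    data = json.loads(raw)
    scenes = data["scenes"]
    assert len(scenes) <= 4, "too many scenes"
    for scene in scenes:
        assert scene["imagePrompt"].strip(), "missing image prompt"
        assert len(scene["voiceOverText"].split()) >= 20, "voiceover too short"
    return scenes

sample = json.dumps({"scenes": [{
    "voiceOverText": " ".join(["coffee"] * 25),
    "imagePrompt": "A photorealistic cup of steaming coffee on a wooden table",
}]})
scenes = validate_script(sample)
```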
Editing the movie template
For more advanced customization, you can duplicate the provided JSON2Video movie template and make deep changes to its structure, timing, animations, and visual elements. This requires deeper knowledge of the JSON2Video API, its JSON syntax, and its more advanced features.
Follow these steps to edit the movie template:
- Open the movie template in the JSON2Video Visual Editor.
- From the top bar "Template" menu, click "Save template as..." to create a new, editable copy.
- Edit the template using the visual editor or by directly modifying the JSON. For example, you could change the layout of text, add new animated components (component element), modify transition effects between scenes (scene object), or even add elements that persist across all scenes (layering).
- Once you've made your changes, from the "Template" menu, click "Show Template ID" to get the ID of your new, customized template.
- Return to your Make.com scenario. Double-click the "JSON2Video - Social Media Reel" module.
- Replace the existing "Template ID" (`hShBhvAYM4Xd9mq5pceu`) with your new template ID.
- Click "OK" and save your Make.com scenario.
Now, any new reels generated by your workflow will use your customized template, reflecting all the deeper changes you've made.
Conclusion and next steps
Congratulations! You have successfully built an automated system to generate social media reels using Make.com, Airtable, OpenAI, and JSON2Video. You've learned how to connect various services, leverage AI for content generation, and automate the video creation process from start to finish.
You can now effortlessly produce engaging and dynamic social media content at scale, saving significant time and resources. Consider exploring other JSON2Video tutorials to further expand your video automation capabilities and create even more diverse and sophisticated video content.
Published on July 7th, 2025
