Introduction
"Would You Rather" videos have exploded in popularity across social media platforms like TikTok, YouTube Shorts, and Instagram Reels. Their addictive appeal lies in their simplicity: they present viewers with two challenging, often humorous or thought-provoking, choices, encouraging engagement as people debate their preferences in the comments. These short, interactive videos are a powerful way to spark conversation, build community, and drive organic reach, making them a fantastic content format for creators and brands alike.
Overview of the automation
This automation streamlines the creation of "Would You Rather" videos using a combination of powerful tools:
- The process begins in Airtable, where you define the topic, language, and other video-specific settings for each "Would You Rather" challenge.
- Make.com then picks up this data and sends the topic to OpenAI, which generates the full script and questions, including image prompts and suggested audience percentages.
- Finally, Make.com passes this information to JSON2Video, which leverages a pre-designed template to automatically generate the video, complete with AI-generated images and voiceovers.
- Once the video is ready, its URL is updated back in Airtable.
Prerequisites
To follow this tutorial, you will need accounts and API keys for the following services:
- Airtable: A flexible spreadsheet-database hybrid. We chose Airtable for this tutorial because its integration with no-code tools like Make.com is significantly smoother than with Google Sheets. Airtable also offers a generous free tier, making it accessible without a paid subscription.
- Make.com: A powerful automation platform to connect Airtable, OpenAI, and JSON2Video. Make.com also has a comprehensive free tier.
- OpenAI: For AI-powered content generation (the "Would You Rather" questions and options). OpenAI offers free credits to new users to test their models.
- JSON2Video: The API that automates video creation from your data. JSON2Video provides a free plan with 600 non-renewable credits, which are sufficient to test and create multiple videos.
Build the automation
Let's dive into setting up the automation that will bring your "Would You Rather" videos to life.
Setting the Airtable base
Clone the Airtable base
To get started, you'll need to clone the pre-built Airtable base that will serve as your content hub:
- Open the Airtable template in your browser.
- In the top-left corner, click the "Copy base" button next to the base name. A new window will open.
- Select the destination workspace in your Airtable account where you'd like to copy the base.
The "Would you rather" table in this base includes the following fields:
| Field name | Description |
|---|---|
| ID | Auto-generated unique identifier for each video entry. |
| Topic | The main subject or theme for the "Would you rather" questions (e.g., "Food", "Travel"). |
| Language | The target language for the video script (e.g., "English", "Spanish"). |
| Voice Name | The specific AI voice to be used for the voiceover (e.g., "en-US-RyanMultilingualNeural"). |
| Voice Model | The AI model for voice generation (e.g., "azure", "elevenlabs"). |
| Image Model | The AI model for image generation (e.g., "flux-schnell", "flux-pro"). |
| Font | The font family to be used in the video (e.g., "Protest Riot", "Noto Sans KR"). |
| Status | The current status of the video generation ("Todo", "In progress", "Done"). |
| Result | The URL of the generated video once it's complete. |
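For reference, when Make.com reads one of these rows through the Airtable API, the record arrives in Airtable's standard record format, roughly like this (illustrative values; the record ID and timestamp are placeholders):

```json
{
  "id": "recXXXXXXXXXXXXXX",
  "createdTime": "2025-07-07T10:00:00.000Z",
  "fields": {
    "ID": 1,
    "Topic": "Food",
    "Language": "English",
    "Voice Name": "en-US-RyanMultilingualNeural",
    "Voice Model": "azure",
    "Image Model": "flux-schnell",
    "Font": "Protest Riot",
    "Status": "Todo"
  }
}
```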
Get your Airtable personal access token
To allow Make.com to connect with your Airtable base, you'll need a Personal Access Token (PAT). Follow these steps to obtain it:
- Go to your Airtable developer hub.
- Click "Create new token."
- Give your token a name (e.g., "Make.com JSON2Video demos").
- Under "Scopes," add the following permissions:
data.records:read
data.records:write
schema.bases:read
- Under "Access," select "Add a base" and choose the "Entertainment" base or the name you gave to the base when you cloned it for this tutorial.
- Click "Create token" and copy the generated token. Keep it safe, as you won't be able to see it again.
Getting your API keys
Get your OpenAI API key
You'll need an API key from OpenAI to allow Make.com to generate the "Would you rather" content:
- Log in to your OpenAI platform dashboard.
- Navigate to the "API keys" section (usually found under your profile or settings).
- Click on "Create new secret key".
- Give your key a name (e.g., "Make.com Would You Rather").
- Copy the generated key. Make sure to save it somewhere secure, as you won't be able to view it again after closing the window.
Get your JSON2Video API key
A JSON2Video API key is required to authorize requests from Make.com to create videos:
- Log in to your JSON2Video dashboard.
- Go to the API Keys page.
- We recommend creating a "Secondary API key" for this integration with "Render" permissions. If you are on a free plan, you can use your "Primary API key".
- Copy your chosen API key.
Create the workflow
Import the workflow
To quickly set up the automation in Make.com, you can import the pre-built workflow:
- Log in to your Make.com account.
- Navigate to the "Scenarios" section.
- Click on "Create a new scenario" or select an existing one where you want to add this workflow.
- In the scenario editor, click on the "..." (more options) menu, usually found at the bottom or top of the canvas.
- Select "Import from File..."
- Upload the provided workflow definition file: workflow.json.
- The modules will appear on your canvas.
Update the module settings
Now, you need to configure the connections for each module in the imported workflow:
Update the Airtable modules
The workflow contains two Airtable modules: "Search Records" and "Update Records". Both need to be connected to your Airtable account using the Personal Access Token you generated:
- Double-click the first Airtable module ("Search Records").
- Under the "Connection" field, click "+ Create new connection".
- Choose "Access Token" as the credential type.
- Paste the Personal Access Token you obtained in the "Get your Airtable personal access token" section.
- Click "Save". Make sure to select the correct Base and Table as outlined in the "Setting the Airtable base" section.
- Repeat the process for the second Airtable module ("Update Records").
Update the OpenAI modules
The workflow has one OpenAI module ("Create a Chat Completion"). Configure it with your OpenAI API Key:
- Double-click the OpenAI module.
- Under the "Connection" field, click "+ Create new connection".
- Provide a name for your connection (e.g., "My OpenAI Connection").
- Paste the OpenAI API key you obtained in the "Get your OpenAI API key" section.
- Click "Save".
Update the JSON2Video modules
The workflow includes two JSON2Video modules: "Create a Movie from a Template ID" and "Wait for a Movie to Render". Both require your JSON2Video API key:
- Double-click the "Create a Movie from a Template ID" module.
- Under the "Connection" field, click "+ Create new connection".
- Provide a name for your connection (e.g., "My JSON2Video Connection").
- Paste the JSON2Video API key you obtained in the "Get your JSON2Video API key" section.
- Click "Save".
- Repeat the process for the "Wait for a Movie to Render" module.

The JSON payload passed to the JSON2Video API references a pre-designed JSON2Video template with ID `GSUiFX8nSbXwhWDHFWGp`. It passes dynamic content as variables, including the voice name, voice model, image model, questions, and font from your Airtable data and OpenAI's output. The background video and color scheme are set statically in this template for a consistent "Would you rather" aesthetic.
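Conceptually, the request body sent to the JSON2Video render endpoint looks like the sketch below. This is a simplified illustration, not the literal payload the module builds for you: the full list of template variables is covered in the "Using template variables" section, and the question shown is just an example.

```json
{
  "template": "GSUiFX8nSbXwhWDHFWGp",
  "variables": {
    "voice_model": "azure",
    "voice_name": "en-US-RyanMultilingualNeural",
    "image_model": "flux-schnell",
    "questions": [
      {
        "option1_text": "Fast food",
        "option1_image_prompt": "A hamburger with French fries and a soda drink",
        "option2_text": "Slow food",
        "option2_image_prompt": "A fresh salad and a stewed turkey",
        "option1_result": 15,
        "voiceover_text": "\"Fast food\" or \"Slow food\""
      }
    ]
  }
}
```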
Run your first automated video creation
Once all credentials are configured, you're ready to create your first automated video:
- In your Airtable table ("Would you rather"), add a new record.
- Fill in the "Topic" (e.g., "Healthy Habits"), "Language" (e.g., "English"), "Voice Name" (e.g., "en-US-RyanMultilingualNeural"), "Voice Model" (e.g., "azure"), "Image Model" (e.g., "flux-schnell"), and "Font" (e.g., "Protest Riot") columns.
- Set the "Status" column to "Todo".
- In Make.com, click on the "Run once" button at the bottom-center of the scenario editor.
- The workflow will execute step-by-step. You can observe the data flowing between modules.
- Once the execution is complete, return to your Airtable base. The "Status" for your record should be "Done", and the "Result" column should be populated with the URL to your newly created video.
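After a successful run, the "Update Records" module writes the outcome back to Airtable; in the API's terms, the updated fields amount to something like this (the video URL is a placeholder):

```json
{
  "Status": "Done",
  "Result": "https://assets.json2video.com/.../your-video.mp4"
}
```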
Localizing your videos into other languages
One of the powerful features of this automation is its ability to generate videos in multiple languages. By simply adjusting a few fields in your Airtable base, you can localize your "Would You Rather" content, reaching a global audience.
The key steps for localization involve:
- Setting the target language in Airtable: This tells OpenAI which language to generate the questions and options in.
- Choosing a compatible font: Ensure the selected font supports the characters of your target language.
- Selecting a matching voice: Pick an AI voice that speaks your target language naturally.
Example: creating a video in Korean
Let's create a "Would you rather" video in Korean to demonstrate the localization process:
- In your Airtable table, add a new record.
- For the "Topic", enter something like "Daily Choices".
- For "Language", type "Korean". This instructs OpenAI to generate Korean questions and options.
- For "Voice Name", select a Korean voice supported by Azure, such as "ko-KR-HyunsuNeural". You can find a full list of Azure voices and their language support in the JSON2Video documentation.
- For "Voice Model", keep "azure" (or change to "elevenlabs" if you have a Korean ElevenLabs voice).
- For "Image Model", keep "flux-schnell" (image prompts are always in English, so no change needed here).
- For "Font", select "Noto Sans KR". This font supports Korean characters, ensuring your text renders correctly. The template uses Google Fonts, and "Noto Sans KR" is a suitable option for Korean.
- Set the "Status" to "Todo".
- Run the Make.com scenario again. Once complete, you'll find a new video in your Airtable "Result" column with Korean content.
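Putting it all together, the fields for this Korean record would look like this:

```json
{
  "Topic": "Daily Choices",
  "Language": "Korean",
  "Voice Name": "ko-KR-HyunsuNeural",
  "Voice Model": "azure",
  "Image Model": "flux-schnell",
  "Font": "Noto Sans KR",
  "Status": "Todo"
}
```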
Using alternative AI models
The default setup uses Azure for voiceovers and Flux Schnell for image generation. While these are efficient and often free within your JSON2Video plan, you might want to explore alternative AI models like ElevenLabs for voiceovers or Flux Pro for images to achieve different qualities or styles. Be aware that using ElevenLabs or Flux Pro typically consumes extra credits from your JSON2Video account.
Using ElevenLabs
The Airtable table includes a "Voice Model" column, allowing you to easily switch between voice AI models. To use "ElevenLabs" for your voiceovers:
- In your Airtable record, simply change the value in the "Voice Model" column to 'elevenlabs'.
- Then, in the "Voice Name" column, choose a supported ElevenLabs voice name (e.g., "Daniel", "Serena"). You can find a full list of supported ElevenLabs voices in the JSON2Video documentation.
The Make.com workflow will automatically detect these changes and instruct JSON2Video to use ElevenLabs for voice synthesis in your next video generation.
Using Flux Pro
Similarly, to use "Flux Pro" for image generation:
- In your Airtable record, simply change the value in the "Image Model" column to 'flux-pro'.
The Make.com scenario will then pass this setting to JSON2Video, which will use the Flux Pro model to generate the images for your "Would You Rather" options.
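In both cases, the Airtable values flow through Make.com into the template variables unchanged. A record configured for ElevenLabs voiceovers and Flux Pro images would contribute a fragment like this to the render payload (a sketch; "Daniel" is one of the ElevenLabs voice names listed in the JSON2Video documentation):

```json
{
  "variables": {
    "voice_model": "elevenlabs",
    "voice_name": "Daniel",
    "image_model": "flux-pro"
  }
}
```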
Customizing your videos
This automation uses a pre-designed JSON2Video template, but you have several options to customize your videos further, from simple variable adjustments to deep structural changes.
Using template variables
The JSON2Video movie template (ID `GSUiFX8nSbXwhWDHFWGp`) defines multiple variables, allowing you to easily customize aspects of your videos without directly editing the JSON structure. These variables are passed from Make.com to JSON2Video.
Here are the available variables and their descriptions:
- `like_and_subscribe_voiceover_text`: The text for the voiceover encouraging likes and subscriptions at the end of the video.
- `voice_model`: The AI model to use for all voiceovers (e.g., "azure", "elevenlabs").
- `voice_name`: The specific AI voice to use for all voiceovers (e.g., "en-US-RyanMultilingualNeural").
- `image_model`: The AI model to use for generating images for the options (e.g., "flux-schnell", "flux-pro").
- `background_color1`, `background_color2`, `background_color3`: Hexadecimal color codes for the alternating background colors in the video.
- `or_text`: The text displayed between the two options (e.g., "OR", "O").
- `or_text_color`: The color of the "OR" text.
- `or_font_family`: The font family for the "OR" text.
- `options_text_color`: The color of the text for the "Would You Rather" options.
- `options_font_family`: The font family for the "Would You Rather" options text.
- `result_text_color`: The color of the percentage results text.
- `result_font_family`: The font family for the percentage results text.
- `questions`: An array of objects, each representing a "Would You Rather" question. Each object contains:
  - `option1_text`: Text for the first option.
  - `option1_image_prompt`: Image generation prompt for the first option.
  - `option2_text`: Text for the second option.
  - `option2_image_prompt`: Image generation prompt for the second option.
  - `option1_result`: Estimated percentage of people choosing option 1.
  - `voiceover_text`: Voiceover text for the question (e.g., '"Fast food" or "Slow food"').
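To make these concrete, here is what a full set of variables might look like for an English video. All values below are examples for illustration (the colors and fonts are not the template's actual defaults), and the single question shown stands in for the five the workflow normally generates:

```json
{
  "like_and_subscribe_voiceover_text": "Enjoyed these? Like and subscribe for more!",
  "voice_model": "azure",
  "voice_name": "en-US-RyanMultilingualNeural",
  "image_model": "flux-schnell",
  "background_color1": "#1E2A78",
  "background_color2": "#8A2BE2",
  "background_color3": "#0F9B8E",
  "or_text": "OR",
  "or_text_color": "#FFFFFF",
  "or_font_family": "Protest Riot",
  "options_text_color": "#FFFFFF",
  "options_font_family": "Protest Riot",
  "result_text_color": "#FFD700",
  "result_font_family": "Protest Riot",
  "questions": [
    {
      "option1_text": "Travel the world",
      "option1_image_prompt": "A backpacker gazing at a world map in an airport",
      "option2_text": "Lay on a beach",
      "option2_image_prompt": "A person relaxing on a sunny tropical beach",
      "option1_result": 55,
      "voiceover_text": "\"Travel the world\" or \"Lay on a beach\""
    }
  ]
}
```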
Refining the AI-generated content
The core content (questions, options, image prompts, and voiceover text) is dynamically generated by OpenAI based on your provided topic and language. You can influence this generation by modifying the "system prompt" within the OpenAI module in your Make.com workflow. This prompt acts as instructions for the AI model.
The current system prompt used is:
You are an entertainment expert.
Create a "Would you rather"-style video script on the given topic.
The "Would you rather"-style videos show 2 options to choose to the viewer, give a few seconds to think and then show the percentage of users who chose each option. The two options must be difficult to choose from.
Examples:
- Would you rather...
- Travel the world; or
- Lay on a beach
- Would you rather...
- Star Wars; or
- Star Trek
- Would you rather...
- Swim with sharks; or
- Play with snakes
The objective is to get the viewer into a hard decision between 2 options.
Return the output in this exact JSON format:
```json
{
"like_and_subscribe_voiceover_text": "",
"or_text": "OR",
"questions": [
{
"option1_text": "Fast food",
"option1_image_prompt": "A McDonalds hamburger with French fries and soda drink",
"option2_text": "Slow food",
"option2_image_prompt": "A fresh salad and a stewed turkey",
"option1_result": 15,
"voiceover_text": "\"Fast food\" or \"Slow food\""
}
]
}
```
Requirements:
* Create 5 questions.
* Each question must have 2 short answer options (3 words maximum each).
* Each option has an image prompt to illustrate the option. Image prompt must be ALWAYS in ENGLISH.
* Each option has a voiceover text that can be the same as "optionX_text", or a little longer if needed.
* The "option1_result" is your best estimate of what percentage of a general audience will answer option #1. Use integer values only.
* The questions should be engaging for a broad audience.
* Questions, answers, voiceovers and the "or_text" must be in given language.
* You can be funny and introduce randomly a joke with a weird question.
* If necessary, translate and improve the provided "like_and_subscribe_voiceover_text".
* In the property "or_text" translate the English word "OR" to the target language in uppercase, like in "Star Wars OR Star Trek".
Only return the JSON. Do not add explanations or introductions.
You can edit this prompt in the OpenAI module's settings in Make.com to guide the AI towards different styles, tones, or types of questions. For instance, you could instruct it to be more whimsical, specific to a niche audience, or to generate more challenging scenarios.
Editing the movie template
For advanced customization of the video's structure, timing, or animations, you can duplicate and modify the JSON2Video movie template itself. This requires a deeper understanding of the JSON2Video API and its JSON syntax.
Follow these steps:
- Open the provided movie template in the JSON2Video visual editor: https://json2video.com/tools/visual-editor/?template=GSUiFX8nSbXwhWDHFWGp.
- From the top bar "Template" menu, click "Save template as..." to create your own editable copy.
- Make your desired changes to the template's scenes, elements, timings, or animations. Refer to the JSON2Video API documentation for detailed information on properties like duration and timing, layering elements, and positioning.
- Once you're satisfied with your edits, go to the "Template" menu again and click "Show Template ID" to retrieve the ID of your newly customized template.
- Finally, in your Make.com workflow, double-click the "Create a Movie from a Template ID" module. Replace the existing "Template ID" with your new template ID.
Now, all videos generated through your Make.com scenario will use your customized template.
Conclusion and next steps
Congratulations! You've successfully built an automated system to generate dynamic "Would You Rather" videos using Airtable, Make.com, OpenAI, and JSON2Video. You've learned how to set up your data source, connect various APIs, leverage AI for content generation, and automate the video creation process from start to finish. You also explored how to localize your content and customize video elements using template variables and by editing the movie template itself.
This tutorial provides a strong foundation for video automation. Feel free to explore other tutorials and experiment with different video formats, AI models, and customization options to unlock even more creative possibilities for your content creation!
Published on July 7th, 2025
