Introduction

"Would You Rather" videos have exploded in popularity across social media platforms like TikTok, YouTube Shorts, and Instagram Reels. Their addictive appeal lies in their simplicity: they present viewers with two challenging, often humorous or thought-provoking, choices, encouraging engagement as people debate their preferences in the comments. These short, interactive videos are a powerful way to spark conversation, build community, and drive organic reach, making them a fantastic content format for creators and brands alike.

Overview of the automation

This automation streamlines the creation of "Would You Rather" videos using a combination of powerful tools:

  • Airtable: the content hub where you define the topic, language, voice, font, and image settings for each video
  • Make.com: the orchestrator that connects the other services and runs the workflow
  • OpenAI: the content engine that generates the questions, options, image prompts, and voiceover texts
  • JSON2Video: the rendering service that turns the template and variables into the final video

Prerequisites

To follow this tutorial, you will need accounts and API keys for the following services:

  • Airtable (you will create a Personal Access Token)
  • Make.com (where the automation workflow runs)
  • OpenAI (you will need an API key)
  • JSON2Video (you will need an API key)

Build the automation

Let's dive into setting up the automation that will bring your "Would You Rather" videos to life.

Setting up the Airtable base

Clone the Airtable base

To get started, you'll need to clone the pre-built Airtable base that will serve as your content hub:

  1. Open the Airtable template in your browser.
  2. In the top-left corner, click the "Copy base" button next to the base name. A new window will open.
  3. Select the destination workspace in your Airtable account where you'd like to copy the base.

The "Would you rather" table in this base includes the following fields:

| Field name | Description |
| --- | --- |
| ID | Auto-generated unique identifier for each video entry. |
| Topic | The main subject or theme for the "Would you rather" questions (e.g., "Food", "Travel"). |
| Language | The target language for the video script (e.g., "English", "Spanish"). |
| Voice Name | The specific AI voice to be used for the voiceover (e.g., "en-US-RyanMultilingualNeural"). |
| Voice Model | The AI model for voice generation (e.g., "azure", "elevenlabs"). |
| Image Model | The AI model for image generation (e.g., "flux-schnell", "flux-pro"). |
| Font | The font family to be used in the video (e.g., "Protest Riot", "Noto Sans KR"). |
| Status | The current status of the video generation ("Todo", "In progress", "Done"). |
| Result | The URL of the generated video once it's complete. |

Get your Airtable personal access token

To allow Make.com to connect with your Airtable base, you'll need a Personal Access Token (PAT). Follow these steps to obtain it:

  1. Go to your Airtable developer hub.
  2. Click "Create new token."
  3. Give your token a name (e.g., "Make.com JSON2Video demos").
  4. Under "Scopes," add the following permissions:
    • data.records:read
    • data.records:write
    • schema.bases:read
  5. Under "Access," select "Add a base" and choose the "Entertainment" base or the name you gave to the base when you cloned it for this tutorial.
  6. Click "Create token" and copy the generated token. Keep it safe, as you won't be able to see it again.

Getting your API keys

Get your OpenAI API key

You'll need an API key from OpenAI to allow Make.com to generate the "Would you rather" content:

  1. Log in to your OpenAI platform dashboard.
  2. Navigate to the "API keys" section (usually found under your profile or settings).
  3. Click on "Create new secret key".
  4. Give your key a name (e.g., "Make.com Would You Rather").
  5. Copy the generated key. Make sure to save it somewhere secure, as you won't be able to view it again after closing the window.

Get your JSON2Video API key

A JSON2Video API key is required to authorize requests from Make.com to create videos:

  1. Log in to your JSON2Video dashboard.
  2. Go to the API Keys page.
  3. We recommend creating a "Secondary API key" for this integration with "Render" permissions. If you are on a free plan, you can use your "Primary API key".
  4. Copy your chosen API key.

Create the workflow

Import the workflow

To quickly set up the automation in Make.com, you can import the pre-built workflow:

  1. Log in to your Make.com account.
  2. Navigate to the "Scenarios" section.
  3. Click on "Create a new scenario" or select an existing one where you want to add this workflow.
  4. In the scenario editor, click on the "..." (more options) menu, usually found at the bottom or top of the canvas.
  5. Select "Import from File..."
  6. Upload the provided workflow definition file: workflow.json.
  7. The modules will appear on your canvas.

Update the module settings

Now, you need to configure the connections for each module in the imported workflow:

Update the Airtable modules

The workflow contains two Airtable modules: "Search Records" and "Update Records". Both need to be connected to your Airtable account using the Personal Access Token you generated:

  1. Double-click the first Airtable module ("Search Records").
  2. Under the "Connection" field, click "+ Create new connection".
  3. Choose "Access Token" as the credential type.
  4. Paste the Personal Access Token you obtained in the "Get your Airtable personal access token" section.
  5. Click "Save", then select the correct Base and Table as outlined in the "Setting up the Airtable base" section.
  6. Repeat the process for the second Airtable module ("Update Records").

Update the OpenAI module

The workflow has one OpenAI module ("Create a Chat Completion"). Configure it with your OpenAI API Key:

  1. Double-click the OpenAI module.
  2. Under the "Connection" field, click "+ Create new connection".
  3. Provide a name for your connection (e.g., "My OpenAI Connection").
  4. Paste the OpenAI API key you obtained in the "Get your OpenAI API key" section.
  5. Click "Save".

Update the JSON2Video modules

The workflow includes two JSON2Video modules: "Create a Movie from a Template ID" and "Wait for a Movie to Render". Both require your JSON2Video API key:

  1. Double-click the "Create a Movie from a Template ID" module.
  2. Under the "Connection" field, click "+ Create new connection".
  3. Provide a name for your connection (e.g., "My JSON2Video Connection").
  4. Paste the JSON2Video API key you obtained in the "Get your JSON2Video API key" section.
  5. Click "Save".
  6. Note that the JSON payload passed to the JSON2Video API references a pre-designed template with ID GSUiFX8nSbXwhWDHFWGp and passes dynamic content as variables, including the voice name, voice model, image model, questions, and font from your Airtable data and OpenAI's output (see the payload sketch after this list). The background video and color scheme are set statically in the template for a consistent "Would you rather" aesthetic.
  7. Repeat the process for the "Wait for a Movie to Render" module.
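
For reference, the request that Make.com sends to the JSON2Video API has roughly the shape below. The template ID is the real one used by this tutorial, but the variable names are illustrative assumptions on my part; the actual names must match the variables defined inside the template:

```json
{
    "template": "GSUiFX8nSbXwhWDHFWGp",
    "variables": {
        "voice_name": "en-US-RyanMultilingualNeural",
        "voice_model": "azure",
        "image_model": "flux-schnell",
        "font": "Protest Riot",
        "or_text": "OR",
        "like_and_subscribe_voiceover_text": "Like and subscribe for more!",
        "questions": [
            {
                "option1_text": "Fast food",
                "option1_image_prompt": "A McDonalds hamburger with French fries and soda drink",
                "option2_text": "Slow food",
                "option2_image_prompt": "A fresh salad and a stewed turkey",
                "option1_result": 15,
                "voiceover_text": "\"Fast food\" or \"Slow food\""
            }
        ]
    }
}
```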

Run your first automated video creation

Once all credentials are configured, you're ready to create your first automated video:

  1. In your Airtable table ("Would you rather"), add a new record.
  2. Fill in the "Topic" (e.g., "Healthy Habits"), "Language" (e.g., "English"), "Voice Name" (e.g., "en-US-RyanMultilingualNeural"), "Voice Model" (e.g., "azure"), "Image Model" (e.g., "flux-schnell"), and "Font" (e.g., "Protest Riot") columns.
  3. Set the "Status" column to "Todo".
  4. In Make.com, click on the "Run once" button at the bottom-center of the scenario editor.
  5. The workflow will execute step-by-step. You can observe the data flowing between modules.
  6. Once the execution is complete, return to your Airtable base. The "Status" for your record should be "Done", and the "Result" column should be populated with the URL to your newly created video.

Localizing your videos into other languages

One of the powerful features of this automation is its ability to generate videos in multiple languages. By simply adjusting a few fields in your Airtable base, you can localize your "Would You Rather" content, reaching a global audience.

The key steps for localization involve:

  1. Setting the target language in Airtable: This tells OpenAI which language to generate the questions and options in.
  2. Choosing a compatible font: Ensure the selected font supports the characters of your target language.
  3. Selecting a matching voice: Pick an AI voice that speaks your target language naturally.

Example: creating a video in Korean

Let's create a "Would you rather" video in Korean to demonstrate the localization process:

  1. In your Airtable table, add a new record.
  2. For the "Topic", enter something like "Daily Choices".
  3. For "Language", type "Korean". This instructs OpenAI to generate Korean questions and options.
  4. For "Voice Name", select a Korean voice supported by Azure, such as "ko-KR-HyunsuNeural". You can find a full list of Azure voices and their language support in the JSON2Video documentation.
  5. For "Voice Model", keep "azure" (or change to "elevenlabs" if you have a Korean ElevenLabs voice).
  6. For "Image Model", keep "flux-schnell" (image prompts are always in English, so no change needed here).
  7. For "Font", select "Noto Sans KR". This font supports Korean characters, ensuring your text renders correctly. The template uses Google Fonts, and "Noto Sans KR" is a suitable option for Korean.
  8. Set the "Status" to "Todo".
  9. Run the Make.com scenario again. Once complete, you'll find a new video in your Airtable "Result" column with Korean content.
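
For reference, OpenAI's response for such a record follows the same JSON schema shown later in this tutorial, but with Korean text everywhere except the image prompts. A made-up illustration (not actual output):

```json
{
    "like_and_subscribe_voiceover_text": "좋아요와 구독 잊지 마세요!",
    "or_text": "아니면",
    "questions": [
        {
            "option1_text": "아침형 인간",
            "option1_image_prompt": "A person jogging at sunrise in a city park",
            "option2_text": "저녁형 인간",
            "option2_image_prompt": "A person working late at night under a desk lamp",
            "option1_result": 40,
            "voiceover_text": "\"아침형 인간\" 아니면 \"저녁형 인간\""
        }
    ]
}
```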

Using alternative AI models

The default setup uses Azure for voiceovers and Flux Schnell for image generation. While these are efficient and often free within your JSON2Video plan, you might want to explore alternative AI models like ElevenLabs for voiceovers or Flux Pro for images to achieve different qualities or styles. Be aware that using ElevenLabs or Flux Pro typically consumes extra credits from your JSON2Video account.

Using ElevenLabs

The Airtable table includes a "Voice Model" column, allowing you to easily switch between voice AI models. To use "ElevenLabs" for your voiceovers:

  1. In your Airtable record, simply change the value in the "Voice Model" column to 'elevenlabs'.
  2. Then, in the "Voice Name" column, choose a supported ElevenLabs voice name (e.g., "Daniel", "Serena"). You can find a full list of supported ElevenLabs voices in the JSON2Video documentation.

The Make.com workflow will automatically detect these changes and instruct JSON2Video to use ElevenLabs for voice synthesis in your next video generation.
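
Under the hood, the result is equivalent to the template's voiceover being configured along these lines (a minimal sketch, assuming the template passes these variables straight into a standard JSON2Video voice element):

```json
{
    "type": "voice",
    "text": "\"Fast food\" or \"Slow food\"",
    "voice": "Daniel",
    "model": "elevenlabs"
}
```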

Using Flux Pro

Similarly, to use "Flux Pro" for image generation:

  1. In your Airtable record, simply change the value in the "Image Model" column to 'flux-pro'.

The Make.com scenario will then pass this setting to JSON2Video, which will use the Flux Pro model to generate the images for your "Would You Rather" options.
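
As with the voice model, this is equivalent to the template's option images being configured roughly like this (again a sketch, assuming the template forwards the variable to JSON2Video's AI image generation):

```json
{
    "type": "image",
    "prompt": "A fresh salad and a stewed turkey",
    "model": "flux-pro"
}
```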

Customizing your videos

This automation uses a pre-designed JSON2Video template, but you have several options to customize your videos further, from simple variable adjustments to deep structural changes.

Using template variables

The JSON2Video movie template (ID GSUiFX8nSbXwhWDHFWGp) defines multiple variables, allowing you to easily customize aspects of your videos without directly editing the JSON structure. These variables are passed from Make.com to JSON2Video.

The variables correspond to the dynamic values described earlier: the voice name, voice model, image model, and font taken from your Airtable record, plus the OpenAI-generated content (the questions with their options, image prompts, and results, the "OR" text, and the like-and-subscribe voiceover).

Refining the AI-Generated content

The core content (questions, options, image prompts, and voiceover text) is dynamically generated by OpenAI based on your provided topic and language. You can influence this generation by modifying the "system prompt" within the OpenAI module in your Make.com workflow. This prompt acts as instructions for the AI model.

The current system prompt used is:

You are an entertainment expert.

Create a "Would you rather"-style video script on the given topic.

The "Would you rather"-style videos show 2 options to choose to the viewer, give a few seconds to think and then show the percentage of users who chose each option. The two options must be difficult to choose from.

Examples:
- Would you rather...
    - Travel the world; or
    - Lay on a beach

- Would you rather...
    - Star Wars; or
    - Star Trek

- Would you rather...
    - Swim with sharks; or
    - Play with snakes

The objective is to get the viewer into a hard decision between 2 options. 

Return the output in this exact JSON format:

```json
{
    "like_and_subscribe_voiceover_text": "",
    "or_text": "OR",
    "questions": [
        {
          "option1_text": "Fast food",
          "option1_image_prompt": "A McDonalds hamburger with French fries and soda drink",
          "option2_text": "Slow food",
          "option2_image_prompt": "A fresh salad and a stewed turkey",
          "option1_result": 15,
          "voiceover_text": "\"Fast food\" or \"Slow food\""
        }
    ]
}
```

Requirements:
* Create 5 questions.
* Each question must have 2 short answer options (3 words maximum each).
* Each option has an image prompt to illustrate the option. Image prompt must be ALWAYS in ENGLISH.
* Each option has a voiceover text that can be the same as "optionX_text" or a little longer if needed.
* The "option1_result" is your best estimate of what percentage of a general audience will answer option #1. Use integer values only.
* The questions should be engaging for a broad audience.
* Questions, answers, voiceovers and the "or_text" must be in the given language.
* You can be funny and introduce randomly a joke with a weird question.
* If necessary, translate and improve the provided "like_and_subscribe_voiceover_text".
* In the property "or_text" translate the English word "OR" to the target language in uppercase, like in "Star Wars OR Star Trek".

Only return the JSON. Do not add explanations or introductions.

You can edit this prompt in the OpenAI module's settings in Make.com to guide the AI towards different styles, tones, or types of questions. For instance, you could instruct it to be more whimsical, specific to a niche audience, or to generate more challenging scenarios.

Editing the movie template

For advanced customization of the video's structure, timing, or animations, you can duplicate and modify the JSON2Video movie template itself. This requires a deeper understanding of the JSON2Video API and its JSON syntax.

Follow these steps:

  1. Open the provided movie template in the JSON2Video visual editor: https://json2video.com/tools/visual-editor/?template=GSUiFX8nSbXwhWDHFWGp.
  2. From the top bar "Template" menu, click "Save template as..." to create your own editable copy.
  3. Make your desired changes to the template's scenes, elements, timings, or animations. Refer to the JSON2Video API documentation for detailed information on properties like duration and timing, layering elements, and positioning; the sketch after this list shows the general shape of a movie JSON.
  4. Once you're satisfied with your edits, go to the "Template" menu again and click "Show Template ID" to retrieve the ID of your newly customized template.
  5. Finally, in your Make.com workflow, double-click the "Create a Movie from a Template ID" module. Replace the existing "Template ID" with your new template ID.
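
For orientation when editing the JSON directly, a JSON2Video movie definition follows the general shape sketched below. This is a minimal, hypothetical example for illustration only; the actual "Would you rather" template defines many more scenes, elements, and animations:

```json
{
    "resolution": "full-hd",
    "scenes": [
        {
            "elements": [
                {
                    "type": "text",
                    "style": "001",
                    "text": "Would you rather...",
                    "start": 0,
                    "duration": 3
                },
                {
                    "type": "voice",
                    "text": "Would you rather...",
                    "start": 0
                }
            ]
        }
    ]
}
```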

Now, all videos generated through your Make.com scenario will use your customized template.

Conclusion and next steps

Congratulations! You've successfully built an automated system to generate dynamic "Would You Rather" videos using Airtable, Make.com, OpenAI, and JSON2Video. You've learned how to set up your data source, connect various APIs, leverage AI for content generation, and automate the video creation process from start to finish. You also explored how to localize your content and customize video elements using template variables and by editing the movie template itself.

This tutorial provides a strong foundation for video automation. Feel free to explore other tutorials and experiment with different video formats, AI models, and customization options to unlock even more creative possibilities for your content creation!

Published on July 7th, 2025

Author
Joaquim Cardona
Senior Internet business executive with more than 20 years of broad experience in Internet business, the media sector, digital marketing, online video, and mobile technologies.