In today's content-driven world, short-form quiz videos are a fantastic way to engage your audience, but creating them manually is time-consuming. In this tutorial, we'll walk you through building a powerful, automated N8N workflow that turns a simple topic into a dynamic quiz video, ready for social media.

We will connect Airtable (as our data source), OpenAI (to generate the quiz content), and JSON2Video (to render the final video).

By the end, you'll have a fully automated system that can produce videos like these:

Examples of quiz videos generated with this tutorial

Input variables for each example video:

Example 1
  • Topic: "Cinema"
  • Language: "English"
  • Voice: "en-US-AmandaMultilingualNeural"
  • Voice Model: "azure"
  • Difficulty: "Expert"
  • Font: "Noto Sans"

Example 2
  • Topic: "World geography"
  • Language: "Korean"
  • Voice: "ko-KR-SunHiNeural"
  • Voice Model: "azure"
  • Difficulty: "Expert"
  • Font: "Noto Sans KR"

Example 3
  • Topic: "Cinema"
  • Language: "Japanese"
  • Voice: "ja-JP-NanamiNeural"
  • Voice Model: "azure"
  • Difficulty: "Easy"
  • Font: "Noto Sans JP"

Quiz video tutorial

What You'll Need

Before we begin, make sure you have accounts for the following services:

  • N8N: the automation platform that runs the workflow (cloud or self-hosted)
  • Airtable: our data source for quiz topics and settings
  • OpenAI: generates the quiz questions and voiceover scripts
  • JSON2Video: renders the final videos

The Big Picture: Our Workflow

The N8N workflow follows a logical sequence to automate video creation:

  1. Fetch a Task: The workflow starts by grabbing a new quiz topic from an Airtable base.
  2. Generate Content: It sends the topic to OpenAI and asks it to generate a complete quiz script, including questions, multiple-choice answers, and voiceover text, all in a structured JSON format.
  3. Render the Video: It sends a request to the JSON2Video API, using a pre-made template and populating it with the content generated by OpenAI.
  4. Wait and Check: Video rendering isn't instant. The workflow will pause, then periodically check the status of the rendering job until it's complete.
  5. Update and Finish: Once the video is ready, the workflow updates the Airtable record with the final video URL and marks the task as "Done". If an error occurs, it stops and reports the issue.

You can import the complete workflow into your N8N instance by downloading this file: quiz-videos-01-workflow.json

N8N workflow for quiz videos

Step-by-Step Guide

Let's break down how to configure each node in the N8N workflow.

Step 1: Airtable - The Data Source

First, we need a place to manage our video ideas. We'll use an Airtable base with a table named "Quizzes".

Create a table with the following fields (the names must match the expressions used later in the workflow):

  • Topic: the subject of the quiz (e.g., "Cinema")
  • Language: the language for the questions and voiceovers (e.g., "English")
  • Voice Name: the text-to-speech voice to use (e.g., "en-US-AmandaMultilingualNeural")
  • Voice Model: the voice provider, azure by default
  • Difficulty: e.g., "Easy", "Average" or "Expert"
  • Font: the Google Font used to render the video's text (e.g., "Noto Sans")
  • Status: the task state, starting as "Todo" and set to "Done" by the workflow
  • A URL field where the workflow stores the final video link

The Airtable node in N8N is configured to search this table for a record where the Status is Todo and fetch a single record to process.
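
If you are building the node from scratch, the search can be driven by a "Filter By Formula" expression. A minimal sketch, assuming your status field is named Status exactly as above:

{Status} = "Todo"

Combined with a record limit of 1, this makes the workflow pick up exactly one pending quiz per run.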

Have a look at the Airtable base used in this tutorial.

Airtable base for quiz videos

Step 2: OpenAI - The Content Engine

This node takes the topic, difficulty, and language from Airtable and uses them to prompt the OpenAI API. The prompt is carefully engineered to request a quiz script in a specific JSON format that our JSON2Video template expects.

The node is configured to use the Chat Models API and has "JSON Output" enabled to ensure the response is a clean, usable JSON object for the next step.
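
The exact JSON schema is dictated by the video template, but based on the variables used in the next step, the response looks roughly like this (the inner structure of each question is illustrative, not the template's exact contract):

{
    "topic": "Cinema",
    "intro_voiceover": "Welcome to today's expert cinema quiz! Think you know your movies? Let's find out.",
    "like_and_subscribe_voiceover": "Enjoyed the quiz? Like and subscribe for more!",
    "questions": [
        {
            "question": "Who directed the 1941 classic Citizen Kane?",
            "answers": ["Orson Welles", "Alfred Hitchcock", "Billy Wilder"],
            "correct": 1
        }
    ]
}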

Step 3: Submit a new job - Calling JSON2Video

This is where the magic happens. We use an HTTP Request node to send a POST request to the JSON2Video /movies endpoint, authenticating with your API key in the x-api-key header, which starts the rendering process. The JSON body of the request looks like this:

{
    "template": "cSTYFRZhXeBZotbwcjuM",
    "variables": {
        "voiceName": "{{ $('Airtable').item.json['Voice Name'] }}",
        "voiceModel": "{{ $('Airtable').item.json['Voice Model'] }}",
        "topic": "{{ $json.message.content.topic }}",
        "intro_voiceover": "{{ $json.message.content.intro_voiceover }}",
        "like_and_subscribe_voiceover": "{{ $json.message.content.like_and_subscribe_voiceover }}",
        "questions": {{ JSON.stringify($json.message.content.questions) }},
        "fontFamily": "{{ $('Airtable').item.json.Font }}"
    }
}

Here, we are not sending a full JSON script. Instead, we reference the template by its ID and pass all the dynamic content (from OpenAI and Airtable) as variables. Note that the questions array is injected with JSON.stringify() so it is embedded as a JSON array rather than coerced into a string. This keeps the workflow clean and separates the video's design from its content.

N8N JSON2Video node

Step 4: The Polling Loop (Wait, Check, Switch)

Since video rendering can take a few minutes, the "Submit a new job" request returns immediately with a project ID rather than a finished video. We need to check back periodically:

  1. Wait for 15 seconds: A simple node that pauses the workflow.
  2. Check status (HTTP Request): This node makes a GET request to the same /movies endpoint. It passes the project ID it received from the "Submit a new job" node as a query parameter.
  3. Switch: This node inspects the status field from the "Check status" response.
    • If the status is "done", it proceeds to the final step.
    • If the status is "error", it proceeds to an error-handling node.
    • Otherwise (e.g., "pending" or "running"), it loops back to the "Wait" node to check again.
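
For reference, the "Check status" response includes the status field the Switch node inspects, along with the video URL once rendering finishes. A simplified sketch (the real response carries additional fields):

{
    "movie": {
        "status": "done",
        "url": "https://assets.json2video.com/clients/abc123/renders/xyz.mp4"
    }
}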

Step 5: Handling the Outcome

When the Switch node reports "done", an Airtable update node writes the final video URL back to the record and sets its Status to "Done". If it reports "error" instead, the workflow stops and reports the issue so you can inspect the failed render.

Creating Videos in Other Languages

One of the most powerful features of this workflow is its ability to generate quizzes in multiple languages. Whether you want to create content for a Spanish, Korean, or Japanese-speaking audience, the process is straightforward with just a few adjustments.

The process involves two key steps: generating the content in the target language and ensuring the video can display the language's characters correctly.

Generating Localized Content

Our N8N workflow is already set up for localization. The key is the Language field in your Airtable base.

When the workflow runs, it pulls the value from this field (e.g., "Korean") and passes it directly to the OpenAI node. The prompt instructs OpenAI to generate all questions, answers, and voiceover scripts in that specific language.
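
In practice this means the prompt contains an instruction along the lines of "write every question, answer and voiceover in {{ Language }}" (illustrative wording; the exact prompt is included in the downloadable workflow file).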

You also need to provide a voice that can speak the target language. For example, for Korean, you might use a voice like ko-KR-SunHiNeural in the Voice Name field in Airtable. JSON2Video provides a wide catalog of voices for different languages.

Ensuring Correct Font Rendering

This is the most critical step for non-Latin languages. The default font in the video template ("Oswald") does not contain characters for languages like Japanese or Korean. If you try to render a video in these languages with the default font, you will see empty squares (□) or garbled text.

To fix this, you must specify a font that supports the character set of your target language. JSON2Video supports all Google Fonts, so you just need to provide the correct name.

You can set this in the Submit a new job HTTP Request node in N8N. Find the fontFamily variable in the JSON body and change its value.

Here are some recommended Google Fonts for different languages:

  • Japanese: "Noto Sans JP"
  • Korean: "Noto Sans KR"
  • Simplified Chinese: "Noto Sans SC"
  • Latin scripts (English, Spanish, Italian, etc.): "Noto Sans"

For Latin-based languages like Spanish or Italian, the default font will likely work, but you can still change it for stylistic purposes (e.g., to "Roboto" or "Lato").

Putting It All Together: A Japanese Quiz Example

Let's say you want to create a quiz about Japanese history.

1. In your Airtable row, you would set:

  • Topic: "Japanese history"
  • Language: "Japanese"
  • Voice Name: "ja-JP-NanamiNeural"
  • Voice Model: "azure"
  • Font: "Noto Sans JP"

2. In your N8N workflow, make sure the "Submit a new job" node passes a Japanese font in the fontFamily variable. The workflow already reads it from the Airtable Font field, but you can also hardcode it in the JSON body:

"variables": {
    ...
    "fontFamily": "Noto Sans JP"
}

With these settings, OpenAI will generate the quiz content in Japanese, and JSON2Video will use the "Noto Sans JP" font to correctly render the Japanese characters in the final video. It's that simple!

Customizing the Voice with ElevenLabs

While the default Microsoft Azure voices are high-quality and free to use, you might want a more unique or specific voice for your brand. This workflow is designed to seamlessly integrate with ElevenLabs, a popular AI voice generation service known for its realistic and expressive voices.

To use an ElevenLabs voice, you'll simply update the configuration in your Airtable base. The N8N workflow will handle the rest.

How to Use ElevenLabs Voices

Follow these two steps in your "Quizzes" table in Airtable for the row you want to process:

  1. Set the Voice Model: In the Voice Model column, change the value from azure to one of the following:

    • elevenlabs
    • elevenlabs-flash-v2-5 (a faster, more recent model)
  2. Specify the Voice Name: In the Voice Name column, enter the name of the ElevenLabs voice you want to use (e.g., "Rachel", "Daniel", "Serena") or a specific Voice ID from your ElevenLabs account.

That's it! You don't need to change anything in the N8N workflow itself. The "Submit a new job" node is already configured to read these values and pass them to the JSON2Video API, which will then use the specified ElevenLabs model and voice to generate the audio.
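
For example, with Voice Model set to elevenlabs and Voice Name set to "Rachel" in Airtable, the relevant part of the submitted JSON body would resolve to:

"variables": {
    ...
    "voiceName": "Rachel",
    "voiceModel": "elevenlabs"
}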

A Note on Credit Consumption

It's important to understand how using different voice models affects your JSON2Video credit usage. The default azure voices are included in your plan at no extra cost, while premium models such as elevenlabs consume additional credits on top of the standard rendering cost.

For a detailed breakdown of how credits are used for different AI services, please refer to the Credit Consumption page.

Example: Using an ElevenLabs voice

To create a quiz using a voice from ElevenLabs, you would set the following values in your Airtable row:

Input variables for the resulting video:
  • Topic: "US history"
  • Language: "Spanish"
  • Voice: "9oPKasc15pfAbMr7N6Gs"
  • Voice Model: "elevenlabs"
  • Difficulty: "Easy"
  • Font: "Noto Sans"

Customizing the Background and Colors

You can overhaul the visual style of your quiz videos without ever leaving your N8N workflow. The JSON2Video template is built with variables that control its core design elements, such as the animated background and color scheme.

By passing additional variables from the "Submit a new job" node, you can dynamically change the look and feel of each video.

Simple visual customization

The template uses several key variables for its design:

  • background_video: the URL of the looping animated video used as the backdrop
  • primary_color and secondary_color: the hex colors that drive the color scheme (for example, the answer boxes)
  • fontFamily: the Google Font used for the on-screen text

To customize these, you simply need to add them to the variables object in the JSON body of your "Submit a new job" HTTP Request node in N8N.

For example, let's change the theme to a light green palette:

  1. In your N8N workflow, open the parameters for the "Submit a new job" node.
  2. Navigate to the JSON Body field.
  3. Add the new key-value pairs for the background and colors inside the variables object as shown below.
{
    "template": "cSTYFRZhXeBZotbwcjuM",
    "variables": {
        "voiceName": "{{ $('Airtable').item.json['Voice Name'] }}",
        "voiceModel": "{{ $('Airtable').item.json['Voice Model'] }}",
        "topic": "{{ $json.message.content.topic }}",
        "intro_voiceover": "{{ $json.message.content.intro_voiceover }}",
        "like_and_subscribe_voiceover": "{{ $json.message.content.like_and_subscribe_voiceover }}",
        "questions": {{ JSON.stringify($json.message.content.questions) }},
        "fontFamily": "{{ $('Airtable').item.json.Font }}",
        "background_video": "https://json2video-test.s3.amazonaws.com/assets/videos/backgrounds/radial-dff2d8-c6dea6-1080x1920.mp4",
        "primary_color": "#608552",
        "secondary_color": "#c6dea6"
    }
}

With this change, the next video generated by the workflow will feature a new green animated background, and the answer boxes will use the specified light green colors. This simple modification allows you to adapt the video's design to match your brand, a specific theme, or just to keep your content looking fresh. Feel free to experiment with your own brand colors and find different looping background videos to make your quizzes truly unique!

Input variables for the resulting video:
  • Topic: "Every day science"
  • Language: "English"
  • Voice: "en-US-BrianMultilingualNeural"
  • Voice Model: "azure"
  • Difficulty: "Average"
  • Font: "Noto Sans"

Deeper redesign

The provided template ID (cSTYFRZhXeBZotbwcjuM) produces a specific visual style. But what if you want to make it your own? The best way to do this is to create your own copy of the template and modify it.

The JSON2Video API doesn't allow overriding template elements directly in the API call, so you need to duplicate it in your account first.

  1. Go to the JSON2Video Visual Editor.
  2. In the top menu, select Template > Open template by ID.
  3. Enter the ID cSTYFRZhXeBZotbwcjuM and click "Open".
  4. The template will load. Now, save your own copy by going to Template > Save template as. This will create a new template in your account with a new, unique ID.
  5. In the top menu, select Template > Show template ID to get the ID of your new copy.
  6. Go back to your N8N workflow and open the "Submit a new job" node. Replace the old template ID with your new one.
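
After the swap, the JSON body of the node starts like this (YOUR_NEW_TEMPLATE_ID stands for the ID you copied in step 5):

{
    "template": "YOUR_NEW_TEMPLATE_ID",
    ...
}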

Customization Ideas

Now that you have your own copy, you can use the Visual Editor or the JSON editor (Template > Edit JSON) to make changes. The template is built with variables for easy customization. Here are some ideas:

  • Rearrange or restyle the question and answer boxes
  • Add your logo or channel branding as a permanent overlay
  • Change the transition animations between questions
  • Adjust how long each question stays on screen to give viewers more (or less) time to answer

How Much Does Each Video Cost?

The primary cost is for the video rendering process. For standard videos like the ones this template produces, the consumption rate is 1 credit per second of the final video's duration. This calculation assumes you are using the default azure voice model, which is included in your plan at no extra cost.

The quiz videos generated by this workflow typically have a duration of about 1.5 minutes (or 90 seconds). Therefore, a single 90-second video will consume approximately 90 credits from your account.

The actual monetary cost per credit depends on the pricing plan you choose, as larger plans offer a better rate. As a general estimate, you can expect the cost for a 1.5-minute quiz video created with this workflow to be in the range of $0.20 to $0.50 (USD).

Conclusion

You now have a powerful, automated system for creating engaging quiz videos. By combining the flexibility of N8N with the content generation of OpenAI and the video rendering power of JSON2Video, you can scale your video production efforts with minimal manual work. Feel free to expand on this workflow, experiment with different templates, and integrate other services to fit your unique needs.

Published on June 18th, 2025

Author
Joaquim Cardona
Senior Internet business executive with more than 20 years of broad experience in Internet business, the media sector, digital marketing, online video and mobile technologies.