In today's content-driven world, short-form videos like quizzes are a fantastic way to engage your audience. But creating them manually can be time-consuming. In this tutorial, we'll walk you through building a powerful, automated workflow using N8N that generates dynamic quiz videos from a simple topic, ready for social media.
We will connect Airtable (as our data source), OpenAI (to generate the quiz content), and JSON2Video (to render the final video).
By the end, you'll have a fully automated system that can produce videos like these:
Examples of quiz videos generated with this tutorial (each example pairs the input variables with the resulting video).
What You'll Need
Before we begin, make sure you have accounts for the following services:
- N8N: A free and open-source workflow automation tool. You can use N8N Cloud or self-host it.
- Airtable: To store your video topics and track video status.
- OpenAI: To programmatically generate quiz questions and voiceover scripts. You'll need an OpenAI API key.
- JSON2Video: The video rendering API. You'll need a JSON2Video account to get your API key.
The Big Picture: Our Workflow
The N8N workflow follows a logical sequence to automate video creation:
- Fetch a Task: The workflow starts by grabbing a new quiz topic from an Airtable base.
- Generate Content: It sends the topic to OpenAI and asks it to generate a complete quiz script, including questions, multiple-choice answers, and voiceover text, all in a structured JSON format.
- Render the Video: It sends a request to the JSON2Video API, using a pre-made template and populating it with the content generated by OpenAI.
- Wait and Check: Video rendering isn't instant. The workflow will pause, then periodically check the status of the rendering job until it's complete.
- Update and Finish: Once the video is ready, the workflow updates the Airtable record with the final video URL and marks the task as "Done". If an error occurs, it stops and reports the issue.
You can import the complete workflow into your N8N instance by downloading this file: quiz-videos-01-workflow.json

Step-by-Step Guide
Let's break down how to configure each node in the N8N workflow.
Step 1: Airtable - The Data Source
First, we need a place to manage our video ideas. We'll use an Airtable base with a table named "Quizzes".
Create a table with the following fields:
- Topic (Single line text): The subject of the quiz (e.g., "World Capitals").
- Difficulty (Single select): Options like "Easy", "Average", "Hard".
- Language (Single line text): The language for the quiz (e.g., "English", "Spanish").
- Voice Name (Single line text): The JSON2Video voice to use (e.g., "en-US-EmmaMultilingualNeural").
- Voice Model (Single line text): The voice model, usually "azure".
- Status (Single select): Options "Todo", "In progress", "Done".
- Result (URL): This field will store the link to the final video.
- Font (Single line text): The Google Font to use for the video's on-screen text (e.g., "Oswald"). The "Submit a new job" node reads this field into the fontFamily variable.
The Airtable node in N8N is configured to search this table for a record where the Status is `Todo` and fetch a single record to process.
Have a look at the Airtable base used in this tutorial.
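For reference, a record that is ready for processing might look like this (the values are hypothetical):

```json
{
  "Topic": "World Capitals",
  "Difficulty": "Easy",
  "Language": "English",
  "Voice Name": "en-US-EmmaMultilingualNeural",
  "Voice Model": "azure",
  "Font": "Oswald",
  "Status": "Todo",
  "Result": ""
}
```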

Step 2: OpenAI - The Content Engine
This node takes the topic, difficulty, and language from Airtable and uses them to prompt the OpenAI API. The prompt is carefully engineered to request a quiz script in a specific JSON format that our JSON2Video template expects.
The node is configured to use the Chat Models API and has "JSON Output" enabled to ensure the response is a clean, usable JSON object for the next step.
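The exact schema is defined by your prompt, but the downstream nodes expect at least the fields referenced later in the workflow: topic, intro_voiceover, like_and_subscribe_voiceover, and a questions array. A plausible response might look like this (the per-question field names are illustrative; match them to whatever your prompt and the template define):

```json
{
  "topic": "World Capitals",
  "intro_voiceover": "Think you know your world capitals? Let's find out!",
  "like_and_subscribe_voiceover": "Enjoyed the quiz? Like and subscribe for more!",
  "questions": [
    {
      "question": "What is the capital of Australia?",
      "answers": ["Sydney", "Canberra", "Melbourne", "Perth"],
      "correct_answer": "Canberra",
      "voiceover": "The correct answer is Canberra."
    }
  ]
}
```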
Step 3: Submit a new job - Calling JSON2Video
This is where the magic happens. We use an HTTP Request node to call the JSON2Video API and start the rendering process.
- Method: `POST`
- URL: `https://api.json2video.com/v2/movies`
- Send Headers: Enabled, with `x-api-key` set to your JSON2Video API key.
- Specify Body: JSON
- JSON Body:
```json
{
  "template": "cSTYFRZhXeBZotbwcjuM",
  "variables": {
    "voiceName": "{{ $('Airtable').item.json['Voice Name'] }}",
    "voiceModel": "{{ $('Airtable').item.json['Voice Model'] }}",
    "topic": "{{ $json.message.content.topic }}",
    "intro_voiceover": "{{ $json.message.content.intro_voiceover }}",
    "like_and_subscribe_voiceover": "{{ $json.message.content.like_and_subscribe_voiceover }}",
    "questions": {{ JSON.stringify($json.message.content.questions) }},
    "fontFamily": "{{ $('Airtable').item.json.Font }}"
  }
}
```
Here, we are not sending a full JSON script. Instead, we reference the template by its ID and pass all the dynamic content (from OpenAI and Airtable) as variables. This keeps the workflow clean and separates the video's design from its content.
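The API's reply to this call includes a project ID that the rest of the workflow uses to track the rendering job. The response is a small JSON object along these lines (shape assumed; field names may vary slightly):

```json
{
  "success": true,
  "project": "abcd1234"
}
```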

Step 4: The Polling Loop (Wait, Check, Switch)
Since video rendering can take a few minutes, we can't just wait for the previous node to finish. Instead, we need to check back periodically (a plain-JavaScript sketch of this loop follows the list below).
- Wait for 15 seconds: A simple node that pauses the workflow.
- Check status (HTTP Request): This node makes a `GET` request to the same `/movies` endpoint. It passes the `project` ID it received from the "Submit a new job" node as a query parameter.
- Switch: This node inspects the `status` field from the "Check status" response.
  - If the status is "done", it proceeds to the final step.
  - If the status is "error", it proceeds to an error-handling node.
  - Otherwise (e.g., "pending" or "running"), it loops back to the "Wait" node to check again.
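Here is that same poll-until-done logic as a minimal plain-JavaScript sketch, in case it helps to see it outside of N8N. It assumes the status endpoint returns the job status and, once finished, the final video URL; the exact field path (`movie.status` here) is an assumption, so check your actual response:

```javascript
// Minimal polling sketch (Node.js 18+, which ships a global fetch).
const API_KEY = "YOUR_JSON2VIDEO_API_KEY"; // placeholder

async function waitForRender(projectId) {
  while (true) {
    const res = await fetch(
      `https://api.json2video.com/v2/movies?project=${projectId}`,
      { headers: { "x-api-key": API_KEY } }
    );
    const data = await res.json();
    const status = data.movie?.status; // assumed field path

    if (status === "done") return data.movie.url; // success: final video URL
    if (status === "error") throw new Error("Rendering failed"); // error branch
    // "pending" / "running": pause 15 seconds, then check again
    await new Promise((resolve) => setTimeout(resolve, 15000));
  }
}
```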
Step 5: Handling the Outcome
- Success (Airtable Update): If the video is rendered successfully, another Airtable node updates the original record. It sets the Status to "Done" and populates the Result field with the video URL returned by JSON2Video (a sketch of the field mapping follows this list).
- Failure (Stop and Error): If the rendering fails, the workflow stops and displays the error message from JSON2Video, making it easy to debug what went wrong.
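In the success branch, the update node's field mapping boils down to two values. The expression for the video URL below is hypothetical and depends on the exact shape of the "Check status" response:

```json
{
  "Status": "Done",
  "Result": "{{ $json.movie.url }}"
}
```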
Creating Videos in Other Languages
One of the most powerful features of this workflow is its ability to generate quizzes in multiple languages. Whether you want to create content for a Spanish, Korean, or Japanese-speaking audience, the process is straightforward with just a few adjustments.
The process involves two key steps: generating the content in the target language and ensuring the video can display the language's characters correctly.
Generating Localized Content
Our N8N workflow is already set up for localization. The key is the Language field in your Airtable base.
When the workflow runs, it pulls the value from this field (e.g., "Korean") and passes it directly to the OpenAI node. The prompt instructs OpenAI to generate all questions, answers, and voiceover scripts in that specific language.
You also need to provide a voice that can speak the target language. For example, for Korean, you might use a voice like `ko-KR-SunHiNeural` in the Voice Name field in Airtable. JSON2Video provides a wide catalog of voices for different languages.
Ensuring Correct Font Rendering
This is the most critical step for non-Latin languages. The default font in the video template ("Oswald") does not contain characters for languages like Japanese or Korean. If you try to render a video in these languages with the default font, you will see empty squares (□) or garbled text.
To fix this, you must specify a font that supports the character set of your target language. JSON2Video supports all Google Fonts, so you just need to provide the correct name.
You can set this in the Submit a new job HTTP Request node in N8N. Find the `fontFamily` variable in the JSON body and change its value.
Here are some recommended Google Fonts for different languages:
- Japanese: `Noto Sans JP`
- Korean: `Noto Sans KR`
- Chinese (Simplified): `Noto Sans SC`
- Chinese (Traditional): `Noto Sans TC`
For Latin-based languages like Spanish or Italian, the default font will likely work, but you can still change it for stylistic purposes (e.g., to "Roboto" or "Lato").
Putting It All Together: A Japanese Quiz Example
Let's say you want to create a quiz about Japanese history.
1. In your Airtable row, you would set:
   - Topic: `Japanese History`
   - Language: `Japanese`
   - Voice Name: `ja-JP-NanamiNeural`
2. In your N8N workflow, you would modify the "Submit a new job" node's JSON body to use a Japanese font:
"variables": {
...
"fontFamily": "Noto Sans JP"
}
With these settings, OpenAI will generate the quiz content in Japanese, and JSON2Video will use the "Noto Sans JP" font to correctly render the Japanese characters in the final video. It's that simple!
Customizing the Voice with ElevenLabs
While the default Microsoft Azure voices are high-quality and free to use, you might want a more unique or specific voice for your brand. This workflow is designed to seamlessly integrate with ElevenLabs, a popular AI voice generation service known for its realistic and expressive voices.
To use an ElevenLabs voice, you'll simply update the configuration in your Airtable base. The N8N workflow will handle the rest.
How to Use ElevenLabs Voices
Follow these two steps in your "Quizzes" table in Airtable for the row you want to process:
- Set the Voice Model: In the Voice Model column, change the value from `azure` to one of the following:
  - `elevenlabs`
  - `elevenlabs-flash-v2-5` (a faster, more recent model)
- Specify the Voice Name: In the Voice Name column, enter the name of the ElevenLabs voice you want to use (e.g., "Rachel", "Daniel", "Serena") or a specific Voice ID from your ElevenLabs account.
That's it! You don't need to change anything in the N8N workflow itself. The "Submit a new job" node is already configured to read these values and pass them to the JSON2Video API, which will then use the specified ElevenLabs model and voice to generate the audio.
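For example, with those two Airtable cells set, the voice-related variables in the request body would resolve to something like this (hypothetical values):

```json
{
  "voiceName": "Rachel",
  "voiceModel": "elevenlabs"
}
```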
A Note on Credit Consumption
It's important to understand how using different voice models affects your JSON2Video credit usage.
- The `azure` model is free to use and does not consume any additional credits beyond the standard video rendering costs.
- The `elevenlabs` and `elevenlabs-flash-v2-5` models are premium services and will consume additional credits from your JSON2Video account for every minute of audio generated.
For a detailed breakdown of how credits are used for different AI services, please refer to the Credit Consumption page.
Example: Using an ElevenLabs voice
To create a quiz using a voice from ElevenLabs, set the Voice Model and Voice Name values in your Airtable row as described above; no other changes are needed.
Customizing the Background and Colors
You can overhaul the visual style of your quiz videos without ever leaving your N8N workflow. The JSON2Video template is built with variables that control its core design elements, such as the animated background and color scheme.
By passing additional variables from the "Submit a new job" node, you can dynamically change the look and feel of each video.
Simple visual customization
The template uses several key variables for its design:
- `background_video`: A URL to an MP4 file that serves as the looping background for the quiz.
- `primary_color`: The main theme color, used for the answer boxes.
- `secondary_color`: A complementary color, used for the "incorrect" answer boxes after the correct answer is revealed.
- `title_color`: The color used for the "Trivia Time" text.
- `answers_bgcolor`: Background color for answer boxes before selection. Defaults to the `primary_color`.
- `answers_fgcolor`: Text color for answer boxes before selection. Defaults to `#FFFFFF`.
- `correct_bgcolor`: Background color for the correct answer box after reveal. Defaults to `#77FF77`.
- `correct_fgcolor`: Text color for the correct answer box after reveal. Defaults to `#000000`.
- `incorrect_bgcolor`: Background color for incorrect answer boxes after reveal. Defaults to the `secondary_color`.
- `incorrect_fgcolor`: Text color for incorrect answer boxes after reveal. Defaults to the `primary_color`.
To customize these, you simply need to add them to the `variables` object in the JSON body of your "Submit a new job" HTTP Request node in N8N.
For example, let's change the theme to a light green palette:
- In your N8N workflow, open the parameters for the "Submit a new job" node.
- Navigate to the JSON Body field.
- Add the new key-value pairs for the background and colors inside the `variables` object as shown below.
```json
{
  "template": "cSTYFRZhXeBZotbwcjuM",
  "variables": {
    "voiceName": "{{ $('Airtable').item.json['Voice Name'] }}",
    "voiceModel": "{{ $('Airtable').item.json['Voice Model'] }}",
    "topic": "{{ $json.message.content.topic }}",
    "intro_voiceover": "{{ $json.message.content.intro_voiceover }}",
    "like_and_subscribe_voiceover": "{{ $json.message.content.like_and_subscribe_voiceover }}",
    "questions": {{ JSON.stringify($json.message.content.questions) }},
    "fontFamily": "{{ $('Airtable').item.json.Font }}",
    "background_video": "https://json2video-test.s3.amazonaws.com/assets/videos/backgrounds/radial-dff2d8-c6dea6-1080x1920.mp4",
    "primary_color": "#608552",
    "secondary_color": "#c6dea6"
  }
}
```
With this change, the next video generated by the workflow will feature a new green animated background, and the answer boxes will use the specified light green colors. This simple modification allows you to adapt the video's design to match your brand, a specific theme, or just to keep your content looking fresh. Feel free to experiment with your own brand colors and find different looping background videos to make your quizzes truly unique!
Deeper redesign
The provided template ID (`cSTYFRZhXeBZotbwcjuM`) produces a specific visual style. But what if you want to make it your own? The best way to do this is to create your own copy of the template and modify it.
The JSON2Video API doesn't allow overriding template elements directly in the API call, so you need to duplicate it in your account first.
- Go to the JSON2Video Visual Editor.
- In the top menu, select Template > Open template by ID.
- Enter the ID `cSTYFRZhXeBZotbwcjuM` and click "Open".
- The template will load. Now, save your own copy by going to Template > Save template as. This will create a new template in your account with a new, unique ID.
- In the top menu, select Template > Show template ID to get the ID of your new copy.
- Go back to your N8N workflow and open the "Submit a new job" node. Replace the old template ID with your new one.
Customization Ideas
Now that you have your own copy, you can use the Visual Editor or the JSON editor (Template > Edit JSON) to make changes. The template is built with variables for easy customization. Here are some ideas:
- Add additional voiceovers: Why not add a voiceover reading the correct answer?
- Add background images: You can replace the background video with an AI generated image related to the question.
- Overlay your logo: You can add a small logo to one of the squares to improve your branding.
How Much Does Each Video Cost?
The primary cost is for the video rendering process. For standard videos like the ones this template produces, the consumption rate is 1 credit per second of the final video's duration. This calculation assumes you are using the default `azure` voice model, which is included in your plan at no extra cost.
The quiz videos generated by this workflow typically have a duration of about 1.5 minutes (or 90 seconds). Therefore, a single 90-second video will consume approximately 90 credits from your account.
The actual monetary cost per credit depends on the pricing plan you choose, as larger plans offer a better rate. As a general estimate, you can expect the cost for a 1.5-minute quiz video created with this workflow to be in the range of $0.20 to $0.50 (USD).
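To put numbers on it, here's a back-of-the-envelope estimate in JavaScript; the per-credit price is a placeholder, so substitute the rate from your actual plan:

```javascript
// Rough cost estimate: 1 credit per second of rendered video (azure voice).
function estimateCostUSD(durationSeconds, pricePerCredit = 0.004) { // placeholder rate
  const credits = durationSeconds; // 1 credit per second for standard renders
  return credits * pricePerCredit;
}

console.log(estimateCostUSD(90)); // 90 credits -> $0.36 at the placeholder rate
```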
Conclusion
You now have a powerful, automated system for creating engaging quiz videos. By combining the flexibility of N8N with the content generation of OpenAI and the video rendering power of JSON2Video, you can scale your video production efforts with minimal manual work. Feel free to expand on this workflow, experiment with different templates, and integrate other services to fit your unique needs.
Published on June 18th, 2025
