Playground

What is the Playground?

The Playground is a quick testing environment for experimenting with AI models, prompts, and parameters without creating a full Flow.

When to use the Playground

  • Test prompts — Try different prompt variations before adding them to Flows

  • Compare models — See how different models respond to the same input

  • Experiment with parameters — Adjust temperature, max tokens, and other settings to understand their impact

  • Quick demos — Show stakeholders AI responses without building a full workflow

Example of Playground in action

You're building a customer support chatbot that needs to handle refund requests. Before adding the prompt to a Flow, you test it in the Playground:

  1. Start with GPT-4o and a system prompt: "You are a helpful customer support agent. Process refund requests professionally and ask for order ID if not provided."

  2. Test with edge cases: angry customers, missing information, requests outside the return window

  3. Switch to Claude 3.5 Sonnet to compare how it handles the same scenarios—you notice it's more concise in requesting missing details

  4. Lower temperature to 0.3 for more consistent responses across similar requests

  5. Save as Flow once you've validated the model and prompt work reliably

This 10-minute testing session saves you from deploying a Flow that handles edge cases poorly in production.
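
If you want to repeat the same checks outside the UI, the loop is easy to script. Below is a minimal sketch, assuming the OpenAI Python SDK as a stand-in client (Runtype's own API isn't covered in this guide, so treat the client setup and model name as placeholders):

```python
# Illustrative only: the Playground runs this loop interactively in the UI.
# The OpenAI Python SDK is used as a stand-in; your platform-key or BYOK
# setup may point at a different client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful customer support agent. Process refund requests "
    "professionally and ask for order ID if not provided."
)

# Edge cases from the example above: angry customer, missing information,
# request outside the return window.
edge_cases = [
    "This is ridiculous. I want my money back NOW.",
    "Hi, I'd like a refund please.",
    "I bought this eight months ago and it just broke. Refund me.",
]

for message in edge_cases:
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.3,  # lowered for consistency, as in step 4
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    print(message, "->", response.choices[0].message.content, "\n")
```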

Opening the Playground

Click Playground in the left sidebar to open the testing environment.

Playground interface

The Playground has three main areas:

Model selection

Choose which AI model to use. Available models depend on whether you're using platform keys or BYOK (bring your own key); see the list of models available in Runtype for details.

Prompt editor

Write your prompt in the text area. You can use template variables like {{variable_name}}; an input section appears below the editor where you can provide a value for each variable you reference.
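
Conceptually, the substitution works like simple string templating. Here is a minimal sketch of the behavior (the `render` helper below is hypothetical, not part of the product):

```python
import re

# Hypothetical helper illustrating how {{variable_name}} substitution
# behaves; the Playground's actual implementation isn't documented here.
def render(template: str, values: dict[str, str]) -> str:
    """Replace each {{name}} in the template with its provided value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

prompt = "Summarize this ticket for {{audience}}: {{ticket_text}}"
print(render(prompt, {
    "audience": "an engineer",
    "ticket_text": "App crashes on login.",
}))
# -> Summarize this ticket for an engineer: App crashes on login.
```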

Parameters

Click Advanced Settings to adjust model behavior. The panel includes:

  • Response Format — Choose between the model's default output and structured JSON

  • Temperature — Use the slider to control randomness from 0 (Focused) to 1 (Creative), or toggle "Use model default"

  • Max Tokens — Set a response length limit in the input field, or use the model default

  • Reasoning — Enable reasoning, disable it, or use the model's default behavior

Start with a temperature around 0.7 for most use cases. Lower it to 0.2-0.3 for tasks requiring consistency, or raise it to 0.8-1.0 for creative tasks.
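
For orientation, these settings correspond to the sampling parameters most chat-completion APIs expose. Here is a minimal sketch of that mapping, again using the OpenAI Python SDK as an illustration (parameter names on other providers vary, and the Reasoning toggle maps to model-specific options not shown here):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0.3,  # Temperature slider: 0 (Focused) to 1 (Creative)
    max_tokens=500,   # Max Tokens field: caps the response length
    response_format={"type": "json_object"},  # Response Format: structured JSON
    messages=[
        # Structured-JSON mode expects the prompt itself to ask for JSON.
        {"role": "system", "content": "Reply with a JSON object with a 'summary' field."},
        {"role": "user", "content": "The app crashes whenever I open settings."},
    ],
)
print(response.choices[0].message.content)
```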

Running a test

  1. Select your model

  2. Write your system prompt in the System Prompt text area, optionally adding variables like {{variable_name}} and filling in their values below

  3. Optionally adjust parameters (temperature, max tokens, reasoning, response format) via Advanced Settings

  4. Optionally attach tools

  5. In the chat panel on the right, type a message and send

  6. Review the response in the chat

The response appears with timing information and token usage stats for your review.
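
If you later script your tests, the same stats are available on the response object. A minimal sketch, once more assuming the OpenAI Python SDK:

```python
import time

from openai import OpenAI

client = OpenAI()

start = time.perf_counter()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Draft one refund-policy FAQ entry."}],
)
elapsed = time.perf_counter() - start

# The Playground surfaces the same numbers: latency plus token usage.
usage = response.usage
print(f"latency: {elapsed:.2f}s")
print(f"prompt tokens: {usage.prompt_tokens}, "
      f"completion tokens: {usage.completion_tokens}, "
      f"total: {usage.total_tokens}")
```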

Saving your configuration

Once you've refined your prompt and settings, save them for reuse:

  1. Click Save as Flow or Save as Agent. Choose Flow to create a simple prompt-based workflow, or Agent for agentic behavior with tool use

  2. Name your Flow or Agent and confirm

Your Playground configuration—including the model, prompt, parameters, and any attached tools—will transfer to the new Flow or Agent, ready for further development.

Saving as an Agent preserves tool attachments and enables multi-step reasoning. Saving as a Flow creates a single prompt step.

Next steps

  • Create a Flow using your tested prompt

  • Understand prompt engineering best practices
