Prompt & Parameters
Select a prompt using one of the buttons above, then click Generate Comparison to see how the model's behavior shifts as you change parameters.
Parameter Controls
These are simplified versions of the knobs you’d configure on a real chat model.
Temperature: higher values produce more varied, creative responses; lower values produce more focused, predictable ones.
Max tokens: a rough cap on response length. Higher values allow longer, more detailed answers.
Sampling mode: a conceptual top-p-style toggle between narrower and wider token exploration.
Safety filter: simulates stricter guardrails. When enabled, unsafe prompts receive more cautious responses.
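To make the first and third knobs concrete, here is a minimal sketch of how temperature and top-p (nucleus) sampling shape a token distribution. This is illustrative only; the function names and toy logits are assumptions, not part of the demo's implementation, and real chat APIs apply these transformations server-side.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before the softmax:
    # a high temperature flattens the distribution (more varied picks),
    # a low temperature sharpens it toward the top token.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p):
    # Nucleus (top-p) sampling: keep the smallest set of tokens whose
    # cumulative probability reaches p, then renormalize over that set.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep, cum = [], 0.0
    for i in ranked:
        keep.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in keep)
    return {i: probs[i] / total for i in keep}

# Toy logits for four candidate tokens (hypothetical values).
logits = [2.0, 1.0, 0.5, -1.0]
low_t = softmax_with_temperature(logits, 0.2)
high_t = softmax_with_temperature(logits, 2.0)
# Lower temperature concentrates probability on the most likely token.
assert low_t[0] > high_t[0]
# A smaller p narrows exploration to fewer candidate tokens.
narrow = top_p_filter(softmax_with_temperature(logits, 1.0), 0.8)
assert len(narrow) < len(logits)
```

The same intuition carries over to the demo's sliders: raising temperature spreads probability across more tokens (more varied phrasing), while a narrower sampling mode restricts generation to the highest-probability candidates.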
Behavior Comparison
Waiting for first run.
Waiting for first run.
Behavior Notes
As you generate comparisons, this section will translate parameter shifts into plain-language observations you can use in interviews (e.g., “Higher temperature creates more varied phrasing, but also more rambling.”).