v1.0 — Synthetic Panel API

Evaluate ideas against calibrated consumer panels

One POST request. A panel of demographically realistic synthetic personas evaluates your concept, copy, or product. You get structured directional signal — not rigorous research, not guesswork.

evaluate.ts
const response = await fetch("https://api.instantfocus.dev/v1/evaluate", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.INSTANTFOCUS_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    study_type: "concept_test",
    stimulus: "AI-powered meal planner from a fridge photo. $4.99/mo.",
    audience: { size: 100, age_range: [25, 45] }
  })
});

const { data } = await response.json();
// data.mean_score → 6.8/10
// data.purchase_intent → { definitely: 0.18, probably: 0.34, ... }
// data.top_concerns → ["price sensitivity", "accuracy of AI suggestions"]

Five evaluation methods, one endpoint

Each study type returns structured JSON with per-persona responses and aggregate statistics. Same endpoint, different study_type field.

concept_test

Evaluate a product concept, feature idea, or value proposition against a target audience.

Returns: mean_score, purchase_intent distribution, top_concerns, top_appeals, themes
sentiment

Measure emotional reaction to messaging, announcements, pricing changes, or brand communications.

Returns: positive/negative/neutral/mixed breakdown, intensity_score, key_themes
nps

Predict Net Promoter Score for a product or service description before launch.

Returns: nps_score (-100 to 100), promoter/passive/detractor percentages, reasoning
survey

Pose up to 10 open-ended questions to a panel. Useful for exploratory qualitative research.

Returns: per-question response aggregation, theme extraction, representative quotes
ab_test

Compare two variants — headlines, CTAs, pricing frames, landing page copy — head to head.

Returns: winner, per-variant scores, preference_ratio, key_differentiators
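
Every study type goes through the same endpoint; only the body changes. A sketch of an ab_test request body follows. The variant field names (`variant_a`, `variant_b`) are assumptions for illustration — check the API reference for the exact schema.

```typescript
// Hypothetical ab_test request body builder. Field names for the two
// variants are assumed, not confirmed by the endpoint documentation.
interface AbTestBody {
  study_type: "ab_test";
  variant_a: string;
  variant_b: string;
  audience?: { size: number; age_range?: [number, number] };
}

function buildAbTestBody(a: string, b: string, size = 100): AbTestBody {
  return { study_type: "ab_test", variant_a: a, variant_b: b, audience: { size } };
}

const body = buildAbTestBody(
  "Plan meals from a photo of your fridge",
  "Never wonder what's for dinner again"
);

// Same endpoint as every other study:
// await fetch("https://api.instantfocus.dev/v1/evaluate", {
//   method: "POST",
//   headers: { Authorization: `Bearer ${process.env.INSTANTFOCUS_API_KEY}` },
//   body: JSON.stringify(body)
// });
```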

All studies accept an optional audience object to filter the generated panel. Defaults to 100 personas, US national, all demographics.

  • size: 10–10,000
  • age_range: [min, max]
  • gender: all | male | female | nonbinary
  • region: us_national | us_northeast | us_west | uk | eu | global
  • income_bracket: all | low | middle | upper_middle | high
  • education: all | high_school | bachelors | graduate
  • tech_adoption: innovator | early_adopter | early_majority | laggard
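
Combined, these filters form the optional audience object. A sketch narrowing the default panel, using values from the enumerations above:

```typescript
// Narrow the default panel (100 personas, US national, all demographics)
// to 250 early-adopter millennials on the US West Coast.
const audience = {
  size: 250,
  age_range: [25, 40] as [number, number],
  gender: "all",
  region: "us_west",
  income_bracket: "upper_middle",
  education: "bachelors",
  tech_adoption: "early_adopter",
};

const request = {
  study_type: "concept_test",
  stimulus: "AI-powered meal planner from a fridge photo. $4.99/mo.",
  audience,
};
```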

How it works

Three stages, executed per-request. No pre-generated data. Every panel is fresh.

01

Panel generation

Synthetic personas are generated with demographics (age, gender, income, education, region) and Big Five personality traits calibrated against population-level norms.

02

Evaluation

Each persona independently evaluates your stimulus through an LLM, responding in character according to its calibrated profile. Batched for throughput (10 personas/batch, 5 concurrent).

03

Aggregation

Individual responses are aggregated into structured results: scores, distributions, themes, and representative verbatims. Returned as typed JSON.
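
The evaluation stage's batching (10 personas per batch, up to 5 batches in flight) can be sketched as follows. `evaluatePersona` is a hypothetical stand-in for the actual LLM call, which is not part of the public API.

```typescript
// Sketch of stage 02: evaluate personas in batches of 10, with up to
// 5 batches running concurrently. Result order matches input order.
async function evaluateAll<T, R>(
  personas: T[],
  evaluatePersona: (p: T) => Promise<R>,
  batchSize = 10,
  concurrency = 5
): Promise<R[]> {
  // Split the panel into batches of `batchSize`
  const batches: T[][] = [];
  for (let i = 0; i < personas.length; i += batchSize) {
    batches.push(personas.slice(i, i + batchSize));
  }

  // Run up to `concurrency` batches at a time, in waves
  const results: R[] = [];
  for (let i = 0; i < batches.length; i += concurrency) {
    const wave = batches.slice(i, i + concurrency);
    const settled = await Promise.all(
      wave.map((batch) => Promise.all(batch.map(evaluatePersona)))
    );
    for (const batchResults of settled) results.push(...batchResults);
  }
  return results;
}
```

`Promise.all` preserves input order, so aggregate statistics can be joined back to each persona by index.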

Methodology

Persona generation is calibrated against published normative datasets. Not random — statistically grounded.

Personality

Big Five traits sampled from IPIP-NEO norms (Srivastava et al., 2003; N=132,515). Age-adjusted means and standard deviations. Gender-adjusted for agreeableness and neuroticism.
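
Sampling a trait from age-adjusted norms reduces to drawing from a normal distribution. A minimal sketch using the Box-Muller transform — the mean and SD values below are placeholders, not the actual IPIP-NEO norm figures:

```typescript
// Draw one value from N(mean, sd) via the Box-Muller transform.
function sampleNormal(
  mean: number,
  sd: number,
  rand: () => number = Math.random
): number {
  const u1 = rand() || Number.MIN_VALUE; // avoid log(0)
  const u2 = rand();
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return mean + sd * z;
}

// e.g. openness for one persona on a 1-5 scale, clamped to the valid range
// (3.4 / 0.6 are illustrative values, not published norms)
const openness = Math.min(5, Math.max(1, sampleNormal(3.4, 0.6)));
```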

Demographics

Age distribution from U.S. Census American Community Survey. Income brackets mapped to Census income percentiles. Education levels from Current Population Survey.

Technology adoption

Rogers' diffusion of innovations curve (2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, 16% laggards). Correlated with age and openness.
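
Assigning a segment from the curve is a weighted draw. A sketch (the described correlation with age and openness is omitted here for brevity):

```typescript
// Rogers' curve segment weights; early and late majority are 34% each.
const ADOPTION_WEIGHTS: [string, number][] = [
  ["innovator", 0.025],
  ["early_adopter", 0.135],
  ["early_majority", 0.34],
  ["late_majority", 0.34],
  ["laggard", 0.16],
];

// Walk the cumulative distribution until the draw falls in a segment.
function sampleAdoption(rand: () => number = Math.random): string {
  let r = rand();
  for (const [segment, weight] of ADOPTION_WEIGHTS) {
    r -= weight;
    if (r <= 0) return segment;
  }
  return "laggard"; // floating-point fallback
}
```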

Evaluation model

OpenAI gpt-4o-mini with structured JSON output. Study-type-specific system prompts. Temperature 0.7 for response diversity within persona constraints.

Directional, not definitive

InstantFocus generates synthetic signal based on LLM-simulated consumer responses. Published research (NORC, NNGroup, arXiv) shows LLM-simulated panels achieve 75–85% directional accuracy against real survey data. This is a sanity check tool — useful for fast iteration, not for replacing real user research on critical decisions.

API reference

RESTful JSON API. Bearer token authentication. All responses follow a consistent envelope.

Endpoint       Method  Auth    Description
/v1/evaluate   POST    Bearer  Run a synthetic panel study
/v1/health     GET     None    Service health check
/v1/usage      GET     Bearer  Current credit balance and usage
response envelope
{
  "ok": true,
  "data": { /* study-specific results */ },
  "meta": {
    "request_id": "a1b2c3d4-...",
    "credits_used": 100,
    "credits_remaining": 4900,
    "latency_ms": 3420
  }
}
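
Because every response uses the same envelope, one helper can unwrap it. The error shape on `ok: false` (an `error` field with `code` and `message`) is an assumption, not documented above:

```typescript
interface Envelope<T> {
  ok: boolean;
  data?: T;
  error?: { code: string; message: string }; // assumed error shape
  meta: {
    request_id: string;
    credits_used: number;
    credits_remaining: number;
    latency_ms: number;
  };
}

// Throw on failure, warn when credits run low, return the study results.
function unwrap<T>(env: Envelope<T>): T {
  if (!env.ok || env.data === undefined) {
    throw new Error(`InstantFocus error: ${env.error?.message ?? "unknown"}`);
  }
  if (env.meta.credits_remaining < 500) {
    console.warn(`Low credits: ${env.meta.credits_remaining} remaining`);
  }
  return env.data;
}
```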

Credit-based usage

1 credit = 1 persona evaluation. A concept test with 100 personas consumes 100 credits.

Free
$0
100 credits / month
  • All 5 study types
  • Full API access
  • Community support
Enterprise
Custom
100,000+ credits / month
  • All 5 study types
  • Custom panel calibration
  • Dedicated support
  • SLA guarantee

Start evaluating

Free tier includes 100 credits per month. No credit card required.

install
# npm
npm install @instantfocus/sdk

# or use the REST API directly
curl https://api.instantfocus.dev/v1/health