How We Use Your Data
PiqCue observes how you think during quizzes and learning sessions, not to judge you, but to help you. Here is exactly what that means.
For Students
What we observe during quizzes
When you use PiqCue, we collect three kinds of product data:
• Site usage data — page visits, navigation events, and broad traffic patterns so we can see which features students use
• Learning-session telemetry — response time, answer changes, confidence ratings, hint usage, and attention patterns during quizzes
• Technical error data — error reports and sampled replay data when the app breaks so we can debug production failures
These signals help us improve the product, understand where students get stuck, and decide when to offer support inside a session.
Why we observe these signals
The goal is to help you think better under difficulty — not to judge you or score you. When the system detects that you might be drifting, struggling with a concept, or losing confidence, it can:
• Offer a microbreak to reset your focus
• Show a re-entry cue to help you re-engage with the material
• Suggest a strategy for approaching the problem differently
• Adjust question ordering to rebuild momentum
These supports are designed to feel helpful, not intrusive. They're based on behavioral patterns, not surveillance.
What stays private
• Your individual responses are never shared with other students
• We use a pseudonymous browser ID so we can reconnect your sessions across visits
• We do not sell personal data or student work
• We use service providers such as Vercel Analytics and Sentry to operate the site and investigate failures
• Teachers do not see raw per-question timings, raw attention scores, or replay tools from your student session
• Your "why are you learning this?" response is stored only for your session summary
Classroom mode
If you join a classroom with a join code, your teacher can see individual student summaries and class-wide patterns from your sessions — like your score trends, common struggle areas, and whether attention support was offered. This helps them plan better lessons and offer targeted help.
Your teacher cannot see:
• Your exact response times on individual questions
• Your raw attention drift scores
• Your emotional state inferences (these are shown only to you)
Your teacher can see:
• Your quiz scores and score trends
• Your most common struggle pattern (e.g., "conceptual confusion")
• Whether attention support was active during your session
• An overall attention health indicator
Your controls
• You can use most of the product without joining a classroom
• You can always skip the "why are you learning this?" prompt
• Confidence ratings are optional — you can skip them
• If a microbreak appears, you can dismiss it immediately
• You can request a break at any time using the "Need a moment?" button
• You can leave a session at any time — partial data is still saved for your benefit
• You can contact hello@piqcue.com with privacy questions
Claims Registry
What we can claim (evidence-backed), what we think may be true (not yet proven), and what we must not claim.
What we can claim
• Behavioral struggle classification from interaction patterns (inferred; use "Likely"/"Possible" language)
• Attention drift detection from response-time and accuracy patterns (behavioral proxy, not direct measure)
• Intervention budget reduces overload (design-based; user feedback pending)
• Identity-safe framing in results; no "failed" or "weak at"
• Content health scoring (automated; human review not yet systematic)
What we think may be true
• Microbreaks and re-entry cues improve post-break accuracy — needs pilot counterfactual
• Strategy cards matched to struggle state improve outcomes — needs A/B or pre/post comparison
• Representation shifts when drift is detected help students re-engage — needs qualitative feedback
• Transfer probes measure real learning — needs longitudinal design
What we must not claim
• "Improves grades" or "Raises test scores" — no controlled study
• "Diagnoses learning disabilities" or "Identifies ADHD" — behavioral inference only
• "Guarantees" any outcome — no guarantee language yet
• "AI-powered" for core classifier — ALE v1 is rule-based
• "Replaces the teacher" or "Fully autonomous" — human-in-the-loop always
For Teachers
What data you see
The teacher dashboard shows student-level summaries and class-wide behavioral patterns from quiz and learning sessions. This includes:
• Score distribution — histogram of quiz scores across all students
• Struggle patterns — how often each of the 7 struggle types appears (conceptual confusion, procedural slip, memory retrieval gap, strategy lock-in, low confidence, overconfidence, misread question)
• Concept mastery — percentage correct per concept tag
• Attention health — whether attention support was triggered during sessions
• Needs-attention alerts — students with declining trends or recurring struggles
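As a rough sketch, the seven struggle types above can be modeled as a closed set with a simple per-class tally. The type and function names below are illustrative assumptions, not PiqCue's actual schema.

```typescript
// Illustrative sketch only — names are assumptions, not PiqCue's real schema.
type StruggleType =
  | "conceptual_confusion"
  | "procedural_slip"
  | "memory_retrieval_gap"
  | "strategy_lock_in"
  | "low_confidence"
  | "overconfidence"
  | "misread_question";

// Count how often each struggle type appears across a set of session events,
// the kind of aggregate a "struggle patterns" dashboard panel would show.
function tallyStruggles(events: StruggleType[]): Map<StruggleType, number> {
  const counts = new Map<StruggleType, number>();
  for (const e of events) {
    counts.set(e, (counts.get(e) ?? 0) + 1);
  }
  return counts;
}
```

Using a closed union rather than free-form strings means a new struggle type has to be added deliberately, keeping dashboard categories stable.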
How diagnostic labels work
The system uses behavioral signals (response time, answer changes, confidence ratings, accuracy patterns) to infer likely struggle types. These are behavioral inferences, not diagnoses.
Every diagnostic label carries a confidence score:
• High confidence (≥80%) — "Likely pattern: …"
• Medium confidence (60–79%) — "Possible: …"
• Low confidence (<60%) — these signals are filtered out and not shown on your dashboard
You may see a note like "X lower-confidence signals were filtered" — this means the system detected possible patterns but wasn't confident enough to surface them. This protects you from acting on uncertain information.
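The tiering above amounts to a simple threshold check. This sketch uses the thresholds stated in the text; the interface and function names are assumptions for illustration.

```typescript
// Illustrative sketch — thresholds (80%, 60%) come from the policy text;
// names are assumptions.
interface DiagnosticSignal {
  label: string;      // e.g. "conceptual confusion"
  confidence: number; // 0 to 1
}

// Map a confidence score to the display text a teacher would see,
// or null when the signal is filtered from the dashboard entirely.
function displayLabel(signal: DiagnosticSignal): string | null {
  if (signal.confidence >= 0.8) return `Likely pattern: ${signal.label}`;
  if (signal.confidence >= 0.6) return `Possible: ${signal.label}`;
  return null; // below 60%: filtered, counted in the "signals filtered" note
}
```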
What you don't see
To protect student experience and privacy, certain data is not surfaced to teachers:
• Raw attention drift scores — you see an overall health indicator, not the numeric score
• Emotional state inferences — frustration, anxiety, shame avoidance signals are shown only to students in their own results screen
• Individual response times — you see aggregate patterns, not per-question timing
• Confidence self-ratings — these are used internally for calibration analysis but not displayed individually
Intervention support in student sessions
During quizzes and learning sessions, the system may offer students support interventions:
• Microbreaks — brief calming screens when attention drift is detected
• Re-entry cues — concept-specific prompts to help re-engage after a break
• Strategy cards — metacognitive strategies matched to the detected struggle type
• Emotion reflections — identity-safe feedback shown in quiz results
These interventions are budgeted — the system limits how many supports appear in a single session to avoid overwhelming students. The priority order is: safety signals first, then attention resets, then concept hints, then strategy cards.
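The budget-plus-priority rule can be sketched as a ranked filter. The priority order is taken from the text; the budget value and identifiers are illustrative assumptions.

```typescript
// Illustrative sketch — priority order is from the policy text; the budget
// number and names are assumptions.
type Intervention =
  | "safety_signal"
  | "attention_reset"
  | "concept_hint"
  | "strategy_card";

const PRIORITY: Intervention[] = [
  "safety_signal",   // safety signals first
  "attention_reset", // then attention resets
  "concept_hint",    // then concept hints
  "strategy_card",   // then strategy cards
];

// Keep at most `budget` interventions per session, highest priority first.
function selectInterventions(
  candidates: Intervention[],
  budget: number
): Intervention[] {
  return PRIORITY.filter((p) => candidates.includes(p)).slice(0, budget);
}
```

With a budget of 2 and no safety signal pending, an attention reset and a concept hint would win out over a strategy card.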
Data freshness and reliability
Dashboard data includes freshness indicators:
• Live — data from the last hour
• Recent — data from the last 24 hours
• Stale — data older than 24 hours
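The three freshness tiers reduce to two age cutoffs. This sketch uses the one-hour and 24-hour boundaries stated above; the function name is an assumption.

```typescript
// Illustrative sketch — the 1-hour and 24-hour cutoffs are from the text.
type Freshness = "live" | "recent" | "stale";

function classifyFreshness(eventTime: Date, now: Date = new Date()): Freshness {
  const HOUR_MS = 60 * 60 * 1000;
  const ageMs = now.getTime() - eventTime.getTime();
  if (ageMs <= HOUR_MS) return "live";        // within the last hour
  if (ageMs <= 24 * HOUR_MS) return "recent"; // within the last 24 hours
  return "stale";                             // older than 24 hours
}
```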
If students complete quizzes while offline, their data is queued locally and synced when connectivity returns. You may occasionally see a "data may be incomplete" note — this means some events are still being synced.
Analytics queries are performance-bounded (capped at 200 sessions, 2000 question events) to ensure fast dashboard loading.
Questions? Email hello@piqcue.com for public-beta privacy questions. If you are using PiqCue through a classroom, you can also talk to your teacher or school administrator about classroom-specific concerns.