GROUND produces AI-generated industry intelligence. Credentialed professionals assess it through structured peer review — not to gatekeep access, but to build a permanent record of practitioner judgment. Every assessment is named, scored, and archived.
Five-Dimension Assessment Framework
· Factual Accuracy (30%): Claims, statistics, and data points are verifiable and correct based on the reviewer's professional knowledge.
· Source Quality (25%): Underlying sources are credible, recent, and appropriate for the claims being made.
· Analytical Rigor (20%): Conclusions are supported by the evidence presented; limitations and uncertainties are appropriately acknowledged.
· Industry Relevance (15%): The article addresses a real question that working practitioners in this space actually face.
· Practitioner Utility (10%): A working professional would find this analysis actionable or decision-relevant in their day-to-day work.
Signal Credibility Index (SCI)
Weighted Formula
SCI = [(Factual × 0.30) + (Source × 0.25) + (Rigor × 0.20) + (Relevance × 0.15) + (Utility × 0.10)] ÷ 5
Dimension scores run on a 1-5 scale; dividing the weighted sum by the 5-point ceiling normalizes SCI to a 0-1 index.
· Pre-validation: editorial estimate (labeled "Estimated")
· Single validator: computed from scores (labeled "Validated")
· 2+ validators: weighted average across all assessors (labeled "Peer Reviewed")
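Concretely, the computation might look like the following sketch. The dictionary keys, function names, and the 1-5 scoring scale are illustrative assumptions, not GROUND's actual implementation:

```python
from statistics import mean

# Hypothetical weight table mirroring the framework above (weights sum to 1.0).
WEIGHTS = {
    "factual": 0.30,
    "source": 0.25,
    "rigor": 0.20,
    "relevance": 0.15,
    "utility": 0.10,
}

def sci(scores: dict[str, float]) -> float:
    """Weighted sum of 1-5 dimension scores, normalized by the 5-point ceiling."""
    return sum(scores[dim] * weight for dim, weight in WEIGHTS.items()) / 5

def peer_reviewed_sci(assessments: list[dict[str, float]]) -> float:
    """With 2+ validators, average each dimension across assessors first."""
    averaged = {dim: mean(a[dim] for a in assessments) for dim in WEIGHTS}
    return sci(averaged)
```

Under these assumptions, a single assessment of straight 5s yields an SCI of 1.0, and straight 3s yield 0.6.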
Assessment States
Awaiting Validation
No structured assessments submitted yet. Editorial estimate shown.
Validated
One credentialed professional has submitted a structured assessment.
Peer Reviewed
Two or more assessors, dimension scores in general agreement.
Under Review
Two or more assessors diverge by 2+ points on at least one dimension. Active discussion in progress.
Gold Standard
Three or more assessors, all dimension averages ≥ 4.0, no contested dimensions. Highest credibility tier.
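These rules amount to a small state machine. Here is a minimal sketch, assuming each assessor's scores arrive as a dict keyed by dimension on a 1-5 scale; the function name and dimension keys are illustrative:

```python
from statistics import mean

DIMENSIONS = ("factual", "source", "rigor", "relevance", "utility")

def assessment_state(assessments: list[dict[str, float]]) -> str:
    """Map per-assessor dimension scores to a display state (illustrative only)."""
    if not assessments:
        return "Awaiting Validation"
    if len(assessments) == 1:
        return "Validated"

    # A dimension is contested when any two assessors diverge by 2+ points on it.
    contested = any(
        max(a[dim] for a in assessments) - min(a[dim] for a in assessments) >= 2
        for dim in DIMENSIONS
    )
    if contested:
        return "Under Review"
    if len(assessments) >= 3 and all(
        mean(a[dim] for a in assessments) >= 4.0 for dim in DIMENSIONS
    ):
        return "Gold Standard"
    return "Peer Reviewed"
```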
Annotation Types
Corroborate
I can confirm this claim from my own professional experience or data.
Challenge
This claim needs qualification, correction, or important context that changes its meaning.
Extend
Here's additional context, a related data point, or an angle the article doesn't cover.
Clarify
This could be misread by practitioners. Here's what it actually means in practice.
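For illustration, the four types could be modeled as a closed enum attached to a named, on-record annotation. This is a hypothetical data model, not GROUND's schema:

```python
from dataclasses import dataclass
from enum import Enum

class AnnotationType(Enum):
    CORROBORATE = "corroborate"  # confirms a claim from direct experience or data
    CHALLENGE = "challenge"      # disputes, qualifies, or recontextualizes a claim
    EXTEND = "extend"            # adds context or a related data point
    CLARIFY = "clarify"          # restates what a claim means in practice

@dataclass(frozen=True)
class Annotation:
    reviewer: str            # always named, never anonymous
    type: AnnotationType
    target_claim: str        # the specific passage being annotated
    body: str                # the reviewer's specific, on-record statement
```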
Our Philosophy
GROUND doesn't require reviewers to be right. We require them to be specific. A Challenge annotation that turns out to be incorrect is still valuable — it marks a point of contestation in the professional record. The reviewer owns their claim. The community evaluates it over time.
This is how peer review works in academic journals. The same logic applies to industry intelligence: structured, named, and permanently archived. Not crowdsourced opinion — professional judgment on record.
The quality gate for submissions is simple: at least one concrete local insight. A named market, a specific data point, a direct professional reference. Generic agreement doesn't advance the peer record.
Level 2 Intelligence Synthesis
When an article receives its second structured assessment, GROUND automatically generates a Level 2 Synthesis — an AI-assisted document that compares the two assessments, identifies convergence and divergence, and surfaces what the stacked intelligence reveals that neither review alone could. This synthesis grows with each additional assessment.
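One way to picture the comparison step: score the two assessments dimension by dimension, flag where they converge or diverge, and hand that map to the drafting model. A hypothetical sketch, reusing the 2-point divergence threshold from the assessment states:

```python
def compare_assessments(a: dict[str, float], b: dict[str, float]) -> dict[str, str]:
    """Per-dimension convergence map feeding a Level 2 Synthesis (illustrative only)."""
    dims = ("factual", "source", "rigor", "relevance", "utility")
    return {
        dim: "divergent" if abs(a[dim] - b[dim]) >= 2 else "convergent"
        for dim in dims
    }
```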