Content Evaluation

Evaluation methods for assessing the quality of creative works and content in the Fide Context Graph.

Fide-TaskSatisfaction-v1

Assess the degree to which the requirements were met.

Did the output fully satisfy the original prompt? This measures the quality of the result, not just execution status.

Evaluation

| Statement Part | Raw Identifier | Fide ID |
| --- | --- | --- |
| Subject | The CreativeWork being evaluated (Code, Article, Answer) | did:fide:0x65 |
| Predicate | Fide-TaskSatisfaction-v1 EvaluationMethod (e.g., GitHub spec link) | did:fide:0xe5 |
| Object | Satisfaction score: 0 to 100 | did:fide:0x66 |
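The subject/predicate/object triple above can be sketched as a plain data structure. This is illustrative only: the dict layout and the `make_statement` helper are assumptions, not the actual Fide SDK schema.

```python
# Sketch of a Fide evaluation statement as a subject/predicate/object triple.
# The dict layout is illustrative; the real Fide SDK schema may differ.

def make_statement(subject_id: str, score: int) -> dict:
    """Build a Fide-TaskSatisfaction-v1 evaluation statement (hypothetical helper)."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return {
        "subject": subject_id,                    # the CreativeWork, e.g. did:fide:0x65
        "predicate": "Fide-TaskSatisfaction-v1",  # the EvaluationMethod
        "object": score,                          # satisfaction score, 0 to 100
    }

stmt = make_statement("did:fide:0x65", 85)
```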

Evaluation Process

  1. Input: User Prompt + Generated Content
  2. Check: Pass inputs to an LLM Evaluator using the Standard Fide Rubric (System Prompt available in SDKs):
    • Adherence: Did it follow all instructions?
    • Completeness: Is any part of the answer missing?
    • Safety: Is the content harmless?
  3. Output: (Rubric Score / Total Possible) * 100
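The scoring step above can be sketched as follows. The three criteria and equal weighting follow the rubric listed in step 2; the per-criterion point scale is an assumption, not the official Fide rubric.

```python
# Sketch of step 3: converting rubric marks into a 0-100 score.
# The 0-5 scale per criterion is assumed for illustration.

def satisfaction_score(rubric_marks: dict, max_per_criterion: int = 5) -> float:
    """(Rubric Score / Total Possible) * 100."""
    total_possible = max_per_criterion * len(rubric_marks)
    return sum(rubric_marks.values()) / total_possible * 100

marks = {"adherence": 5, "completeness": 4, "safety": 5}
score = satisfaction_score(marks)  # 14/15 -> ~93.3
```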

Output Scale

| Score Range | Meaning |
| --- | --- |
| 90-100 | Fully satisfies requirements, exceeds expectations |
| 70-89 | Mostly satisfies, minor gaps or issues |
| 50-69 | Partially satisfies, significant improvements needed |
| 0-49 | Does not satisfy requirements |
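The bands above can be mapped programmatically; a minimal sketch, where the function name is illustrative and the labels mirror the table:

```python
# Map a 0-100 satisfaction score onto the meaning bands in the table above.

def satisfaction_band(score: int) -> str:
    """Return the meaning band for a 0-100 satisfaction score."""
    if score >= 90:
        return "Fully satisfies requirements, exceeds expectations"
    if score >= 70:
        return "Mostly satisfies, minor gaps or issues"
    if score >= 50:
        return "Partially satisfies, significant improvements needed"
    return "Does not satisfy requirements"

band = satisfaction_band(85)  # "Mostly satisfies, minor gaps or issues"
```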

Fide-NonTechnicalReadability-v1

Measure readability of technical content for non-technical audiences.

Is this content free of unexplained jargon? This method evaluates whether technical terms, acronyms, and domain-specific vocabulary are explained or defined, making technical content accessible to general audiences.

Evaluation

| Statement Part | Raw Identifier | Fide ID |
| --- | --- | --- |
| Subject | The content being evaluated (CreativeWork: Article, Documentation, Tutorial, etc.) | did:fide:0x65 |
| Predicate | Fide-NonTechnicalReadability-v1 EvaluationMethod (e.g., GitHub spec link) | did:fide:0xe5 |
| Object | Readability score: 0 to 100 | did:fide:0x66 |

Evaluation Process

  1. Input: Content (article, documentation, tutorial, explanation, etc.)
  2. Identify jargon: Scan for technical terms, acronyms, domain-specific vocabulary, and uncommon words
    • Examples: "merkle tree", "sybil attack", "IRI", "materialized view", "genesis statement"
  3. Check explanations: For each jargon term, verify it has:
    • Inline definition or explanation
    • Link to glossary or reference
    • Sufficient context for non-technical readers to understand
  4. Score calculation: (Explained Terms / Total Jargon Terms) * 100
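Steps 2-4 can be sketched as below. In practice the jargon terms and their "explained" flags would come from an LLM or human reviewer; here they are hard-coded for illustration, using example terms from step 2.

```python
# Sketch of the Fide-NonTechnicalReadability-v1 score calculation:
# (Explained Terms / Total Jargon Terms) * 100.

def readability_score(terms_explained: dict) -> float:
    """Score readability from a map of jargon term -> whether it is explained."""
    if not terms_explained:
        return 100.0  # no jargon found: fully readable
    explained = sum(terms_explained.values())
    return explained / len(terms_explained) * 100

terms = {
    "merkle tree": True,        # defined inline
    "sybil attack": True,       # linked to glossary
    "IRI": False,               # acronym left unexplained
    "genesis statement": True,  # sufficient surrounding context
}
score = readability_score(terms)  # 3 of 4 explained -> 75.0
```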

Output Scale

| Score Range | Meaning |
| --- | --- |
| 90-100 | Highly readable, nearly all technical jargon explained |
| 70-89 | Mostly readable, some terms need explanation |
| 50-69 | Moderately readable, significant jargon barriers |
| 0-49 | Poor readability, many unexplained technical terms |

Goal: enable a non-technical reader to understand technical content without having to search external resources for term definitions.
