Q-Trust Plane

THREAT MODEL

Formal threat model: attacker capabilities, assumptions, and mitigations (STRIDE + Web3-specific).

Highlights

  • Assume compromised CI runners, agents, and workloads (design for adversarial environments).
  • Treat logs as untrusted unless cryptographically protected and externally verifiable.
  • Replay resistance through single-use, TTL-bound, context-bound grants.
  • Long-term cryptographic resilience via hybrid signature paths (PQC-ready).

Q-Trust Plane — Threat Model & Security Analysis
Document: Threat Model (STRIDE + Web3-specific)
Version: 1.0
Scope: Control Plane, Agents, CI/CD, IaC, Web3, Bridges, Oracles, Cryptography


0. Purpose

This document defines the formal threat model for Q-Trust Plane.

Its goals are to:

  • identify realistic threats against the system
  • describe attacker capabilities and assumptions
  • map threats to mitigations
  • justify architectural and cryptographic decisions
  • demonstrate security maturity for enterprise, Web3, and audit contexts

This is a defensive, safety-first model designed for high-impact systems.


1. Security Assumptions

1.1 Assumptions (Explicit)

  • CI runners, agents, and workloads can be compromised
  • Developers and operators can make mistakes
  • Insiders may act maliciously
  • Logs cannot be trusted unless cryptographically protected
  • Tokens will eventually leak
  • Blockchains provide immutability, not confidentiality
  • Future cryptographic breaks are plausible (post-quantum horizon)

1.2 Non-assumptions (What We Do NOT Trust)

  • Long-lived credentials
  • Manual approvals without cryptographic proof
  • “Trusted admins”
  • Centralized audit logs
  • Single-key control for critical Web3 operations

2. Assets to Protect

2.1 Primary Assets

  • Authorization decisions (allow/deny)
  • Grant integrity (capability tokens)
  • Policy integrity (QPL policy sets)
  • Evidence integrity
  • AuditAnchor on-chain commitments
  • Signing keys (classic + PQC)
  • Governance state (who can do what, when)

2.2 Secondary Assets

  • Identity claims
  • Attestation metadata
  • Execution context
  • Merkle proofs
  • Audit APIs

3. Threat Actors

Actor                  Capability
---------------------  --------------------------------------------
External attacker      Network access, phishing, dependency attacks
CI runner attacker     Full control of a build agent
Insider (developer)    Code access, CI permissions
Insider (admin)        Policy access, operational access
Supply-chain attacker  Malicious dependency/artifact
Web3 attacker          On-chain analysis, tx injection
Future adversary       Cryptographic capability increase

4. Threat Model Methodology

We use:

  • STRIDE for classical system threats
  • Web3-specific attack classes
  • Supply chain & CI/CD threat analysis
  • Cryptographic lifecycle threats

Each threat includes:

  • Description
  • Impact
  • Mitigations (architectural + cryptographic)

5. STRIDE Analysis — Control Plane

5.1 Spoofing Identity

Threat:
An attacker impersonates a legitimate subject (CI job, user, bot).

Examples:

  • stolen OIDC token
  • forged JWT
  • replayed identity assertion

Mitigations:

  • OIDC issuer allowlist
  • audience validation
  • short token lifetime
  • subject fingerprinting
  • binding to workload identity (runner/agent)
  • context binding in grants (job id, commit SHA)
  • grant TTL in seconds
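
The mitigations above can be sketched as a single verification step. This is a minimal illustration, not the Q-Trust wire format: the field names (job_id, commit_sha, audience, ttl_s) are assumptions chosen for the example.

```python
# Sketch: a grant is honored only if it is unexpired AND bound to the
# exact execution context it was issued for. Field names are illustrative.
def verify_grant_context(grant: dict, ctx: dict, now: float) -> bool:
    if now >= grant["issued_at"] + grant["ttl_s"]:   # TTL in seconds
        return False
    # Every bound field must match the live context exactly.
    return all(grant["context"].get(k) == ctx.get(k)
               for k in ("job_id", "commit_sha", "audience"))
```

A stolen grant then fails verification in any other job, against any other commit, or after its TTL window.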

5.2 Tampering

Threat:
Modification of policies, grants, or evidence.

Examples:

  • policy file altered
  • evidence logs rewritten
  • grant payload modified

Mitigations:

  • canonicalization + hashing
  • signed policy bundles
  • signed grants (hybrid)
  • hash-chained evidence ledger
  • Merkle tree anchoring on-chain
  • deny-wins resolution
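
Canonicalization + hashing can be illustrated as follows. The real QPL canonical form is not specified here; sorted-keys compact JSON is an assumption standing in for it.

```python
import hashlib
import json

# Sketch: a deterministic byte encoding (sorted keys, compact separators)
# guarantees that semantically equal documents always hash identically,
# so any tampering changes the digest.
def canonical_hash(doc: dict) -> str:
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":"),
                           ensure_ascii=False).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()
```

Signed policy bundles and grants would sign this digest rather than the raw bytes, making key-order and whitespace differences irrelevant.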

5.3 Repudiation

Threat:
An actor denies having performed an action.

Examples:

  • “I didn’t deploy that contract”
  • “That upgrade wasn’t approved by me”

Mitigations:

  • signed grants
  • signed evidence
  • immutable hash chain
  • on-chain Merkle anchoring
  • externally verifiable inclusion proofs

Result:
Actions are cryptographically non-repudiable.


5.4 Information Disclosure

Threat:
Sensitive data leaks via logs, proofs, or APIs.

Examples:

  • secrets in evidence
  • internal metadata exposed on-chain

Mitigations:

  • evidence stores only commitments (hashes)
  • no secrets in canonical evidence
  • domain separation + salting per tenant
  • minimal disclosure audit APIs
  • on-chain anchors store only Merkle roots
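
A commitment with domain separation and a per-tenant salt might look like the sketch below; the tag and salt handling are assumptions for illustration, not the Q-Trust scheme.

```python
import hashlib

# Sketch: only this digest is stored in evidence or on-chain. The domain
# tag prevents cross-protocol hash reuse; the per-tenant salt prevents
# cross-tenant correlation and dictionary lookups of the committed value.
def commit(domain: str, tenant_salt: bytes, payload: bytes) -> str:
    h = hashlib.sha256()
    h.update(domain.encode("utf-8") + b"\x00")  # domain separation tag
    h.update(tenant_salt)                        # per-tenant salt
    h.update(payload)
    return h.hexdigest()
```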

5.5 Denial of Service (DoS)

Threat:
Attackers flood authorization or evidence endpoints.

Examples:

  • excessive auth requests
  • evidence spam
  • anchoring congestion

Mitigations:

  • rate limiting per tenant/subject
  • request size limits
  • batch anchoring (epoch-based)
  • idempotent anchor publishing
  • backpressure on agents
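
Per-tenant rate limiting is commonly implemented as a token bucket; a minimal sketch (rate and burst values are illustrative, not Q-Trust defaults):

```python
# Sketch: each tenant gets a bucket that refills at `rate` tokens/second
# up to `burst`; a request is allowed only if a whole token is available.
class TenantLimiter:
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.buckets: dict = {}  # tenant -> (tokens, last_timestamp)

    def allow(self, tenant: str, now: float) -> bool:
        tokens, last = self.buckets.get(tenant, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self.buckets[tenant] = (tokens, now)
            return False
        self.buckets[tenant] = (tokens - 1.0, now)
        return True
```

Keying the bucket by tenant (or subject) keeps one noisy client from starving others, which complements the batch-anchoring and backpressure controls above.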

5.6 Elevation of Privilege

Threat:
An actor gains permissions beyond intended scope.

Examples:

  • CI job deploying to prod
  • bot performing admin upgrade
  • insider bypassing approval flow

Mitigations:

  • default deny
  • explicit allow only
  • deny-wins policy resolution
  • short-lived grants
  • context binding (env, branch, chain)
  • multi-approval obligations
  • separation of duties via policies
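
Default deny with deny-wins resolution reduces to a small, fully deterministic function. The rule shape below is an assumption for illustration:

```python
# Sketch: start from "deny"; an explicit matching allow can flip the
# decision, but any matching deny terminates evaluation immediately.
def resolve(rules: list, request: dict) -> str:
    decision = "deny"                        # default deny
    for rule in rules:
        if all(request.get(k) == v for k, v in rule["match"].items()):
            if rule["effect"] == "deny":
                return "deny"                # deny wins
            decision = "allow"
    return decision
```

The important property is that adding rules can only narrow access, never silently widen it.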

6. Threats — Agents & Execution Plane

6.1 Compromised CI Runner

Threat:
Attacker controls a runner executing pipelines.

Impact:
Could attempt unauthorized deploys.

Mitigations:

  • runner identity binding
  • attestation requirements (SLSA, SBOM)
  • artifact digest pinning
  • one-time grants
  • short TTLs (measured in seconds)
  • grants that cannot be reused in other jobs

6.2 Malicious Agent Code

Threat:
Agent binary modified to bypass checks.

Mitigations:

  • agent signature verification (recommended)
  • server-side validation of bindings
  • obligations validated centrally
  • evidence cross-check against grant
  • least privilege at target systems

7. Supply Chain & CI/CD Threats

7.1 Dependency Poisoning

Threat:
Malicious library introduced during build.

Mitigations:

  • SBOM presence requirement
  • SBOM digest anchoring
  • artifact signature verification
  • SLSA provenance checks

7.2 Artifact Substitution

Threat:
Different artifact deployed than what was approved.

Mitigations:

  • artifact digest binding in grant
  • evidence requires artifact digest
  • mismatch → evidence rejection
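
The digest check is mechanical; a sketch (the `sha256:` prefix and grant field name are assumptions):

```python
import hashlib

# Sketch: evidence is accepted only if the deployed artifact hashes to the
# exact digest the grant was issued for; any substitution is rejected.
def evidence_accepted(grant: dict, artifact_bytes: bytes) -> bool:
    digest = "sha256:" + hashlib.sha256(artifact_bytes).hexdigest()
    return digest == grant["artifact_digest"]
```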

7.3 Pipeline Logic Abuse

Threat:
Pipeline modified to bypass controls.

Mitigations:

  • policy conditions on branch/tag
  • signed commits/tags
  • approvals bound to policy
  • deny-wins behavior

8. Web3-Specific Threats

8.1 Unauthorized Contract Upgrade

Threat:
Upgradeable proxy modified maliciously.

Impact:
Total loss of protocol control.

Mitigations:

  • strict upgrade policies
  • multi-approval obligations
  • diff/hash evidence (old/new impl)
  • TTL < 60s
  • mandatory on-chain anchoring

8.2 Bridge Signer Rotation Attack

Threat:
Signer set changed to attacker-controlled keys.

Impact:
Bridge drain.

Mitigations:

  • quorum approvals
  • two-person rule
  • emergency lockout obligations
  • signer set hash evidence
  • anchored audit proof

8.3 Oracle Feed Manipulation

Threat:
Oracle parameters or feeds altered.

Mitigations:

  • release-only policies
  • approval groups
  • parameter hash evidence
  • anchoring + audit

8.4 Replay or Front-Run Attacks

Threat:
Reuse or front-running of authorization.

Mitigations:

  • grants are off-chain
  • single-use nonce
  • context binding
  • execution must match grant context exactly
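
Single-use enforcement can be sketched as a nonce store on the control plane (an in-memory set here; a real deployment would need durable, replicated storage):

```python
# Sketch: each grant carries a fresh nonce recorded on first redemption;
# any second presentation of the same nonce is rejected as a replay.
class NonceStore:
    def __init__(self):
        self._seen = set()

    def redeem(self, nonce: str) -> bool:
        if nonce in self._seen:
            return False       # replay: nonce already consumed
        self._seen.add(nonce)
        return True
```

Combined with context binding, a captured grant is useless: it cannot be replayed, and it cannot be redirected to a different execution context.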

9. Evidence & Audit Threats

9.1 Log Tampering

Threat:
Evidence modified or deleted.

Mitigations:

  • hash-chained evidence
  • hybrid signatures
  • on-chain Merkle anchoring
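
The hash-chain property can be sketched in a few lines; the genesis value and encoding are assumptions for illustration:

```python
import hashlib

# Sketch: each evidence record commits to its predecessor's hash, so
# editing or deleting any entry breaks every later link in the chain.
def chain_append(prev_hash: str, record: bytes) -> str:
    return hashlib.sha256(prev_hash.encode("utf-8") + record).hexdigest()

def chain_verify(records: list, hashes: list, genesis: str = "0" * 64) -> bool:
    h = genesis
    for rec, expected in zip(records, hashes):
        h = chain_append(h, rec)
        if h != expected:
            return False
    return True
```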

9.2 Fake Audit Trails

Threat:
Fabricated logs presented to auditors.

Mitigations:

  • independent verification via on-chain roots
  • Merkle inclusion proofs
  • policy hash commitments

10. Cryptographic Threats

10.1 Key Compromise

Threat:
Signing key leaked.

Mitigations:

  • short-lived grants reduce blast radius
  • key rotation
  • separation of signing roles
  • optional HSM / TEE / MPC
  • audit detection via anchored events

10.2 Algorithm Break (Post-Quantum)

Threat:
Classical signature schemes broken.

Mitigations:

  • hybrid signatures (classic + PQC)
  • policy bundles signed with PQC
  • evidence signed with PQC
  • migration path without trust reset
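
The hybrid composition rule is simply "both signatures must verify". The sketch below uses HMAC as a stand-in for real schemes (e.g. Ed25519 for the classical path and a lattice-based scheme such as ML-DSA for the PQC path); it demonstrates only the composition, not the actual algorithms:

```python
import hashlib
import hmac

# Sketch with placeholder MACs: a grant validates only if BOTH paths
# verify, so an attacker must break both schemes simultaneously.
def hybrid_verify(msg: bytes, sig_classic: bytes, sig_pqc: bytes,
                  key_classic: bytes, key_pqc: bytes) -> bool:
    ok_classic = hmac.compare_digest(
        sig_classic, hmac.new(key_classic, msg, hashlib.sha256).digest())
    ok_pqc = hmac.compare_digest(
        sig_pqc, hmac.new(key_pqc, msg, hashlib.sha3_512).digest())
    return ok_classic and ok_pqc
```

Verifying both paths from day one is what allows later migration off the classical scheme without a trust reset: the PQC chain of signatures already exists.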

11. Denial-of-Trust Scenarios

11.1 Control Plane Outage

Behavior:

  • critical actions → fail-closed
  • low-risk actions → allowed only if explicitly configured

Rationale:
Availability must not override security.


11.2 Anchoring Failure

Behavior:

  • evidence queued but marked unfinalized
  • policies may block further critical actions
  • anchoring resumes without losing integrity

12. Residual Risks

No system eliminates all risk.

Remaining risks:

  • human approval errors
  • misconfigured policies
  • catastrophic chain failures
  • collusion beyond approval thresholds

Design choice:
Q-Trust reduces risk to explicit, provable governance failures, not silent ones.


13. Security Posture Summary

Category       Posture
-------------  ----------------------------
Identity       Verified, bound, short-lived
Authorization  Deterministic, explicit
Tokens         Ephemeral, single-use
Execution      Context-bound
Evidence       Signed, chained
Audit          On-chain anchored
Web3 Ops       Multi-approval, TTL
Crypto         Hybrid, PQC-ready

14. Conclusion

Q-Trust Plane is designed for environments where:

  • authorization failure equals catastrophe
  • auditability must be mathematical, not procedural
  • Web3 governance cannot rely on trust alone
  • future cryptographic shifts must be anticipated

By combining deterministic policy evaluation, cryptographic grants, and on-chain audit anchoring, Q-Trust converts trust assumptions into verifiable guarantees.


Security is not the absence of incidents.
It is the presence of proof.