Zero-Knowledge Threat Modeling

When the Threat Model Knows Zero: Auditing Assumptions in Zero-Knowledge Proof Systems

The Problem: When Cryptographic Guarantees Rest on Unseen Foundations

Zero-knowledge proofs are heralded as a panacea for privacy and verifiability in blockchain, identity, and computational integrity. Yet beneath the elegant mathematics lies a fragile ecosystem of assumptions that, if unchecked, can render the entire system insecure. This article dissects those assumptions from an auditor's perspective, targeting readers who already understand ZKP mechanics but need to embed adversarial thinking into their workflows.

The core challenge is that ZKPs do not eliminate trust; they shift it. A typical ZKP system assumes honest provers, correct verification, secure parameters, and robust cryptographic primitives. Each of these pillars can be undermined by subtle flaws—malicious provers exploiting underconstrained circuits, verifiers accepting invalid proofs due to implementation bugs, or trusted setup ceremonies that leak toxic waste. The threat model must account for adversaries who know zero yet still compromise privacy or soundness.

Why Traditional Threat Models Fall Short

Traditional threat modeling focuses on network-level attacks, software vulnerabilities, and access control. ZKPs introduce a new dimension: mathematical trust. An attacker does not need to steal a key; they can instead break the soundness guarantee by crafting a false proof that passes verification. For example, in 2022, a critical bug in a popular ZKP library allowed provers to forge proofs for arbitrary statements by exploiting a missing constraint in the R1CS compiler. This was not a cryptographic break—it was an assumption that the compiler correctly translated all constraints, which it did not.

The situation is compounded by the opacity of ZKP implementations. Many teams rely on audited libraries and standard protocols, assuming that the audit covered every edge case. However, audits typically verify that the code matches the specification, not that the specification itself is secure. A protocol may be mathematically sound yet leak information through timing or side channels. Auditors must therefore extend their scope to include the entire trust chain: the randomness source, the proving system's algebraic structure, the verification algorithm's numerical stability, and even the physical security of setup ceremonies.

This section sets the stage for a deep dive into each assumption category, providing a framework for identifying and mitigating risks that are invisible to conventional security reviews. The goal is to harden ZKP systems against adversaries who exploit assumptions, not just code bugs.

Core Frameworks: Understanding the Cryptographic Trust Chain

To audit ZKP assumptions, one must first map the trust chain—the sequence of dependencies that must hold for the system to be secure. This chain includes the setup phase, the proving algorithm, the verification algorithm, and the protocol's interaction model. Each link can be broken by an attacker who understands its hidden assumptions.

The Setup Phase: Toxic Waste and Honest Parties

Many ZKPs, particularly those using the Groth16 proving system, require a trusted setup where a common reference string (CRS) is generated. The assumption is that at least one participant in the ceremony is honest and deletes the toxic waste—the secret randomness used to generate the CRS. If all participants collude or are coerced, the toxic waste can be used to forge proofs. Auditors must verify the setup ceremony's security parameters: the number of participants, the randomness generation method, and the deletion proofs. For example, in a multi-party computation (MPC) ceremony, each participant contributes randomness that is combined to produce the final CRS. The assumption that at least one participant is honest is probabilistic; increasing the participant count reduces the risk but does not eliminate it. Auditors should also check that the ceremony's transcript is verifiable and that participants cannot be identified after the fact to avoid coercion.
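The "at least one honest participant" property can be sketched as a toy model. This is an illustration of why multiplicative combination of contributions protects the secret, not a real powers-of-tau implementation; the group order `R` and the ceremony structure are stand-ins.

```python
import secrets

# Toy model of an MPC setup ceremony (illustration only). R is the group
# order used for modular arithmetic; here we use a BN254-sized prime.
R = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def contribute(accumulated_tau: int) -> tuple[int, int]:
    """Each participant multiplies the accumulated secret by fresh randomness.

    Returns the new accumulated secret and the participant's own share,
    which the participant must then delete (the "toxic waste").
    """
    share = secrets.randbelow(R - 1) + 1  # uniform in [1, R-1]
    return (accumulated_tau * share) % R, share

# Simulate a 3-party ceremony starting from tau = 1.
tau = 1
shares = []
for _ in range(3):
    tau, share = contribute(tau)
    shares.append(share)

# The final secret equals the product of all shares mod R, so recovering
# it requires *every* share: one honest deleter is enough to protect it.
product = 1
for s in shares:
    product = (product * s) % R
assert product == tau
```

The point of the sketch is the last assertion: an adversary missing even one share learns nothing useful about the final secret, which is exactly why ceremony audits focus on whether any single contribution was generated and destroyed honestly.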

Prover and Verifier Assumptions

The prover assumes the verifier uses the correct verification algorithm and does not extract private information from the proof. Conversely, the verifier assumes the prover cannot construct a valid proof for a false statement (soundness) and that the proof reveals nothing beyond its truth (zero-knowledge). Both assumptions depend on the algebraic properties of the underlying curve and the correctness of the implementation. A common pitfall is the use of non-standard curves or custom hash functions that have not been rigorously analyzed. For instance, the BN254 curve is widely deployed, but improvements to the tower number field sieve have reduced its estimated security from 128 bits to roughly 100 bits, eroding the margin against discrete log attacks in the pairing target field. Practitioners must weigh computational efficiency against long-term security margins.

Another critical framework is the distinction between interactive and non-interactive proofs. Interactive proofs assume a live prover-verifier exchange where the verifier can issue random challenges. Non-interactive proofs, like those using the Fiat-Shamir heuristic, replace the verifier's randomness with a hash function. The assumption is that the hash function acts as a random oracle—an idealization that may fail in practice if the hash function has weaknesses or if the proof transcript does not include sufficient context to prevent replay attacks. Auditors should check that the Fiat-Shamir transformation includes all public inputs and that the hash function's output length matches the security parameter.
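The requirement that the Fiat-Shamir transformation bind all public inputs can be made concrete with a minimal sketch. The domain tag, transcript layout, and modulus below are illustrative; a real system must follow its proving system's transcript specification exactly, and wide hashes or rejection sampling are often used to avoid reduction bias.

```python
import hashlib

# Minimal Fiat-Shamir challenge derivation (sketch). R is a BN254-sized
# prime standing in for the scalar field modulus.
R = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def fiat_shamir_challenge(domain_tag: bytes, public_inputs: list[int],
                          commitment: bytes) -> int:
    h = hashlib.sha256()
    h.update(domain_tag)                 # domain separation / context string
    for x in public_inputs:              # bind *all* public inputs
        h.update(x.to_bytes(32, "big"))
    h.update(commitment)                 # bind the prover's commitment
    return int.from_bytes(h.digest(), "big") % R

c1 = fiat_shamir_challenge(b"my-protocol-v1", [7, 11], b"commit-A")
c2 = fiat_shamir_challenge(b"my-protocol-v1", [7, 12], b"commit-A")
assert c1 != c2  # changing any public input changes the challenge
```

If a public input were omitted from the hash, the prover could vary that input after seeing the challenge, which is precisely the replay and malleability risk the text describes.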

Understanding these frameworks allows auditors to systematically question every assumption: Is the setup truly trusted? Is the prover's computational bound realistic? Is the verifier's algorithm side-channel resistant? Only by answering these questions can one build a threat model that knows zero assumptions are safe.

Execution: Workflows for Auditing ZKP Assumptions

Auditing ZKP assumptions requires a structured process that goes beyond code review. This section outlines a repeatable workflow that integrates cryptographic analysis, implementation review, and protocol-level testing.

Step 1: Map the Assumption Landscape

Begin by enumerating every assumption in the system. Use a dependency graph that includes: (a) security of the elliptic curve and pairing; (b) correctness of the proving system's arithmetic; (c) integrity of the random oracle model; (d) honesty of at least one setup participant; (e) correctness of the constraint system translation; (f) absence of side-channel leaks in the prover and verifier; (g) secure key management for any private inputs; (h) resilience to denial-of-service attacks on the verifier; and (i) timeout and replay protections. Document each assumption's risk level and the evidence supporting it. For example, the assumption that BN254 will remain secure for the next decade is weakened by advances in discrete log computation; mitigating that risk may mean migrating to a curve with a larger security margin, such as BLS12-381.
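An assumption inventory like the one described above can be captured as structured data so it survives team turnover and can be queried in reviews. The schema, risk labels, and entries below are hypothetical examples, not a standard format.

```python
from dataclasses import dataclass

# Hypothetical assumption-inventory schema for Step 1. Fields, risk labels,
# and the sample entries are illustrative.
@dataclass
class Assumption:
    name: str
    depends_on: list        # names of assumptions this one requires
    risk: str               # "low" | "medium" | "high"
    evidence: str

inventory = [
    Assumption("curve_security", [], "medium",
               "BN254 estimated at ~100 bits after number field sieve advances"),
    Assumption("trusted_setup", ["curve_security"], "high",
               "3-party MPC ceremony, transcript published and verifiable"),
    Assumption("fiat_shamir", ["curve_security"], "medium",
               "SHA-256 modeled as random oracle; all public inputs bound"),
]

def high_risk(items):
    """Return the names of assumptions needing priority review."""
    return [a.name for a in items if a.risk == "high"]

assert high_risk(inventory) == ["trusted_setup"]
```

Keeping the dependency edges explicit makes it easy to answer questions such as "which assumptions are invalidated if the curve falls?", which is the real payoff of the graph.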

Step 2: Verify Cryptographic Primitives

Next, test the cryptographic primitives against known attacks. For elliptic curves, verify that the order is prime, that the embedding degree meets the security requirement, and that the curve does not have any special structure (e.g., a small subgroup). For hash functions used in Fiat-Shamir, check that the output is uniformly distributed and that the function is collision-resistant. Use test vectors from the proving system's specification to ensure the implementation produces correct proofs. A common mistake is using a hash function with insufficient output bits, which silently lowers the security level: truncating SHA-256 output to 128 bits reduces collision resistance from 128 bits to roughly 64 bits.
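Two of these checks, prime group order and embedding degree, can be scripted directly. The sketch below uses the published BN254 constants as an example; Miller-Rabin is probabilistic, so production checks should rely on certified parameters from the curve's specification rather than this ad hoc test.

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test (sketch)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# BN254 base field prime p and group order r (published constants).
P = 21888242871839275222246405745257275088696311157297823662689037894645226208583
R = 21888242871839275222246405745257275088548364400416034343698204186575808495617

assert is_probable_prime(R)  # the group order must be prime
# Embedding degree: smallest k such that r divides p^k - 1.
k = next(k for k in range(1, 49) if pow(P, k, R) == 1)
assert k == 12               # BN254's embedding degree is 12 by construction
```

The embedding degree check matters because it determines which field the pairing maps into, and therefore where discrete log attacks on the target group take place.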

Step 3: Review Constraint System and Circuit

The constraint system that translates a computation into a ZKP circuit is a frequent source of bugs. Auditors should manually inspect the circuit for missing constraints, underconstrained variables, and incorrect witness generation. Automated tools like Circom's constraint analysis can help, but they cannot catch logical errors where constraints are valid but insufficient to prevent a malicious prover from producing a false proof. In one anonymized case, a circuit that verified a hash preimage omitted a constraint on the length of the preimage, allowing an attacker to submit a shorter input that produced a different hash but passed verification because the hash function was not fully constrained.
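The underconstrained-circuit failure mode can be shown with a toy constraint checker. This is not a real circuit DSL; the two functions below stand in for a correctly constrained circuit and a buggy one that dropped a constraint.

```python
# Toy illustration of an underconstrained system. The "circuit" is meant
# to enforce b = a * a and c = b + 1; the buggy version forgets the
# squaring constraint, leaving b a free variable.

def check_full(a: int, b: int, c: int) -> bool:
    return b == a * a and c == b + 1

def check_underconstrained(a: int, b: int, c: int) -> bool:
    # BUG: the b == a * a constraint was dropped.
    return c == b + 1

# Honest witness for a = 3.
assert check_full(3, 9, 10)
# Malicious witness: b is unrelated to a, yet the buggy circuit accepts it.
assert not check_full(3, 100, 101)
assert check_underconstrained(3, 100, 101)
```

Note that every individual constraint in the buggy version is valid; the flaw is what is missing, which is why automated constraint counting cannot replace a semantic review against the intended computation.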

Finally, test the entire workflow with adversarial inputs. Generate proofs with deliberately invalid witnesses and verify that they are rejected. Check edge cases like zero values, maximum field elements, and boundary conditions. The goal is to break the system before an attacker does.
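The boundary-condition testing above can be sketched for one common check: that witness values are canonical field elements. The range check and the set of adversarial inputs are illustrative, assuming a BN254-sized modulus.

```python
# Boundary-value tests for witness validation (sketch). Values must be
# canonical field elements in [0, R); the check itself is hypothetical.
R = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def is_canonical(x: int) -> bool:
    return 0 <= x < R

adversarial_inputs = [0, 1, R - 1, R, R + 1, -1, 2 * R]
results = [is_canonical(x) for x in adversarial_inputs]
assert results == [True, True, True, False, False, False, False]
# A verifier that accepts x >= R lets a prover submit two encodings of the
# same field element (x and x - R): a classic aliasing bug.
```

The comment at the end names the concrete attack this test guards against; zero, the maximum field element, and just-out-of-range values are exactly the cases that slip past happy-path test suites.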

Tools, Stack, Economics, and Maintenance Realities

Selecting the right tools and understanding the economic and maintenance implications of ZKP assumptions is crucial for long-term security. This section compares the main proving systems, their trade-offs, and the ongoing costs of keeping assumptions valid.

Comparison of Proving Systems

The choice of proving system dictates many assumptions. Below is a comparison of three common systems:

| System  | Setup                   | Proof Size | Verification Time | Key Assumption                            |
|---------|-------------------------|------------|-------------------|-------------------------------------------|
| Groth16 | Trusted (per-circuit)   | ~200 bytes | ~2 ms             | Honest setup; toxic waste destroyed       |
| PLONK   | Universal trusted setup | ~1 KB      | ~5 ms             | Honest setup; SRS security                |
| STARKs  | Transparent (no setup)  | ~50 KB     | ~100 ms           | Hash function security; sound Fiat-Shamir |

Each system has different assumption profiles. Groth16 offers the smallest proofs but requires a new trusted setup for each circuit. PLONK's universal setup reduces the frequency of ceremonies but still requires one initial ceremony that must be secure. STARKs eliminate the setup assumption entirely but rely on hash functions and produce larger proofs, which can be a problem for bandwidth-constrained environments. The economic cost of a trusted setup ceremony can be significant—participants must be vetted, and the ceremony must be audited, which can cost hundreds of thousands of dollars. Additionally, if a vulnerability is discovered in the proving system after deployment, upgrading may require a new setup, which is a major maintenance burden.
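The trade-offs in the table can be expressed as a small decision helper. The thresholds and return values below are illustrative, not normative; real selection also weighs prover time, recursion support, and ecosystem maturity.

```python
# Hypothetical decision helper reflecting the trade-offs discussed above.
# Thresholds are illustrative only.
def choose_proving_system(proof_size_budget_bytes: int,
                          setup_acceptable: bool) -> str:
    if not setup_acceptable:
        return "STARK"      # transparent, but ~50 KB proofs
    if proof_size_budget_bytes < 1000:
        return "Groth16"    # smallest proofs, per-circuit ceremony
    return "PLONK"          # universal setup, mid-size proofs

assert choose_proving_system(500, True) == "Groth16"
assert choose_proving_system(10_000, False) == "STARK"
```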

Maintenance Realities: Upgrading and Deprecation

Cryptographic assumptions do not remain valid forever. Over time, attacks improve, and the security margin of a curve or hash function erodes. Teams must plan for upgrading proving systems, which may involve migrating user proofs, regenerating setup parameters, and updating smart contracts. This process is often underestimated. For example, the migration from BN254 to BLS12-381 required changes to the curve, the proving library, and the verification contract, taking several months for a major DeFi protocol. Auditors should verify that the system architecture allows for such upgrades without breaking existing proofs or requiring user action.

Another maintenance reality is the need for ongoing cryptographic monitoring. Teams should subscribe to cryptographic forums and track new attacks on their primitives. A proactive threat model includes a deprecation policy: when a curve's security margin drops below a threshold, it must be replaced. This requires that the system's proving and verification logic is modular and can be updated without a full redeployment. Failure to do so can lead to a situation where the system is using a deprecated curve that is no longer secure, but no one notices until it is too late.

Growth Mechanics: Building a Security Culture Around ZKP Assumptions

For organizations that rely on ZKPs, security is not a one-time audit but a continuous process. This section explores how to embed assumption auditing into the development lifecycle, scale knowledge across teams, and leverage the broader ZKP community.

Integrating Audits into CI/CD

Automated checks for ZKP assumptions can be integrated into the continuous integration pipeline. For example, after each circuit change, run a constraint analysis tool to detect missing constraints, verify that the proving system version is still supported, and check that the hash function meets current security recommendations. These checks catch regressions early and reduce the burden on human auditors. A team I am aware of automated their Fiat-Shamir transformation validation, ensuring that every proof includes a unique context string that prevents replay across different sessions. This caught a bug where the context string was accidentally omitted in a new circuit, which would have allowed an attacker to reuse a proof from one transaction in another.
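A CI check like the context-string validation described above might look as follows. The transcript format, expected context value, and function names are all hypothetical, modeled on the workflow in the anecdote rather than any specific library.

```python
# Sketch of a CI check that every Fiat-Shamir transcript begins with the
# protocol's context string. Format and names are hypothetical.
EXPECTED_CONTEXT = b"my-protocol-v1"

def build_transcript(context: bytes, public_inputs: bytes,
                     commitment: bytes) -> bytes:
    return context + public_inputs + commitment

def ci_check_context(transcript: bytes) -> None:
    if not transcript.startswith(EXPECTED_CONTEXT):
        raise AssertionError("Fiat-Shamir transcript missing context string")

good = build_transcript(EXPECTED_CONTEXT, b"\x01", b"\x02")
ci_check_context(good)  # passes silently

bad = b"" + b"\x01" + b"\x02"   # context accidentally omitted, as in the bug
try:
    ci_check_context(bad)
    raise RuntimeError("check should have failed")
except AssertionError:
    pass
```

Running a check like this on every circuit change turns a subtle cross-session replay vulnerability into a failed build.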

Training and Knowledge Sharing

Scaling security expertise is a challenge because ZKP auditing requires both cryptographic and software engineering skills. One effective approach is to create internal training modules that cover common assumption pitfalls, with case studies from the literature (without naming specific vulnerable projects). Teams can also participate in ZKP security workshops or invite external auditors to review their assumptions. The cost of these activities is often justified by the cost of a breach. For a project holding millions in assets, spending $200,000 on a comprehensive audit is a fraction of the potential loss.

Community engagement is another growth mechanic. Contributing to open-source ZKP projects allows teams to stay current with best practices and learn from others' mistakes. For instance, discovering that a widely used library had a vulnerability in its trusted setup verification function prompted many teams to re-audit their own implementations. By sharing audit findings (anonymized), the entire ecosystem becomes more robust. This collective learning is essential because ZKP security is still a young field, and many assumptions are only tested when they break.

Finally, consider the long-term growth of the ZKP stack itself. As new proving systems emerge (e.g., lookup arguments, recursion), each introduces its own assumptions. Teams must decide when to adopt new technology versus staying with a proven but older system. The decision should be based on a risk assessment: is the new system's security better understood? Has it been peer-reviewed? Are there known attacks? A conservative approach that prioritizes tested assumptions over novelty is often the safest growth path.

Risks, Pitfalls, and Mistakes with Mitigations

Even experienced teams fall into common traps when auditing ZKP assumptions. This section catalogs the most dangerous pitfalls and provides concrete mitigations based on real-world incidents.

Pitfall 1: Over-Reliance on Audit Reports

Many teams treat a single audit as a seal of security, but auditors cannot catch every assumption flaw. An audit might verify that the code matches the specification, but if the specification itself has an incorrect assumption—such as using a curve with insufficient security—the audit report will not flag it. Mitigation: treat audits as one layer of defense. Complement them with in-house review of cryptographic primitives, threat modeling, and adversarial testing. Do not assume that an audited system is secure; assume that the auditors missed something and try to find it.

Pitfall 2: Ignoring the Social Layer

Trusted setup ceremonies are often executed with careful attention to cryptographic details but neglect the social engineering aspect. Participants may be coerced, bribed, or tricked into revealing their contributions. In one scenario, a ceremony participant was convinced to use a compromised randomness generator, making the entire CRS insecure. Mitigation: use MPC ceremonies where no single participant can break security, and ensure that participants are from diverse, independent organizations. Additionally, applying a public random beacon (for example, one derived from a verifiable delay function) as the final ceremony contribution makes the output unpredictable to earlier participants, though this adds complexity and still does not protect against every party colluding after the beacon.

Pitfall 3: Assuming Side-Channel Resistance

ZKP implementations are often optimized for speed, sacrificing constant-time execution. This can leak information through timing, power consumption, or cache access. For example, a prover that uses variable-time multiplication on the curve might leak the number of zero bits in the secret input, allowing an attacker to reconstruct the witness. Mitigation: audit the implementation for data-dependent branches and memory access patterns. Use constant-time libraries where available, and test timing variability with statistical analysis. In many cases, the fix is straightforward—replacing a variable-time operation with a constant-time alternative—but it requires awareness and discipline.
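The variable-time versus constant-time pattern is easy to demonstrate. The sketch below contrasts an early-exit byte comparison with Python's standard-library constant-time comparator; in an interpreted language the timing signal is noisy, so this illustrates the pattern rather than a measurement.

```python
import hmac

def variable_time_eq(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: leaks the position of the first mismatch."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:      # timing depends on secret data
            return False
    return True

def constant_time_eq(a: bytes, b: bytes) -> bool:
    """hmac.compare_digest is the stdlib constant-time comparator."""
    return hmac.compare_digest(a, b)

secret = b"\xaa" * 32
assert variable_time_eq(secret, b"\xaa" * 32)
assert constant_time_eq(secret, b"\xaa" * 32)
assert not constant_time_eq(secret, b"\xab" + b"\xaa" * 31)
```

The fix is as mechanical as the text suggests: the audit work lies in finding every data-dependent branch and memory access, not in replacing them.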

Pitfall 4: Underestimating Quantum Risk

While large-scale quantum computers do not exist today, the assumptions underlying many ZKPs rely on discrete log and hash function security, both of which are threatened by Shor's and Grover's algorithms. Mitigation: plan for a post-quantum transition by using hash-based or lattice-based ZKPs where feasible, or at least ensure that the system can be upgraded without a hard fork. For long-lived systems, consider using STARKs, which only rely on hash functions and are believed to be quantum-resistant. The cost of transitioning early may be high, but the cost of being caught unprepared is higher.

Mini-FAQ: Assumptions Under the Microscope

This section answers common questions that arise during ZKP assumption audits, providing concise yet substantive guidance.

Q1: Can we completely eliminate trusted setups?

Yes, by using transparent proving systems such as STARKs or Bulletproofs. However, these come with trade-offs: larger proof sizes and higher verification costs. For systems where proof size matters (e.g., on-chain verification), Groth16 remains attractive despite its setup requirements. The decision depends on whether the risk of a compromised setup outweighs the costs of larger proofs. In many DeFi applications, the on-chain gas cost of verifying a 200-byte proof is significantly lower than a 50-KB proof, making Groth16 the default choice. The mitigation is to invest in a robust setup ceremony with multiple participants and verifiable randomness.

Q2: How often should we re-evaluate our assumptions?

At least annually, or whenever a new attack is published that affects your primitives. For example, when the 2019 attack on small-field STARKs was disclosed, teams using STARKs with 64-bit fields had to upgrade immediately. Subscribe to the IACR ePrint server and follow ZKP security mailing lists. Additionally, re-evaluate after any major protocol change, such as adding a new circuit or changing the hash function. A good practice is to include a cryptographic review in the change management process for any modification that touches the proving or verification logic.

Q3: What is the most common mistake in ZKP implementations?

Underconstrained circuits. This occurs when the circuit's constraints do not fully capture the computation, allowing a malicious prover to produce a valid proof for a false statement. In one notable incident, a circuit for verifying a blockchain transaction omitted constraints on the transaction nonce, enabling an attacker to replay transactions. The fix is to formally verify the circuit's correctness using symbolic execution or constraint comparison against a reference implementation, and to run static analyzers such as Circomspect to flag underconstrained variables. Even with these tools, manual review by an expert is essential because automation cannot capture all semantic nuances.

Q4: How do we handle the risk of a zero-day in the proving system?

Prepare a contingency plan. If a vulnerability is discovered in the proving system (e.g., a bug in the Groth16 verification algorithm), the system must be able to switch to an alternative proving system or upgrade the library without downtime. This requires that the verifier is modular and supports multiple proof types. For on-chain verifiers, this means having a fallback contract that can be activated by governance. The plan should be tested in a staging environment before it is needed. Additionally, maintain relationships with multiple proving system vendors or open-source maintainers to get early patches.

Synthesis and Next Actions

Zero-knowledge proofs offer powerful guarantees, but only if every assumption in the stack is rigorously audited. This guide has walked through the key assumption categories—setup, primitives, circuits, and protocols—and provided a framework for systematic review. The next step is to apply this framework to your own system.

Immediate Actions

First, create an assumption inventory for your ZKP system. List every component and its implicit trust assumptions. For each, assess the impact of a failure and the current evidence supporting its validity. Prioritize high-impact assumptions that are poorly supported. For example, if your system uses a custom curve, prioritize a third-party review of its security. Second, integrate assumption checks into your CI/CD pipeline. Automate what you can: curve parameter validation, hash function compliance, and constraint coverage. Third, schedule a quarterly cryptographic review meeting where the team discusses any new developments in ZKP security and reassesses the assumption landscape. Document the decisions and revisit them when circumstances change.

Long-Term Strategy

Look ahead to the next three to five years. The ZKP landscape is evolving quickly: new proving systems, better security models, and potential quantum threats. Invest in modularity so that you can swap out components without rewriting the entire system. Consider joining a consortium or working group that focuses on ZKP security standards, such as the ZKP Security Working Group at the Linux Foundation. By contributing to shared resources, you gain early access to best practices and tooling. Finally, remember that security is a process, not a product. The threat model that knows zero assumptions is not a static document but a living practice of questioning and verifying. Continue to challenge your own assumptions, and you will build systems that stand the test of time.

About the Author

This article was prepared by the editorial team for this publication. We focus on practical explanations and update articles when major practices change.

Last reviewed: May 2026
