Workshop VerifAI @ ICLR Featured

NANOZK: Layerwise Zero-Knowledge Proofs for Verifiable Large Language Model Inference

Zhaohui Geoffrey Wang

VerifAI Workshop at International Conference on Learning Representations (ICLR), 2026

type Workshop
year 2026
venue VerifAI @ ICLR
arXiv 2603.18046

// ABSTRACT

Users querying proprietary LLM APIs have no cryptographic guarantee that the claimed model was actually used. NANOZK enables verifiable inference by decomposing transformer computations into independent layer operations, yielding constant-size proofs regardless of model width. Lookup table approximations preserve model accuracy (0% perplexity degradation across 21 configurations), while Fisher information-guided verification enables practical allocation of a verification budget across layers. NANOZK achieves 70x smaller proofs and 5.7x faster proving than EZKL, with formal soundness guarantees (epsilon < 1e-37). On LLaMA-3-1B, it generates a 2.4KB proof in 620ms, with 22ms verification on CPU.
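The Fisher information-guided budget allocation mentioned above can be illustrated with a minimal sketch. Assuming each layer has a precomputed Fisher information score and a fixed total number of verification checks, one plausible rule is to distribute checks proportionally to those scores; the function name and the proportional rule here are illustrative assumptions, not the paper's actual algorithm.

```python
def allocate_budget(fisher_scores, total_budget):
    """Split a verification budget across layers in proportion to each
    layer's Fisher information score (higher score -> more checks).
    Illustrative sketch only; not NANOZK's published allocation rule."""
    total = sum(fisher_scores)
    if total == 0:
        # Degenerate case: no information signal, spread the budget evenly.
        n = len(fisher_scores)
        return [total_budget // n] * n
    raw = [total_budget * s / total for s in fisher_scores]
    alloc = [int(r) for r in raw]
    # Hand out leftover units to the layers with the largest remainders,
    # so the allocation sums exactly to total_budget.
    leftover = total_budget - sum(alloc)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[:leftover]:
        alloc[i] += 1
    return alloc

# Example: four layers, 100 verification checks total.
scores = [0.5, 2.0, 1.0, 0.5]
print(allocate_budget(scores, 100))  # layers with higher Fisher info get more checks
```

A proportional rule like this concentrates proving effort where perturbations would most change the model's output distribution, which is the intuition behind using Fisher information to prioritize layers.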

// BIBTEX

@inproceedings{wang2026nanozk,
  title     = {NANOZK: Layerwise Zero-Knowledge Proofs for Verifiable Large Language Model Inference},
  author    = {Wang, Zhaohui Geoffrey},
  booktitle = {VerifAI Workshop at International Conference on Learning Representations (ICLR)},
  year      = {2026},
  month     = mar,
  eprint    = {2603.18046},
  archivePrefix = {arXiv},
}