Affiliation:
1. Massachusetts Institute of Technology, Cambridge, USA
2. Massachusetts Institute of Technology, Cambridge, USA
Abstract
Fully Homomorphic Encryption (FHE) enables computing on encrypted data, letting clients securely offload computation to untrusted servers. While enticing, FHE has two key challenges that limit its applicability: it has high performance overheads (10,000× over unencrypted computation) and it is extremely hard to program. Recent hardware accelerators and algorithmic improvements have reduced FHE’s overheads and enabled large applications to run under FHE. These large applications exacerbate FHE’s programmability challenges. Writing FHE programs directly is hard because FHE schemes expose a restrictive, low-level interface that prevents abstraction and composition. Specifically, FHE requires packing encrypted data into large vectors (tens of thousands of elements long), FHE provides limited operations on these vectors, and values carry noise that grows with each operation, which creates unintuitive performance tradeoffs. As a result, translating large applications, like neural networks, into efficient FHE circuits takes substantial tedious work.
We address FHE’s programmability challenges with the Fhelipe FHE compiler. Fhelipe exposes a simple, numpy-style tensor programming interface and compiles high-level tensor programs into efficient FHE circuits. Fhelipe’s key contribution is automatic data packing, which chooses data layouts for tensors and packs them into ciphertexts to maximize performance. Our novel framework considers a wide range of layouts and optimizes them analytically. This lets Fhelipe compile large FHE programs efficiently, unlike prior FHE compilers, which either use inefficient layouts or do not scale beyond tiny programs. We evaluate Fhelipe on both a state-of-the-art FHE accelerator and a CPU. Fhelipe is the first compiler that matches or exceeds the performance of large hand-optimized FHE applications, like deep neural networks, and it outperforms a state-of-the-art FHE compiler by gmean 18.5×. At the same time, Fhelipe dramatically simplifies programming, reducing code size by 10–48×.
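To make the programming model described in the abstract concrete, the sketch below shows the kind of numpy-style tensor program that a compiler like Fhelipe would take as input. This is a minimal illustration under assumptions, not Fhelipe’s actual API: the function and variable names are hypothetical, and only the plaintext tensor computation is shown. A compiler like Fhelipe would choose a ciphertext layout for each tensor and emit the corresponding FHE circuit automatically.

# Hypothetical example (plain numpy, not Fhelipe's API): a tiny two-layer
# network written as a high-level tensor program. A square activation is used
# because FHE natively supports only additions and multiplications, so
# non-polynomial activations like ReLU must be approximated by polynomials.
import numpy as np

def tiny_mlp(x, w1, b1, w2, b2):
    h = np.matmul(w1, x) + b1      # dense layer 1
    h = h * h                      # polynomial (square) activation
    return np.matmul(w2, h) + b2   # dense layer 2

# Plaintext check of the tensor program. Under FHE, the input (and possibly
# the weights) would be packed into large encrypted vectors by the compiler.
x = np.random.rand(64)
w1, b1 = np.random.rand(128, 64), np.random.rand(128)
w2, b2 = np.random.rand(10, 128), np.random.rand(10)
print(tiny_mlp(x, w1, b1, w2, b2).shape)  # (10,)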
Publisher
Association for Computing Machinery (ACM)