2025

SLOTHE : Lazy Approximation of Non-Arithmetic Neural Network Functions over Encrypted Data

Kevin Nam*, Youyeon Joo*, Seungjin Ha, Yunheung Paek$\dagger$ (* equal contribution, $\dagger$ corresponding author)

USENIX Security Symposium (USENIX Sec), 2025

Existing works adopt an eager approximation (EA) strategy to approximate non-arithmetic functions (NAFs), statically replacing each NAF with a fixed polynomial, which locks in computational errors and limits optimization opportunities. We propose SLOTHE, a lazy approximation (LA) solution that recursively decomposes NAF code into arithmetic and non-arithmetic sub-functions, selectively approximating only the non-arithmetic components when required.

LOHEN: Layer-wise Optimizations for Neural Network Inferences over Encrypted Data with High Performance or Accuracy

Kevin Nam, Youyeon Joo, Dongju Lee, Seungjin Ha, Hyunyoung Oh, Hyungon Moon$\dagger$, Yunheung Paek$\dagger$ ($\dagger$ corresponding author)

USENIX Security Symposium (USENIX Sec), 2025

When FHE is applied to neural networks (NNs), we observe that the distinct layered architecture of NN models opens the door to performance improvements through layer-wise Ciphertext Configurations (CCs), since a globally chosen CC may not be the best possible CC for every layer individually. This paper introduces LOHEN, a technique crafted to attain high-performance NN inference by enabling the efficient use of layer-wise CCs.

Efficient Keyset Design for Neural Networks Using Homomorphic Encryption

Youyeon Joo, Seungjin Ha, Hyunyoung Oh$\dagger$, Yunheung Paek$\dagger$ ($\dagger$ corresponding author)

MDPI Sensors (Q2, IF 3.5), 2025

FHE-based neural network inference suffers from substantial overhead due to expensive primitive operations, such as ciphertext rotation and bootstrapping. We focus on improving the efficiency of these computations through keyset design. Specifically, we explore three aspects of the keyset design space (KDS) that influence both computational overhead and memory consumption.

2024

Rotation Keyset Generation Strategy for Efficient Neural Networks Using Homomorphic Encryption

Youyeon Joo, Hyunyoung Oh$\dagger$, Yunheung Paek$\dagger$ ($\dagger$ corresponding author)

International Conference on Artificial Intelligence Computing and Systems (AIComps), 2024

While recent FHE schemes support efficient SIMD-like operations, they require frequent data realignment through rotation, necessitating substantial memory for precomputed rotation keys. We propose an application-aware, memory-efficient rotation keyset generation method that reduces memory consumption while maintaining comparable latency.

2023

Area-Efficient Accelerator for the Full NTRU-KEM Algorithm

Yongseok Lee, Kevin Nam, Youyeon Joo, Jeehwan Kim, Hyunyoung Oh$\dagger$, Yunheung Paek$\dagger$ ($\dagger$ corresponding author)

International Conference on Computational Science and Its Applications (ICCSA), 2023

In this paper, we implement an efficient, fully functional NTRU-KEM accelerator that incorporates all operations, including key generation, using a hardware-software co-design approach.
