Kevin Nam*, Youyeon Joo*, Seungjin Ha, Yunheung Paek$\dagger$ (* equal contribution, $\dagger$ corresponding author)
USENIX Security Symposium (USENIX Sec), 2025
Existing approaches adopt an eager approximation (EA) strategy for non-arithmetic functions (NAFs), statically replacing each NAF with a fixed polynomial, which locks in computational error and limits optimization opportunities. We propose SLOTHE, a lazy approximation (LA) solution that recursively decomposes NAF code into arithmetic and non-arithmetic sub-functions, selectively approximating only the non-arithmetic components, and only when required.
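To make the EA/LA distinction concrete, here is a minimal NumPy sketch, assuming ReLU as the NAF and a least-squares polynomial fit standing in for the minimax approximations used in FHE practice. It illustrates the lazy-approximation idea only; it is not SLOTHE's implementation.

```python
import numpy as np

def poly_approx_sign(x, degree=15):
    # Polynomial stand-in for the non-arithmetic sign() on [-1, 1].
    xs = np.linspace(-1.0, 1.0, 1001)
    coeffs = np.polyfit(xs, np.sign(xs), degree)
    return np.polyval(coeffs, x)

def relu_lazy(x):
    # LA: relu(x) = x * (sign(x) + 1) / 2. The multiplication and the
    # affine part stay exact (arithmetic); only the non-arithmetic
    # sign() sub-function is approximated.
    return x * (poly_approx_sign(x) + 1) / 2

def relu_eager(x, degree=15):
    # EA baseline: replace the whole NAF relu() with one fixed
    # polynomial, locking the fitting error into every use site.
    xs = np.linspace(-1.0, 1.0, 1001)
    coeffs = np.polyfit(xs, np.maximum(xs, 0), degree)
    return np.polyval(coeffs, x)

x = np.linspace(-1.0, 1.0, 5)
print(relu_lazy(x))
print(relu_eager(x))
```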
Kevin Nam, Youyeon Joo, Dongju Lee, Seungjin Ha, Hyunyoung Oh, Hyungon Moon$\dagger$, Yunheung Paek$\dagger$ ($\dagger$ corresponding author)
USENIX Security Symposium (USENIX Sec), 2025
When FHE is applied to neural networks (NNs), we observe that the distinctly layered architecture of NN models opens the door to performance improvements through layer-wise Ciphertext Configurations (CCs), because a globally chosen CC is rarely the best possible CC for every individual layer. This paper introduces LOHEN, a technique crafted to attain high-performance NN inference by enabling the efficient use of layer-wise CCs.
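A toy illustration of why a per-layer choice can beat one global CC: the per-layer costs, the CC-conversion cost, and the dynamic program below are made-up stand-ins, not LOHEN's actual cost model or algorithm.

```python
# cost[layer][cc]: hypothetical latency of running a layer under a CC.
cost = [
    {"cc_a": 10, "cc_b": 30},   # layer 0 favors cc_a
    {"cc_a": 40, "cc_b": 15},   # layer 1 favors cc_b
    {"cc_a": 12, "cc_b": 35},   # layer 2 favors cc_a
]
CONVERT = 5  # hypothetical cost of switching CCs between layers

def best_global():
    # One CC for the whole network.
    return min(sum(layer[cc] for layer in cost) for cc in ("cc_a", "cc_b"))

def best_layerwise():
    # Dynamic program over layers; state = CC used by the previous layer,
    # paying CONVERT whenever consecutive layers disagree.
    prev = {cc: cost[0][cc] for cc in ("cc_a", "cc_b")}
    for layer in cost[1:]:
        prev = {cc: layer[cc] + min(prev[p] + (0 if p == cc else CONVERT)
                                    for p in prev)
                for cc in ("cc_a", "cc_b")}
    return min(prev.values())

print(best_global(), best_layerwise())  # 62 vs 47 under these toy numbers
```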
Youyeon Joo, Seungjin Ha, Hyunyoung Oh$\dagger$, Yunheung Paek$\dagger$ ($\dagger$ corresponding author)
MDPI Sensors (Q2, IF 3.5), 2025
FHE-based neural network inference suffers from substantial overhead due to expensive primitive operations, such as ciphertext rotation and bootstrapping. We focus on optimizing the efficiency of these computations through keyset design. Specifically, we explore three aspects of the keyset design space (KDS) that influence both computational overhead and memory consumption.
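A sketch of what keyset design-space exploration can look like: enumerate candidate keysets, score each on latency and key memory, and keep the Pareto front. The three axes named below and all multipliers are invented placeholders, not the three aspects studied in the paper.

```python
from itertools import product

# (latency multiplier, key-memory multiplier) per design choice;
# all names and numbers are illustrative assumptions.
rotation_keys = {"dense": (1.0, 8.0), "power_of_two": (1.4, 2.0)}
decomposition = {"few_digits": (1.6, 1.0), "many_digits": (1.0, 2.5)}
bootstrapping = {"shared": (1.2, 1.0), "dedicated": (1.0, 1.8)}

candidates = []
for (r, rm), (d, dm), (b, bm) in product(rotation_keys.items(),
                                         decomposition.items(),
                                         bootstrapping.items()):
    latency = rm[0] * dm[0] * bm[0]
    memory = rm[1] * dm[1] * bm[1]
    candidates.append(((r, d, b), latency, memory))

# Keep Pareto-optimal keysets: no other keyset is both faster and smaller.
pareto = [c for c in candidates
          if not any((o[1] < c[1] and o[2] <= c[2]) or
                     (o[1] <= c[1] and o[2] < c[2]) for o in candidates)]
for cfg, lat, mem in sorted(pareto, key=lambda c: c[1]):
    print(cfg, f"latency x{lat:.2f}, memory x{mem:.2f}")
```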
Youyeon Joo, Hyunyoung Oh$\dagger$, Yunheung Paek$\dagger$ ($\dagger$ corresponding author)
International Conference on Artificial Intelligence Computing and Systems (AIComps), 2024
While recent FHE schemes support efficient SIMD-like operations, they require frequent data realignment through rotation, which necessitates substantial memory for precomputed rotation keys. We propose an application-aware, memory-efficient rotation keyset generation method that reduces memory consumption while maintaining comparable latency.
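A minimal sketch of the underlying trade-off, assuming arbitrary rotation steps can be composed from a power-of-two base set; this decomposition is a standard trick used here for illustration, not necessarily the paper's exact method.

```python
def keys_naive(used_steps):
    # One precomputed rotation key per distinct step the application uses.
    return set(used_steps)

def keys_pow2(used_steps, slots=4096):
    # Store only power-of-two keys and compose arbitrary steps from them:
    # at most log2(slots) keys, at the cost of several rotations per
    # realignment instead of one.
    base = set()
    for step in used_steps:
        step %= slots
        bit = 1
        while step:
            if step & 1:
                base.add(bit)
            step >>= 1
            bit <<= 1
    return base

used = list(range(1, 33))        # rotation steps some workload needs
print(len(keys_naive(used)))     # 32 keys, one rotation per step
print(len(keys_pow2(used)))      # 6 keys, up to popcount(step) rotations
```

An application-aware generator sits between these extremes, picking a base set tuned to the rotation steps the application actually issues.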
Yongseok Lee, Kevin Nam, Youyeon Joo, Jeehwan Kim, Hyunyoung Oh$\dagger$, Yunheung Paek$\dagger$ ($\dagger$ corresponding author)
International Conference on Computational Science and Its Applications (ICCSA), 2023
In this paper, we implement an efficient, fully functional NTRU-KEM that incorporates all of its functions, including key generation, using a hardware/software co-design approach.
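A rough sketch of the co-design structure, assuming a hypothetical accelerator handle `hw` and a toy ring; the point is that key generation shares the same dispatched kernel as encapsulation/decapsulation, so the full algorithm benefits from the hardware. Parameters and interface are illustrative, not a real NTRU parameter set or driver API.

```python
N, Q = 8, 257  # toy ring Z_Q[x]/(x^N - 1); not real NTRU parameters

def poly_mul_sw(a, b):
    # Software fallback: schoolbook multiplication in Z_Q[x]/(x^N - 1).
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % N] = (c[(i + j) % N] + ai * bj) % Q
    return c

def poly_mul(a, b, hw=None):
    # Co-design dispatch: route the hot kernel to the accelerator when
    # one is attached, else fall back to software. `hw` models a
    # hypothetical driver object exposing the same interface.
    return hw.poly_mul(a, b) if hw is not None else poly_mul_sw(a, b)

# Key generation, encapsulation, and decapsulation all call poly_mul(),
# so building them on one dispatch point lets every function, keygen
# included, use the hardware.
print(poly_mul([1, 2, 0, 0, 0, 0, 0, 0], [3, 0, 1, 0, 0, 0, 0, 0]))
```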