") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
Kevin Nam*, Youyeon Joo*, Seungjin Ha, Yunheung Paek$\dagger$ (* equal contribution, $\dagger$ corresponding author)
USENIX Security Symposium (USENIX Sec), 2025
Existing works adopt an eager approximation (EA) strategy to approximate non-arithmetic functions (NAFs), statically replacing each NAF with a fixed polynomial, which locks in computational errors and limits optimization opportunities. We propose SLOTHE, a lazy approximation (LA) solution that recursively decomposes NAF code into arithmetic and non-arithmetic sub-functions, selectively approximating only the non-arithmetic components when required.
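A toy sketch of the lazy-approximation idea (this is an illustration under assumed details, not the SLOTHE implementation): FHE natively supports only addition and multiplication, so a NAF such as max must be replaced by a polynomial. An eager strategy would substitute one fixed polynomial for max itself; a lazy strategy first decomposes max(a, b) = (a + b)/2 + |a - b|/2, so only the |.| sub-function, the lone non-arithmetic part, needs a polynomial stand-in. The `poly_abs` coefficients below are hypothetical.

```python
def poly_abs(x):
    # Hypothetical low-degree even polynomial approximating |x| on [-1, 1]
    # (illustrative coefficients; real pipelines use minimax/Chebyshev fits).
    return 0.1 + 0.9 * x * x

def max_lazy(a, b):
    # Lazy approximation: decompose max(a, b) = (a + b)/2 + |a - b|/2.
    # The additions and scalings are exact FHE-native arithmetic; only the
    # non-arithmetic sub-function |a - b| is approximated, on demand.
    return (a + b) / 2 + poly_abs(a - b) / 2
```

Because the arithmetic sub-functions are evaluated exactly, approximation error enters only through the single remaining NAF, which is the optimization opportunity the eager strategy gives up.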
Kevin Nam, Youyeon Joo, Dongju Lee, Seungjin Ha, Hyunyoung Oh, Hyungon Moon$\dagger$, Yunheung Paek$\dagger$ ($\dagger$ corresponding author)
USENIX Security Symposium (USENIX Sec), 2025
When FHE is applied to neural networks (NNs), we have observed that the distinct layered architecture of NN models opens the door to performance improvements through layer-wise Ciphertext Configurations (CCs), because a globally chosen CC may not be the best CC for every individual layer. This paper introduces LOHEN, a technique crafted to attain high-performance NN inference by enabling the efficient use of layer-wise CCs.
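The layer-wise selection problem can be sketched as a small dynamic program (an assumed toy model, not LOHEN itself): given a hypothetical per-layer latency for each CC and a conversion overhead for switching CCs between layers, pick an assignment that minimizes total cost and compare it with the best single global CC.

```python
def best_global(cost):
    # One CC for the whole network: pick the CC whose summed per-layer
    # cost is smallest. cost[l][c] = latency of layer l under CC c.
    n_ccs = len(cost[0])
    return min(range(n_ccs), key=lambda c: sum(row[c] for row in cost))

def best_layerwise(cost, switch_cost):
    # Dynamic programming over layers: dp[c] = cheapest way to finish the
    # layers seen so far with the current layer evaluated under CC c,
    # paying switch_cost whenever consecutive layers use different CCs.
    n_ccs = len(cost[0])
    dp = list(cost[0])
    for row in cost[1:]:
        dp = [row[c] + min(dp[p] + (0 if p == c else switch_cost)
                           for p in range(n_ccs))
              for c in range(n_ccs)]
    return min(dp)

layers = [[4.0, 9.0],   # layer 0 favors CC0 (hypothetical latencies)
          [9.0, 4.0],   # layer 1 favors CC1
          [4.0, 9.0]]   # layer 2 favors CC0
global_total = sum(row[best_global(layers)] for row in layers)
layerwise_total = best_layerwise(layers, switch_cost=1.0)
# With a modest switch cost, the layer-wise plan beats the global one.
```

The sketch shows why a globally chosen CC can be suboptimal: whenever layers favor different configurations and the conversion overhead is smaller than the per-layer gap, switching pays off.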