APA (7th ed.) Citation

Lee, J., Lee, J., Han, D., Lee, J., Park, G., & Yoo, H.-J. (2019, February). 7.7 LNPU: A 25.3TFLOPS/W sparse deep-neural-network learning processor with fine-grained mixed precision of FP8-FP16. Digest of Technical Papers - IEEE International Solid-State Circuits Conference, 142–144. https://doi.org/10.1109/ISSCC.2019.8662302

Chicago Style (17th ed.) Citation

Lee, Jinsu, Juhyoung Lee, Donghyeon Han, Jinmook Lee, Gwangtae Park, and Hoi-Jun Yoo. "7.7 LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16." Digest of Technical Papers - IEEE International Solid-State Circuits Conference (February 2019): 142–144. https://doi.org/10.1109/ISSCC.2019.8662302.

MLA (9th ed.) Citation

Lee, Jinsu, et al. "7.7 LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16." Digest of Technical Papers - IEEE International Solid-State Circuits Conference, Feb. 2019, pp. 142-44, https://doi.org/10.1109/ISSCC.2019.8662302.

Warning: These citations may not be completely accurate; verify them against the original source before use.