APA Citation
Yang, Y., Jin, Q., Huang, F., & Lu, Z. (2025). Adversarial prompt and fine-tuning attacks threaten medical large language models. Nature Communications, 16(1), 9011-10. https://doi.org/10.1038/s41467-025-64062-1
Chicago Style (17th ed.) Citation
Yang, Yifan, Qiao Jin, Furong Huang, and Zhiyong Lu. "Adversarial Prompt and Fine-tuning Attacks Threaten Medical Large Language Models." Nature Communications 16, no. 1 (2025): 9011-10. https://doi.org/10.1038/s41467-025-64062-1.
MLA (9th ed.) Citation
Yang, Yifan, et al. "Adversarial Prompt and Fine-tuning Attacks Threaten Medical Large Language Models." Nature Communications, vol. 16, no. 1, 2025, pp. 9011-10, https://doi.org/10.1038/s41467-025-64062-1.