Security Considerations for Large Language Model Use: Implementation Research in Securing LLM-Integrated Applications
Nikhil Pesati

Nikhil Pesati, High School Senior, The Harker School, San Jose, California, USA.

Manuscript received on 21 July 2024 | Revised Manuscript received on 30 July 2024 | Manuscript Accepted on 15 September 2024 | Manuscript published on 30 September 2024 | PP: 19-27 | Volume-13 Issue-3, September 2024 | Retrieval Number: 100.1/ijrte.C814213030924 | DOI: 10.35940/ijrte.C8142.13030924

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Large Language Models (LLMs) are being rapidly adopted across applications because their natural language capabilities allow users to interact with systems in plain human language. As system designers, developers, and users embrace generative artificial intelligence and LLMs, they need to understand the significant security risks that accompany them. This paper provides background on generative artificial intelligence and related fields, presents the common real-world application patterns of LLMs in use today, and describes a typical LLM-integrated application architecture. It then identifies multiple security risks to address when building such applications and offers guidance on potential mitigations to consider in this rapidly evolving space, helping to protect systems and users from potential attack vectors.

Keywords: Large Language Models, Security, Copilot, OWASP.
Scope of the Article: Security Technology