Let ChatGPT Be a Tool, Not a Security Concern
Estimated read time: 2 minutes 30 seconds
If you follow the news, read industry publications, use social media, or chat with coworkers, you’ve likely heard about generative AI and ChatGPT nearly every day over the past few months. Although it is a hot topic right now, artificial intelligence is not new, and more likely than not you already use several forms of AI in your daily life: digital voice assistants like Google Home, Amazon Alexa, and Apple’s Siri, as well as predictive text, website chatbots, maps and navigation, facial recognition, and social media ads. Forbes specifically calls out AI and ChatGPT as a new battleground where cybercriminals operate.
While ChatGPT, Google’s Bard, and many other generative AI technologies are exciting and helpful for work-related projects, companies must be extra careful about how their employees use this fast-evolving technology. From a cybersecurity and information protection perspective, these tools are information aggregators: they can, and most likely do, retain the requests you send them.
How does ChatGPT work?
ChatGPT is a conversational chatbot that uses advanced artificial intelligence to generate conversation in text form. GPT is short for Generative Pre-trained Transformer, and it is built on a large language model (LLM). An LLM works by analyzing enormous amounts of text, allowing it to learn the patterns and rules of human language. It improves as it is trained on more data and refines its output based on user feedback.
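To make the idea of "learning patterns from data" concrete, here is a deliberately tiny sketch in Python, using only the standard library. It is an illustration only: it learns which word tends to follow which and then predicts a likely next word, which is the core loop behind language models. Real LLMs like GPT do this with neural networks and billions of parameters, not simple word counts.

```python
from collections import Counter, defaultdict
import random

# Toy illustration of the core idea behind a language model:
# learn which word tends to follow which, then generate text by
# repeatedly predicting a likely next word.
corpus = "the cat sat on the mat and the cat ate the treat".split()

# Count how often each word follows each other word (bigram counts).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break  # no known continuation for this word
        # Sample the next word in proportion to how often it was seen.
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```

The important takeaway for security is that everything the model "knows" came from text it was fed, which is exactly why feeding it sensitive text is a risk.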
What are the concerns?
When prompting ChatGPT to generate text, employers should be concerned with the following:
- inadvertent sharing of client data
- offering advice to clients based on the generated text
- the tool’s tendency to provide false, biased, or deceptive information
What are some guidelines?
As this is new territory for many, here are some enduring cybersecurity guidelines to follow when using ChatGPT (a simple illustrative pre-send check follows the list):
- Never feed it customer information: no names, no addresses, no financial details, nothing.
- Be extra careful if the client is in an industry with only a handful of players, where even seemingly generic details could identify them.
- Never feed it internal company information.
- Never feed it PII, HIPAA-protected health information, financial information, or anything else deemed sensitive.
- Never reuse the generated text verbatim without reading and understanding it first; the generated content may be incorrect.
- Always err on the side of caution.
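To show what "never feed it sensitive data" can look like in practice, here is a minimal, hypothetical sketch of a pre-send check in Python. The patterns, labels, and function name are illustrative assumptions, not a vetted data-loss-prevention tool; real DLP products are far more thorough.

```python
import re

# Illustrative sketch only: a naive pre-send filter that flags
# prompts containing a few obvious PII patterns before they are
# sent to an external AI service.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the labels of any PII patterns detected in the prompt."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Draft a letter to jane.doe@example.com about her overdue balance."
findings = find_pii(prompt)
if findings:
    print("Blocked: prompt appears to contain " + ", ".join(findings) + ".")
else:
    print("No obvious PII found. Review manually before sending.")
```

A check like this catches only the most obvious patterns; it supplements, rather than replaces, the human judgment the guidelines above call for.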
Can ChatGPT be safely used?
Yes. However, it’s difficult to assess what is safe, as the threshold of information protection is a moving target. A good rule of thumb is to ask yourself: if I feed this information to ChatGPT, am I comfortable with it becoming public? If the answer is no, avoid using ChatGPT for that purpose.
The best thing a business can do is implement a policy that allows for the safe use of AI technologies, with built-in safeguards that protect the company. Remember that the technology is constantly evolving, so build in room for updates and revisit the policy frequently. Ongoing employee education is also an integral part of protecting your business.
We’re here to help
If you have any questions or concerns about implementing an AI/ChatGPT policy at your company or for additional help with your cybersecurity needs, call 818-334-8626 or click here to contact us. We look forward to speaking with you soon.