When you look at the lifetime of the term “AI” in Google Trend, you can see a nice steady buzz about it until the end of November in 2022. Then, the topic takes a swift upward climb.
Today, the conversation is not about is ChatGPT, Gemini or Copilot can help you with basic questions and tasks, the conversation is about what else can they do and what else ARE they doing.
As we learn more about these AI tools, we started to understand how they acquired data to train these AI programs. Most of them use public data, but what does that mean – ‘public”?
The conversation and security around AI is already a hot topic in many fields including communications, marketing, education, and other industries. However, it is the IT field that has been forced to create a vocabulary around some of the most profound risks.
According to an ISMG report conducted during the third quarter of 2023 and commissioned by Google, Microsoft, Clearwater, Exabeam, and OneTrust, AI has moved itself into the pole position and using AI tools continues to raises important questions for CTOs and IT teams.
In the study, respondents listed sensitive data leaks as a top concern for 80% of business leaders and 82% of cybersecurity professionals. Cited by 71% of business leaders, inaccurate data, especially hallucinations are another top concern.
Taken from the article Here are 5 gen AI security terms busy business leaders should know on the Google Cloud blog, these are good terms to learn so you can better understand the AI universe and the risks of using these tool for any work product.
Prompt Manipulation
Can be exploited when the attacker uses prompt design, prompt engineering, or prompt injection to force an unintended response from a model including revealing sensitive data.
Data Leakage
AI models reveals sensitive information that was never intended to be including in an output response. This issue can be two-fold with both inaccurate responses or unauthorized access to sensitive data.
Model Theft
Most custom AI models do including sensitive intellectual property. Protecting the code and related assets is another important area to fend off cyber attacks.
Data Poisoning
Without properly secured training data, hackers can manipulates the data source for the model. Aside from corrupt data, it can maliciously influence prompt output.
Hallucinations
Just like the word indicates, an AI model can create responses that are not only factually incorrect, but simply false and completely a nonsense fabrication. There are a lot of reasons for this to happen, but ongoing testing and review diligence are always needed for these models.
Since we live in the reality where AI exists, it is good to understand the use and shortcomings of these tools.