720-891-1663

AI: Productivity Gains vs. Security Risks

No sane person can deny the allure of Generative Pre-trained Transformer AIs. Smart people are also concerned about the security risks they create.

For example, if you use an AI to write code, could the AI include a back door? Or malware? How do you know without reviewing every line of code in detail?

In a recent 60 minutes interview, Google CEO Sundar Pichai said that there is an aspect of AI that they in the field (not just Google) “don’t fully understand. And you can’t tell why it said this”.

If the AI is used to write the code for a collision avoidance system, does that concern you?

Also, since most GPT vendors won’t tell us how they trained their GPTs for both legal reasons (likely they broke multiple laws doing it) and competitive reasons, how do you know if the model is good for what you want it to do for you?

And these systems have repeatedly been shown to lie. Ask your favorite AI to give you footnotes to what it wrote. It will. They just don’t exist. This is just an example. There is a guy in Europe who is suing OpenAI because they say he is dead. He is not. They won’t/can’t fix it and so this alive dead guy is suing them.

For those of you who are, say, on the elderly side, you may remember the 1983 film Wargames. If you don’t, watch it. Pretty right on point, 40 years later. It is about an AI that decides to test itself by creating a fake nuclear attack. When the real soldiers refuse launch a counterattack, it tries to lock them out and launch the counterattack itself. The system developer said, in the movie, the system was hallucinating. A term widely used in discussing AI systems today.

Another concern is about pouring sensitive data into an AI and the AI then integrating that data and using it to generate a response to another user. Amazon’s GPT, CodeWhisperer, has guardrails to help stop this.

One more thought to consider. As third party apps that you use every day integrate GPT into them, it may not be clear when you are feeding your data into an AI. It is going to become incumbent on you, as part of your vendor due diligence, to understand what they are doing with your data and what you are agreeing to let them do with your data as part of the terms of service. Then you have to watch what happens if and when they change the terms of service. This week SalesForce announced they are adding Einstein GPT in late summer. If you are a SalesForce user and that doesn’t work for you, likely your only option would be to cancel your service – assuming you don’t have a term contract – or continue paying for it, but not using it. Microsoft has added its CoPilot AI to Dynamics and Pegasystems has added Generative AI into its Pega Infinity platform.

In some cases, the use of the AI is optional, but how do you disable it if you don’t want your users to use it.

Here is another thought. What if the data you feed it for some reason is covered by a non-disclosure agreement. Can the other party now sue you for breaching the NDA?

It is just another thing to consider.

Credit: Computerworld, TechTarget and TechTarget

Facebooktwitterredditlinkedinmailby feather

One Reply to “AI: Productivity Gains vs. Security Risks”

  1. Ray Hutchins says:

    Excellent insights!!

Leave a Reply

Your email address will not be published. Required fields are marked *