New York’s Department of Financial Services released an industry letter on “cybersecurity risks arising from artificial intelligence and strategies to combat related risks” on October 16, 2024.
In general, it does not impose any new security requirements; rather, it requires that you factor AI-related risk into your compliance with existing requirements.
Here is a link to the industry guidance.
The guidance starts with background on AI and AI-related threats, such as AI-enabled social engineering and AI-enhanced cybersecurity attacks. Hopefully, all of that is already on your radar.
Then the letter explains what DFS expects you to do: assess risks and implement cybersecurity controls that take AI-related risk into account. This means that the next risk assessment we conduct for you will, for the first time, include a number of AI-related questions.
It also means that you may need to update some of your current practices to remain compliant. An obvious example: if you have not already done so, you will need to update your policies and procedures to address AI-related risk.
And it means that your vendor risk assessment process needs to include AI-related questions.
Likewise, training may need to be updated so that employees understand AI-related risks, which AI tools they are allowed to use, and how they may use them. That includes updated training for executives (the “senior governing body”), including board members.
The law firm Alston & Bird published its interpretation of the guidance, including insight into mitigation steps that you should consider. You can find that analysis here.
We have been proactive in this area, having added an AI Usage policy to our policy package two years ago. If there is anything we can do to help with this, please let us know.