On Oct. 16, 2024, the New York State Department of Financial Services issued new guidance for the entities it regulates, specifically addressing cybersecurity risks arising from artificial intelligence.
This guidance is especially relevant for insurance agents and brokers who operate under the DFS's regulation, as it highlights the risks and defensive strategies related to AI's impact on cybersecurity. It's important to note that the DFS did not create any new compliance requirements. Instead, the guidance explains how covered entities should use the framework set forth in New York's cybersecurity regulation to assess and address the cybersecurity risks arising from AI.
The role of AI in cybersecurity
AI is transforming the cybersecurity landscape in two significant ways. On one hand, AI offers advanced tools for detecting and responding to cyberthreats, automating processes that can quickly identify vulnerabilities and enhance security measures. On the other hand, it amplifies the capabilities of cybercriminals, enabling them to launch more sophisticated and large-scale attacks.
AI poses a particular challenge for the financial services industry, including the insurance industry. While AI-driven solutions can bolster security, the same technology can be weaponized, putting organizations at greater risk of exposure to cyberthreats.
Key AI-related cybersecurity risks
The DFS's guidance outlines four main areas in which AI introduces cybersecurity risks:
No. 1: AI-enabled social engineering.
AI is being used to create highly convincing phishing emails, deepfake videos and fake audio recordings to deceive employees and gain access to sensitive information. This kind of manipulation, enhanced by AI, increases the risk of security breaches through impersonation and unauthorized access to sensitive data.
No. 2: AI-enhanced cyberattacks.
Cybercriminals can use AI to identify system vulnerabilities quickly, to deploy malware and to develop more effective ransomware. This speed and precision make AI-enhanced attacks more dangerous and harder to defend against—particularly in industries like insurance, where nonpublic information often is stored.
No. 3: Data exposure and theft.
AI often requires vast amounts of data, including sensitive nonpublic personal information and biometric data. This makes entities using AI more attractive targets for cybercriminals, who seek to exploit the data-rich environments created by AI-dependent systems.
No. 4: Third-party vulnerabilities.
AI products rely heavily on external vendors and third-party service providers, increasing the risk of supply-chain attacks. If a third party's AI system is compromised, it could become an entry point for broader cybersecurity incidents affecting the primary entity and other connected organizations.
Mitigating AI-related cybersecurity risks
While these risks are significant, the DFS guidance also offers strategies to help insurance professionals manage them effectively within the existing framework set forth in the DFS's Cybersecurity Regulation (23 NYCRR Part 500). Below are some key measures to consider:
Risk assessments.
Regular cybersecurity risk assessments are mandatory under Part 500, and the DFS emphasizes that these should include specific consideration of AI-related risks. Agents and brokers should evaluate their own use of AI, as well as the AI technologies used by third-party vendors, to ensure potential vulnerabilities are addressed.
Vendor management.
Given the interconnected nature of AI and third-party services, it's crucial to conduct thorough due diligence on third-party service providers. Insurance professionals should ensure that vendors using AI are following best practices to secure sensitive information and are contractually required to report any cybersecurity events promptly.
Access controls.
To mitigate the threat of deepfakes and other AI-powered attacks, the DFS requires multifactor authentication for all authorized users accessing sensitive systems. Insurance agencies should prioritize MFA solutions that cannot easily be manipulated by AI, such as physical security keys or advanced biometric systems with liveness detection.
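To illustrate what such a policy might look like in practice, here is a minimal Python sketch of an access-control check that requires an AI-resistant second factor before granting access to sensitive systems. The factor labels and the policy itself are hypothetical assumptions made for this example; they are not drawn from the DFS guidance or from Part 500.

```python
# Hypothetical sketch: require an MFA factor resistant to AI-enabled
# phishing and deepfakes before granting access to sensitive systems.
# The factor names and the policy are illustrative assumptions.

from dataclasses import dataclass

# Factors treated here as resistant to AI-driven impersonation:
# hardware security keys and biometrics with liveness detection.
# SMS codes and voice callbacks are excluded because voice cloning
# and AI-crafted phishing can defeat them.
RESISTANT_FACTORS = {"security_key", "biometric_with_liveness"}

@dataclass
class LoginAttempt:
    user: str
    factors_presented: set     # e.g., {"password", "sms_code"}
    target_is_sensitive: bool  # does the system hold nonpublic information?

def mfa_policy_allows(attempt: LoginAttempt) -> bool:
    """Require at least two factors; sensitive systems also require
    one AI-resistant factor."""
    if len(attempt.factors_presented) < 2:
        return False
    if attempt.target_is_sensitive:
        return bool(attempt.factors_presented & RESISTANT_FACTORS)
    return True

# Password + SMS is rejected for a sensitive system; password +
# security key is accepted.
print(mfa_policy_allows(LoginAttempt("agent1", {"password", "sms_code"}, True)))      # False
print(mfa_policy_allows(LoginAttempt("agent1", {"password", "security_key"}, True)))  # True
```

The point of the sketch is the policy distinction rather than any particular product: factors an AI can imitate are treated differently from factors it cannot.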
Cybersecurity training.
Regular and comprehensive training is essential to combat AI-driven social engineering. The DFS recommends that all personnel—including senior executives—be trained to recognize and to respond to AI-enabled threats. Simulation exercises, such as phishing tests and deepfake recognition drills, can be particularly effective.
Monitoring and data management.
Continuous monitoring of systems is critical to identify potential threats early. In addition, proper data management practices, such as minimizing the collection and storage of unnecessary nonpublic personal information, can limit the damage in the event of a breach. Entities also should monitor for unusual AI-related behaviors, such as suspicious queries that could indicate attempts to extract nonpublic personal information.
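As one concrete illustration of such monitoring, the following Python sketch flags an account whose volume of record lookups in a short window far exceeds a normal baseline, the kind of suspicious query pattern the guidance describes. The five-minute window and 50-record threshold are assumptions invented for this example; real baselines should come from an entity's own traffic.

```python
# Hypothetical sketch: flag accounts whose query volume suggests an
# attempt to bulk-extract nonpublic personal information (NPI).
# The window and threshold below are illustrative assumptions.

from collections import defaultdict, deque

WINDOW_SECONDS = 300  # look-back window for each user
MAX_RECORDS = 50      # assumed normal ceiling within the window

class NPIQueryMonitor:
    def __init__(self):
        self._events = defaultdict(deque)  # user -> lookup timestamps

    def record_query(self, user: str, timestamp: float) -> bool:
        """Log one NPI record lookup; return True if the user's recent
        volume is suspicious and should trigger an alert."""
        q = self._events[user]
        q.append(timestamp)
        # Drop lookups that have fallen out of the window.
        while q and timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_RECORDS

# Example: simulate a burst of 60 lookups in one minute.
monitor = NPIQueryMonitor()
alerts = [monitor.record_query("agent42", t) for t in range(60)]
print(any(alerts))  # True: the burst exceeds the assumed baseline
```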
AI as a cybersecurity tool.
While the DFS guidance emphasizes the risks AI poses, it also highlights the benefits of AI when integrated into cybersecurity efforts. AI can help automate threat detection, analyze behavioral anomalies and predict potential security incidents. Agents and brokers should consider leveraging AI-based cybersecurity tools to enhance their defensive capabilities and protect client data.
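As a simple illustration of this defensive use, the sketch below applies an isolation forest, via the open-source scikit-learn library, to flag anomalous login sessions. The library, features and data are assumptions chosen only for illustration; the guidance does not endorse any particular tool.

```python
# Hypothetical sketch: score login sessions for anomalies with an
# isolation forest. The synthetic features (login hour, records
# accessed) and all parameters are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" history: business-hours logins touching a
# modest number of records per session.
normal = np.column_stack([
    rng.normal(13, 2, 500),  # login hour of day
    rng.normal(20, 5, 500),  # records accessed per session
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Two new sessions: one typical, one 3 a.m. bulk pull.
sessions = np.array([[14.0, 22.0], [3.0, 400.0]])
print(model.predict(sessions))  # 1 = normal, -1 = anomaly
```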
Prepare for the future
AI is continuously evolving, and the associated cybersecurity risks will evolve with it. The DFS's guidance underscores the need for vigilance in an increasingly complex digital landscape.
For insurance agents and brokers, understanding these risks and implementing the recommended controls is critical to safeguarding sensitive information and maintaining compliance with DFS regulations.
By staying informed and proactively addressing AI-related cybersecurity concerns, insurance professionals can help protect their businesses and clients from emerging threats.
Bradford J. Lachut, Esq.
Bradford J. Lachut, Esq., joined PIA as government affairs counsel for the Government & Industry Affairs Department in 2012 and then, after a four-month leave, he returned to the association in 2018 as director of government & industry affairs, responsible for all legal, government relations and insurance industry liaison programs for the five state associations. Prior to PIA, Brad worked as an attorney for Steven J. Baum PC, in Amherst, and as an associate attorney for the law office of James Morris in Buffalo. He also spent time serving as senior manager of government affairs at the Buffalo Niagara Partnership, a chamber of commerce serving the Buffalo, N.Y., region, his hometown. He received his juris doctor from Buffalo Law School and his Bachelor of Science degree in Government and Politics from Utica College, Utica, N.Y. Brad is an active Mason and Shriner.