In December 2024, New York Governor Kathy Hochul signed a groundbreaking law requiring state agencies to audit and regulate their use of artificial intelligence (AI) in public work. This move came in response to growing concerns over AI’s potential for misuse, privacy issues, and its impact on employment. For example, criminals have used deepfake technology to steal millions of dollars, highlighting the urgent need for regulation.
The new law mandates that New York state agencies conduct detailed assessments of any software incorporating AI technology, including underlying machine learning models. These reviews must be submitted to the governor and legislative leaders and made public online to ensure transparency and accountability in government AI deployments.
The legislation addresses key ethical concerns, particularly in critical decision-making processes. For decisions on unemployment benefits and child-care assistance, the law requires human review of AI-generated outcomes. This ensures that important decisions affecting citizens’ lives are not left solely to AI algorithms.
The law also tackles the issue of workforce displacement caused by AI. It prevents state agencies from arbitrarily reducing employee work hours or job duties simply to implement AI automation. This balanced approach aims to harness AI’s benefits while protecting workers.
Security experts have long called for stricter regulation of AI across industries. A major concern is the potential for AI solutions to reveal personally identifiable information (PII). For instance, in healthcare, machine learning models trained on patient data can be vulnerable to attacks such as membership inference and model inversion, which may expose sensitive records.
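To make the risk concrete, the following is a minimal sketch, using entirely synthetic data and a hypothetical confidence threshold, of a confidence-based membership inference test: an overfit model tends to be more confident on records it was trained on, and an attacker can exploit that gap to guess whether a specific record (for example, a particular patient) was in the training set.

```python
# Sketch of a confidence-based membership inference test on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a sensitive dataset (e.g., patient records).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# A deliberately overfit model (deep, unpruned trees) to exaggerate leakage.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
model.fit(X_train, y_train)

def max_confidence(samples):
    """Top predicted-class probability for each sample."""
    return model.predict_proba(samples).max(axis=1)

train_conf = max_confidence(X_train)
test_conf = max_confidence(X_test)

# A simple attacker guesses "member" whenever confidence exceeds a threshold.
threshold = 0.9  # hypothetical cutoff chosen by the attacker
flagged_train = (train_conf > threshold).mean()
flagged_test = (test_conf > threshold).mean()

print(f"Flagged as members (actual training records): {flagged_train:.2%}")
print(f"Flagged as members (unseen records):          {flagged_test:.2%}")
# A large gap between the two rates indicates the model leaks information
# about which records were in its training data.
```

The point of the sketch is that the leakage comes from the model's behavior alone; the attacker never needs direct access to the training data.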
New York is not alone in regulating AI. In May 2024, Colorado enacted the Colorado AI Act, which imposes strict rules on developers and deployers of high-risk AI systems. These are defined as systems capable of making consequential decisions that significantly affect consumers in areas such as education, employment, finance, healthcare, and housing.
Government agencies, like private corporations, have increasingly used AI to speed up work, cut costs, and improve efficiency. AI is used for tasks such as processing documents, handling permit applications, and managing license renewals. It also enhances decision-making through advanced risk assessment and resource allocation models.
However, AI systems can also be targeted by malicious actors. Threat actors can use AI to create convincing phishing emails or develop malware that evades traditional detection methods. They can also manipulate AI models themselves, for example by poisoning training data or crafting adversarial inputs that cause misclassification, undermining public trust in the technology.
To protect AI systems, organizations must implement comprehensive security measures. These include model validation, regular security audits, input sanitization, adversarial training, strict access control, and continuous monitoring. Balancing the benefits and risks of AI is crucial, especially in the public sector. The goal is to maintain the efficiency of AI-powered services while ensuring their security and public trust.
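As an illustration of one of those measures, input sanitization, here is a minimal sketch with hypothetical field names and bounds for a benefits-style request: untrusted input is validated and coerced before it ever reaches a model, so malformed or deliberately crafted values cannot push the model outside the range of data it was evaluated on.

```python
# Sketch of input sanitization for a model-serving endpoint
# (hypothetical schema and limits).
from dataclasses import dataclass

@dataclass
class BenefitsRequest:
    applicant_age: int
    reported_income: float
    weeks_unemployed: int

class ValidationError(ValueError):
    pass

def sanitize(raw: dict) -> BenefitsRequest:
    """Validate and coerce an untrusted request into a well-formed record."""
    try:
        age = int(raw["applicant_age"])
        income = float(raw["reported_income"])
        weeks = int(raw["weeks_unemployed"])
    except (KeyError, TypeError, ValueError) as exc:
        raise ValidationError(f"malformed request: {exc}") from exc

    # Range checks keep inputs inside the domain the model was validated on.
    if not 16 <= age <= 120:
        raise ValidationError("applicant_age out of range")
    if not 0.0 <= income <= 10_000_000.0:
        raise ValidationError("reported_income out of range")
    if not 0 <= weeks <= 520:
        raise ValidationError("weeks_unemployed out of range")

    return BenefitsRequest(age, income, weeks)

# Example: a crafted request with an absurd income value is rejected
# before the model ever scores it.
try:
    sanitize({"applicant_age": 34, "reported_income": 1e12, "weeks_unemployed": 8})
except ValidationError as err:
    print("Rejected:", err)
```

Checks like these are not a substitute for the other safeguards listed above, but they are a cheap first layer that complements access control and monitoring.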