New York City is taking another swing at something few governments have managed to get right: regulating artificial intelligence. The City Council recently passed a package of bills known as the GUARD Act, short for Guaranteeing Unbiased AI Regulation and Disclosure, which aims to bring more accountability to how city agencies use algorithmic tools. After several previous attempts fell short, the new plan represents the most serious effort yet to build real oversight into the city’s use of AI.
“Privacy advocates have long had concerns about use of AI and algorithmic tools in government.” — cityandstateny.com, “New York City will try (again) to regulate AI”
At the center of the GUARD Act is the creation of an Office of Algorithmic Accountability, a dedicated department that will audit and monitor AI systems used by city agencies. The office will assess tools before deployment, investigate complaints from the public, and publish a list of every system it reviews. It will also set citywide standards for fairness, transparency, and privacy and work to ensure agencies follow them. The city also plans to educate residents through an AI Education and Engagement Initiative, which will fund over 100 community listening sessions and train local leaders in AI literacy by 2026.
Supporters say this is exactly the kind of oversight governments need to keep up with rapidly advancing technology. They argue that algorithms are already shaping crucial decisions about housing, benefits, policing, and resource distribution, often without transparency or recourse when things go wrong. On the other hand, critics warn that such offices can become slow, underfunded, or politically sidelined. They worry that by the time any rules are enforced, the technology will have evolved beyond the regulators’ reach.
Still, some form of regulation is inevitable and necessary. AI is advancing faster than public policy, and while some fear overregulation could stifle innovation, doing nothing risks losing control entirely. The truth is, the cat may already be out of the bag, but that doesn’t mean we should stop trying to guide where it goes next. What do you think? Do you believe governments should create dedicated offices to assess how they're using AI?
Watch the full discussion on this episode of Nuance, where we discuss this legislation and the fake MAGA accounts posing as American supporters.
Additional:
Twitter's New Feature Is Exposing Fake MAGA Accounts Worldwide
New York City will try (again) to regulate AI
New York City Council Sets Up a New AI Oversight Office