As artificial intelligence grows more ubiquitous, its potential and the challenges it presents are coming increasingly into focus. How we balance the risks and opportunities is shaping up as one of the defining questions of our era. In much the same way that cities have emerged as hubs of innovation in culture, politics, and commerce, so too are they defining the frontiers of AI governance.

Some examples of how cities have been taking the lead include the Cities Coalition for Digital Rights, the Montreal Declaration for Responsible AI, and the Open Dialogue on AI Ethics. Others can be found in San Francisco’s ban on facial-recognition technology, and in New York City’s push to regulate the sale of automated hiring systems and its creation of an algorithms management and policy officer. Urban institutes, universities and other educational centres have also been forging ahead with a variety of AI ethics initiatives.

These efforts point to an emerging paradigm that has come to be known as AI Localism. It is part of a broader phenomenon often called New Localism, in which cities take the lead in regulation and policymaking to develop context-specific approaches to a wide range of problems and challenges. We have also seen an increased uptake of city-centric approaches within international law frameworks.

In so doing, municipal authorities are filling gaps left by insufficient state, national or global governance frameworks for AI and other complex issues. Recent years, for instance, have seen the emergence of “broadband localism”, in which local governments address the digital divide, and “privacy localism”, which responds to challenges posed by the increased use of data for law enforcement or recruitment.

AI Localism encompasses a wide range of issues, stakeholders, and contexts. In addition to bans on AI-powered facial recognition, local governments and institutions are developing procurement rules for AI use by public entities, public registries of local governments’ AI systems, and public education programs on AI. But even as initiatives and case studies multiply, we still lack a systematic way to evaluate their effectiveness – or even the need for them in the first place. This limits policymakers’ ability to develop appropriate regulation and, more generally, stunts the growth of the field.

Building an AI Localism framework

Below are ten principles to help systematise our approach to AI Localism. Considered together, they add up to an incipient framework for implementing and assessing initiatives around the world:

  • Principles provide a North Star for governance: Establishing and articulating a clear set of guiding principles is an essential starting point. One example is the Emerging Technology Charter for London, launched by the mayoral office in 2021 to set out “practical and ethical guidelines” for research on emerging technology and smart-city technology pilots. A similar project exists in Nantes, France, which rolled out a data charter to underscore the local government’s commitment to data sovereignty, protection, transparency, and innovation. Such efforts help interested parties chart a course that effectively balances the potential and challenges posed by AI while affirming a commitment to openness and transparency with the public about data use.

  • Public engagement provides a social license: Establishing trust is crucial to fostering responsible use of technology as well as broader acceptance and uptake by the public. Forms of public engagement – crowdsourcing, awareness campaigns, mini-assemblies, and more – can help build trust and should be part of a deliberative process undertaken by policymakers. For example, the California Department of Fair Employment and Housing held its first virtual public hearing with residents and employee advocacy groups on the growing use of AI in hiring and human resources, and the potential for technological bias in procurement.

  • AI literacy enables meaningful engagement: The goal of AI literacy is to encourage familiarity with the technology itself as well as with the associated ethical, political, economic and cultural issues. For example, the Montreal AI Ethics Institute, a non-profit focused on advancing AI literacy, provides free, timely, and digestible information about AI and AI-related developments from around the world.

New York City has established an Algorithms Management and Policy Officer to oversee how data captured by security cameras and other devices is used and managed.
Cyprian Latewood/Wikipedia, CC BY
  • Tap into local expertise: Policymakers should tap into cities’ AI expertise by establishing or supporting research centres. Two examples are the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE), a pan-European project that brings a European focus to AI use in cities, and “How Busy Is Toon”, a website developed by Newcastle City Council and Newcastle University to provide real-time transit information about the city centre.

  • Innovate in how transparency is provided: To build trust and foster engagement, AI Localism should embrace time-tested transparency principles and practices. For example, Amsterdam and Helsinki disclose their AI use and explain how algorithms are employed for specific purposes. In addition, AI Localism can innovate in how transparency is provided, building awareness and systems to identify and overcome “AI blind spots” and other forms of unconscious bias.

  • Establish means for accountability and oversight: One of the signal features of AI Localism is a recognition of the need for accountability and oversight to ensure that principles of responsible governance are adhered to. Examples include New York City’s Algorithms Management and Policy Officer, Singapore’s Advisory Council on the Ethical Use of AI and Data, and Seattle’s Surveillance Advisory Working Group.

  • Signal boundaries through binding laws and policies: Principles are only as good as their implementation and enforcement. Binding laws help: New York City’s Biometrics Privacy Law, for example, requires businesses to post clear notices when they collect biometric data, limits how they can use such data, and prohibits selling or profiting from it. Such regulation sends a clear message to consumers that their data rights and protections are upheld, and it holds corporations accountable for respecting privacy.

  • Use procurement to shape responsible AI markets: As municipal and other governments have done in other areas of public life, cities should use procurement policies to encourage responsible AI initiatives. For instance, the Berkeley, California, City Council passed an ordinance requiring public departments to justify the use of new surveillance technologies and to show that the benefits of those tools outweigh the harms before procurement.

  • Establish data collaboratives to tackle asymmetries: Data collaboratives are an emerging form of intersectoral partnership in which privately held data is reused and deployed toward the public good. In addition to yielding new insights and innovations, such partnerships can also be powerful tools for breaking down the data asymmetries that both underlie and drive so many wider socio-economic inequalities. Encouraging data collaboratives, by identifying possible partnerships and matching supply and demand, is thus an important component of AI Localism. Initial efforts include the Amsterdam Data Exchange, which allows data to be securely shared to address local issues.

  • Make good governance strategic: Too many AI strategies do not include governance, and too many governance approaches are not strategic. It is thus imperative that cities have a clear vision of how data and AI will be used to improve local wellbeing. Charting an AI strategy, as the Barcelona City Council did in 2021, can create avenues to embed smart AI use across agencies and raise AI awareness among residents, making responsible data use and its considerations a common thread rather than a siloed exercise within local government.

AI Localism is an emergent area, and both its practice and its study remain in flux. The technology itself continues to change rapidly, presenting something of a moving target for governance and regulation. This state of flux underscores the need for the kind of framework outlined above. Rather than playing catch-up, responding reactively to successive waves of technological innovation, policymakers can respond more consistently, and responsibly, from a principled bedrock that takes into account the often competing needs of various stakeholders.

This article was originally published at theconversation.com