The City of San Francisco has passed new rules which will significantly curb the abilities of public sector organisations to purchase and utilise facial recognition technologies.
Opinions on newly emerging surveillance technologies have varied drastically, with some pointing to the safety and efficiency benefits for intelligence and police forces, while others have bemoaned the crippling effect such tools could have on civil liberties and privacy.
The new rules in San Francisco do not ban surveillance technologies outright, but they significantly raise the bar for demonstrating that a deployment is justified.
“The success of San Francisco’s #FacialRecognition ban is owed to a vast grassroots coalition that has advocated for similar policies around the Bay Area for years,” said San Francisco Supervisor Aaron Peskin.
The legislation will come into effect in 30 days’ time. From that point, no city department or contracting officer will be able to purchase surveillance equipment unless the Board of Supervisors has appropriated funds for the acquisition. New processes will also be introduced, including a surveillance technology policy for the department which meets the demands of the Board, as well as a surveillance impact report.
The department would also have to produce an in-depth annual report which would detail:
- How the technology was used
- Details of each instance data was shared outside the department
- Crime statistics
The impact report will have to include a wide range of information, including forward plans on logistics, experiences from other government departments, justification for the expenditure, and the potential impact on privacy. The department may also have to consult public opinion, and it will have to create concrete policies on data retention, storage, reporting and analysis.
City officials are making it as difficult as possible to make use of such technologies and, considering the potential for abuse, quite rightly so. As mentioned before, this is not a ban on next-generation surveillance technologies, but an attempt to ensure any deployment is genuinely necessary.
The concerns centre on privacy and potential violations of civil liberties, many of which were outlined in the wide-sweeping privacy reforms set out by California Governor Jerry Brown last year. The rules are intended to spur an ‘informed public debate’ on the potential impacts on the rights guaranteed by the First, Fourth, and Fourteenth Amendments of the US Constitution.
Aside from the potential for abuse, city officials and privacy advocates appear concerned the technology could entrench prejudices based on race, ethnicity, religion, national origin, income level, sexual orientation, or political perspective. Many analytical technologies predict the most likely scenario, leaning on stereotypical assumptions and potentially reinforcing profiling, effectively removing the impartiality of judging each case on its individual merits.
While the intelligence and policing communities will most likely view such conditions as a bureaucratic mess, they should absolutely be viewed as necessary. We’ve already seen such technologies implemented without public debate or scrutiny, a drastic step considering the potential consequences.
Although the technology is not necessarily new (think of border control at airports), perhaps the rollout in China has swayed opinion. When an authoritarian state like China, whose political and societal values conflict with those of the US, implements such technologies, some will begin to question what the nefarious impacts of deployment could be.
In February, a database emerged demonstrating that China has used a full suite of AI tools to monitor its Uyghur population in the far west of the country. This revelation could have been a catalyst for the new rules.
That said, the technology is also far from perfect. Police forces across the UK have been trialling facial recognition and data analytics technologies with varied results. At least 53 UK local councils and 45 of the country’s police forces rely heavily on computer algorithms to assess the risk level of crimes against children, as well as to detect people cheating on benefits.
In May last year, the South Wales Police Force had to defend its decision to trial NEC facial recognition software during the 2017 Champions League Final after it was revealed that only 8% of the identifications proved to be accurate.
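An accuracy figure like that 8% is simply the share of the system’s alerts that turned out to be correct identifications, sometimes called precision. A minimal sketch of the calculation, using hypothetical counts rather than the trial’s actual numbers:

```python
# Precision of a facial recognition trial: the fraction of alerts
# that correctly identified a person. The counts below are
# hypothetical, for illustration only.

def match_precision(true_matches: int, total_alerts: int) -> float:
    """Share of system alerts that were correct identifications."""
    if total_alerts == 0:
        raise ValueError("no alerts recorded")
    return true_matches / total_alerts

# Hypothetical trial: 2,000 alerts, of which 160 were confirmed matches.
precision = match_precision(160, 2000)
print(f"{precision:.0%}")  # prints "8%"
```

Note that precision says nothing about the people the system failed to flag at all; a low figure like this means the overwhelming majority of alerts pointed at innocent passers-by.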
It might be viewed by some as bureaucracy for the sake of bureaucracy, but considering the potential for abuse and damage to privacy rights, such administrative barriers are critical. More cities should take the same approach as San Francisco.