Email your representatives about Facial Recognition Technology



In light of the recent riots in Dublin, Helen McEntee is calling for an expansion in the use of facial recognition technology. As I have discussed on this blog, there are numerous ethical, privacy, and civil liberties issues with facial recognition technology. While McEntee has said “There will have to be safeguards – codes of practice – in place. People’s individual privacy, GDPR issues, all of this will have to be addressed and will have to be brought forward with the legislation”, I do not believe our government has a good track record in respecting these issues, and rushing to expand the scope of this legislation will almost certainly mean that safeguards, codes of practice, and other protections are not addressed in time.

I believe it is important to contact my government representatives to let them know that I do not support the expansion of this legislation, and that these concerns need to be addressed. I encourage you to do the same, and for ease, I have included below the email I have sent to my own TDs. Feel free to use it as is, or modify it to suit. You can find TD contact details, and identify those in your constituency, on the Oireachtas website.

Dear [name],

I am writing to express my concern over plans to expand the scope of facial recognition technology legislation. While I understand that there are calls for action following the riots in Dublin last week, I believe it is crucially important that the serious ethical, privacy, and civil liberties issues with using this technology are understood and addressed by all of our representatives before moving forward with the adoption of this technology, and certainly before rushing to expand the scope.

The Irish Council for Civil Liberties has frequently discussed the issues with facial recognition technology and has been vocal in its desire for a ban on the technology, with good reason. There are known key issues with the technology that have not been addressed, including:

  1. Bias, Discrimination, and Accuracy – Countless studies have shown that most, if not all, facial recognition algorithms are biased, with markedly lower accuracy when recognising faces other than those of white men. This would be merely annoying if it meant being unable to face-unlock your phone, but the application of this technology in law enforcement has already led to wrongful arrests in the US due to erroneous identification. A NIST study [1] of almost 200 facial recognition algorithms found that they were 10 to 100 times more likely to misidentify Asian and African American faces. A test of Amazon’s facial recognition technology [2] falsely matched 28 members of Congress with mugshots of people who had been arrested.

    FRT algorithms are trained on datasets, and unless a dataset is of high quality and represents the full diversity of individuals, the algorithm will learn the biases that already exist in society. Left unchecked, these algorithms will disproportionately misidentify individuals with facial differences, individuals from minority communities, and anyone who is not white and male.
  2. Transparency and Accountability – Citizens have a right to know if their faces are being used to train the FRT algorithms the Gardaí plan to deploy, and there is currently no established method for people to discover if their face has been used, or to opt out of their face being used. Biometric data privacy and security have been a continuing problem for this government, and we have seen from the outcomes of the investigations around the Public Services Card that the level of transparency and accountability necessary for the deployment of these systems is not in place, and not robust enough to be trusted. There is no room for scope creep with the gathering, storage, and use of biometric data, and the use of FRT with bodycam and CCTV footage offers citizens absolutely no way to opt out of this data gathering.

    Facial recognition algorithms themselves operate as a “black box” – offering no explanation as to why faces were matched or not, what criteria were used to match faces, etc. This means that Gardaí will also be unable to explain to someone why their face has been matched, or why they have been questioned in relation to an issue. The algorithms are completely opaque, and do not provide the kind of clear, understandable transparency that is absolutely necessary when applied to policing to ensure that they are not abused or misused. With a black box system, could the Gardaí even satisfy a request for a removal from the database? How could they ensure this has been completed?

    Investigations by the Data Protection Commission have shown numerous issues with, and violations of, data privacy laws by both An Garda Síochána, and government bodies. This has not established a basis for trust and transparency between the public and these organisations when it comes to respecting data privacy, and does not lead me to believe that these bodies will be held properly accountable for issues with data privacy in respect to FRT.
  3. Regulations and safeguards – If the technology must be used, then it is absolutely critical that clear regulations and safeguards are established before a single piece of footage is scanned. These cannot be delayed or applied after the fact, and they cannot be vague. There must be clear regulations about who can use it, how and where it can be used, and what options are available for people who feel it has been abused. There must be clear guidelines about what actions can be taken in the case of misidentification, or requests for removal from the database used by the algorithm.

    Anyone working in technology can speak to the principle of “least privilege”: assess the absolute minimum access necessary to perform a task, and grant only that. If the government is going to insist on the use of FRT, I urge you to apply the same principle. If its use is to be restricted to reviewing footage after the fact, make that explicit in the legislation. Those who may use it, and the exact circumstances in which it may be used, must be clearly and explicitly defined, as must the penalties for misuse of the technology by any individual.

Facial recognition technology has the potential to seriously impact all of our daily lives, with implications for civil liberty, mass surveillance, and misuse of biometric data to name just a few issues. While I recognise the importance of using modern technology to enhance public safety, it is imperative that we not sleepwalk into writing loose legislation that will lead to misidentification of individuals and abuse of the systems.

Regards,

Jennifer Keane

[1] NIST study on Facial Recognition – https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software

[2] Amazon’s FRT – https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28
