IBM just got out of the business of facial recognition software. Microsoft and Amazon have limited what they are doing with facial recognition for law enforcement agencies. Facebook paid a $550 million settlement related to its facial recognition technology. What is going on?
These companies are reacting wisely to the potential for bias and the potential abuse of this technology in law enforcement. Let’s examine three main issues related to the use of facial recognition technology in law enforcement: bias, lack of federal regulations, and too many regulations at the local level.
Bias
Is this software biased? Almost certainly, but not in the way we think of bias in a politically charged environment. The bias in facial recognition is related to the accuracy of "recognition" for minorities of all types.
AI of this type is trained on large datasets. In many cases, the data used to train facial recognition software consists of overwhelmingly white faces, a reflection of the existing libraries of photographs in the US today. The unfortunate result is that the faces of minorities (of all kinds) are harder to recognize. AI needs extremely large, high-quality datasets; if there aren't enough pictures of Hispanic, African American, or Asian faces, the software will have a hard time correctly identifying them.
The challenge is that a person caught on camera has a higher likelihood of being incorrectly identified if they belong to a group underrepresented in the training data. If law enforcement agencies depend too heavily on facial recognition to identify people to track, the likelihood of someone's civil liberties being abused increases.
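The disparity described above can be made concrete with a simple audit: measure how often the system misidentifies people, broken down by demographic group. Here is a minimal sketch in Python; the groups and outcomes are invented for illustration, not real benchmark results.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, correctly_identified).
# These values are invented for illustration only.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def error_rate_by_group(records):
    """Return the misidentification rate for each demographic group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

for group, rate in sorted(error_rate_by_group(results).items()):
    print(f"{group}: {rate:.0%} misidentified")
```

If the per-group error rates diverge sharply, that is the kind of evidence a human reviewer in the decision-making loop would need before acting on a match.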
The good news is that these biases can be addressed in most cases. It takes time and it takes humans in the decision-making loop.
Lack of Federal Regulations
As of this writing, the federal government hasn't passed any regulations related to facial recognition. This is the most likely reason Microsoft and Amazon have paused offering their facial recognition software to law enforcement.
The providers of this software want federal regulations so they have clear rules to abide by and are not left liable for the results of their software. When something inevitably goes wrong, a lawsuit will target the deepest pockets, and that is usually the technology company.
If the providers of this software can build their products, training, and processes around clear federal rules, then they can demonstrate they are doing what Congress asked of them. This seems like a very reasonable request and can be a win for society, for the software companies, and for law enforcement.
Too Many Regulations
Part of the challenge these software providers face is the mishmash of regulations that exists at the local and state levels. Because there is no overarching federal regulation, states and municipalities have had to create their own rules. The software companies, states, and local governments are all looking for leadership at the federal level.
In Facebook's case, the settlement was related to violating the Illinois Biometric Information Privacy Act, which prevents companies from collecting and storing biometric data without user consent. Illinois may have the most comprehensive biometric privacy law, and unless a facial recognition software provider is going to avoid implementations in that state entirely, it will need to adhere to that law.
Other states, including California, New Hampshire, and Oregon, have their own differing laws, and municipalities such as Somerville, MA; Seattle, WA; and Detroit, MI have enacted slightly different regulations governing the use of this technology. The technology companies providing this software are asking regulators for help so that law enforcement can use their software in ways that are good for society and that protect minorities equally.
Hopefully, Congress will take on this request. Facial recognition technology can be used for good, even by law enforcement. Once the regulations are clear, the potential for society to benefit is significant.