How to Be Smarter About Biometrics

Facial recognition, one of the most popular methods of biometric enrollment and customized marketing, will bring us ultra-surveillance, targeted assassinations, and Black Mirror-style oversight. At least, that is what critics of the technology would have you believe. Yet we don’t see such dystopian outcomes in commercial authentication and identity verification today. So why are these critics so concerned, and what can security professionals do to alleviate their concerns?

By 2024, the market for facial recognition applications and related biometric functions is expected to grow at a 20% compounded rate to almost $15.4 billion.  Already, almost 245 million video surveillance systems have been deployed worldwide, and that number is growing. Video facial recognition technology isn’t going away.

Yet as the technology keeps expanding to new segments and use cases, ethical concerns have not settled down; in fact, they have proliferated. Early concerns focused on the surveillance itself: should human beings be watched 24/7? As the use of CCTV data in criminal investigations proved its value, though, these concerns have shifted toward the data a video stream provides and the inferences that can be drawn from that data.

Machine learning and new predictive techniques, when applied to a video stream, can produce findings well beyond facial identity. They can infer emotional state, religious affiliation, class, race, gender, and health status. In addition, machine learning methods can estimate where someone is going (travel trajectory), where they came from (national origin), how much they earn (through clothing analysis), which diseases they suffer from (through analysis of the vocal tract), and much more.
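
To make that pattern concrete, here is a minimal Python sketch of frame-by-frame video analysis. The face detector is OpenCV’s bundled Haar cascade; the infer_attributes function is a hypothetical placeholder standing in for whatever trained classifiers (emotion, age, clothing, and so on) a real deployment would call.

```python
import cv2  # pip install opencv-python

# OpenCV ships a pre-trained Haar cascade for frontal faces.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def infer_attributes(face_crop):
    """Hypothetical placeholder: a real system would call one or more
    trained classifiers here (emotion, age, clothing, and so on)."""
    return {"emotion": "unknown", "estimated_age": None}

cap = cv2.VideoCapture(0)  # webcam; replace 0 with a file path for recorded video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        face = frame[y:y + h, x:x + w]
        print(infer_attributes(face))  # inferences drawn frame by frame, in near real time
cap.release()
```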

Yet like all technology, these techniques are imperfect. They don’t always recognize faces accurately: false positives and false negatives happen. Some algorithms get confused if you wear a hat or sunglasses, grow a beard, or cut your hair differently.
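
Under the hood, most recognition systems reduce a face to a numeric embedding and accept a match when its similarity to the enrolled embedding crosses a threshold. The toy Python sketch below (random vectors standing in for real embeddings; the 0.6 threshold is arbitrary) illustrates why both error types are unavoidable: loosen the threshold and impostors slip through, tighten it and a new haircut locks out the legitimate user.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(probe, enrolled, threshold=0.6):
    """Accept the probe if its embedding is close enough to the enrolled one.
    Raising the threshold cuts false accepts but causes more false rejects
    (hats, sunglasses, beards, new haircuts); lowering it does the reverse."""
    return cosine_similarity(probe, enrolled) >= threshold

# Toy 128-dimensional vectors standing in for a real model's face embeddings.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
same_face_changed = enrolled + rng.normal(scale=1.0, size=128)  # same person, changed appearance
impostor = rng.normal(size=128)                                 # unrelated person

for name, probe in [("same person, changed", same_face_changed), ("impostor", impostor)]:
    print(name, round(cosine_similarity(probe, enrolled), 2),
          "accepted" if is_same_person(probe, enrolled) else "rejected")
```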

Even worse, the training data used to develop many early facial recognition algorithms consisted mostly of Caucasian faces, so people of African and Asian descent were not recognized as accurately, and the resulting conclusions were biased.

Biometrics themselves are not foolproof. Some facial recognition systems can be “hacked” with dolls, masks, and false faces. Recently, Philip Bontrager, a researcher at NYU, revealed that he had created a “DeepMasterPrint”: a synthetic fingerprint that combines the characteristics of many fingerprints into one “master print” capable of logging into devices secured with only a single-fingerprint authentication routine.

So, the critics of biometrically based recognition and authentication have a right to be concerned about the weaknesses of an early yet broadly deployed technology. A single finger on a pad or a single face seen by a camera should be insufficient to grant access. Biometrics are hackable, and over time more exploits will surface that take advantage of known and unknown weaknesses.

Two recent developments are changing the game for everyone who relies on biometrics, and they magnify the importance of these concerns. This time, artificial intelligence researchers, activists, lawmakers, and many of the largest technology companies are expressing concern as well.

These two developments, happening simultaneously, are:

  • Machine Learning in Real-Time: The advent of machine learning techniques that can make inferences from video data very quickly and deliver them in near real time, with conclusions that look convincing (especially to untrained observers). The technology is not only fast; its output looks authoritative, too.
  • Autonomous System Integration: The merging of these video surveillance conclusions with autonomous systems, so a conclusion from a facial recognition system can lead an autonomous system to take immediate action—no human interaction required.

How might this be used? Today’s autonomous systems can already take action. When you walk into a room, a home camera can recognize you and set the lights (or music) to your preferred settings. Alexa can order products for you, a car can drive itself, and a building can lock its own doors.
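
The pattern that worries critics looks roughly like the hypothetical Python sketch below: a recognition score alone drives a physical action, with a bare confidence threshold as the only safeguard. All of the names here (recognize_face, unlock_door, set_lighting_profile) are illustrative assumptions, not any particular product’s API.

```python
def recognize_face(frame):
    """Stand-in for a real face-recognition call; returns (person_id, confidence)."""
    return "resident_42", 0.93

def unlock_door():
    print("door unlocked")

def set_lighting_profile(person_id):
    print(f"lights set for {person_id}")

def on_camera_frame(frame):
    person_id, confidence = recognize_face(frame)
    if confidence > 0.9:        # a score threshold is the only safeguard
        unlock_door()           # physical action taken with no human review
        set_lighting_profile(person_id)

on_camera_frame(frame=None)  # toy invocation
```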

Activists and tech leaders are therefore concerned that we will give these systems power over human life and agency. What if Alexa calls the police on your son? What if a system takes lethal action based on a false recognition? What if a door-locking mechanism also incapacitates an intruder, or incapacitates a lawful resident because of a false facial recognition match?

These scenarios, to a limited extent, have already happened. Last March, a self-driving Uber car, which had “human recognition algorithms” built into its video system, failed to recognize a pedestrian and killed her. News reports already indicate that the Chinese government is using such techniques to track minority populations and assign risk factors to citizens, without their knowledge or consent.

Activists point to this use of surveillance and facial analysis technology as an example of how trust can degrade in society and how specific attitudes might be tracked by unscrupulous players—even in democratic societies with free press and freedom of movement.

Businesses also see their reputations, and their stock prices, suffer from unethical activity. More than one company has discovered that when it violates the trust of partners or customers, business collapses overnight.

However, some moves are afoot to provide protection against bad actors. This month, the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown University Law Center unveiled the Safe Face Pledge, which asks companies not to provide facial analysis AI for autonomous weapons or sell it to law enforcement unless explicit laws are passed to protect people. Last week, Microsoft said that facial recognition married to autonomous systems carries significant risks and proposed rules to combat the threat. Research group AI Now, which includes AI researchers from Google and other companies, issued a similar call.

The problem the Safe Face Pledge is trying to solve is that autonomous systems don’t truly have agency: if a system takes action, there is no one to hold accountable. An autonomous system doesn’t lose its job, get charged with a felony, or get a write-up placed in a personnel file. This is a problem of accountability: who is ultimately responsible?

IT professionals and security experts now find themselves in the uncomfortable position of pondering the philosophical implications of tech deployment and mediating between the needs of a business and the need to act ethically. Fortunately, there are some simple steps that can help you walk this tightrope.

Three distinct cautionary actions can protect your systems against charges of bias or overreach:

  • Use Multiple Biometrics: Don’t rely on a single low-fidelity biometric for high-security authentication. Enroll multiple fingerprints through a high-fidelity enrollment mechanism such as a certified FBI channeler, not a single smartphone scanner. Better still, use a multi-factor biometric solution that combines modalities: multiple fingerprints, face plus fingerprint, or voice plus face plus fingerprint.
  • Safe Face Pledge: It’s worth reviewing the Safe Face Pledge website (safefacepledge.org) to understand the implications of marrying facial recognition to autonomous systems (even a door lock can be autonomous) and to head off risks to your employer and the larger population. Make sure your business decision-makers understand the problems that arise when this technology proliferates without safeguards.
  • Put a Human Being in the Loop: Be very cautious about allowing an autonomous system to take action based solely on a single biometric identifier. In many regards this technology is still in its infancy and can’t be fully trusted. Always put a human being in the loop: a person needs to be involved and ultimately held accountable for decisions that affect your business (see the sketch after this list).
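
As a rough illustration of the first and third recommendations, here is a minimal Python sketch with entirely hypothetical score values, thresholds, and helper names: several biometric factors must each clear their own threshold, and even then the consequential action is routed through a human reviewer rather than taken autonomously.

```python
from dataclasses import dataclass

@dataclass
class BiometricScores:
    fingerprint: float  # 0.0 - 1.0 match scores from separate matchers
    face: float
    voice: float

def multimodal_match(scores: BiometricScores,
                     thresholds=(0.85, 0.80, 0.75)) -> bool:
    """All factors must clear their own threshold; spoofing one modality is not enough."""
    return (scores.fingerprint >= thresholds[0]
            and scores.face >= thresholds[1]
            and scores.voice >= thresholds[2])

def request_human_approval(person_id: str, action: str) -> bool:
    """Placeholder for a real review step (ticket, SOC alert, guard-station prompt)."""
    answer = input(f"Approve '{action}' for {person_id}? [y/N] ")
    return answer.strip().lower() == "y"

def grant_access(person_id: str, scores: BiometricScores) -> bool:
    if not multimodal_match(scores):
        return False
    # A person stays in the loop and remains accountable for the final decision.
    return request_human_approval(person_id, "unlock secure area")

print(grant_access("employee_7", BiometricScores(fingerprint=0.91, face=0.88, voice=0.82)))
```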

With these protections in place, you can use real-time facial recognition and autonomous technology as a clear differentiator to accelerate your business, while protecting it and deepening the trust of partners and customers.

About the Author

Ned Hayes is the General Manager for SureID and a Vice President at Sterling. He was educated at Stanford University Graduate School of Business and the Rainier Writing Workshop. He has also studied cyborg identity and robotic ethics at the Graduate Theological Union at UC Berkeley. Ned is a technologist, identity researcher and author. His most recent novel was the national bestseller The Eagle Tree, which was nominated for the Pacific Northwest Booksellers Award, the PEN/Faulkner, and the Washington State Book Award, and was named one of the top 5 books about the autistic experience. He co-founded the technology company TeleTrust and was the founding product lead for Paul Allen’s ARO team at Vulcan. He has also provided product direction for new technology innovation at Xerox PARC, Intel, Microsoft, and Adobe and has contributed to a variety of technology patents for these companies.
