Your Face Could Cost You The Job: The Dangerous Rise Of Facial Recognition At Work
- Janice Gassam Asare

- Dec 8, 2025
- 4 min read

The Covid-19 pandemic ushered in a new era of remote work. Many employers, desperate to track and monitor employees working away from the office, implemented technology tools to surveil their workers. According to one 2025 report, more than half of Fortune 100 employees were required to return to the office full time. Even with back-to-office mandates in place, remnants of surveillance culture remain.
Many companies are using facial recognition software to manage employees. A recent survey by ExpressVPN indicated that 74% of U.S. employers use online monitoring and surveillance tools, with 67% using biometric tracking such as facial recognition and fingerprint scans. Employers use facial recognition software in a number of ways: to track employee attendance, to identify employees, to interview and screen job candidates, to reduce the number of employee touchpoints, and to track workers' movements (a common practice for delivery and gig workers). What are the vulnerabilities and limitations of using facial analysis and recognition software in the workplace, and how does it reinforce biases?
There have been several cases in which facial analysis and recognition software has caused harm, reinforcing biases in the workplace. In 2025, a complaint was filed with the Colorado Civil Rights Division and the Equal Employment Opportunity Commission (EEOC) against the software company Intuit and the human resources assessment software vendor HireVue. The complaint alleges that the AI used by HireVue resulted in an Indigenous and Deaf woman being denied a promotion based on her race and disability. In a separate case, a makeup artist at a leading brand claimed she was fired in 2020 because of a video interview conducted through HireVue, in which the facial analysis software scored her poorly on her body language.
In an email statement, HireVue's chief data scientist, Dr. Lindsay Zuloaga, shared: "HireVue has never evaluated a candidate's personal appearance, body language, eye contact, what they are wearing, or the background where they are taking a video interview. These factors don't have any connection to a person's success on the job, which makes them irrelevant to determining if someone has the skills and competencies for a job.
"While our technology did have a visual analysis component many years ago, it was proactively removed by the company from all of its new assessments in 2020. HireVue's internal research demonstrated that advances in natural language processing had significantly increased the predictive power of language. With these advances, visual analysis no longer significantly added value to assessments.
"Regarding the complaint filed with the Colorado Civil Rights Division and the Equal Employment Opportunity Commission (EEOC), this complaint is completely without merit. Our records show that Intuit did not use an AI-backed assessment in this hiring process. And as a result, the candidate could not have received auto-generated feedback from HireVue.
"HireVue considers the ethical development of AI and candidate transparency to be core values of the business. HireVue's assessments are designed using a validated and peer-reviewed bias-mitigation technique that exceeds the very high standards established in the U.S. in 1978 (the EEOC's Uniform Guidelines). Central to these standards is ensuring that no statistically significant adverse impact (discrimination) occurs between demographic groups."
In 2024, an Uber Eats driver won a case in which he alleged that the company fired him because of racist facial recognition software. The former worker claimed he was terminated after the company's verification checks, which use facial recognition software, failed to recognize his face. Scholar and writer Dr. Joy Buolamwini has focused much of her research on the flaws of facial recognition technology, discussing in her book Unmasking AI and the documentary Coded Bias how the technology is less accurate at identifying darker skin tones.
There is a wealth of evidence indicating that facial analysis and recognition technology disproportionately impacts marginalized communities. The technology frequently misidentifies Black people, leading to wrongful arrests. One 2025 study found that facial recognition tools had higher error rates for adults with Down syndrome. Researchers also note that facial recognition tools are less accurate for transgender individuals and struggle to identify non-binary people.
Integrating facial analysis and recognition tools into the workplace can have deleterious effects on employees. A 2023 assessment of feedback shared with the White House Office of Science and Technology Policy indicated that digital surveillance in the workplace creates a sense of distrust among employees, making them feel constantly monitored and leading to declines in productivity and morale. Workers also noted that digital surveillance could deter unionizing efforts in the workplace. There were also employee concerns about data privacy and how the data collected would be used.
Employers should think twice about implementing facial analysis and recognition software in the workplace. Not only is this type of technology prone to bias, but it can also erode employee trust and morale. Organizations that already have this type of technology in place should request more information from the vendor about audits and the accountability measures in place to ensure accuracy and mitigate bias. Employees should know their rights, and there must be transparency around how data is collected, stored, and used. We must deeply consider the future we are creating when our face holds the key to whether we are praised or punished.
This article was originally published August 6, 2025 in Forbes.