Thomas Ahearn On AI and Background Checks
The use of technology such as Artificial Intelligence (AI) algorithms will continue to improve background checks for employment purposes in 2019, but the “human touch” will still be needed due to discrimination concerns. This trend has been chosen by global background check provider Employment Screening Resources® (ESR) as seventh on the list of “ESR Top Ten Background Check Trends” for 2019.
“Artificial intelligence” or “AI” is intelligence demonstrated by machines rather than the natural intelligence displayed by humans and animals. AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chances of achieving its goals. The term “AI” is used when machines mimic functions of the human mind such as learning and problem solving.
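The “intelligent agent” definition above — perceive the environment, act to move toward a goal — can be sketched with a deliberately simple, hypothetical example (a thermostat-style agent; none of these names come from any real system):

```python
# Minimal "intelligent agent" sketch: the agent perceives its environment
# (a temperature reading) and takes actions (heat/cool) that move it
# toward its goal (a temperature between 68 and 72).

def perceive(environment):
    return environment["temperature"]

def act(environment, action):
    if action == "heat":
        environment["temperature"] += 1
    elif action == "cool":
        environment["temperature"] -= 1

def agent_step(environment, goal_low=68, goal_high=72):
    temp = perceive(environment)
    if temp < goal_low:
        action = "heat"
    elif temp > goal_high:
        action = "cool"
    else:
        action = "idle"
    act(environment, action)
    return action

env = {"temperature": 60}
while agent_step(env) != "idle":
    pass
print(env["temperature"])  # → 68, inside the goal range
```

Real AI systems replace the hand-written if/else rules with behavior learned from data — which is exactly where the bias problems discussed below come from.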
The use of AI in the background check process is relatively new. In December 2018, Forbes reported the use of AI for screening job applicants and employees could raise “thorny ethical issues” about how much private life matters in the workplace even though the application of AI algorithms in the background check process could “help reduce employment bias by better classifying information deemed relevant.”
However, Reuters reported in October 2018 that online retail giant Amazon had “scrapped” a secret AI recruiting tool for showing bias against women when hiring. Amazon built computer programs to review resumes “with the aim of mechanizing the search for top talent” but the system “was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.”
Amazon’s AI recruiting tool discriminated against women because its computer models “were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period,” and the vast majority of those resumes came from men, reflecting the overall male dominance in the tech industry, Reuters reported.
A report from CNBC in December 2018 explained how “biased AI” can be created by faulty algorithms or insufficient data: “AI programs are made up of algorithms, or a set of rules that help them identify patterns so they can make decisions with little intervention from humans. But algorithms need to be fed data in order to learn those rules — and, sometimes, human prejudices can seep into the platforms.”
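How prejudice can “seep into” an AI platform through its training data can be illustrated with a toy sketch. The data and the frequency-based “model” below are entirely hypothetical — the point is only that a model trained on biased hiring history reproduces the bias, even though group membership says nothing about qualification:

```python
# Hypothetical biased hiring history: (group, qualified, hired).
# Qualified "B" candidates were often rejected anyway, while even
# unqualified "A" candidates were hired.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def hire_rate(records, group):
    outcomes = [hired for g, _, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# Naive "model": recommend hiring if the group's historical hire
# rate exceeds 0.5 -- the model never looks at qualification at all,
# it has simply learned the prejudice baked into the data.
def model_predicts_hire(group):
    return hire_rate(history, group) > 0.5

print(model_predicts_hire("A"))  # True  - bias in favor of group A
print(model_predicts_hire("B"))  # False - bias against group B
```

Real machine-learning models are far more sophisticated, but the failure mode is the same: the rules are learned from the data, so skewed data yields skewed rules with “little intervention from humans.”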
The use of AI for background checks can be controversial. In November 2018, The Verge reported that the popular social networks Facebook, Instagram, and Twitter began limiting the data accessible to a startup company that used “advanced artificial intelligence” to screen potential babysitters, after a report by The Washington Post detailing the company’s methods attracted widespread criticism.
The government is also watching AI. In September 2018, seven members of Congress sent letters to the U.S. Equal Employment Opportunity Commission (EEOC), Federal Bureau of Investigation (FBI), and Federal Trade Commission (FTC) voicing concerns over the use of AI and asking these agencies if they vetted the potential biases of AI algorithms being used for, among other things, hiring employees.
The letter to the EEOC asked if AI could violate Title VII of the Civil Rights Act of 1964, the Equal Pay Act of 1963, or the Americans with Disabilities Act of 1990. The letter to the FBI voiced concerns over facial recognition technology. The letter to the FTC was concerned that AI could “perpetuate gender, racial, age, and other biases” and its use “may violate civil rights laws and could be unfair and deceptive.”
The EEOC – the U.S. government agency responsible for enforcing federal laws prohibiting employment discrimination – enforces Title VII, which prohibits employment discrimination based on race, color, religion, sex, or national origin. Job applicants can sue employers for employment discrimination if they believe they were not hired due to a trait covered by Title VII, and the use of AI in screening could increase such lawsuits.
In November 2018, the FTC – the U.S. government agency that protects consumers and promotes competition – held the seventh session of its Hearings on Competition and Consumer Protection in the 21st Century to examine competition and consumer protection issues associated with the current and future use of algorithms, AI, and predictive analytics in business decisions and conduct.
The hearing examined uses of algorithms, AI, and predictive analytics, the consumer protection issues associated with their use, and how competitive dynamics and industry conduct are affected by these technologies. The hearing series as a whole examines whether new technologies such as AI might require adjustments to consumer protection law.
In November 2018, an AI policy initiative – The Ethical Machine: Big Ideas for Designing Fairer AI and Algorithms – was launched by the Shorenstein Center at the Harvard Kennedy School to focus on expanding the legal and academic scholarship around AI ethics and regulation, and to help Congress and other policymakers be better equipped to effectively regulate the growing impact of AI on society.
In addition to AI, blockchain technology is cited as a new technology that may impact the future of background screening for employment purposes due to its ability to create a unique record that job applicants can own themselves. Blockchain technology would enable applicants to use their record in the job marketplace, which is especially advantageous in the so-called “gig economy.”
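The “unique record” property behind blockchain-style credentials can be sketched with a minimal hash chain: each entry’s hash covers the previous entry’s hash, so altering any past entry invalidates everything after it. This is a hypothetical illustration of the tamper-evidence idea, not any real credentialing system:

```python
import hashlib
import json

# Each entry's hash covers its payload AND the previous entry's hash,
# chaining the records together.
def entry_hash(prev_hash, payload):
    data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], "0" * 64  # all-zero "genesis" hash
    for payload in payloads:
        h = entry_hash(prev, payload)
        chain.append({"prev": prev, "payload": payload, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev or entry_hash(prev, entry["payload"]) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

record = build_chain(["degree verified", "employment 2015-2018 verified"])
print(verify_chain(record))       # True

record[0]["payload"] = "degree NOT verified"  # tampering with history
print(verify_chain(record))       # False - the chain exposes the change
```

An applicant holding such a record could share it with any prospective employer, who can independently verify that no entry has been altered — the property that makes a portable, applicant-owned record plausible.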
While there is no doubt that technology and automation have increased productivity, streamlined processes, and reduced turnaround time (TAT) in the screening industry, the use of AI in the background checks of job applicants and employees will still need a guiding “human touch” until sufficiently non-biased AI algorithms can be created to ensure that employers will not make discriminatory hiring decisions.