
Artificial Intelligence and Its Impact on the Future of Employment Equity

In the time it will take to read this blog post, many of you could be passed over for a job.

At the Urban Institute’s recent Next50 Changemaker Forum, theoretical neuroscientist Dr. Vivienne Ming made a similar observation about the increasingly prominent role of artificial intelligence (AI) in informing employment decisions, even for “passive” candidates: jobseekers who might be unaware of a particular job opening or who aren’t actively looking to leave their current job.

AI is transforming decisionmaking in an astounding number of socially consequential domains, including recruitment and hiring processes. Employers are turning to screening algorithms, often complex and nonlinear, that assess, score, and rank applicants to help hiring managers decide who should move on to the next stage of hiring. A substantial number of job applicants are automatically or summarily rejected at this stage—in this sense, screening algorithms act as gatekeepers to economic opportunity.
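As a rough sketch of what that gatekeeping means in practice (the features, weights, and cutoff below are placeholders, not any vendor’s actual model), a screening algorithm maps applicant data to a score, ranks the pool, and passes only the top slice to a human reviewer; everyone below the cutoff is rejected without further review:

    # Hypothetical, simplified screening pipeline: score, rank, cut.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        name: str
        years_experience: float
        skills_match: float  # 0-1 match against the posting (placeholder feature)

    def score(a: Applicant) -> float:
        # Real screening models are far more complex and often nonlinear;
        # these weights are placeholders for illustration only.
        return 0.6 * a.skills_match + 0.4 * min(a.years_experience / 10, 1.0)

    def screen(pool: list[Applicant], keep: int) -> list[Applicant]:
        ranked = sorted(pool, key=score, reverse=True)
        return ranked[:keep]  # only these applicants reach a hiring manager

    pool = [Applicant("A", 8, 0.9), Applicant("B", 2, 0.7), Applicant("C", 5, 0.4)]
    print([a.name for a in screen(pool, keep=2)])  # ['A', 'B']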

At their best, AI-powered hiring models can help employers efficiently identify candidates based on specific criteria and mitigate the subjectivity that may arise with human decisionmaking.

But algorithms can also replicate and deepen existing inequities. Hiring algorithms trained on inaccurate, biased, or unrepresentative data can produce employment outcomes biased along lines of race, sex, or other characteristics protected by antidiscrimination law.

At our recent Artificial Intelligence and Employment Equity Knowledge Lab, we convened social scientists, technologists, employment attorneys, and policymakers to shed light on one of the animating questions of Urban’s Next50 agenda:

What would it take to establish a framework to promote equal employment opportunity in the design and deployment of hiring algorithms?

Three key insights emerged from our discussion.

1. Distinct issues of equity are implicated at each stage of the algorithmic hiring process.

Employers are using AI-powered models at various stages of the hiring process: in advertising, recruiting candidates, screening and evaluating applicants, and even determining salary requirements. AI-powered models could help recruiters make better-informed decisions but can exacerbate existing inequities if deployed without appropriate safeguards in place. Screening models, for example, may reflect prior interpersonal, institutional, and systemic social biases when they aim to replicate an employer’s prior hiring decisions.

Bias may also be introduced into a system when recruiters misinterpret or place undue weight on results and recommendations generated by AI-powered tools. To understand and address these risks, we must distinguish how AI is being used at each stage of the hiring process and for what specific goal.

So why are employers turning to hiring algorithms? Many are motivated primarily by the efficiency and cost savings that automated decisionmaking promises. Participants expressed concern that these employers may automate existing recruitment and hiring processes without a clear understanding of how bias from past hiring decisions may be replicated.

Other employers are incorporating AI-powered tools in their strategic efforts to increase diversity and mitigate bias. Catalyte uses AI and predictive analytics to identify people, regardless of background, who have the potential to succeed as software developers. Some employers are deploying tools such as TapRecruit, which leverages AI to optimize the language used in job descriptions to attract a more qualified and diverse applicant pool.

An effective framework to promote opportunity must remain sufficiently flexible to accommodate these differences among employers and to consider the full range of motivations.

2. Harnessing the potential of algorithmic hiring requires careful thought about the application of our legal frameworks.

Important questions remain over how existing legal and policy frameworks should be applied or adapted to the use of algorithms in hiring. The Uniform Guidelines on Employee Selection Procedures, adopted by the Equal Employment Opportunity Commission and other federal agencies in 1978, hinge on “validation”: the disputed selection device must be shown to be sufficiently related to or predictive of job performance. Participants noted that correlation by itself can be a misleading metric for assessing the validity of algorithms.

Algorithms, by design, often identify applicant characteristics that correlate with job performance without considering whether a causal link exists between them, and correlation does not imply causation. Although algorithms have the potential to uncover job-related characteristics with strong predictive power, they can just as easily identify correlations arising from statistical noise or, more troubling, from previously undetected bias in the training data.

Last year, Amazon scrapped a resume screening model that penalized resumes with the word “women’s”—as in “women’s chess club captain”—and downgraded graduates of two all-women’s colleges. The spurious correlation the screening model relied on was attributed to training data that was heavily skewed toward male applicants.
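A minimal, synthetic sketch helps show the mechanism (the data, weights, and feature names below are invented for illustration and bear no relation to Amazon’s actual system): when the historical hiring decisions used as training labels favored men, a model can learn to penalize a resume keyword that serves as a proxy for sex, even though sex itself is never an input.

    # Hypothetical illustration: a screen trained on biased historical decisions
    # learns to penalize a proxy feature. Synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    skill = rng.normal(0, 1, n)        # genuinely job-related signal
    is_woman = rng.random(n) < 0.5     # never shown to the model
    womens_keyword = (is_woman & (rng.random(n) < 0.6)).astype(float)  # e.g. "women's chess club"

    # Biased historical labels: past decisions favored men at equal skill.
    logit = 1.5 * skill - 1.2 * is_woman
    hired = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Train the screen on observable resume features only (sex is excluded).
    X = np.column_stack([skill, womens_keyword])
    model = LogisticRegression().fit(X, hired)

    print("weight on skill score:  %+.2f" % model.coef_[0, 0])
    print("weight on keyword flag: %+.2f" % model.coef_[0, 1])
    # The keyword weight comes out negative: the model has absorbed the
    # historical bias through a proxy for sex.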

Given the novel validation challenges presented by algorithmic hiring, we need to consider how the uniform guidelines should be applied and how antidiscrimination law could be adapted.

Participants highlighted the validation challenges posed by hiring models that continue to train—and therefore change—after deployment. For such models, it may be necessary to build safeguards to ensure that employment outcomes are stable long enough to be validated and compared across subgroups of candidates to assess adverse impact.
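One concrete benchmark the Uniform Guidelines already provide is the “four-fifths rule”: a selection rate for any group that falls below 80 percent of the rate for the highest-selected group is generally treated as evidence of adverse impact. A minimal sketch of that comparison for a deployed screen (the applicant counts below are hypothetical):

    # Four-fifths (80 percent) rule check; the counts below are hypothetical.
    rates = {
        "group_a": 120 / 400,  # selected / applicants = 0.30
        "group_b": 45 / 250,   # selected / applicants = 0.18
    }

    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths"
        print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")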

As a complementary step, clear frameworks for data governance would provide mechanisms to assess algorithms and ensure the quality and integrity of relevant data and design processes. We need robust mechanisms to ensure responsibility and accountability for hiring algorithms and the employment outcomes to which they contribute.

3. Civil rights principles should be built into the product development process.

AI alone won’t produce fairer employment outcomes; affirmatively advancing equity requires intentional design throughout the product development cycle. Articulating a set of civil rights principles to guide the development of hiring algorithms could be an important first step.

Industry stakeholders and civil society groups have identified ethical principles for the use of AI generally; the challenge is to distill and translate them to provide useful guidance in the specific context of hiring.

To create a future with greater equity, AI-driven hiring screens must be designed to treat all people fairly, with a commitment to meaningful transparency and accountability. This requires rigorous validation of the screens and adequate documentation of design decisions to ensure that AI-powered models perform as intended and rest on a sufficient business justification.

What’s next?

Harnessing the potential of AI to advance opportunity will require us to bring data and evidence to bear on these questions. Urban is committed to this work and is exploring opportunities to build on our discussion:

  • developing a framework for meaningful transparency and accountability in the use of AI-powered hiring models
  • fostering a community of learning to vet and share strategies for detecting and mitigating algorithmic bias in the employment context
  • identifying opportunities to use AI-based tools at various stages of the hiring process to create a more diverse and qualified applicant pool, in part by better identifying talent from underrepresented communities
  • adapting existing legal frameworks for the use of hiring algorithms and articulating a set of principles to guide developers and users specifically in the employment context, drawing on the ethical principles that various entities have drafted for the use of AI more generally
  • exploring policy solutions at the federal, state, and local levels to incentivize stakeholders in the employment space to be cognizant of replicating bias from past hiring decisions and to use hiring algorithms in equitable ways

 

Photo: Don Baer, Darren Walker, Shamina Singh, Vivienne Ming, and Jacob Hsu speak on the “Progress or Peril? Technology’s Power to Advance Opportunity and Equity” panel at the Urban Institute’s Next50 Changemaker Forum on May 15, 2019. (Zach Gibson for the Urban Institute)