1st Edition

Standards for the Control of Algorithmic Bias: The Canadian Administrative Context

By Natalie Heisler and Maura R. Grossman. Copyright 2024.

    Governments around the world use machine learning in automated decision-making systems for a broad range of functions. However, algorithmic bias in machine learning can result in automated decisions that produce disparate impact and may compromise Charter guarantees of substantive equality. This book seeks to answer the question: what standards should be applied to machine learning to mitigate disparate impact in government use of automated decision-making?

    The regulatory landscape for automated decision-making, in Canada and around the world, is far from settled. Legislative and policy models are emerging, and the role of standards in supporting regulatory objectives is evolving. While acknowledging the contributions of leading standards development organizations, the authors argue that the rationale for standards must come from the law, and that implementing such standards would help reduce future complaints from, and proactively protect the human rights of, those subject to automated decision-making. The book presents a proposed standards framework for automated decision-making and provides recommendations for its implementation in the context of the Government of Canada's Directive on Automated Decision-Making.

    As such, this book can assist public agencies around the world in developing and deploying automated decision-making systems equitably, and will also be of interest to businesses that use automated decision-making processes.

    Acknowledgements

    List of Tables

    List of Abbreviations

    Chapter One: Introduction

    1.1 Regulation of Artificial Intelligence: The European Context

    1.2 Regulation of Artificial Intelligence: The Canadian Administrative Context

    1.3 Equality Rights: Disparate Impact in ADM

    1.3.1 Case Study: Disparate Impact in the COMPAS ADM

    1.4 Situating Disparate Impact in the Charter

    1.5 The Role of Standards in Protecting Human Rights

    1.5.1 Narrowing the Scope of Administrative Law

    1.5.2 Soft Law and Its Status in Judicial Review

    1.6 Methodology

    Chapter Two: Administrative Law and Standards for the Control of Algorithmic Bias

    2.1 Foundational Principles: Transparency, Deference and Proportionality

    2.1.1 Transparency

    2.1.2 Deference

    2.1.3 Proportionality

    2.2 Reasonableness Review

    2.2.1 Illustrative Scenario

    2.3 Standards to Mitigate the Creation of Biased Predictions

    2.3.1 Construct Validity

    2.3.2 Representativeness of Input Data

    2.3.3 Knowledge Limits

    2.3.4 Measurement Validity in Model Inputs

    2.3.5 Measurement Validity in Output Variables

    2.3.6 Accuracy of Input Data

    2.4 Standards for the Evaluation of Predictions

    2.4.1 Accuracy of Predictions and Inferences: Uncertainty

    2.4.2 Individual Fairness

    2.5 Chapter Summary: Proposed Standards for the Control of Algorithmic Bias

    Chapter Three: Substantive Equality and Standards for the Measurement of Disparity

    3.1 The Measure of Disparity in the Prima Facie Test of Discrimination

    3.2 Legislative and Policy Approaches to the Measurement of Disparity

    3.3 The Supreme Court of Canada on Measures of Disparity in Fraser

    3.4 Disaggregated Data

    3.5 Chapter Summary: Standards for the Measurement of Disparity

    Chapter Four: Implementation Recommendations

    4.1 Overview of the Standards Framework

    4.2 Implementing the Standards Framework

    Chapter Five: Conclusions and Further Research

    References

    Biography

    Natalie Heisler has advised public- and private-sector organizations around the world on the strategy and deployment of data, analytics, and artificial intelligence for more than twenty years. She brings a unique, multidisciplinary perspective to her work, spanning social, regulatory, policy, and technical dimensions. She holds a BA in Psychology, an MSc in Mathematics, and an MA in Political Science, and lives in Toronto, Canada.

    Maura R. Grossman, JD, PhD, is a research professor in the David R. Cheriton School of Computer Science at the University of Waterloo and an affiliate faculty member at the Vector Institute for Artificial Intelligence, both in Ontario, Canada. She is also principal at Maura Grossman Law, in Buffalo, New York, USA. Professor Grossman's multidisciplinary work lies at the intersection of law, health, technology, ethics, and policy.