A hiring engine that downgrades women. A court tool that flags Black defendants as “high risk” at nearly twice the rate of white defendants, even when they don’t reoffend.

A Council of Europe report by legal scholar Frederik Zuiderveen Borgesius argues that while AI promises efficiency and growth, it often mirrors—and magnifies—human bias.

How bias sneaks into “smart” systems

AI is mostly machine learning: systems that make data‑driven predictions. That sounds neutral. It isn’t. The report maps six points where discrimination seeps in—think of them as leaks in the pipeline:

  • What to optimize: Choosing the “target” matters. If a company decides a “good” employee is someone who is “rarely late,” applicants with longer commutes (often poorer, often from minority neighborhoods) get penalized, regardless of performance.

  • Labels that learn old bias: Training on past decisions bakes in past discrimination. A UK medical school’s 1980s admissions tool reproduced earlier human bias against women and immigrants. As one analysis put it, the program “was not introducing new bias but merely reflecting that already in the system.”

  • Skewed samples: Over‑policing certain neighborhoods creates records that over‑represent those areas. Feed those records into predictive policing and they send even more officers there—a feedback loop.

  • Feature choices: Selecting “elite university” as a hiring signal might screen out qualified candidates from under‑represented groups.

  • Proxies and “redundant encodings”: Even if race isn’t in the data, other variables stand in for it, such as postal code, social graph, or shopping patterns. Remove one proxy and others still encode the same trait (see the sketch after this list).

  • Intentional discrimination: Proxies can also be used on purpose to hide biased aims, like excluding pregnant applicants by inferring pregnancy from shopping habits.
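
A minimal sketch of the “redundant encodings” point, using made-up synthetic data (the group names, districts, and commute figures are illustrative assumptions, not from the report): even after the obvious proxy is dropped, a second feature that tracks it still recovers the protected attribute well above chance.

```python
# Illustrative only: synthetic data showing "redundant encodings".
# Even with the obvious proxy (postal district) removed, commute distance
# still predicts the protected group far better than chance.
import random
from collections import Counter, defaultdict

random.seed(0)

def make_person():
    group = random.choice(["A", "B"])
    # Residential segregation: group B is concentrated in outer districts.
    district = random.randrange(0, 5) if group == "A" else random.randrange(4, 10)
    # Commute distance tracks the district, so it encodes the same information.
    commute_km = district * 3 + random.gauss(0, 2)
    return group, district, commute_km

people = [make_person() for _ in range(10_000)]
train, test = people[:8_000], people[8_000:]

def majority_rule(pairs):
    """Learn "predict the most common group for each feature value"."""
    buckets = defaultdict(Counter)
    for feature, group in pairs:
        buckets[feature][group] += 1
    return {f: c.most_common(1)[0][0] for f, c in buckets.items()}

def accuracy(rule, pairs, default="A"):
    return sum(rule.get(f, default) == g for f, g in pairs) / len(pairs)

# 1) Predict the protected group from the obvious proxy: postal district.
rule_district = majority_rule([(d, g) for g, d, _ in train])
print("district -> group:", accuracy(rule_district, [(d, g) for g, d, _ in test]))

# 2) Drop the district entirely; bucket commute distance instead.
rule_commute = majority_rule([(round(c / 5), g) for g, _, c in train])
print("commute  -> group:", accuracy(rule_commute, [(round(c / 5), g) for g, _, c in test]))
# Both accuracies land far above the 50% chance level: removing one proxy
# leaves others that encode the same trait.
```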

Humans then add “automation bias” on top: the tendency to rubber‑stamp computer output because it looks objective.

Where the harms show up

  • Criminal justice: The COMPAS risk score used in parts of the US was “correct” about recidivism roughly 61% of the time. But Black defendants who did not reoffend were almost twice as likely as white defendants to be labelled higher risk. Debates over fairness metrics, such as equal error rates versus “calibration” (a score meaning the same thing across groups), show that not all fairness goals can be satisfied at once.

  • Hiring and admissions: Amazon reportedly scrapped a résumé screener after it learned to down‑rank women based on historical hiring patterns. An earlier UK medical school tool had already shown how easily models absorb human prejudice.

  • Ads and pricing: Women were shown fewer high‑salary job ads in one study of Google’s ad ecosystem, though the study could not tell whether advertisers, the platform, or user behaviour was responsible. A tutoring company’s online prices were more often higher in areas with more Asian residents. No one may have set out to discriminate, but the effect was real.

  • Vision and language: Commercial gender classification tools misclassified darker‑skinned women up to 34.7% of the time (for lighter‑skinned men the rate was 0.8%). Translation tools defaulted to “he” for engineers and “she” for nurses when translating from gender‑neutral languages, amplifying stereotypes.

Each case illustrates the report’s central warning: many small, opaque decisions can add up to unequal access, higher costs, and fewer opportunities for certain groups.

What the law can (and can’t) do

Two legal toolkits matter most.

  • Non‑discrimination law bans both direct discrimination and “indirect discrimination,” where a neutral rule disproportionately harms a protected group. It focuses on effects, not intent. That helps with many AI harms—but proving a disproportionate effect is hard when systems are black boxes, and many unfair segmentations (say, by device type or “willingness to pay”) fall outside protected categories.

  • Data protection law (notably the EU’s GDPR and Council of Europe’s Convention 108) sets baseline duties: fairness, transparency, data minimization, accuracy, security, and accountability. It requires “data protection impact assessments” (DPIAs) for high‑risk systems and, in principle, restricts “solely automated” decisions that have legal or similarly significant effects. Where such automation is allowed, people must get human review and “meaningful information about the logic involved.”

But gaps remain. Models and abstract profiles can sit outside data‑protection scope until applied to a person. Many bias‑testing methods need sensitive data (like race) that organizations are often barred from processing. And when base rates differ across groups, fairness criteria can mathematically conflict—law doesn’t yet tell us which to prefer.
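
To see why those criteria can collide, here is a small worked example with assumed numbers (they are not the real COMPAS figures). It uses the algebraic identity, due to Chouldechova, that ties a group’s false‑positive rate to its base rate, the positive predictive value (a calibration‑style criterion), and the false‑negative rate.

```python
# Chouldechova's identity:  FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
# where p is the group's base rate of reoffending.

def false_positive_rate(p: float, ppv: float, fnr: float) -> float:
    """False-positive rate implied by base rate p, PPV and FNR."""
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

# Hold the score to the SAME standard in both groups: identical PPV and FNR.
ppv, fnr = 0.6, 0.35

for group, base_rate in [("group 1", 0.50), ("group 2", 0.30)]:
    print(group, "FPR =", round(false_positive_rate(base_rate, ppv, fnr), 3))

# group 1 FPR = 0.433, group 2 FPR = 0.186: the higher-base-rate group gets
# more than double the false-positive rate. When base rates differ, equal
# PPV and equal FNR force unequal FPR; no threshold satisfies all three.
```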

What to do now

The report’s advice is pragmatic.

  • Build mixed teams and train them. Managers, lawyers, and engineers need a shared playbook on bias, proxies, and feedback loops.

  • Assess and monitor risk. Run DPIAs. Document choices about targets, labels, and features. Test for disparities over time; models drift as the world changes (see the monitoring sketch after this list).

  • Avoid obvious proxies. One hiring firm refused to use “distance to work” because it tracks race too closely.

  • Open the blinds. Especially in the public sector, design for auditability. Publish impact assessments where possible. Add sunset clauses so systems must prove their worth or be retired.

  • Strengthen oversight. Equality Bodies and human‑rights monitors should bring in technical expertise, get involved early in public procurement, coordinate with Data Protection Authorities, and resist “ethics‑washing” that swaps press releases for enforceable safeguards.
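
One way to make “test for disparities over time” concrete, sketched below under assumed data: log each decision together with the applicant’s group (a step that, given the sensitive‑data tension noted earlier, may itself need a legal basis), then compare per‑group selection rates on every batch and alert when one group falls well behind. The 0.8 threshold is the US “four‑fifths” heuristic, used here purely as an illustrative alarm level, not something the report prescribes.

```python
# Minimal disparity monitor over assumed decision logs (illustrative only).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected: bool) pairs."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picked[group] += selected
    return {g: picked[g] / totals[g] for g in totals}

def disparity_alerts(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}

# Hypothetical monthly batch of hiring-screen outcomes.
batch = ([("group_a", True)] * 48 + [("group_a", False)] * 52
         + [("group_b", True)] * 30 + [("group_b", False)] * 70)

print(selection_rates(batch))   # {'group_a': 0.48, 'group_b': 0.3}
print(disparity_alerts(batch))  # {'group_b': 0.625} -> investigate; rerun as models drift
```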

Regulate by context, not buzzword

Blanket “AI regulation” won’t work. Credit scoring, hiring, insurance, and policing pose different risks and values. Sector‑specific rules—combined with better enforcement of existing non‑discrimination and data‑protection law—offer a clearer path. The aim is simple to state and hard to do: keep the benefits of data‑driven systems while refusing systems that entrench unfairness.

The closing note echoes the opening tension: algorithms don’t spring from nowhere; people design, choose, and deploy them. With careful choices now, society can enjoy the gains of AI without making inequality the default setting.