Federal use of A.I. in visa applications could breach human rights, report says

Impacts of automated decision-making involving immigration applications and how errors and assumptions could lead to “life-and-death ramifications”

OTTAWA — A new report is warning about the federal government’s interest in using artificial intelligence to screen and process immigrant files, saying it could lead to discrimination as well as privacy and human rights breaches.

The research, conducted by the University of Toronto’s Citizen Lab, outlines the impacts of automated decision-making involving immigration applications and how errors and assumptions within the technology could lead to “life-and-death ramifications” for immigrants and refugees.

The authors of the report issue a list of seven recommendations calling for greater transparency, public reporting and oversight of the government’s use of artificial intelligence and predictive analytics to automate certain activities involving immigrant and visitor applications.

“We know that the government is experimenting with the use of these technologies … but it’s clear that without appropriate safeguards and oversight mechanisms, using A.I. in immigration and refugee determinations is very risky because the impact on people’s lives are quite real,” said Petra Molnar, one of the authors of the report.

“A.I. is not neutral. It’s kind of like a recipe and if your recipe is biased, the decision that the algorithm will make is also biased and difficult to challenge.”

Earlier this year, federal officials launched two pilot projects to have an A.I. system sort through temporary resident visa applications from China and India. Mathieu Genest, a spokesman for Immigration Minister Ahmed Hussen, says the analytics program helps officers triage online visa applications to “process routine cases more efficiently.”

He says the technology is being used exclusively as a “sorting mechanism” to help immigration officers deal with an ever-growing number of visitor visas from these countries by quickly identifying standard applications and flagging more complex files for review.

Immigration officers always make final decisions about whether to deny a visa, Genest says.

But this isn’t the only dive into artificial intelligence being spearheaded by the Immigration Department.

In April, the department started gauging interest from the private sector in developing other pilot projects involving A.I., or “machine learning,” for certain areas of immigration law, including in humanitarian and compassionate applications, as well as pre-removal risk assessments.

These two refugee streams of Canada’s immigration system are often used as a last resort by vulnerable people fleeing violence and war to remain in Canada, the Citizen Lab report notes.

“Because immigration law is discretionary, this group is really the last group that should be subject to technological experiments without oversight,” Molnar says.

She notes that A.I. has a “problematic track record” when it comes to gender and race, specifically in predictive policing that has seen certain groups over-policed.

“What we are worried about is these types of biases are going to be imported into this high risk laboratory of immigration decision-making.”

The government says officials are only interested in developing or acquiring a tool to help Immigration and Justice Department officials manage litigation and develop legal advice in immigration law.

“The intent is to support decision makers in their work and not replace them,” Genest said.

“We are monitoring and assessing the results and success of these pilots before we launch or consider expanding it to other countries and lines of business.”

In April, Treasury Board released a white paper on “responsible artificial intelligence in the government of Canada,” and is currently consulting with stakeholders to develop a draft directive on the use of automated decision-making technologies within government.

Molnar says she hopes officials will consider the Citizen Lab’s research and recommendations, including their call for an independent, arms-length oversight body to monitor and review the use of A.I. decision-making systems.

“We are beyond the conversation whether or not A.I. is being used. The question is, if A.I. is here to stay we want to make sure it is done right.”

The Canadian Press
