New Canadian AI Rules Were Just Announced in Montreal: Here Are 5 Key Takeaways

Artificial intelligence (AI) in Canada is evolving, and the government has taken action to mitigate its potential harms through a new code of conduct announced in Montreal on September 22.

Important and timely, this collective effort will shape the responsible and ethical use of AI in the country, a technology that could become dangerous without mitigating measures.


Here, we break down the five crucial aspects of these newly announced rules that are set to impact how AI is developed, implemented, and monitored in Canada over the next decade.

Voluntary Nature

The code of conduct is voluntary for companies using AI, meaning it is not legally binding. Instead, it encourages organizations to follow specific guidelines designed to steer them towards ethical conduct.

The absence of legal enforcement gives companies flexibility in how they apply the code, which many in the business world will see as a positive. However, it leaves the government open to criticism should some entities take advantage and use the technology for unethical purposes.

The hope, though, is that a culture of responsible AI usage will emerge, with companies keen to demonstrate transparency and ethical responsibility.

Bias Testing

Bias testing isn’t a new concept and exists across various industries. Any new online casino in Canada, for example, will have been tested for fairness by independent agencies. In advertising, businesses may use bias testing to ensure they don’t perpetuate stereotypes that harm minority groups.

In AI, however, it takes on even greater importance. Unbiased AI is crucial to prevent the spread of harmful stereotypes and their negative impact on minority groups. Bias testing comes into play here by examining AI algorithms to make sure they’re free of inherent bias. Should it find any, it flags the issue to an operator who can rectify the flaw.
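To make the idea concrete, here is a minimal sketch of one common automated bias check, a demographic parity gap between groups. The metric, the tolerance threshold, and the sample data are illustrative assumptions for this article, not requirements drawn from the Canadian code of conduct.

```python
# Minimal sketch of one common bias check: the demographic parity gap.
# The threshold and sample data below are illustrative assumptions,
# not requirements taken from the Canadian code of conduct.

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rates between groups.

    predictions: 0/1 model outputs; groups: one group label per prediction.
    """
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap = demographic_parity_gap(preds, groups)
    if gap > 0.2:  # arbitrary tolerance, for illustration only
        print(f"Potential bias flagged for review: parity gap = {gap:.2f}")
    else:
        print(f"Parity gap within tolerance: {gap:.2f}")
```

A check like this would run before deployment; a large gap doesn’t prove the model is unfair, but it flags it for a human operator to investigate.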

The new Canadian code of conduct includes bias testing within its framework as it aims to promote fairness and equality in AI applications, something that will be applauded by most onlookers.

Immediate Effectiveness

The code becomes effective immediately upon its unveiling, putting pressure on organizations to take action right away.

Various governments around the world have been criticized for their slow response to AI, most notably in the US, so the Canadian government is keen to show that it is serious about swift AI reform.

Whether companies will be able to update their AI approach that quickly is a different matter and may lead to some friction.

Risk Mitigation

Anyone who signs up to the code must commit to taking steps to mitigate safety risks, which means ensuring that AI systems are developed and operated with safety in mind.

This aims to nip potential safety issues in the bud by targeting the developers responsible for creating new AI features. Without this mechanism, the likelihood of bad actors using the technology for malicious purposes would be much higher.

Incident Reporting and Action

When an emergency occurs, the availability of emergency services is crucial.

The same applies to an AI crisis. Incident reporting is an integral component of the code, and it will be supported by the obligatory installation of AI monitoring mechanisms. Signatories must report any incident immediately and take steps to rectify the situation themselves before outside help arrives.
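The code does not prescribe any particular software interface, so the following is purely a hypothetical sketch of what an internal incident-reporting hook could look like; the function names, severity levels, and log format are all invented for illustration.

```python
# Hypothetical sketch of an internal AI incident-reporting hook.
# The code of conduct prescribes no specific API; every name, field,
# and severity level here is invented for illustration.

import datetime
import json

INCIDENT_LOG = "incidents.jsonl"  # assumed local audit trail

def report_incident(system: str, description: str, severity: str) -> dict:
    """Record an AI incident immediately and return the log entry."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "description": description,
        "severity": severity,
        "status": "open",  # stays open until the operator rectifies it
    }
    with open(INCIDENT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    # Example: a monitoring check detects a harmful output and files a report.
    incident = report_incident(
        system="chat-assistant-v2",
        description="Safety filter bypassed; harmful content generated",
        severity="high",
    )
    print("Incident logged at", incident["timestamp"])
```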

Public trust in AI relies heavily on organizations being able to troubleshoot problems caused by the technology, and the code aims to give large firms the tools to do this.

A Step Forward

The new code of conduct is bound to attract criticism from AI skeptics who argue it doesn’t do enough to mitigate the risks of this powerful new technology, but many will view it as a major step forward for AI governance.

Not only has the Canadian government issued the new code swiftly, but it has also ensured that organizations have the tools to counter and report safety threats caused by the technology. In fact, it appears to be one of the strongest regulatory moves that any nation has taken to date.

We shouldn’t be surprised if we see other large nations take similar action sooner rather than later.