Prosecuting hate in Canada: Why, How, and When

Section 2(b) of the Canadian Charter of Rights and Freedoms guarantees everyone the fundamental freedom of expression – but as one young man in Markham, Ontario learned this week, the Charter also permits reasonable limits on that freedom.

Eighteen-year-old Tristan Stronach, a Grade 12 student, was charged under section 372(2) of the Criminal Code – making indecent communications – after his instructor was forced to end an online lesson when Stronach allegedly made racist remarks about the Black community. The nature of the alleged comments has not been described in detail, which has led some to ask: why isn’t he being charged with a hate crime?

The answer is: because there is no specific “hate crime” offence in the Criminal Code.

Section 372(2) of the Criminal Code reads as follows:

Indecent communications

(2) Everyone commits an offence who, with intent to alarm or annoy a person, makes an indecent communication to that person or to any other person by a means of telecommunication.

“But what about hate speech?”

Section 319(1) of the Criminal Code reads as follows:

Public incitement of hatred

319 (1) Everyone who, by communicating statements in any public place, incites hatred against any identifiable group where such incitement is likely to lead to a breach of the peace is guilty of:

(a) an indictable offence and is liable to imprisonment for a term not exceeding two years; or

(b) an offence punishable on summary conviction.

Wilful promotion of hatred

(2) Everyone who, by communicating statements, other than in private conversation, wilfully promotes hatred against any identifiable group is guilty of

(a) an indictable offence and is liable to imprisonment for a term not exceeding two years; or

(b) an offence punishable on summary conviction.

While it is clear that the allegations relate to racist comments directed at a single identifiable group – the Black community – charges under this section were likely not approved because the evidence could not support a conviction. The comments were not made in a “public” place, and while they were made in the virtual presence of a group of individuals, they arguably did not promote hatred – i.e., they were not made in a way likely to lead others to follow suit and create a breach of the peace.

Notwithstanding the above, if the accused is convicted of making indecent communications, the sentencing court will consider the degree to which bias, prejudice, or hate played a role in the offence. Under section 718.2(a)(i) of the Criminal Code, these are aggravating factors that can result in a harsher sentence, and they can be considered in sentencing for virtually any offence – assault, theft, murder, and so on.

As Canadians, we are very fortunate to live in a country that allows us to speak, move, and exist freely – but cases like this are a reminder that those freedoms are not absolute, and that they must be balanced against the equality and dignity of others.

Predictive Policing: Brave New World

In one of our previous posts, we discussed biometric technology and the role it plays in Canadian law enforcement. It is, however, only one of the “predictive” tools utilized by the police in relation to criminal investigations.


A new report by the Citizen Lab at the University of Toronto goes into alarming detail about the growth of algorithmic policing methods and how this technology compromises the privacy rights of Canadian citizens. The report is incredibly thorough, delving into how this controversial technique offends various sections of the Canadian Charter of Rights and Freedoms. First, though, it is important that our readers understand what algorithmic policing is.

The success of any algorithmic system depends on its ability to gather, store, and analyze data – and law enforcement’s systems are no different. A “location-focused” algorithmic approach seeks to determine (predict) which areas are more likely to see criminal activity. The system analyzes historical police data to identify geographical locations where crimes are, in theory, more likely to be committed.

If this sounds familiar, you’ve likely heard of, or accessed, the Vancouver Police Department’s GeoDash crime map – an online tool where you can navigate a map of the City of Vancouver by crime occurrence. You can choose from a variety of offences on the dropdown list, including homicide, break and enter, mischief, theft, and “offences against a person”, which likely captures crimes such as sexual assault, assault causing bodily harm, and uttering threats. By looking at this map, you get an idea of which neighborhoods in Vancouver are most vulnerable to crime – except that the underlying system is more sophisticated than that, and goes far beyond simply dropping a pin on the map.

The public can see where a crime took place, but not who is alleged to have committed it. The offender’s personal information is logged, in as much detail as possible, and becomes part of a larger system dedicated to predictive surveillance – i.e., it builds a profile of which individuals are more likely to commit a particular crime. That profile can be used to identify people who are “more likely to be involved in potential criminal activity, or to assess an identified person for their purported risk of engaging in criminal activity in the future”.
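To make the location-focused approach concrete, here is a deliberately simplified sketch, in Python, of how such a system might rank map grid cells by historical incident counts. Everything in it – the incident data, the grid size, and the scoring – is a hypothetical assumption used for illustration; the actual systems described in the Citizen Lab report are proprietary and far more complex.

# Illustrative sketch only: ranks map grid cells by historical incident counts.
# The incident data, grid resolution, and scoring below are hypothetical
# assumptions, not a description of any real police system.
from collections import Counter

# Hypothetical historical incidents: (latitude, longitude, offence type).
incidents = [
    (49.2820, -123.1171, "theft"),
    (49.2827, -123.1207, "mischief"),
    (49.2611, -123.1139, "break and enter"),
    (49.2821, -123.1170, "theft"),
]

CELL = 0.005  # grid cell size in degrees (roughly a few hundred metres)

def cell_of(lat, lon):
    """Snap a coordinate to the centre of its grid cell."""
    return (round(lat / CELL) * CELL, round(lon / CELL) * CELL)

# Count past incidents per cell and rank cells by volume.
counts = Counter(cell_of(lat, lon) for lat, lon, _offence in incidents)
for (lat, lon), n in counts.most_common(3):
    print(f"cell centred near ({lat:.3f}, {lon:.3f}): {n} historical incidents")

# Even this toy version shows the core criticism: the "prediction" simply
# mirrors where police recorded incidents in the past, so any bias in
# historical enforcement is reproduced in the output.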

While this is certainly concerning, there is another issue: we have very little insight into the extent to which this technology is being used. We know that the methods by which police gather information have historically discriminated against minority groups and those living in marginalized communities, which means the VPD’s use of algorithmic investigative techniques almost certainly relies on data obtained through biased methods. We also know that Black and Indigenous individuals are disproportionately represented in the correctional system – which strongly suggests they are disproportionately represented in the data that feeds these algorithms.

Although not everyone agrees that systemic racism exists within the VPD, the calls to address, unravel, and mitigate the harm to marginalized groups continue to grow louder. The prospect that information collected through potentially biased practices will not only remain on record, but will also be used to guide future investigations, is an indicator that Canadian law enforcement’s road to redemption will likely be a bumpy one.

Biometrics Hazard: How Facial Recognition Technology is used by Canadian Law Enforcement

“Not only is this a concern with the possibility of misidentifying someone and leading to wrongful convictions, it can also be very damaging to our society by being abused by law enforcement for things like constant surveillance of the public”

– Nicole Martin, Forbes contributor

Star Trek. Back to the Future. District 9. I, Robot. These are only a few examples of films that have used biometrics – most visibly in the form of Facial Recognition – as a theme for entertainment. All are works of fiction, and while you may think of biometrics as a tool reserved for secretive government agencies like the FBI and CIA, that is no longer the case. Advancements in biometric technology have been seized upon by various law enforcement and government agencies across Canada – raising serious concerns among privacy and civil liberty advocates and, of course, criminal defence counsel.

The Calgary Police Service began using Facial Recognition technology in 2014. The system it uses, known as NeoFace Reveal, works by analyzing an uploaded image and translating it into a mathematical representation of the face, often called a template or faceprint. That template is then logged in a database and compared against templates generated from other uploaded images.
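As a rough illustration of how template matching works in general – not a description of NeoFace Reveal’s proprietary method – the sketch below reduces each image to a numeric vector (a “template”) and compares vectors by cosine similarity. The embed() function, the database entries, and the match threshold are all invented placeholders.

# Illustrative sketch of biometric template matching only. The embed()
# function is a stand-in for a proprietary face-embedding model, and the
# database entries and threshold are invented for this example.
import numpy as np

def embed(image_pixels: np.ndarray) -> np.ndarray:
    """Placeholder: a real system would run a trained face-recognition model."""
    vec = image_pixels.astype(float).ravel()[:128]
    return vec / (np.linalg.norm(vec) + 1e-9)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Hypothetical database of templates from previously uploaded images.
database = {
    "uploaded_image_0412": embed(np.random.rand(64, 64)),
    "uploaded_image_0977": embed(np.random.rand(64, 64)),
}

MATCH_THRESHOLD = 0.9  # assumed cut-off; real systems tune this carefully

probe = embed(np.random.rand(64, 64))  # template for the newly uploaded image
best_name, best_score = max(
    ((name, cosine_similarity(probe, template)) for name, template in database.items()),
    key=lambda pair: pair[1],
)
if best_score >= MATCH_THRESHOLD:
    print(f"potential match: {best_name} (score {best_score:.2f})")
else:
    print(f"no match above threshold (best candidate {best_name}, score {best_score:.2f})")

Even in this toy form, the output is only a candidate match with a confidence score – a human still has to confirm it, which is exactly where the misidentification concerns quoted above come in.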

The Toronto Police Service hopped on board too, reporting that it had uploaded 800,000 images into its Repository for Integrated Criminalistic Imaging, or RICI, by 2018. Its use of biometrics began with a trial in 2014, and in 2018 the Service purchased a system at a cost of about $450,000. Between March and December of 2018, the Toronto Police Service ran 1,516 searches, with about 910 of those searches (roughly 60%) returning a potential match. Of those potential matches, approximately 728 people were identified (about 80%). No statistics were provided in relation to ethnicity, age, or gender; however, research has raised concerns about the disproportionate effects of biometrics on people of color.

Police in Manitoba do not currently use biometric technology as an investigative tool, although the idea was floated in 2019 after a report was commissioned concerning growing crime rates in Winnipeg’s downtown core. The provincial government in Manitoba went so far as to suggest that this technology could be used to identify violent behavior – which sounds a lot like active surveillance, widely regarded as an unethical use of biometrics and one of the most profound concerns surrounding this technology. And while it may only be a matter of time until police in Manitoba adopt this technology, many retailers in the province are already using it.

At home here in British Columbia, the Vancouver Police Department denies using Facial Recognition technology as a mechanism to investigate crime – in fact, back in 2011, it turned down ICBC’s offer to use its software to help identify suspects involved in the Vancouver Stanley Cup riot. The office of the BC Privacy Commissioner confirmed that any use of ICBC’s facial recognition data by the VPD would amount to a breach of privacy for ICBC’s customers. The office of the Privacy Commissioner of Canada has been keeping track of biometrics since at least 2013, yet there is still little regulation of their use in the public and private sectors.

The same cannot be said for the RCMP in British Columbia, which, as recently as two weeks ago, refused to confirm or deny using biometrics as an investigative tool. Questions have been raised as to whether the RCMP is a client of Clearview AI, a facial recognition startup founded by Hoan Ton-That. Clearview’s work has not gone unnoticed – Facebook and Twitter have issued cease and desist letters, making it very clear that they do not support Clearview’s objectives. Google has issued a cease and desist letter as well, although its own position on biometrics is fuzzy, especially since it is trying to make advancements in this area too. So far, though, they have come under fire for their tactics and the results that have been generated.

The Canadian Government’s position on the use of biometrics is set out on its website. When you submit your biometric information at Service Canada (for example), it isn’t actually stored there; rather, it is sent to the Canadian Immigration Biometric Identification System, where it remains for a period of 10 years. Further, your biometric information will be shared with the United States, Australia, New Zealand, and the United Kingdom. And yes – you can refuse to provide this information – but doing so will likely put a kink in your travel plans.


One important point to consider about all of these agencies and their use of biometric technology is that the tool was never intended for active surveillance, or as a method of intervening in crimes in real time. Whether the offence is a violent assault, sexual assault, theft under or over $5,000, murder, or kidnapping, biometrics is an “after the fact” investigative mechanism. If it were used ethically, within parameters that preserve the privacy of all citizens 100% of the time, perhaps there would be no need for alarm – but that is incredibly unlikely. As more agencies begin to use this technology, the lack of regulatory oversight is bound to create an enormous invasion of your privacy – and you may never know about it.

Bill C-75: The bad, the worse, and the ugly

On March 29, 2018, Bill C-75 had its first reading in the House of Commons, and upon publication, was quick to receive scrutiny from lawyers across the country.

The Bill seeks to amend provisions of several key pieces of legislation, including the Criminal Code and the Youth Criminal Justice Act. In doing so, however, it would make many of the rights currently afforded to an Accused a thing of the past.

The first major concern that stands out is the proposal to abolish the use of peremptory challenges in the jury selection process. When jurors are being selected, an Accused person and their Defence counsel are afforded a set number of these challenges (4, 12, or 20, depending on the offence), permitting them to reject a prospective juror without explanation. Crown Counsel receives the same number of challenges. The purpose of peremptory challenges is to provide balance in the adversarial trial process – however, the motivation behind their use differs depending on who you ask. The Bill doesn’t elaborate on how jury selection will be managed without peremptory challenges.

Equally alarming is the proposal to eliminate Preliminary Hearings for offences that don’t carry a maximum term of life imprisonment upon conviction. It is also being suggested that Justices be given the power to limit the issues examined and the witnesses called during a Prelim. The Preliminary Hearing’s purpose is to determine whether the Crown has enough evidence to commit an Accused person to stand Trial, and it is a valuable tool for the Defence in any given case (even where the offence doesn’t carry a potential life sentence). It isn’t beneficial only to the Accused, however. The evidence heard at a Preliminary Hearing is transcribed and can be relied upon by the parties at Trial. The issues explored at the Prelim can also help narrow what will be raised at Trial, which in turn reduces the likelihood of court time being wasted on irrelevant issues (especially important in light of the impact of delay!). Given that the Crown can already seek a Direct Indictment from the Attorney General, the proposal to limit Prelims is wholly unnecessary.

Next up, and not surprisingly, the Bill seeks to increase punitive measures for Accused persons facing allegations of abuse against an intimate partner. These consequences begin before any finding of guilt – in fact, they begin at the onset of proceedings, when an Accused person seeks release on bail. Bill C-75 proposes more “onerous interim release requirements” for individuals facing allegations of violence against an intimate partner, which essentially means that the terms of release will be more stringent. On that note, the Bill also proposes to increase the maximum term of imprisonment for repeat intimate partner violence offenders, and to have violence against a partner treated as an aggravating factor at sentencing.

Perhaps most disturbing is the revision relating to police powers and written evidence in the form of an Affidavit. Currently, a police officer is required to attend a trial in person to give oral evidence about their involvement in the case. They are subject to cross-examination on that involvement, at which time they must truthfully answer the questions posed by the Defence. This is a crucial opportunity for the Defence to raise a reasonable doubt – particularly given that police officers often provide the most compelling and credible evidence – and raising that doubt is frequently the very reason a matter goes to trial. Of course, the Defence will still be allowed to apply to cross-examine a police officer on their written evidence – but that application requires additional court time, and one struggles to imagine such an application being denied in any event. This proposed amendment will therefore likely result in additional delay and squandered court time.

Many of these amendments strike at the heart of the adversarial process and an Accused person’s right to make full answer and Defence to the charges against them. Numerous changes are procedural, justified by the assertion that too many cases are being thrown out because of judicial delay. Systemic flaws, a lack of inquiry and input from judicial staff, and a failure to heed the concerns of legal professionals in the private sector are just a few of the factors that have produced the impractical proposals pushed forward in Bill C-75.