Uncertainty breeds many things in people. Lacking clarity, it tends to trigger primitive reactions, instilling in us a sense of confusion, vulnerability and distrust. Unless adequate understanding is gained, uncertainty often gives way to panic and fear, overriding our rational and logical reasoning. In our quest to understand, to cut through vagueness and ambiguity toward a clearer view of an unclear situation, we seek out as much information as possible from as many sources as we can find, sometimes overreaching limits and boundaries. That was certainly the case with Clearview AI's unlawful practices, misconduct recently condemned by the Office of the Privacy Commissioner of Canada (OPCC). Yet as flagrant as the company's misdeeds may have been, they have also sparked the need for greater discussion and understanding of the privacy rights and regulation that govern the proper use of technologies powered by artificial intelligence and machine learning.
Clearview AI Misconduct
On Wednesday, February 3, a joint investigation concluded that New York-based technology company Clearview AI, a developer and provider of facial recognition software, violated Canadian federal and provincial privacy laws by scraping the images of billions of people from across the Internet. The investigation by the OPCC, the Commission d’accès à l’information du Québec, the Office of the Information and Privacy Commissioner for British Columbia and the Office of the Information and Privacy Commissioner of Alberta found that what the technology company did “represented mass surveillance and was a clear violation of the privacy rights of Canadians”.
Serving both law enforcement and commercial organizations, Clearview AI provided its clients with a databank of more than three billion images for the purpose of matching and identifying unknown individuals within their communities and establishments. Images stored in the databank included those of Canadians, and the investigation found that Clearview’s conduct “creates the risk of significant harm to individuals”, that the company had “collected highly sensitive biometric information without the knowledge or consent of individuals”, and that it had used and disclosed the personal information of Canadians for “inappropriate purposes, which cannot be rendered appropriate via consent”.
Opening Up Larger Discussion
Clearview AI disagrees with the investigation’s findings, holding that, “given the significant potential benefit of Clearview’s services to law enforcement and national security and the fact that significant harm is unlikely to occur for individuals, the balancing of privacy rights and Clearview’s business needs favoured the company’s entirely appropriate purposes,” and that, “Clearview cannot be held responsible for offering services to law enforcement or any other entity that subsequently makes an error in its assessment of the person being investigated”. The Commissioners rejected the technology company’s arguments. And rightly so. However, industry expert, loss prevention veteran, and Founder of Bottom Line Matters consultancy, Stephen O’Keefe, says that the case nonetheless opens up a larger discussion and debate concerning Canadian privacy rights and the use of these types of technologies by law enforcement and those operating within the retail industry.
“There are a number of existing technologies, and many more emerging technologies in development, which present massive benefits toward ensuring a safe and secure environment,” he points out. “The existing technologies in a lot of cases could provide retailers with comprehensive solutions to an array of challenges that they face. But the investment in these technologies and their implementation into retail operations is being held up in the legal departments of organizations, or resisted by businesses altogether, as a result of the ambiguity around their appropriate uses with respect to privacy rights. Retailers require the development of a more complete definition on the matter and a continuation of the conversation. The Privacy Office, for its part, is doing its best in efforts to protect the personal information of Canadians. Its intention is not by any means to outlaw these types of technologies or their use by businesses. But rather to arrive at some clarity around its acceptable uses and to develop regulation to provide needed boundaries.”

OPCC Working Toward Sensible and Balanced Regulation
The OPCC recognizes the immense promise that technologies powered by artificial intelligence and machine learning present to industries in potentially helping to address some of today’s most pressing issues. As part of its work to move toward clearer regulation, the OPCC provides businesses with its 10 fair information principles which are meant to guide retailers with respect to the protection of individuals’ personal information. And, in January 2020, it launched a public consultation on its proposals for ensuring the appropriate regulation of AI in the Personal Information Protection and Electronic Documents Act (PIPEDA). The aim of its proposals is to ensure that any necessary legislative changes to PIPEDA are considered in order to help businesses reap the benefits of AI while upholding individuals’ fundamental right to privacy. O’Keefe points out that the OPCC has A Regulatory Framework for AI: Recommendations for PIPEDA Reform on its website as well as other resources that retailers can reference and leverage to evaluate their own use of AI technology.
“There’s a lot of really great information that the OPCC offers online to help organizations govern themselves with respect to obtaining personally identifiable information,” he says. “As part of its Guidance on Inappropriate Data Practices, it outlines the factors that were set out by the Federal Court which are used to evaluate whether an organization’s purpose in collecting, using and disclosing information is in compliance with PIPEDA subsection 5(3). This information can be extremely useful for businesses as they explore different technologies and how they can be used to benefit their operations. However, the challenges lie in the fact that the development of emerging facial recognition technologies and artificial intelligence are advancing so quickly that they’re constantly outpacing necessary updates to the regulations. What’s resulting is a limitation being placed on the potential of these technologies because of a hesitancy and reluctance shown by business to adopt them based on ambiguity around their use.”
It’s a concern that’s noted by Jill Clayton, Information and Privacy Commissioner of Alberta, and one of the Commissioners involved in the Clearview AI investigation. She recognizes the dual obligation of the OPCC in protecting the identifiable information of Canadians while encouraging and supporting the development of technologies offering such transformative and beneficial potential.
“As the use of facial recognition technology expands, significant issues around accuracy, automated decision making, proportionality and ethics persist,” she says. “The Clearview investigation shows that across Canada we need to be discussing acceptable uses and regulation of facial recognition. Regulation would not only assist in upholding privacy rights, it would provide much needed certainty to all organizations thinking about using or developing the technology.”
Clearing Up Ambiguity
As O’Keefe asserts, the retail industry and the developers and providers of these types of technologies would gladly welcome further discussion toward sensible regulation. Until such regulation is developed, however, he points to a four-criteria litmus test that companies can apply to their own use of facial recognition and artificial intelligence to help clear up whatever ambiguity remains. The four criteria that O’Keefe suggests must be met, in conjunction with adherence to the OPCC’s 10 fair information principles, are as follows:
- Necessity – does the proposed collection, use and/or disclosure of information meet a substantial business need?
- Effectiveness – will the proposed collection, use and/or disclosure be effective in meeting that business need?
- Proportionality – is the loss of privacy to the individual proportional to the benefits obtained by the organization?
- Alternatives – are there more privacy-friendly alternatives that would meet the business need?
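The four criteria above work as an all-or-nothing checklist: a proposed use should proceed only if every one of them holds. As a purely illustrative sketch (the criteria names come from O'Keefe's test, but the function and data structure below are hypothetical, not an official OPCC tool), the test could be expressed as:

```python
# Hypothetical sketch of the four-criteria litmus test as a checklist.
# Only the four criteria themselves come from the article; the names and
# structure here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class UseCaseAssessment:
    necessity: bool              # does the use meet a substantial business need?
    effectiveness: bool          # will it be effective in meeting that need?
    proportionality: bool        # is the privacy loss proportional to the benefit?
    no_better_alternative: bool  # is there no more privacy-friendly alternative?

def passes_litmus_test(a: UseCaseAssessment) -> bool:
    """All four criteria must hold (alongside adherence to the
    OPCC's 10 fair information principles) before proceeding."""
    return all([a.necessity, a.effectiveness,
                a.proportionality, a.no_better_alternative])

# Example: a proposed facial-recognition deployment that fails proportionality
proposal = UseCaseAssessment(
    necessity=True, effectiveness=True,
    proportionality=False, no_better_alternative=True)
print(passes_litmus_test(proposal))  # → False
```

The point of the all-of-four structure is that no single criterion can compensate for another: a highly effective system that is disproportionate to the privacy loss still fails the test.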
O’Keefe stresses that one of the fundamental lessons to emerge from cases like the OPCC’s investigation into Clearview AI is that retailers and other businesses leveraging artificial intelligence and facial recognition need to approach these technologies with greater diligence and accountability. He says it’s about gaining a full understanding of the technology, the artificial intelligence that’s been created and the data that feeds that intelligence, in order to properly assess the capability and usage of the systems they purchase. Of equal importance, he suggests, is that retailers and their technology partners remain resolute and not be discouraged by the OPCC’s findings into Clearview AI’s practices.
“Artificial intelligence is here to stay and offers enormous capabilities and benefits for businesses. Retailers shouldn’t abandon their use of existing technologies or their exploration of emerging ones. There’s so much untapped potential at the moment, and it continues to evolve and progress. Machine learning is constantly enhancing and informing artificial intelligence toward increasing levels of sophistication. Today, intelligent systems are being created. And in the future, we’ll see the development and advancement of super-intelligent systems which will be informed to the point of executing decisions. Retailers don’t want to miss out on this kind of innovation and the tools that can help them transform their businesses. To ensure that they don’t miss out, the industry needs to understand that not all uses of facial recognition and artificial intelligence technology represent a breach of privacy rights. By adhering to the OPCC’s 10 fair information principles, conducting the four-criteria litmus test against their own desired use of technology systems and remaining diligent when it comes to their exploration of different technologies, retailers can continue moving forward within the increasingly digital world while ensuring compliance with Canada’s privacy laws.”
For more information concerning the interpretation of the Personal Information Protection and Electronic Documents Act (PIPEDA) as it relates to the collection, use and disclosure of information and the appropriate use of facial recognition and artificial intelligence technologies, contact Stephen O’Keefe at stephen@blmorg.com.