Some retailers and shopping centre landlords in Canada are using AI-based facial recognition technology to maintain order and reduce theft at a time when social issues and retail shrinkage are on the rise. The ethical use of AI-based data is critical: misidentification and other failures are already causing problems, prompting businesses to look to best practices when deploying facial recognition tech.
A recent example is an Indigenous man in Manitoba who was accused of being a repeat thief at a Canadian Tire store in suburban Winnipeg. Facial recognition technology in the store identified him as a suspect who had allegedly stolen from the store months earlier, an accusation the man vehemently denied. It was ultimately determined that he wasn’t the thief, prompting an apology. It’s one of numerous examples of AI-based technology failing to perform with the accuracy expected.
And there has been backlash over the use of such technologies in shopping centres — Cadillac Fairview came under fire in 2020 for collecting about five million images of shoppers at the landlord’s digital kiosks, prompting Canada’s privacy watchdog to launch an investigation into the practice. Cadillac Fairview claimed that the data was anonymized, though the investigation found that privacy was not maintained and images had been retained.
This poses a challenge for retailers and landlords as thefts increase in major Canadian cities. Since the pandemic, shrinkage in stores has risen dramatically for a variety of reasons. Economics is certainly one of them, with some people who lost jobs turning to theft as a means to an end. Mental health has also become an issue following repeated lockdowns and other stresses since the pandemic hit in early 2020. Being able to identify offenders accurately is a desired outcome for retailers seeking to maintain order and profitability.
The world of technology adoption is changing quickly, with a rise in the use of Artificial Intelligence across various platforms, including facial recognition technology. Retail Insider recently had the opportunity to sit down with Kathy Baxter, Principal Architect of Ethical Artificial Intelligence Practice at Salesforce, at the Dreamforce Conference in San Francisco to discuss the ethical use of AI tech in businesses broadly.
Baxter said that there are many potential issues in using AI, be it a hiring system that might discriminate based on certain factors, or voice recognition technology a business may use that cannot recognize a particular accent. She said that ultimately retailers and other businesses need to mitigate the potential for harmful effects, given that AI is becoming commonplace both online and in physical spaces.
She felt so passionately about the topic that in 2018 she wrote her own job description and created a role to oversee how AI can be used in the most ethical ways. The goal is to build and maintain systems utilizing AI that customers can ultimately trust. With that, she and a team created Salesforce’s Trusted AI Principles, which is a commitment to developing AI that’s responsible, accountable, transparent, empowering and inclusive.
It was a timely move, given that many consumers don’t trust AI technology — part of the reason is likely a lack of knowledge in what it does, while at the same time consumers also overwhelmingly believe that companies have a duty to improve the state of the world. Incidents such as what happened recently at Canadian Tire, or with Cadillac Fairview, don’t help.
Specific to Salesforce, Baxter said that the company’s Einstein teams also saw the need to build ethics into Salesforce products, including identifying risks and opportunities to mitigate harmful outcomes.
One example she noted for online sellers is product recommendations — consumers may question why a particular item was chosen by an AI platform. If AI were to suggest cosmetics items only to women, it could ultimately exclude some men and those who are transgender or non-binary. Challenges may further persist if, say, a household shares one computer for purchases, so one spouse’s buying history skews recommendations for the other, causing confusion and other potential issues. Developing technologies must be thoughtful, with ‘consequence scanning’ being a tool that asks teams to envision potentially unintended outcomes of a new feature and how to mitigate harm.
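The cosmetics example above can be made concrete with a simple audit. The sketch below is illustrative only — not Salesforce’s implementation — and uses a made-up recommendation log to check which customer segments never see a given product category, the kind of blind spot consequence scanning is meant to surface:

```python
from collections import defaultdict

def exposure_rates(log, category):
    """For each customer segment, the fraction of recommendation events
    that included `category`. `log` is a list of (segment, categories)
    pairs -- a hypothetical stand-in for a real recommendation audit log."""
    hits = defaultdict(int)
    total = defaultdict(int)
    for segment, categories in log:
        total[segment] += 1
        if category in categories:
            hits[segment] += 1
    return {seg: hits[seg] / total[seg] for seg in total}

# Toy log: each entry is (customer segment, categories recommended)
log = [
    ("women", ["cosmetics", "shoes"]),
    ("women", ["cosmetics"]),
    ("men", ["tools"]),
    ("men", ["tools", "shoes"]),
    ("nonbinary", ["shoes"]),
]

rates = exposure_rates(log, "cosmetics")
# Segments with zero exposure flag a possible blind spot worth a human review.
flagged = [seg for seg, rate in rates.items() if rate == 0.0]
```

In this toy data, cosmetics are shown to every "women" event and never to the other segments, so both would be flagged for review rather than automatically corrected — the point is surfacing the skew, not deciding what to do about it.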
She also noted that a dedicated Data Science Review Board can be utilized to enforce best practices in data quality and model building, be it for a particular product or even the entire organization. This helps determine if there is bias and how it can be overcome. Ultimately a group reviewing AI-related platforms aims to create transparency in how they collect data used by machine learning algorithms.
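One common check a review board of this kind might run — offered here as a generic sketch with toy data, not a description of Salesforce’s actual process — compares a model’s positive-outcome rate across groups and flags the model when one group’s rate falls far below another’s (the “four-fifths” rule of thumb used in disparate-impact analysis):

```python
def selection_rates(outcomes):
    """Positive-outcome rate per group. `outcomes` maps a group label
    to a list of 0/1 model decisions -- toy data, not any real system."""
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def passes_four_fifths(outcomes, threshold=0.8):
    """Return True if the worst-served group's rate is at least
    `threshold` times the best-served group's rate."""
    rates = selection_rates(outcomes)
    worst, best = min(rates.values()), max(rates.values())
    return best == 0 or worst / best >= threshold

# Hypothetical review-board input: model decisions split by group.
outcomes = {
    "group_a": [1, 1, 0, 1],  # 75% positive rate
    "group_b": [1, 0, 0, 0],  # 25% positive rate
}

ok = passes_four_fifths(outcomes)  # 0.25 / 0.75 is well below 0.8
```

A failing check like this one wouldn’t prove bias on its own, but it gives a review board a transparent, repeatable trigger for a deeper look at the training data and features.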
Baxter said that developing ethical AI takes time and effort for companies developing and using the technology. It’s part of a goal to be more responsible while adding value to innovation.
Accountability in the use of technology is also key to successful outcomes, be it with facial recognition or otherwise. AI is becoming commonplace, which means retailers in Canada will be using it across most activities in the future. Properly handling data will be key to preventing issues such as the misidentification of thieves in stores — critical both to building trust and to avoiding litigation in years to come.