Canada risks losing its artificial intelligence edge

Canada is a world leader in AI research, but if our companies don’t adopt the technology quickly enough, they risk being swallowed up or, worse, made irrelevant.

Canada is often feted as a world leader in cutting-edge artificial intelligence research, with companies such as Facebook Inc., Google LLC, Uber Technologies Inc. and LG Electronics Inc. taking advantage of the highly specialized expertise coming out of the country’s universities, especially in Toronto and Montreal.

But it’s a different story when it comes to actually adopting AI, according to an Accenture 10-country survey of 305 business leaders, including 44 Canadian executives.

Canada is only narrowly ahead of Brazil in AI adoption, and behind the rest of the surveyed countries, including the United States, China and the United Kingdom.

Nevertheless, Accenture, Deloitte and myriad other companies are rushing to bring the gospel of artificial intelligence to all corners of the Canadian economy, and hundreds of millions of dollars are being invested in startups to make it happen.

AI venture capital deals totalled US$352 million in just the first three quarters of 2018, according to a report by PwC Canada.

The report also said that half of all early-stage deals in the third quarter of 2018 were with AI companies.

The reason is simple.

Artificial intelligence has the power to make virtually every business more efficient and to create durable economic advantages for companies and countries alike, but those that are slow to adopt it run the risk of being swallowed up or, worse, made irrelevant.

“We should absolutely be focusing on our major industries and making sure that we are getting adoption in those industries,” said Jodie Wallis, Accenture’s managing director of artificial intelligence in Canada.

“I don’t think there’s a whole lot in it for Canadians to compete with Google. I don’t think that’s where the win is.”

As it stands, she said, Canada is not doing enough to keep its AI expertise “serving us, rather than letting our supply trickle elsewhere.”

Wallis directly asked Prime Minister Justin Trudeau about the adoption of artificial intelligence for an episode of Accenture’s The AI Effect podcast this fall.

“We know there are a lot of people working hard on this all around the world, and Canada has advantages that we need to build on,” Trudeau said. “And that means getting our business people to really realize that this transformative technology is not just closer than it may seem, but more accessible because we’re surrounded by so many strong AI ecosystems in Canada.”

What exactly those ecosystems are is as murky as the definition of artificial intelligence itself. Trudeau offered one definition in the interview and Accenture’s is a little different, but the label is also getting slapped on all sorts of technologies as a marketing tool.

In short, AI is not one thing, but a whole range of technologies where computers mimic human intelligence. One subset is machine learning, where a computer system has an automatic feedback function so it can learn from experience and change its behaviour over time to get smarter.

But even machine learning has subsets, including deep learning, which uses computer algorithms called neural networks that are structured to mimic the way the human brain works.

Deep learning is where a lot of the industry excitement is, because neural networks are particularly good at chewing through large amounts of data — especially unstructured data such as photos, video and text — and identifying patterns.
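To make that concrete, the following is a minimal, illustrative sketch of a deep learning model written in PyTorch; the task, layer sizes and data are invented for demonstration and are not drawn from any company mentioned in this article. It shows the basic loop: feed data through a stack of artificial neurons, measure the error, and repeatedly adjust the weights so the network learns the pattern.

```python
# A minimal sketch of a deep neural network in PyTorch (illustrative only).
# The task, layer sizes and data are invented: a tiny classifier that learns
# to spot patterns in unstructured inputs (here, random stand-ins for images).
import torch
import torch.nn as nn

model = nn.Sequential(          # a stack of layers loosely mimicking neurons
    nn.Linear(784, 128),        # 784 inputs, e.g. a flattened 28x28 image
    nn.ReLU(),
    nn.Linear(128, 10),         # 10 output classes
)

inputs = torch.randn(64, 784)            # a batch of 64 fake "images"
labels = torch.randint(0, 10, (64,))     # fake labels

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                  # learning = repeatedly adjusting weights
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                      # compute how to nudge each weight
    optimizer.step()                     # nudge them; the network "gets smarter"
```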

Canadian researchers were among the pioneers in developing deep learning, which is decades old but has only become a viable technology in the past decade.

Since then, Google, Facebook, Uber, Amazon.com Inc., Microsoft Corp. and many other giant technology companies have quickly jumped on board the deep learning train in a big way. These companies realize deep learning can fit into a variety of business cases by processing large quantities of data, identifying patterns and then making predictions.

For example, a bank can use historical fraud data to identify when a potential credit card transaction is suspicious, and then lock down the account while sending a text message to the customer. If the algorithm guesses right, the bank saves itself a fraudulent transaction. If the algorithm guesses wrong and the customer texts back to unlock the credit card, it’s still another data point that can further refine the system to make it smarter next time.
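The following hypothetical sketch, written in Python with scikit-learn, shows how that feedback loop might look in code; the transaction features, data and threshold are invented for illustration and do not describe any real bank’s system.

```python
# A hypothetical sketch of the fraud feedback loop described above.
# Feature names, data and the 0.5 threshold are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical transactions: [amount, foreign_merchant, hour_of_day], label 1 = fraud
X_history = np.array([
    [12.50, 0, 14], [980.00, 1, 3], [45.00, 0, 19], [1500.00, 1, 2],
    [8.75, 0, 9],   [720.00, 1, 4], [60.00, 0, 12], [2100.00, 1, 1],
])
y_history = np.array([0, 1, 0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_history, y_history)

# A new transaction comes in: score it, and lock the card if it looks suspicious.
new_txn = np.array([[1250.00, 1, 3]])
fraud_probability = model.predict_proba(new_txn)[0, 1]

if fraud_probability > 0.5:
    print("Transaction flagged: lock card and text the customer.")

# If the customer replies that the purchase was legitimate, that answer becomes
# a new labelled example, and retraining on it sharpens the next prediction.
customer_says_legitimate = True
if customer_says_legitimate:
    X_history = np.vstack([X_history, new_txn])
    y_history = np.append(y_history, 0)          # label: not fraud after all
    model = LogisticRegression().fit(X_history, y_history)
```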

Even if you’re not playing around with cutting-edge deep learning technology, artificial intelligence is generally tied to processing data. “Large, traditional enterprises tend to have a lot of data. They tend to have a lot of unused data. AI is just a way to tap into that goldmine they’re sitting on,” said Megan Anderson, director of business development at Integrate.ai, a Toronto startup that raised $30 million in venture capital in September.

The company helps large enterprises such as the Bank of Nova Scotia and Corus Entertainment Inc. implement AI solutions. “Often when people think machine learning and deep learning, they think of a fancy model,” Anderson said. “In fact, the model is often the easiest part. The hard part is, what data are you using for that model?”

One challenge that comes with using deep learning is that it is not always clear how a system arrives at an answer after processing the data. Say a bank is using machine learning to predict who is most likely to default on a mortgage, and then offers loans accordingly.

Hypothetically, if the underlying data was somehow biased based on race, gender or age, the neural network would pick up on that bias and refuse to issue loans to certain racial groups, or it might offer less favourable rates.

Guarding against bias and managing ethical concerns is a major area of study as machines take on increased decision-making functions. According to the Accenture survey, such concerns are something Canadian companies seem to be fairly sensitive toward, but Anderson said companies shouldn’t hold back because of them.

“I always caution people to not over-strategize at the expense of just kicking something off right away and learning by doing,” she said. For one thing, Anderson said, humans don’t get decisions right 100 per cent of the time either, but it’s easier to audit machine decisions and check for bias than it is with human decisions.
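As a rough illustration of why machine decisions are easier to audit, the short sketch below computes approval rates by group from a log of model decisions; the data and group labels are invented. A similar tally over human decisions would require records that often don’t exist.

```python
# A simplified sketch of the kind of audit described above: because model
# decisions are logged, approval rates across groups can be measured directly.
# The data and group labels are invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group; a large gap is a signal to dig into the training data.
rates = decisions.groupby("group")["approved"].mean()
print(rates)
print("Gap between groups:", rates.max() - rates.min())
```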

Regardless of any concerns, most people who work in AI expect deep learning and other forms of machine learning will be embedded in just about every computer system in the future.

Consumer-focused companies will use neural networks to predict customer demand in order to make inventory and supply chain decisions.

Mining companies will use it to analyze geological samples and optimize extraction techniques.

Cities are already using it to predict traffic patterns and direct drivers around traffic jams.

Eventually, as the company’s deep learning system gets smarter, it wants to expand the service to offer advice and feedback as customers create newsletters and billboards using automated tools.

But instead of outsourcing specific tasks to AI systems, it might make more sense for large enterprises to build custom tools.

Deep learning gets smarter with more data, so outsourcing such functions serves to improve the algorithms of those third-party vendors.

Large companies that keep all their data in-house and feed it through bespoke AI systems can make their own systems smarter and able to deliver more value and sharper insights. For example, a bank could outsource fraud management to an AI service that can handle that one function, but the same bank could decide to make deep learning a central part of its whole system by pulling in credit card transactions, investments and other data in order to make more refined predictions about fraud. The same system could also help make predictions about who is likely to default on loans, and which customers are likely to need specific financial products.
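One way to picture that kind of centralized, bespoke system is a single network with shared layers and several prediction heads, a pattern often called multi-task learning. The sketch below, in PyTorch, is hypothetical; the feature counts, layer sizes and heads are invented for illustration and are not a description of any bank’s system.

```python
# A hypothetical multi-task model: shared layers learn one representation from
# pooled customer data, and separate heads predict fraud, loan default and
# product interest. All names and sizes are illustrative.
import torch
import torch.nn as nn

class BankModel(nn.Module):
    def __init__(self, n_features=32):
        super().__init__()
        # Shared layers learn one representation from all pooled customer data.
        self.shared = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        # Separate heads reuse that representation for different predictions.
        self.fraud_head = nn.Linear(64, 1)
        self.default_head = nn.Linear(64, 1)
        self.product_head = nn.Linear(64, 1)

    def forward(self, x):
        h = self.shared(x)
        return (torch.sigmoid(self.fraud_head(h)),
                torch.sigmoid(self.default_head(h)),
                torch.sigmoid(self.product_head(h)))

model = BankModel()
customer_features = torch.randn(1, 32)   # stand-in for pooled customer data
fraud_p, default_p, product_p = model(customer_features)
```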

But it won’t be easy.

For one thing, banking data is currently scattered across different systems and lines of business, and those systems largely don’t talk to each other. In principle, AI can have a massive impact at a financial institution, but the biggest barriers are not AI-related.

The same is potentially true of every sector of the economy, which is why AI consulting firms are betting they are the ones that can take AI to the mainstream.

Deloitte, for one, launched its Omnia AI practice in Canada earlier this year with about 300 people, but wants 600 to 700 AI practitioners by the end of the fiscal year. The most important part of using AI isn’t the data, or the algorithms, but finding the best ways to harness the technology for a particular business.

Source: Financial Post