Tom Fox welcomes Simon Moss to this week’s show. Simon – who describes his background as “eclectic”, having worked in and led many companies over his career, including IBM – is now the CEO of Ayasdi, one of the most innovative companies in the artificial intelligence space. Simon and Tom discuss the important work Ayasdi is doing for its clients.
The Data Problem
Tom asks why AI can’t seem to keep up with the volume of data that needs to be reviewed for AML, ABC and trade sanctions. Simon disagrees that it’s an issue of volume. The problem is diversity and distribution. He says, “The problem with data now is that it is so diverse, so distributed, and we’re still trying to deploy products of extraordinary innovation – including AI products – in the same ways as we did in the 1970s.” He laments that we try to homogenize data into a construct, which uses 80% of our data management resources. “We have institutionalized redundancy in data management, and it is getting worse because of the proliferation of data sources.” While this structure works for data at rest, it is unsuitable for unstructured data and data in use.
A Unique Approach
“We don’t use the data model approach,” Simon remarks. Ayasdi believes that a company is represented in its data, so it builds a model that is unique to each client. “…it knocks 40 to 50% off the time to actually deploy innovation,” he says. He explains why hypothesis-based machine learning cannot effectively predict or discover crime or compliance issues: “Hypothesis-based machine learning is brilliant for finding a needle in a haystack… The problem with compliance is you’re looking for a needle in a stack of needles.” Ayasdi’s approach, on the other hand, is to let the data tell the story. “The breakthrough that Ayasdi uses,” Simon says, “is what’s called unsupervised learning as part of a machine learning process. In other words, we are not going to give the software a hypothesis of what to look for. We simply say, ‘Go find interesting stuff.’ Let the data tell us the story.”
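Simon’s distinction between hypothesis-based and unsupervised learning can be pictured with a small, generic sketch. The example below is not Ayasdi’s technology; it is a hypothetical illustration using scikit-learn’s DBSCAN to cluster transaction features without any predefined rule, surfacing whatever does not fit any cluster for human review. The feature names, values and parameters are assumptions for illustration only.

```python
# Hypothetical sketch of unsupervised anomaly detection -- not Ayasdi's implementation.
# Transactions are clustered with no pre-defined rule or hypothesis; points that do not
# fit any dense cluster are surfaced for human review.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Illustrative transaction features: amount, hour of day, counterparty risk score.
transactions = np.array([
    [120.0, 10, 0.1],
    [95.0, 11, 0.2],
    [110.0, 9, 0.1],
    [105.0, 14, 0.3],
    [98.0, 13, 0.2],
    [9800.0, 3, 0.9],   # stands apart from the rest of the data
])

# Scale features so no single dimension dominates the distance metric.
scaled = StandardScaler().fit_transform(transactions)

# DBSCAN labels points that belong to no dense region as -1 ("noise").
labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(scaled)

for row, label in zip(transactions, labels):
    if label == -1:
        print(f"Flag for review: {row}")
```

The point of the sketch is the absence of a hypothesis: nothing in the code says what “suspicious” means; the outlier simply emerges from the shape of the data.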
Innovating for the Future
Ayasdi is engineering the operational diligence and deployment needed for the future. It was technology that drove the blue-collar transformation of the early 2000s, and it is technology that will drive the transformation of white-collar industries over the next decade. “We’re engineering our technology to make sure that we can service a customer expectation in the future,” Simon says. Tom comments, “It strikes me that the insights that could be generated [go] really far beyond the anti-money laundering and fraud and corruption.” Simon agrees. He shares three examples of how Ayasdi has helped its clients gain valuable insights and profit from them. “What we’re doing is we’re creating true Alpha. We’re creating true opportunity and we’re creating true transparency. When those decisions are made, you know that the decision that has been created has all the explainability, all the referential insight that’s needed, all the appropriate data, so that when a regulator comes in and says, ‘Why did you do this?’, it’s all completely supported.”
The true challenge of innovation, Simon argues, is making a solution work at scale. “The challenge is, how do you optimize the operating model of an institution? And you do that by looking at the institution as a whole.”
Resources
Ayasdi.com
Simon Moss on LinkedIn
In this episode, we discuss the rapidly expanding use of artificial intelligence, machine learning and robotic process automation to undertake trade surveillance and mitigate fraud. Joining me today are two experts on artificial intelligence from both the technical and the legal and compliance perspectives.
Join us each week as we take a deep dive into the various forms of fraud across the world and discuss crime families, penny stock boiler rooms, international money launderers, narco-traffickers, oligarchs, dictators, warlords, kleptocrats and more.
Scott Moritz is a leading authority on white-collar crime and anti-corruption, and on the evaluation, design, remediation, implementation, and administration of corporate compliance programs and codes of conduct. He is also considered an authority on the establishment, training, and oversight of the investigative protocols carried out by financial intelligence, corporate security, and internal audit units.
In this week’s Innovation In Compliance show, Tom Fox and guests Sean Freidlin, Yan Tougas and Patrick Henz have a roundtable discussion about their experience taking Seattle University’s free course, AI Ethics For Business. They chat about what they felt were the highlights of the course, as well as the opportunities for improvement.
Patrick Henz
Patrick likes how trainers from different disciplines work together as a team to present the course. He suggests that this interdisciplinary approach could be used by companies for compliance training, since compliance is becoming more of an integrated function, mainly due to budgetary constraints. Patrick emphasizes the importance of continuous learning: the world is changing so quickly that we cannot rely solely on our university training to keep abreast.
The topic of robotic process automation was missing from the course, Patrick thinks. He believes that psychology and ethics, topics discussed in the first module, are relevant for all compliance practitioners. He comments, “We’re not only here to identify the bad employees but furthermore to protect the good employees, which includes protecting them against themselves…”
Yan Tougas
Organizations using and/or creating AI must create their own set of governing values and principles from the outset, Yan points out. Two of those values should be transparency and agency. “If we are going to use AI to make some critical decisions about people’s welfare,” Yan says, “…we need to create room in the process for a human to make a final decision.” He points out that the pressure to rush to market is one reason companies do not create their own values and principles around AI. “We need to be extra careful and make sure that we don’t let this pressure to get to market and this pressure to adopt AI blind us from the homework we need to do up front,” he comments.
Yan appreciates the Operational Readiness document in Module Three, which he describes as a practical tool compliance practitioners can use. On the other hand, he thinks that the user interface and the quizzes at the end of each module could be improved.
Sean Freidlin
Sean finds it refreshing that a large corporation such as Microsoft has partnered with Seattle University to create free training for the public good. He hopes that more companies will embrace these types of partnerships in the spirit of corporate social responsibility. Sean sees this as the emergence of a deeper commitment to ethics as AI develops. He notes with interest that the Vatican has joined this conversation.
Sean poses two interesting questions:
- What impact will COVID-19 have on AI advancement?
- What makes a good online learning experience?
Having the subject matter experts as narrators and anchors throughout the course establishes their credibility; Sean views this as a pattern other course creators should follow. He finds the course content too text-heavy, however, and the UI design not mobile-friendly.
Tom Fox
The exercise emphasized for Tom the need for companies to start with ethical values and accountability for the entire organization. You cannot simply ask those involved with these cutting-edge questions to be the sole corporate repository of ethical and moral values, he argues. Put these values in place now and enshrine them throughout your organization, so that when a business opportunity or a crisis arrives, you will already have the framework in place to make a decision aligned with your company values.
The course is a good reminder to consider governance and structure as part of your compliance regime, Tom comments. It was a positive experience overall; however, it may not work for ongoing communications or training due to the time it requires.
Resources
Seattle University course: AI Ethics for Business
Rome Call for AI Ethics
Rome Call
Vatican joins IBM, Microsoft to call for facial recognition regulation
One thing that is certain going into 2020 and beyond is that technology will improve the efficiency of compliance and will assist in operationalizing compliance into the fabric of every business that embraces it. I would posit that the compliance professional who incorporates these techniques into their organization’s compliance program will not only move their compliance program forward but also make their company run more efficiently and, at the end of the day, more profitably.
AI weds human interaction and experience with the data available to every company: its own internal information, which most often sits in siloed verticals and goes unused. This data can provide the foundation for business research, risk-forecasting models and AI. When you couple this data with insights into what humans do well or poorly, you can pair the best of these two seemingly disparate domains. Moreover, when a compliance function embraces AI and this combined human and technological approach to forecasting and risk assessment, and then keeps improving its risk management techniques, it will create a sustainable strategic business, compliance and intelligence advantage over its competition.
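As a rough, hypothetical illustration of what breaking down silos can look like in practice (the takeaways below summarize the point), the sketch joins two internal datasets into a single view that an analyst or a risk-forecasting model could work from. The table names, columns and rule are assumptions, not a prescribed design.

```python
# Hypothetical sketch: pulling data out of two internal "silos" (payments and the
# vendor master) and joining them into one view a risk model or analyst could use.
# Column names and figures are invented for illustration only.
import pandas as pd

payments = pd.DataFrame({
    "vendor_id": [101, 102, 103],
    "total_paid": [250000, 48000, 910000],
    "late_invoices": [1, 0, 7],
})

vendor_master = pd.DataFrame({
    "vendor_id": [101, 102, 103],
    "country": ["US", "DE", "BR"],
    "third_party_risk_rating": ["low", "low", "high"],
})

# Breaking down the silos: one joined table instead of two disconnected systems.
risk_view = payments.merge(vendor_master, on="vendor_id", how="left")

# A crude, rule-based flag a human analyst could refine or a model could learn from.
risk_view["review"] = (
    (risk_view["third_party_risk_rating"] == "high") & (risk_view["late_invoices"] > 5)
)
print(risk_view)
```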
Three Key Takeaways:
- Use the big data in your own organization.
- Break down silos to get the data.
- Using the data in your own organization will drive greater business efficiency and greater profitability.
We previously considered how AI can be used as a business advantage for compliance. The power of AI can extend the more traditional functions of prevention, detection and remediation. The first way is simply in handling the mass of data that can inundate a compliance practitioner. Many compliance practitioners are overwhelmed by the amount of data available to them and do not know how, or even where, to begin.
Patrick Taylor has said that AI allows the compliance practitioner to understand the “subtle clues in that pattern of activity that will clue me in to take a different look.” He likened it to seeing “patterns in raked leaves,” which allows you to then step in and take a deeper and broader look at an issue, either through an audit or an investigation. This is where the compliance practitioner can step back and keep an eye on the big picture and the longer term, as opposed to just the immediate numbers and information in front of them. It may also be the best hope for finding this kind of systemic fraudulent behavior.
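Patrick Taylor’s “patterns in raked leaves” image can be pictured with a deliberately simple, hypothetical sketch: scan each vendor’s payment history for a month that breaks sharply from its own pattern and flag it for a deeper look. The vendors, figures and threshold are invented for illustration; real monitoring would be far richer.

```python
# Illustrative only: a simple statistical scan for "patterns in raked leaves" --
# flag any vendor whose latest monthly spend deviates sharply from its own history,
# prompting a deeper audit or investigation. Vendor names and figures are hypothetical.
import statistics

monthly_spend = {
    "Vendor A": [10200, 9800, 10500, 9900, 10100, 10300, 24750],
    "Vendor B": [5400, 5600, 5500, 5450, 5700, 5550, 5600],
}

for vendor, history in monthly_spend.items():
    baseline, latest = history[:-1], history[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    # A z-score above 3 is a crude signal that the latest month breaks the pattern.
    z = (latest - mean) / stdev if stdev else 0.0
    if z > 3:
        print(f"{vendor}: latest spend {latest} is {z:.1f} std devs above its norm -- take a closer look")
```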
Three key takeaways:
- Do you know what your information means?
- AI can help both the detect and prevent prongs in a best practices compliance program.
- AI can help you to see the patterns in raked leaves.
Next, we consider the four practices that create the conditions for delivering an AI solution to compliance. Using these four practices can lead to enhanced operational excellence, more efficient business processes, and a more robust compliance experience. They are: (1) developing clear, realistic use cases, (2) managing AI learning, (3) continuous improvement and (4) thinking cognitively. By applying these practices, business leaders can fully operationalize AI applications for compliance into their organizational DNA and set themselves up to reap those rewards. It is a continuous cycle: the capabilities enable employees to execute the practices, and the practices themselves exercise and strengthen the capabilities. This cycle helps companies continually improve at developing and using AI applications that make operations more efficient and create business value through greater profitability.
Three key takeaways:
- AI is not a panacea.
- It is not simply about reading numbers; it is about thinking critically.
- Continuous improvement is a key byproduct of using AI in compliance.
Next, we consider the crucial capabilities a compliance function must have to implement an AI solution. Over the next several pieces, I will use the article Using AI to Enhance Business Operations by Monideepa Tarafdar and Cynthia M. Beath as an introduction to how the corporate compliance function can use an artificial intelligence (AI) program to enhance not only the compliance function but also business operations.
Generating value from AI programs is not easy for compliance professionals, as there can be multiple roadblocks to successful design and implementation. The problem is that many companies which desired to benefit from AI programs failed to do so because they did not develop the necessary organizational capabilities. The authors identified five capabilities that companies need to splice into their organization’s DNA to create an effective AI program, which I have adapted for the compliance function.
Three key takeaways:
- What is the power of an AI application?
- What are the foundations of AI application competence?
- What are some of the roadblocks to AI competence?
Vince Walden welcomes Lee Tiedrich to the Walden Pond podcast this week. Lee is a partner at Covington & Burling, where she co-chairs the firm’s global and multidisciplinary artificial intelligence (AI) practice. AI sits at the intersection of law and technology, she says. The technology is growing faster than the law, and Covington helps clients navigate the evolving legal landscape so they can capitalize on the opportunities presented by AI. Other aspects of the practice’s work include product counseling, advising clients on how to improve their operational efficiency using AI, and advising them on how to adapt their business based on the policy landscape.
What is AI?
Lee defines AI as using computing to automate, imitate or emulate human behavior. There are three key components to AI: algorithms and code, data, and hardware. Advances in digital and hardware technology are largely responsible for the enthusiasm for AI in the market. Lee predicts that the adoption and development of AI will continue to grow.
Compliance Professionals Need to Know
If you’re using AI or planning to, you should be aware of the key issues and policy developments, especially in your jurisdiction. The policy landscape is evolving rapidly. If AI is relevant to your business, become informed about where policy is going and think about how that affects your business and what changes you might need to make to your operations. Another issue that’s relevant to compliance professionals is how to make AI trustworthy, so that organizations can enjoy its benefits while mitigating unintended harms. Lee says that governance is an effective tool to help manage the data, development, and deployment of AI. Given the rapidly evolving landscape and the growing interest in AI, organizations should dedicate some resources to understanding the AI legal landscape.
Resources
InsideTechMedia.com
Law360 article: The 10 Best Practices For Due Diligence in AI Transactions