Voices of Leadership: Insights and Inspirations from Women Leaders

AI Meets Curiosity: A Conversation with Mardi Witzel About AI Innovation, Leadership and Governance

Bespoke Projects Season 1 Episode 17

On today's episode, we talk with Mardi Witzel, CEO and co-founder of PolyML, an advanced AI and machine learning company. Mardi is also a leader in AI governance, serving on the Digital Governance Council’s AI Ethics Advisory Panel and contributing her expertise to the Province of Ontario’s AI Expert Working Group.

Mardi explains PolyML's innovative intersection of data analytics and AI to improve manufacturing. Learn how this horizontal technology is revolutionizing the auto parts sector, enhancing efficiency, quality, and safety.

The evolution of AI brings both excitement and concerns, including issues of fairness, privacy, security, job loss, and the potential for mass disinformation. We talk about the evolving landscape of AI governance, the importance of explainability, and Mardi's role as a leader in AI governance.

Mardi's curiosity has been instrumental in navigating her unique career path. Her journey reminds us that curiosity can lead us anywhere.

Connect with Mardi:
LinkedIn

Resources:
PolyML
NuEnergy.ai
White paper: AI-Related Risk: The Merits of an ESG-Based Approach to Oversight
The Center for International Governance Innovation (CIGI)
Women Get On Board
The Institute of Corporate Directors

What did you think of today's episode? We want to hear from you!

Thank you for listening today. Please take a moment to rate and subscribe to our podcast. When you do this, it helps to raise our podcast profile so more leaders can find us and be inspired by the stories our Voices of Leadership have to share.

Connect with us:
Voices of Leadership Website
Instagram
Bespoke Productions Hub

Mardi:

When Netflix is making a recommendation to you, it doesn't really matter how it came up with that recommendation. It's not a high-stakes decision. But if an AI model is making a recommendation, or is even deriving an insight, that has anything to do with your fundamental human rights, with your finances, with your healthcare, then arguably we want to know how that model is coming up with its decision or its recommendation, or even its insight.

Amy:

Welcome to Voices of Leadership, the podcast that shines a spotlight on the remarkable women of the International Women's Forum. I'm your host, Amy, and I'm inviting you on a journey through the minds of trailblazers. Each episode is a candid conversation with women leaders who are reshaping industries, defying norms and being instigators of change. Through these conversations, we aim to ignite a fire of inspiration within you, whether you're a budding leader, a seasoned executive or simply someone with a passion for growth. On today's episode, we talk with Mardi Witzel, CEO of PolyML, an advanced AI and machine learning company. Mardi is also a leader in AI governance, serving on the Digital Governance Council's AI Ethics Advisory Panel and contributing her expertise to Ontario's AI Expert Working Group. Mardi's unique career path reminds us that curiosity can lead you anywhere.

Amy:

Hi Mardi, welcome to the show.

Mardi:

Hi, Amy, it's great to be with you today.

Amy:

It's so nice to see you. You and I know each other pretty much just because we've all lived in Waterloo our entire lives. However, since you joined IWF, it's been lovely to reconnect with you. Can you tell us how you got involved with IWF?

Mardi:

Oh, absolutely. It's funny, I had never even heard of IWF. I sit on a board, technically it's a council, the Ontario Council of the Chartered Professional Accountants, with Ginny Dybenko, formerly the Dean of the Business School at Wilfrid Laurier. Ginny and I happened to be sitting beside each other at the board table and she mentioned IWF, and as she explained a little bit about the organization, I guess I probably lit up and said, oh, that sounds so interesting. And then she said, well, if you would be interested, and there you go. So thank you, Ginny.

Amy:

I say this, I think, every episode: pretty much everybody mentions Ginny as one of their connections to IWF, except maybe the ones who aren't part of this chapter, so you're not alone. And we had a lovely dine around at your house, which was so much fun. Were you able to get to know a bunch of the members there?

Mardi:

Yeah, so I think the dine around is a particularly nice format for getting to know people, because if you're standing up in a larger environment, you might talk to a small group of people, maybe drift on to the next group. I mean, all events are good, but the dine around is particularly well suited to sitting and speaking with people at the table and moving around, and we had about 20 or 22 of us. So I really, really enjoyed it and, yes, I had a chance to meet lots of people I hadn't met before, which, as a new member, is a special treat.

Amy:

It was. It was so much fun. Okay, so now I want to talk about your latest venture. You are CEO and co-founder of PolyML, an early-stage Canadian AI and machine learning company. Can you tell us how and why PolyML was started?

Mardi:

First of all, you did a great job. PolyML stands for Poly Algorithm Machine Learning. We're an AI and machine learning shop. I really couldn't be more excited than to be involved in a technology company at this point, a startup that has great prospects for scaling. As many of you know, you might have read about the importance of productivity and growth to our collective, healthy economic future, and it's companies like PolyML that are going to power the Canadian economy forward.

Mardi:

Now the interesting thing is we have what we call horizontal technology, meaning the technology we build can be applied across a wide range of sectors, and the first place we're doing business is the auto parts manufacturing sector. So we've actually got a partner, Martinrea International. They're one of Canada's largest auto parts manufacturers and we've been working with them through an NGen grant. Now, I don't know if your listeners will be familiar with the ecosystem of federal funding, but startups in Canada are particularly reliant on federal funding to get off the ground, and there are five Canadian superclusters, and one of them is all about next-generation manufacturing. So you might have heard the term Industry 4.0.

Mardi:

Manufacturing plants globally are tooling up to be able to use data, to do data analytics, to use AI, to improve their efficiency and make their part quality better, and that's the kind of project we've got going with Martinrea. I have to say, over the two years that we've worked together on this project, we've been particularly grateful to work with a wonderful partner like Martinrea, and also to be federally funded, because, frankly, the VC community in Canada, I mean, there's a strong venture capital community in Canada, but it's very conservative in terms of how it allocates funds and it looks for companies to have traction. So, anyway, it's a struggle. I'm answering more than your question, but how we got started is applying our novel proprietary technology in auto parts manufacturing.

Amy:

Well, that's great. We need to know way more than just the tagline, because it's definitely a new space and I don't know a lot about it, and I know a little bit about automotive manufacturing. So it's fascinating to hear how data and AI are being integrated, because my assumption is manufacturing has been a slower sector to adapt to digital technologies.

Mardi:

Yeah, it's probably fair to say, although increasingly, with the push of the Internet of Things, sensors and objects that you wouldn't normally expect can provide data for analysis.

Mardi:

I came into this industry through AI governance, and AI governance is concerned with how data is used. And of course, there's been data in industries like finance and healthcare and law enforcement for years, and you're right, the data hasn't been there in the same way in manufacturing, because these are physical industries, like construction is a physical industry.

Mardi:

But with the rise of the Internet of Things and the ability to extract data from places that you didn't used to think of data coming from, it opens the door to analytics to improve processes. So it doesn't seem super sexy, but it's actually exciting to think about streaming data off of sensors on welding equipment. We can collect information to do with the pressure of a weld, the current, the temperature, the acoustics, and use all of that information to go hunting for how to solve problems in the production process, how to improve part quality, how to make a plant environment safer for the employees working there. So it actually is exciting, and it's a different kind of data than they use in the banks, but nonetheless, I think you're going to see more and more physical industries figuring out ways to collect data to improve their businesses.

Amy:

You mentioned it's a horizontal technology, and you're in automotive manufacturing. Can it apply, or is the goal that it will apply, to other physical industries?

Mardi:

Yeah, exactly. So we have a very novel technology for harvesting insights from big, complicated data, and it doesn't really matter to us whether that data has to do with robotic welding equipment or another use case in auto parts manufacturing, like stamping. In fact, it could be transactional data at a bank related to the identification of fraud, or use cases around the determination of a person's creditworthiness. In healthcare, that data could be to do with rare diseases and the identification of what genomic data relates to the rare disease. The reason I point out all those use cases is that they're extremely different, but our technology can be used to help identify insights from data to do with all of them. Of course, there are tools and techniques for doing that today, but we do it differently, and one of the things I think is most special about our approach to harvesting insights is that the way we do it lends itself to fully interpretable AI models. This is a real challenge with AI today: the number of models that are built, especially from complicated data, where you're forced to use techniques like neural nets that are very effective at coming up with highly accurate predictions, but you actually don't have any idea how those models translate inputs into outputs, so they call them black boxes. And maybe in some use cases that's okay.

Mardi:

Maybe when Netflix is making a recommendation to you, it doesn't really matter how it came up with that recommendation; it's not a high-stakes decision. But if an AI model is making a recommendation, or is even deriving an insight, that has anything to do with your fundamental human rights, with your finances, with your healthcare, then arguably we want to know how that model is coming up with its decision or its recommendation or even its insight. And that's what PolyML does. We have an ability to harvest insights directly from data and then to build fully interpretable models. So it's very exciting for regulated industries, and it's very useful where you have big, complicated data, which you tend to have in auto parts manufacturing.

Amy:

It sounds like it's really honing in on something that the industry, or manufacturing in general, needs.

Mardi:

It is. So when we started, our first product had to do with robotic welding equipment. Take it as an example: an auto parts manufacturer might have a problem with expulsions on their equipment, and the expulsions are the sparks that you see coming off of the robotic welding equipment. It looks sort of sexy in the movies, but it's actually bad for the production process. It degrades the weld head, it degrades the quality of the weld and it causes splatter, which causes part damage. It's also bad for the environment, and it wouldn't be unusual for a manufacturer to slow down their process to try to avoid or reduce the number of expulsions. So then, all of a sudden, you're losing throughput.

Mardi:

What our technology has the ability to do is look at a big, complicated data set that has collected information to do with the pressure, the resistance, the current, the acoustics, and there could be hundreds of thousands of data points that we've collected information on, and we can identify a very small number, the two or three most important influencing variables as to what's causing the expulsion to happen. And with that insight from the data, the manufacturer can adjust their process in order to both increase throughput and reduce the number of expulsions, if not eliminate them. And so it's actually a win-win. From a sustainability standpoint, it's economically beneficial, but it's also ESG beneficial, because we have a better quality plant environment, a better quality product, less waste, fewer destructs. It's a very strong story.
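For readers who want a feel for what "finding the two or three most important influencing variables" looks like in code, here is a deliberately simplified sketch that ranks synthetic sensor channels by how strongly they correlate with an outcome. The channel names, coefficients and data are all invented for illustration; this is a generic off-the-shelf technique, not PolyML's proprietary method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
cols = ["pressure", "current", "resistance", "temperature", "acoustic"]

# Synthetic welding telemetry: five channels, but in this toy setup only
# current and pressure actually drive the expulsion signal.
X = rng.normal(size=(n, len(cols)))
y = 2.0 * X[:, 1] - 1.5 * X[:, 0] + rng.normal(scale=0.5, size=n)

# Rank channels by absolute correlation with the outcome; the two planted
# drivers should surface at the top of the list.
scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(len(cols))]
ranked = sorted(zip(cols, scores), key=lambda t: -t[1])
for name, score in ranked[:3]:
    print(f"{name}: {score:.3f}")
```

A production system would use richer techniques (and far messier data), but the shape of the output, a short ranked list of influential variables a process engineer can act on, is the same.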

Amy:

So what do you think PolyML's most significant impact on the manufacturing sector will be?

Mardi:

Well, I think our technology is novel enough that it can help them mine insights from data that they're sitting on, that they haven't been able to access, and I think the manufacturing sector is just learning to be data-centric, data-oriented. The banks have been doing quantitative modeling for decades, frankly, back to the 70s. Machine learning has existed since the 70s, and not all models are AI and machine learning. So banks have been mining their data for insights into profitability, making projections, using it for trading. But the manufacturing sector, because it's a physical industry, isn't used to having digital data, and I think tools like ours help them leverage the potential of data that they've only just realized they're sitting on. So it helps them, in practical terms, to be more efficient, to be safer, to have less waste, and it's opening the door to a different way of thinking about how to create and build value within their enterprise.

Amy:

That sounds so exciting. So I'm excited you're here for PolyML, but also for AI in general. As a layperson in the AI space, the speed at which the technology changes can be very overwhelming, and sometimes I don't know how to digest all of the information that's out there. What are you most excited about in the evolution of AI, and what are your biggest concerns?

Mardi:

Okay. Well, you wouldn't be alone in feeling overwhelmed and unsure about AI, about the safety concerns around AI. Just for those listeners who might feel the same way, let me orient you a little bit to the technology. AI and machine learning have been around since the 1950s. You might know the name Alan Turing, you might be familiar with the movie The Imitation Game. If you go back to the 50s, there was a notion of machines having intelligence, or at the very least, and it was contested at the time whether or not machines could have intelligence, perhaps they could do tasks that intelligent humans were normally thought of as the only way to get done. Then, in the 1970s, we see the development of machine learning algorithms, and there was something called the AI winter, which was, I suppose, a dip in the interest and investment in AI into the 70s and 80s, because people lost faith in its ability to create value. But then there was a resurgence of interest in AI beginning in the 1990s, and certainly by the 2010s you see an incredible swell of interest in AI. The reason is that some of the things that had hampered the technology in the past, like the lack of access to big data and the lack of strong compute capacity, came together around 2010, even a little bit before. Sometimes I've read that it's the increase in compute capacity, the rise of big data and the maturity of algorithms that conspired to make AI a purposeful, valuable technology starting around 2010. And then we see, in the 2010s, the rise of neural nets, and most recently we hear about generative AI, which is the new kid on the block.
All of those technologies that I've described are still examples of something called artificial narrow intelligence, and there's the question out there: when will we have artificial general intelligence, AGI? AGI is artificial intelligence that is presumed to have the same intelligence as a human being, and the reason I share all this with you is to say that all of the technology that exists today does not meet that requirement. It's still artificial narrow intelligence, at least until the rise of generative AI and the introduction of ChatGPT.

Mardi:

For the most part, when people talked about AI, they talked about machine learning, the ability of a machine to replicate tasks that normally require human intelligence, and, generally speaking, it's about making predictions and finding patterns. Now, on November 30th, 2022, a lot changed, because OpenAI introduced ChatGPT, and ChatGPT was, for the most part, most people's first introduction to generative AI. Gen AI had been in development for years, but laypeople like us weren't familiar with it, and generative AI is different than machine learning. As opposed to making predictions or drawing inferences, generative AI is literally about generating content. Now, some people would say it's new content; some people would say it's composites of content that it's been trained on. These foundational models, like large language models, are being trained on a wide swath of information from the internet: images, text, everything. So if you sit back and think about what AI is, AI is essentially algorithms that get trained on data. One of the things people are concerned about is: is there any bias in that data? Because if there is, then even a well-trained model will perpetuate historical tendencies towards discrimination and bias. It won't be fair.
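One simple way practitioners make the bias concern measurable is to compare a model's decision rates across groups. Here is a minimal sketch of that idea; the groups, approval rates and data are all made up for illustration, and real fairness auditing uses several complementary metrics, not just this one.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical model decisions (True = approve) for two synthetic groups.
group = rng.choice(["A", "B"], size=n)
approve = np.where(group == "A",
                   rng.random(n) < 0.70,   # group A approved ~70% of the time
                   rng.random(n) < 0.50)   # group B approved ~50% of the time

rate_a = approve[group == "A"].mean()
rate_b = approve[group == "B"].mean()
gap = abs(rate_a - rate_b)  # "demographic parity" gap; a large gap flags possible bias
print(f"approval rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
```

The point is that "is the model fair?" can be turned into a number somebody monitors, which is exactly the operationalization of principles discussed later in the conversation.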

Mardi:

And so, to answer your question about how concerned I am with AI: I heard Geoffrey Hinton on a podcast speak about this quite intelligently, and I like the way he approached it. He spoke in terms of the near-term, the medium-term and the longer-term concerns associated with AI. The near-term concerns are things that are a concern right now with machine learning and with generative AI: the fairness of AI, the degree to which it breaches privacy rules or at least stakeholder expectations around privacy, whether it introduces new vulnerabilities from a security standpoint, from a safety standpoint. The medium-term concerns he spoke about, and this was a while ago, related more to things like job loss and the possibility of mass disinformation. I would argue, and I suspect he would agree today, that generative AI exposes us in the near term to the threat of mass disinformation, and that's one of the risks specific to generative AI.

Mardi:

The long-term concerns really revolve around the existential threat that is potentially posed by AI, and that's a tough nut to crack. I mean, I think we're aware of the fact that Geoffrey Hinton left Google because he felt, with some of the improvements or enhancements or developments in AI, especially around generative AI models, that the world was marching much more quickly towards artificial general intelligence and a world where AI could even supersede human intelligence, and that makes the risk of existential threat much more real. But at the same time, Yann LeCun, who's with Meta and is one of the forefront leaders in AI, vehemently disagrees. So when you get some of the top minds in AI disagreeing so strenuously about the reality of existential threat, it's really hard for somebody like me to comment. What I can say is: where we see the possibility of that kind of threat, why wouldn't we be trying to address it? And I think that's what regulators, countries and, hopefully, companies are trying to do.

Mardi:

And so it's rare that you see a technology that is developing at the rate that it is, where the governance and the regulatory aspects are also proceeding just as quickly.

Mardi:

And three years ago, in 2021, when I talked about AI governance and the regulatory environment, I'd often say there's almost no regulation in place, and that's not true today. The EU, just earlier this year, passed its Artificial Intelligence Act, the world's first example of a comprehensive approach to regulating AI. And even in the US, which maybe we could see as the opposite end of the spectrum, in that, apart from President Biden's executive order, it has no federal-level regulation, there are many, many states that have passed use-case-specific regulations around AI. So the world is concerned about the risk with AI, and I think a number of different approaches are being taken. Some of them have to do with passing laws, like I've talked about, and in some cases it's voluntary codes of conduct that companies themselves are subscribing to. And we can talk a little bit more about AI governance and the frameworks, if you and your listeners are interested in that.

Amy:

I absolutely do. It was my next topic. Before we get to your involvement in AI governance, I just want to ask: you mentioned that, because AI moves so fast, the governance side is not quite keeping pace but is moving in an acceptable manner to try to handle all the challenges you described?

Mardi:

Yeah, generally speaking, I think it is.

Mardi:

I mean, it's a commonly held wisdom that regulation never keeps up with technology, and I think in this case that's still going to be true.

Mardi:

But what I think is reasonable to observe and conclude is that the regulatory entities and governance are trying, leaning in almost at the beginning of the trajectory of AI, and there isn't the lag that you see with other industries. If you go back to computers, I mean, computers and data have been around for a long time, and all that we regulated was interoperability standards, for the most part. I suppose, with data, countries like Canada have leaned in from a privacy standpoint, and the EU is very privacy-focused. But I think it's fair to say we didn't concern ourselves so much with data until the rise of AI, because in some respects data is inert; it's sitting there, as long as it isn't breached, as long as there isn't data leakage. But enter AI, the tools through which we can harvest insights from data or even make decisions that have impacts on human beings, and I think it's this realization that, oh my gosh, data and the tools that use data really matter, because they're going to have impacts on human beings.

Mardi:

They can impact their pocketbook, they'll impact the environment, they'll impact their medical care, their access to essential services, and so I do see the regulatory entities and the governors awake to this very, very early.

Amy:

Well, that's encouraging. So let's get into the AI governance side. You're involved in governance of AI at a variety of levels, including the federal level. Now, I didn't realize that you were involved with AI governance before PolyML, so I'd like to know a little bit more about that journey, because I thought it was either simultaneous or the other way around. And then can you tell us a little bit about what AI governance actually is?

Mardi:

So I spent a couple of years working with a fabulous young Canadian company called NuEnergy.ai, which is Ottawa-based. NuEnergy works principally with federal government clients, but also with private sector clients, and their business is entirely around AI governance. AI governance is the ability of an entity that is using AI to establish policies and guardrails that ensure AI is implemented in safe and ethical ways, and, starting back in about 2018, we can see the rise of frameworks for AI governance, principle-based frameworks.

Mardi:

I mean, there are hundreds of them around the world, frankly, but for the most part they stipulate the same types of principles: AI should be explainable. AI should be transparent. AI should be secure. It should consider requirements of privacy and safety. There should be parties within an entity using AI who are accountable for that AI, and to whom anybody impacted can go as a matter of recourse or redress if necessary. It should be human-centric: AI should exist to serve humans, not the other way around.

Mardi:

There tend to be 10 or 15 commonly held principles that speak to not just the performance of the AI in technical terms, but the behavior of the AI in trustworthy terms. What's happened in the last couple of years is an interest and aptitude for operationalizing these principles. So how do companies or enterprises that are using AI go about operationalizing the explainability of AI? If we look to the most progressive entity in terms of regulating AI, that's the EU, and I don't comment on whether that's good or bad, because there are different legal jurisdictions around the world that would argue it's overly prescriptive and will dampen innovation. But they operationalize it with things like requirements for risk management and requirements for management systems that consider how to implement AI in ways that ensure it's high-performing, not just from a technical standpoint but in terms of these other trust-based principles. And I think, as human beings who are potentially going to be impacted by AI in many, many different areas, it's reassuring to know that there are not just frameworks of principles that people pay lip service to, but that regulators and countries and companies are leaning in to try to figure out how to operationalize these principles.

Mardi:

A challenge with AI is that it's what we consider a lifecycle technology. It's not a one-and-done; you don't just deploy it and walk away. Once it exists out there in the world, it can drift, because data can drift. If you train a model on data to do with summertime and then wintertime comes, that model is going to be acting on different data, and it's going to be coughing up different projections or recommendations or insights or decisions. And so, through design, development, deployment and monitoring, there's a need for all of these principles to be operationalized and for guardrails to be established, and you'll hear that term in AI commonly.
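The summer-to-winter example can be turned into a crude drift check: compare live data against the training distribution and flag when it has shifted. This is only a sketch (a real monitor would use proper statistical tests over many features), and all the numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data: e.g. summertime temperature readings the model learned from.
train = rng.normal(loc=20.0, scale=3.0, size=5000)
# Live data after the season changed: the distribution has drifted.
live = rng.normal(loc=2.0, scale=4.0, size=5000)

# Crude drift signal: how many training standard deviations has the mean moved?
shift = abs(live.mean() - train.mean()) / train.std()
drifted = shift > 0.5  # the threshold is an arbitrary illustrative choice
print(f"mean shift = {shift:.1f} std devs; drift detected: {bool(drifted)}")
```

When a check like this fires, the usual responses are retraining the model on fresh data or pausing automated decisions until someone investigates.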

Mardi:

What we mean when we talk about guardrails is that there are metrics that are defined in measurable terms and that somebody is actually measuring them and monitoring the behavior of the AI according to those metrics. So that's really what AI governance is: a whole process. Most companies are familiar with risk management, so many larger enterprises will simply establish risk management processes that are specific to AI and data and ingest them into their existing risk management framework. I think the concern for smaller companies is to not be left behind in terms of accessing the value of data, leveraging the value of data through AI, but doing it in trustworthy ways. It's a challenge, and I'm concerned that this technology will increase the chasm between small companies and big companies, just because there's a lot of overhead in doing this kind of work.
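"Metrics defined in measurable terms, with somebody monitoring them" can be reduced to a very small sketch. Everything here, the metric names, the thresholds and the measured values, is hypothetical; the point is only the shape of a guardrail: a named metric, a numeric limit, and a routine check.

```python
# Hypothetical guardrails: metric name -> limit. "max_" limits must not be
# exceeded; "min_" limits must not be undershot.
GUARDRAILS = {
    "max_false_positive_rate": 0.05,
    "min_accuracy": 0.90,
    "max_mean_drift_stddevs": 0.50,
}

def check_guardrails(measured: dict) -> list:
    """Return the list of guardrails breached by the latest measurements."""
    breaches = []
    for metric, limit in GUARDRAILS.items():
        value = measured[metric.replace("max_", "").replace("min_", "")]
        if metric.startswith("max_") and value > limit:
            breaches.append(metric)
        elif metric.startswith("min_") and value < limit:
            breaches.append(metric)
    return breaches

latest = {"false_positive_rate": 0.08, "accuracy": 0.93, "mean_drift_stddevs": 0.2}
print(check_guardrails(latest))  # only the false-positive guardrail is breached
```

In a real deployment this check would run on a schedule against live measurements, and a breach would page a human, which is the accountability piece the governance frameworks call for.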

Amy:

Yeah, I can see how that might happen, absolutely. So, more specifically, do you sit on a council at the federal level, or how does that work exactly? And how does it work within the framework of government? Does it change if government changes, or is it more on the civil service level? How does all of that work?

Mardi:

I sit on two panels. One is Ontario's: the province of Ontario has an expert working group that contributes and volunteers ideas to the provincial government in establishing rules for its own use of AI. And in addition, I sit on the Digital Governance Council of Canada's ethical AI panel, and the Digital Governance Council serves the private sector and works with companies who are seeking to develop frameworks for the implementation of safe and trustworthy AI. It's great that these advisory panels exist and bring together people with different types of expertise. I can speak broadly to some extent on AI governance and the regulatory environment and help companies to establish frameworks, but because of the work that I do with PolyML, I can speak in some depth about the explainability of AI in particular. And I'll say that for those listeners who don't know what explainability of AI means, inside the industry.

Mardi:

It's a commonly understood term, and it means: can you explain how AI is coming up with whatever output it's coming up with?

Mardi:

If AI is trained on data, the data is the input, and then it's coming up with an output. Maybe it's a recommendation of a Netflix show that you're going to like, but maybe it's drawing a conclusion about your health, or maybe it's predicting whether you're going to default on a loan, and a bank is going to use that in determining whether or not to actually extend credit to you. Can we understand how AI models translate inputs into outputs? I think explainability is foundational for trust. It's also a fundamental requirement, in my opinion, for being able to assess how AI is behaving on lots of other metrics. So we're in a better position to assess whether or not AI is translating inputs into outputs in a fair way if we can actually see how the AI is doing it, right? And so one of the things I'm so charged up about with PolyML's technology is the fact that it lends itself to fully interpretable AI models.
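To see what "fully interpretable" means concretely, contrast a black box with a model whose input-to-output mapping you can read off directly, such as a logistic scoring model, the textbook interpretable case. The loan features and coefficients below are invented for illustration; this is not PolyML's method, just the standard example of a model you can explain.

```python
import math

# A fully interpretable credit model: each coefficient states exactly how
# much each input moves the default risk. All numbers are invented.
COEFFS = {"debt_to_income": 2.0, "missed_payments": 0.8, "years_employed": -0.3}
INTERCEPT = -2.5

def default_probability(applicant: dict) -> float:
    # Linear score passed through a logistic link to get a probability.
    score = INTERCEPT + sum(COEFFS[k] * applicant[k] for k in COEFFS)
    return 1 / (1 + math.exp(-score))

applicant = {"debt_to_income": 0.6, "missed_payments": 2, "years_employed": 4}
p = default_probability(applicant)
print(f"predicted default probability: {p:.2f}")

# Because the model is linear in its inputs, the explanation is exact:
# each additional missed payment multiplies the odds of default by e^0.8.
```

A neural net might predict more accurately on messy data, but it offers no such sentence-length explanation of why a particular applicant was scored the way they were, and that is the trade-off the interpretability discussion is about.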

Amy:

There are just so many aspects of it. It's such a broad topic that it needs everybody's opinions and thoughts to make it work, I think.

Mardi:

And there actually are jobs cropping up for people called translators, because in companies today you have technical people, people who are coding and who develop models and who do data analytics, and then you have business-minded people who have a business unit that they're running, who have a bag of money that they want to use to leverage data, but they don't speak tech speak. I do a little bit of this to some extent: understand the technology and talk to the engineers and the data scientists, but also translate that into how it's a benefit to a business unit, how it poses risk to a business unit, and how you strategically employ AI while mitigating the risks. That translation function is going to be needed. It'll be interesting to see how it gets filled in enterprises going forward, but you can see it as a challenge for the regulators too.

Mardi:

I don't know if you're familiar with Canada's approach to regulating AI federally. The federal government proposed the AI and Data Act in, I think it was, June of 2022. It's passed through second reading and it's in INDU today, the Standing Committee on Industry and Technology, but there's quite a hue and cry around whether or not there was sufficient consultation with the public. I think it's a challenge for regulators: with new technology, you know to tread quite carefully, and there's an education function as well as a consultation function, so we'll see where that goes.

Amy:

Yeah, that will be interesting to follow along. I also can see a need for a translator role. Absolutely, that would be beneficial to many people.

Mardi:

Yeah, I think when you read about AI, some of the information is digestible to an average human being, but a lot of it isn't, and another complicating factor is there's contradictory information out there. It's a very interesting topic. It's still evolving and I think this chasm between the technical people and the business people will narrow over time. But in the meanwhile, the translator function is quite important.

Amy:

Okay, I'd like to go back one more step, because you mentioned NuEnergy, and that's how you started with governance and AI, and then that obviously led to PolyML. How did you get to NuEnergy? Because AI is fairly new and you haven't been doing it your entire career, so how did you get there?

Mardi:

Yeah. So for women listening to this, they might be interested that I'm new enough to AI that four years ago I didn't know anything about it and I'm a mom of four kids. My children are 25, 23, 22, and 20. And four years ago when our youngest was in grade 10, as an at-home mom who did a fair bit of board work and sort of work in the community, I sort of asked myself what will I do when Mike heads off to university? I thought you know, I'll be in my late 50s. It will be too early to just retire and play golf.

Mardi:

And for a number of different reasons, in particular intellectual stimulation, I wanted to do something interesting. So at the time I went back and did some more governance education, because I really enjoyed the board work that I'd done. The first course I took was put on by a group called Competent Boards; it was a designation program called ESG Competent Boards, really around board governance but with an ESG lens. I didn't really know much about ESG at the time, but it was a fantastic program and really, really interesting. And I can say that while there's this discourse out there in the media, in particular in the US, around ESG being a woke term, from my perspective it's a very business-oriented concept. It's a concept of taking a broader lens to risk and looking at the longer-term value creation or value destruction for companies, not just through strictly near-term bottom-line considerations but through the social aspects, the governance aspects and the environmental aspects. So I did the ESG Competent Boards course. I also did the ICD designation program through Rotman, and when I finished that program I was really impressed with it. I saw in the ICD booklet a half-day course on something called AI governance, and I guess I was just hungry to learn at the time and signed myself up. After thinking about what I was taught through that half-day program, I realized that in the way the teachers were approaching the risk associated with AI, and the strategic implications of the need to manage that risk, there was a lot of overlap with how the world of ESG was thinking about the risks associated with environmental, social and governance concerns and the role that plays in long-term value creation and destruction.

Mardi:

And so I decided to lean in on AI governance and learn more about it. It was still sort of late pandemic time and there wasn't a lot to do, so I sat down and literally took about eight months reading, just on my own time, about AI governance. I started with the proposed legislation out of the EU, the proposed Artificial Intelligence Act, which was brought to the table in 2021 and actually followed three years' worth of extensive global consultation on AI risk and governance. And I engaged a friend who lives locally, Tim Fleming, who's a lawyer and has a master's in EU law, to help me understand the 130 pages of the EU's proposed Artificial Intelligence Act.

Mardi:

And that's really how I became involved in AI governance. I wrote a white paper, sort of a primer for boards, on how to think about the risk associated with AI. I reached out to a friend that I knew who was involved with PolyML, Joe LaFleur, our CEO at the time, to see if he could help me understand a couple of questions that I had, in particular after reading articles from places like Harvard Business Review and MIT Journal, where they sort of were talking about the same thing but not always agreeing, and he said to me, oh, you should speak with our chief scientist, Dr. Gaston Gonnet. So Gaston is a brilliant man. He's one of the original co-founders of the original OpenText, an OpenText that is different than the company that exists today, but its roots were in the digitization of the Oxford English Dictionary, and Dr. Gonnet was involved in that project. And so I sat for months and learned about AI-related risk and had the opportunity to work with a brilliant mind in AI and machine learning in the form of Dr. Gonnet, and that's really what led to my path both into AI governance and into PolyML.

Mardi:

But I did a lot of the learning myself. I reached out to Jim Balsillie, actually, and asked him for some advice, and that was my route into an introduction to CIGI, where I have become, you know, a reasonably frequent writer on policy-related matters when it comes to AI and machine learning and innovation more broadly, and I think very, very highly of the work that CIGI does. So I'm a bit unusual in terms of being somebody whose interests span, at this point, to a degree at least, the technology, the regulatory and governance, and the policy aspects. It's just such an interesting area and I think there's an overlap between all of those things. So even in the work that I do with PolyML, the policy and regulatory aspects are valuable to the clients and the prospects that we're speaking to, because I can talk, you know, with some degree of intelligence, to what the regulators are saying and why explainable, interpretable AI really, really matters.

Amy:

I love that journey. I mean, that's amazing. I love that the intersection of your curiosity and your desire to learn has led you to this brand new place, and I just love to hear that, because there are lots of ways to get to a new career, or a second act, or whatever you want to call it, and I like the way that you've gone about it.

Mardi:

Yeah, it's interesting. I thought where it would lead was more board positions. I thought somebody would tap me on the shoulder to be either an ESG or a technology expert for a board, and I suppose I got distracted early enough along that I really didn't apply to very many boards. I applied to a couple and got, you know, down to maybe the final two or four, and none of them went my way. It hasn't mattered. I clearly have landed where I was meant to be. I will say I think it's still tough for women finding their way onto a lot of big-deal boards, and I wasn't even seeking a big-deal board, but it's still a challenge. It's very competitive. I think they're often still looking for people with a more traditional trajectory.

Mardi:

I don't have 20 years of having sat in the C-suite. I worked for NCR Corporation. I had, you know, pretty senior positions in terms of our global division, but then I made the decision to stay home and raise my kids, and even with the board work that I did, and going back for higher education and all the governance education I've had, it probably would have been tough sledding if I had, for the last number of years, just been pursuing board positions. I did not expect to land in the startup community and I sure didn't expect to land as the CEO, and that has been its own path and it's very rewarding. We're a wonderful team, so I'm thrilled at where I've landed. But yeah, you don't necessarily know where you're going. I kept my mind open, but it was out of a sense of curiosity and a desire for, maybe in my case, a third act, because I had a fabulous first career with NCR,

Mardi:

AT&T, and then a second act, raising my children, who are all launched and on their way. And yeah, this is a third act, I suppose.

Amy:

I love it. It's great, and I'd like to go back to the comment you made about some of the challenges you had finding board positions, even though they maybe weren't big ones. I can relate to that. I find, as an entrepreneur, if you don't have C-suite experience or specific designations, you're not always on the list. Ironically, I have international board experience, but sometimes all of those things don't matter, because they're looking for somebody to fit in a box that we don't fit in. So it's good to hear someone else has had that struggle and talks about it, because I'm sure we're not the only two.

Mardi:

Yeah, I think many women find that, and my first entrée to boards was through a terrific organization called Women Get On Board, which was founded and is run by Deborah Rosati, who is an enormous advocate for women, and women seeking governance positions, board positions. And I'm sure if I'd spent the last four years strictly focused on that, you know, I'd be sitting on a rewarding board today. It's just not the path that I ended up taking in the end. But that notion of if you want to go and create an opportunity for yourself, you really can. I was in my mid-50s at the time, but it took me leaning in and doing a lot of learning on my own, so that by the time I was noticed by NuEnergy and by PolyML, I had sort of a portfolio of knowledge underneath me already, and writing that I'd done myself.

Amy:

Well, I think everybody should be happy that you took the path you did, because I think PolyML, and all the governance advice that you're giving and providing, is beneficial to the space.

Mardi:

I have to say I had no idea, when this journey in AI started, that AI would be such a flashpoint, such a matter of interest out there in the world. When I first started talking about AI and machine learning, I would open a talk with a question. I did an ICD Southwestern chapter talk in 2021, and I think I opened it up with a question like: how many of you think you've come in contact with AI today? Not everybody, but a lot of them said they thought they had.

Mardi:

The point is that I even asked the question. Today you wouldn't ask that question; you'd know that everybody's been in contact with AI today. Generative AI, OpenAI and ChatGPT changed the field, because it meant that AI went from being a tool in the hands of data scientists within large enterprises, within banks and governments and big healthcare companies, to being a tool in the hands of everyone who wants to use it. It changes the vector of risk, it changes the vector of opportunity, but it certainly changed awareness. So yeah, if I did a talk today, I'd never start out with how many of you think you've been in contact with AI today. We just know we have been, and there's AI you've come into contact with without even knowing where, because it's in more places than you're aware of.

Amy:

So, on all of that, the AI, your incredibly diverse journey. What is your advice for the next generation of women leaders?

Mardi:

It's interesting, because I sat on a panel for Women's Day where a question like this was asked, and one of the comments made was that still fewer women go into STEM education. And the comment was made that with generative AI, to a degree, you don't necessarily need to have the depth of technical knowledge that historically you've needed in order to have an impact, because Gen AI is a facilitator of doing work. Now, a part of me feels like I don't really want to send that message. I want to say to women: please do consider STEM careers, please do consider that you're well up to the task of the technical education. What I would say is, don't be frightened of the technology.

Mardi:

I still operate at a largely conceptual level, and that's very, very meaningful. I'm not a coder. I've never coded anything. I didn't take computer science at school. Maybe I wish I had, because then I could perform that translator function even better. But lean in and learn and read, and even if the limits of your understanding are conceptual, you can make a career of that. I have. I started in AI governance reading about the regulatory environment and policy, and through working in a company that has technical people who are smart and patient with my questions, I've also amassed some degree of technical insight and an ability to talk intelligently, to a point. I mean, when we're pitching new customers or talking to existing customers, I only take it so far; then we bring in our technical team. But my advice to women is: it's a really interesting, fast-moving area, so there's always something more to learn and it's never too late to get in. And don't be frightened by the degree to which it's technical, because there's still a range of opportunity to play, from the conceptual to the technical.

Amy:

Well, I love that advice, because it opens up more options for people if they aren't going the coding route. Coding isn't the only way into the space that you're in right now.

Mardi:

Absolutely. That's been my experience. Anyway, I'm living it.

Amy:

You are. You definitely are. Thank you so much for coming on the show. I enjoyed all of your stories and your journey is fascinating and admirable and very inspiring.

Mardi:

Well, thanks, Amy. Thanks for having me, and I hope people listening find it interesting. It's certainly very top of mind. I fear that when many people crack the newspaper open and read about AI, they don't really know what it is. I hope to demystify it a little bit, in terms of tools that help to analyze data, and I feel more hopeful about the opportunities that will be created with AI than I do worried about the risks it will pose. I am reassured by the fact that all over the world, regulators, countries and companies are thinking about how to implement it safely and ensure that it's trustworthy. One of the things that I've noted is that throughout human history, when the stakes are very, very high, we as a species tend to do a pretty good job of bringing people together globally to work through agreements to save ourselves, and I'm confident that we'll do that with AI too.

Amy:

Well, I can't wait to see. You'll have to come back when it all changes again, which might be next week. Thank you.

Mardi:

Thanks for having me, Amy. Take care. Have a great weekend.

Amy:

Thank you for listening today. If you enjoyed this episode, please take a moment to rate and subscribe to our podcast. When you do this, it raises our podcast profile so more leaders can find us and be inspired by the stories our Voices of Leadership have to share. If you would like to connect with us, please visit the Voices of Leadership website. It can be found in our show notes.
