Episode #74 Daniel Faggella – Use Cases for AI in Financial Services and where funding really goes

Daniel Faggella is the Founder, CEO, and Head of Research at Emerj Artificial Intelligence Research. He is also a Member of the OECD Network of Experts on AI and a Fellow at the Institute for Ethics and Emerging Technologies. Daniel previously founded Science of Skill, CLVboost, and Black Diamond Mixed Martial Arts Academy, so we have someone with a very specific and at the same time very broad experience set. Dan also hosts a total of three podcasts: The AI in Business Podcast, The AI in Consulting Podcast, and The AI in Financial Services Podcast. Dan holds a master's degree from the University of Pennsylvania in Positive Psychology and a bachelor's in Psychology and Kinesiology with a minor in Business.

In this episode, we get into AI for financial services. Daniel talks about the true distribution of AI funding at banks and how one needs to cut through the tweets and press releases to check where money in AI is actually flowing. We talk about how to research AI use cases, how to think about investing in AI as a bank, and how to think about ROI (return on investment) before doing an AI project. The answer might surprise you: it is not always hard dollars that make up a project's true ROI. And of course, Daniel gives us a lot of places to dive further into AI for the finance space.

Links

https://www.linkedin.com/in/danfaggella/

Contact

If you want to get in touch: contact@thewallstreetlab.com

We look forward to your mail and will do our best to reply.

If you want to reach out to us personally, here are our LinkedIn profiles, please mention the podcast.

https://www.linkedin.com/in/andreasvonhirschhausen/
https://www.linkedin.com/in/leonardoseverino/
https://www.linkedin.com/in/lukaszmusialski/

As always, please do not forget to take 17 seconds to leave us a 5-star review on Apple Podcasts or wherever you get your podcasts from.

Be well

Andy, Luke & Leo

TRANSCRIPT

EPISODE 74

[INTRODUCTION]

[00:00:04] ANNOUNCER: Welcome to The Wall Street Lab Podcast, where we interview top financial professionals and deconstruct their practices to give you an insider look into the world of finance. 

[00:00:23] AVH: Hello and welcome to another episode of The Wall Street Lab Podcast. With me today is Daniel Faggella. Daniel is the founder, CEO and head of research at Emerj Artificial Intelligence Research. He's also a member of the OECD Network of Experts on AI and a fellow at the Institute for Ethics and Emerging Technologies. Dan previously founded Science of Skill, CLVboost and Black Diamond Mixed Martial Arts Academy.

We have someone here with a very specific and, at the same time, very broad skillset and experience set. Dan also hosts a total of three different podcasts: the AI in Business podcast, the AI in Consulting podcast, and the AI in Financial Services podcast. Dan holds a master's degree from the University of Pennsylvania in Positive Psychology and a bachelor's in Psychology and Kinesiology with a minor in Business. He was such an interesting guest.

[INTERVIEW]

[00:01:25] AVH: Daniel, welcome on the show. It’s a pleasure to have you.

[00:01:28] DF: Andy, glad to be here, brother. It’s going to be a lot of fun.

[00:01:32] AVH: Oh, yeah. It will be. The research for this podcast was already a lot of fun, but let's start off with, what the heck is positive psychology as a master's degree?

[00:01:43] DF: Sure. Well, it's sort of an interesting story. In order to pay for graduate school, I didn't really want to get a job. While I was an undergrad, I started a mixed martial arts academy. If we were recording video right now, you'd see that my ears are all messed up from a lot of combat sports and a lot of fighting. Yeah, instead of getting a job, I started training more fighters. I was doing a lot of national competitions. I have some national accolades in Brazilian Jiu-Jitsu, so I did a lot of seminars around the United States. I even taught a seminar in Brazil. Combat sports was my life, and so when I went to graduate school, I said, "I want to study the cognitive science of how people learn to learn faster." As it turns out, a lot of the psychology world is focused on amelioration. How can we make sad people feel better or come up with treatments for maybe bipolar folks or something like that? My focus was more on how to kind of lift the baseline and learn to learn faster.

As it turns out, there's a very rich academic literature for skill acquisition and skill development. I was able to be advised by people like Anders Ericsson and the founders of goal setting theory, who were people I got on the phone with to hammer out my master's thesis. My interest was how to make athletes learn more quickly. But while I was there at UPenn, I was also learning about machine learning. Part of the field that I was studying is called adult learning, which is how grown adults learn to develop skills sort of later in life. I was getting tapped on the shoulder like, "Hey! You know all this neuron stuff you're looking at, there's some cool stuff in computer science with this machine learning that's really cool." This was the really early days of sentiment analysis of Twitter data and of identifying pictures of trees and flowers with ImageNet, with image recognition.

By the time I graduated, I got a degree I really enjoyed. I applied it. I won a national championship not that long after I graduated from graduate school and continued to run my academy. But I was pretty convinced at that point, as a 23-year-old, that AI is going to be radically powerful. Because even 10 years ago it was really exciting. I just said, I'm going to find a way to get into this field and determine the kind of impacts this technology is going to have. In a roundabout way, it took me to where I am.

[00:03:45] AVH: I'm previewing a section that I want to jump into later, and that's how to learn about AI. How did you make the switch? What did you do to teach yourself along the way? I mean, you never had a kind of formal education in this, you didn't study computer science. Looking at your CV, it's more like you founded one company after the other, but you never had, like, an internship at Microsoft or something. How did you develop the skills, and then what brought you to where you are today?

[00:04:15] DF: Yeah. I mean, to be frank, it was a lot of research and then it was a lot of interviewing, and eventually getting to the point where I moved to San Francisco for two and a half years to really get Emerj off the ground. We had enough of a presence at that point to get into the AI lab for Baidu and talk to the folks who were leading that lab, a fellow by the name of Adam Coates, who's now leading artificial intelligence for Apple, a little-known company in the Bay Area. Got to go talk to the head of core machine learning at Facebook at Facebook headquarters. Got to go into LinkedIn headquarters, and got to also talk to the venture capital community in that most exciting part of the world when it comes to tech. It was really through exposure to founders, and exposure to researchers, and a media presence that granted more and more of that exposure, and then developing some focus.

We really decided that one of the areas we wanted to be really, really strong in was the impact of artificial intelligence in financial services. Looking at all the use cases in banking, insurance and wealth management, and really studying the ROI, the precedents of use and the applications there. Then there were a couple of other core areas, like life sciences for example and retail e-commerce, where we just wanted to become experts when it comes to the use cases and applications of AI. Yeah, it was primarily access to the folks who are doing the work, and then a lot of research focus on specific sectors, to the point where we could learn what was going on without having to go get a degree.

Just so the folks who are tuned in know, at Emerj, we don't really help people write code. Nobody pays Emerj to go write lines of Python. Organizations from as big as, let's say, the World Bank, which was one of our research customers, down to a superregional bank or a life sciences company will work with Emerj to look at the landscape of AI possibilities for their particular line of business and figure out a potential path to ROI. Looking at the vendors, looking at the potential threats, and kind of the estimates of what it would take to actually get to a successful use case and certain kinds of workflows. That kind of market research work is more of what we do. Luckily, I didn't have to learn to code. We just needed to talk to as many of the vendors and the real players as we could.

[00:06:14] AVH: Which areas did you focus on, and why?

[00:06:19] DF: Yeah. Our focus was always communicating AI ROI to kind of a business audience. Our readers are directors, VPs, heads of — normally functional business leaders. Some of them are technical people, like CTOs or heads of AI of companies. On our podcast, we've had the head of AI at Raytheon, leading AI researchers at Comcast and IBM, and you name it. But most of our listeners are non-technical. Our focus is on very simple explanations of what the business value and the business problem is, what kind of data is used, what the outputs of these systems are. Then ultimately, what the evidenced return on investment is. It's very, very hard to go out into the world and figure out what the ROI of a given kind of use case was. But if you talk to enough vendors, and you talk to enough buyers, and you figure out the kind of impacts that different folks have seen, then you can kind of get down to that ROI focus.

A business audience was the focus, and like I said before, financial services was the area where we decided to have the most depth, because those people value what we do.

[00:07:15] AVH: I mean, yeah. That's what you see. I feel like so many financial institutions have so much data, but they're sadly very well known for not using their data right, right? It's like, it's all the gurus, the Amazons, the Microsofts that are starting to use AI. But could you tell us what are the use cases financial institutions really start with? I mean, we've had Brad King on the show, and he talked a lot about what kind of use cases in retail banking you can do to facilitate credit, to help with onboarding, all those kinds of things. Could you elaborate a bit more on what use cases you see in the banking world?

[00:07:57] DF: Sure. Yeah. Just to give you some context here. I can talk about insurance with you, I could talk about wealth management, I could talk about corporate banking. Let's just talk about banking generally. We do something called the AI Opportunity Landscape, and banking is one of the areas where we maintain this landscape. This is some 150 vendor companies across the AI ecosystem who kind of intersect with the banking world in some way, shape or form, and we're looking at their levels of traction, looking at the customers they serve, looking at the features that they purport, looking at the maturity of their products. Essentially getting a sense of where the spend is actually happening and what kind of deployments are actually going down.

There's a panoply of use cases. I'm going to bring up a good number of them, and hopefully we can talk about which ones are actually sucking up most of the money and which of them aren't, because there are actually some big misnomers if you just look at press releases as to where the money is actually going, as opposed to when you get on the phone with the head of AI at US Bank, or you get on the phone with the VPs of AI at Amex, and you grind out sort of where the real investments are happening. In terms of banking, we break this down by capability and then we also break this down by sort of workflow or department. Let's just talk about departments, because it makes a lot more sense to people intuitively.

In the customer service space, we have AI that potentially can reply to very low-hanging-fruit customer service questions. We also have AI that can do things arguably more simple and often more valuable, such as routing questions if they come in through text, through some kind of a form field, some kind of an SMS system, or even a recording on a call line. Routing those calls to the proper human expert — do they want a refund, do they want to learn about a product? Are they going to go to a random person that says, "Yeah! What do you want?" Or are they going to go to the person that's going to satisfy their intent?
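To make the routing idea concrete, here is a toy editorial sketch, not Emerj's or any vendor's actual system. Real deployments use trained intent classifiers; this keyword-based version (all queue names and keywords are made up) just illustrates the mechanic of mapping a message to a human queue:

```python
# Hypothetical intent-to-queue mapping; a production system would learn
# these associations from labeled customer messages instead.
INTENT_KEYWORDS = {
    "refunds": ["refund", "money back", "chargeback"],
    "product_info": ["learn about", "pricing", "features"],
    "fraud_desk": ["stolen", "unauthorized", "fraud"],
}

def route_message(text: str, default: str = "general_queue") -> str:
    """Return the first queue whose keywords appear in the message,
    falling back to a general queue when nothing matches."""
    lowered = text.lower()
    for queue, keywords in INTENT_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return queue
    return default
```

Even this crude version shows why routing is an easier win than free-form responding: a wrong route still lands the customer with a human, while a wrong generated answer lands them with nothing.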

Routing, as it turns out, actually has a lot higher ROI generally, and is certainly much easier than actual responding. Some responding is possible. Most banks have miserably flopped and fallen on their face there. They won't tell you that. I will tell you that. That's customer service, and we could go on and on about IVR and some other areas. We can also look at things like compliance. Compliance actually sucks up a tremendous amount of money when it comes to AI adoption, and there are all sorts of examples here. We could talk about something like GDPR, where as a customer, an individual person, maybe I'm able to call my financial services organization and say, "Hey! I want to know what information you have on my family" or "Hey! I want you to remove me from your database. I'm no longer your customer. That's my right now." As it turns out, if we're unable to remove all that stuff and then we get examined in some regulatory sense, there are really horrendous consequences for that.

Having document search and discovery tools that are able to find everything that might be appended to certain individuals and contact records and being able to find that 360 view, can not only help us serve that customer, but it can help us stay compliant. There’s other examples in fraud, which also vacuums up a lot of money. If I’m sending a bank wire, it is possible to set rules for bank wires. Let me give you an example, Andy.

Let's say that I, Dan Faggella, have never sent a bank wire larger than $70,000. One morning, I send a bank wire for $7 million and I'm sending it like — I'm apparently in Romania and I'm apparently sending it to, I don't know, Zimbabwe. I'm just giving you random places. There are hard rules for overt anomalies that don't require AI to say, "No, we're not going to let this go through. We're going to call the customer." But it turns out, it's very nuanced. There's no way to permanently hardcode rules in what we call an adversarial system, whether for payment fraud with credit cards or for anti-money laundering. Let's take funding terrorists, for example, a very common anti-money laundering sort of instance here. We're not able to say, "Okay. Here are the rules. Now, if we put these rules in place, no one will do fraud." Well, they're going to work around them, of course.

We need systems that can live, and breathe, and understand patterns of normal. What are the patterns of normal per user? What are the patterns of normal in different geographic routes? What are the patterns of normal in different industries in terms of these kinds of transfers of money? When something flags as an anomaly or flags as a match to certain fraudulent activity, we want to immediately put that on our radar in a way that a hardcoded system never could. AML and fraud are gigantic use cases in the banking space and extremely important. They're also getting a lot of funds. We could go on and on, division by division, in banking. It depends on where you want to focus, but this is an area where, again, if I told you that in the last four years I've spoken to 300 companies serving AI, it would be absolutely no exaggeration at all. We can go wherever you want to go.
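The "patterns of normal per user" idea can be sketched in a few lines. This is an editorial toy, not any bank's actual AML system; production systems learn far richer features (geography, counterparty, transaction velocity), but the intuition of flagging a wire that sits far outside a user's historical distribution is the same:

```python
import statistics

def is_anomalous(history: list[float], amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag `amount` if it sits more than `z_threshold` standard
    deviations above this user's historical mean wire size."""
    if len(history) < 2:
        return True  # no baseline yet: escalate for human review
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return amount != mean
    return (amount - mean) / stdev > z_threshold

# Dan's example: a history of wires under $70,000, then a $7M wire.
history = [20_000, 35_000, 70_000, 50_000]
print(is_anomalous(history, 7_000_000))  # flagged
print(is_anomalous(history, 60_000))     # within the pattern of normal
```

Unlike a hardcoded "$X from country Y" rule, the baseline here shifts as each user's behavior shifts, which is the property Dan is pointing at when he calls fraud an adversarial system.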

[00:12:11] AVH: Okay. Interesting. Yeah, I mean, it makes sense, because the rules you mentioned in the beginning — if there's really, like, a 10x or 100x transfer made from some entirely different country, I think that's a bit too easy to pick up on. That would be too easy. But you said one thing that banks wouldn't tell us, but you would. We like to expose our listeners to things that people wouldn't normally tell. Let's start with the dark side for now, and then go over to the good side, like the marketing side, where everything is amazing and every dollar invested was the best return ever, right? What are, like, the typical use cases in banking or in the finance world that you can think of off the top of your head, that completely flopped, but nobody ever really put it in the papers?

[00:13:05] DF: Yeah. There's a meta idea, and then we'll talk about how that meta idea applies to finance. We can talk about life sciences. In our research — again, our research is purely in AI. People don't pay us to research, like, CRM software. It's exclusively the ROI of AI applications. That's our Rolodex. Hey! You can get on the phone with the head of AI at US Bank. If you want to research that stuff, that's what we do. Now, that's a very nascent space, as you're aware. AI is extremely nascent. What we are dealing with is more kind of made-up nonsense, more kind of marketing fluff, probably, than would be the case in very, very long-established technology spaces, where the capability sets are more concrete, the products are more comparable, and the ROI has maybe been measured, is more understood.

AI is kind of bubbling and creeping up in all these dark corners and it’s always flashy and [inaudible 00:13:54]. The fact that we have to swim through that means that we need mechanisms in our research that help us cut through it. There’s a concept that we refer to internally, that I hope we’ll write articles about at some point, which we refer to as the lens of incentives.

Here's the general idea. I'll give you the meta idea, then we're going to talk about banking. The meta idea is this. This applies outside of AI, but we just happen to see a lot of it in AI, in the space where we play, which is exclusively artificial intelligence. If I'm investing money in a technology as a bank, or if I buy a company, or if I do business with a company, there are some of those applications that I can spin up in a way that will make me — I think, as a banking leader, maybe a comms leader in a bank — look better to my customers, look better to my investors, look cool, look hip, look modern, look like I'm going to serve you better, look like, I don't know, we're a hot, fun, modern business, whatever it is, whatever perception we want to manage. Generally, that's, we can serve you better and we're cool and hip. It's something along those lines.

There will be some investments that will enhance that. There will be some other investments that might scare people. In other words, if we announce how many tens of millions we're spending on AI for cybersecurity, it doesn't comfort you as a customer to know that. It just kind of reminds you that you are under threat, that there are people all over the world who would love to vacuum up all of your information and just use your credit cards, use your bank accounts and do all kinds of horrible and nefarious things to you. Same thing with something like anti-money laundering. Wells Fargo is not like, "We're excited to have less sex trafficking money go through our system." They're not going to announce that, because the fact that any of it happens at all is so jarring and disruptive that people can't even deal with that fact.

Again, the lens of incentives says, anything that we invest in, or we acquire, or any partner we do business with that can enhance some perception management element that we think will make us look better, we will either reveal it — if it happens, we'll crow about it — or we'll exaggerate it. We'll talk about it as if maybe it's farther along, more mature, more successful than it was. If it will make us not look good, then we are going to conceal it. An example here would be cybersecurity — Darktrace. There are a ton of these super well-funded AI cybersecurity companies that we've had on the podcast. They have basically zero customer testimonials, for obvious reasons. If I'm a hacker and I've explored Darktrace's system and I know which banks are using them, now I get to do all kinds of stuff to it, right? It doesn't behoove the banks to talk about it, because it scares customers, maybe it scares people, maybe it puts them at risk.

The way it applies to banking in this case — again, remember what I said. If it makes us look good, we're going to reveal it or we're going to exaggerate it. If it makes us not look good, we're going to conceal it or we're going to downplay it.

[00:16:42] AVH: Sorry to interrupt you. Before you jump into banking, could you tell us — because I just got super interested — what is Darktrace?

[00:16:50] DF: Darktrace is –

[00:16:50] AVH: Just in case anybody is wondering.

[00:16:51] DF: Yeah, they're an AI company. If people type in, like, AI in Business podcast, Darktrace, or something, they'll hear our interview with one of their leaders. We've covered them on Emerj on a number of occasions. If you type in Darktrace on emerj.com, you'll see some of our articles about them. They're basically a very well-funded AI-based cybersecurity company. They sell cybersecurity solutions to big enterprise companies.

[00:17:13] AVH: Okay. Thanks. I just got really curious when you said dark web, Darktrace and stuff.

[00:17:18] DF: Yeah, it's a vendor. It sounds pretty scary and nefarious now that you mention it. But yeah, they're on the good-guy side, I hope, despite their name. Now, when it comes to banking, bearing in mind the lens of incentives — which says, we reveal and exaggerate what makes us look good, and we conceal and downplay what would not make us look good — the general dynamic in the banking space, and I could talk about a lot of individual specifics, the general dynamic is this. Customer-facing things, a lot of this being chat or some kind of mobile interface that will recommend cool new products to you and things like that — these things, we're going to exaggerate.

Now, two and a half years ago, this was much worse than it is today. But I'll still tell you, it's still leaning way heavy on the side of exaggeration. If I'm a bank and I do some silly little pilot project with some silly little chatbot company, before the ink is even dry on the contract — they haven't built a pilot, they haven't even looked at my data, they haven't done anything — before the ink is even dry on the contract, I'm doing a press release about it. Why? Because the lens of incentives is why. "Because it makes me look cool. Everybody else is doing a press release. Chatbots are the hip new thing. We're doing it too, I promise." That kind of stuff. Wells Fargo, Ally Bank, a lot of these guys chopped off their chat interface after a year of complete floundering, but again, they're not going to do a press release about that. But they will do a press release when they don't even have any progress yet with some, frankly, kind of nominal, incapable vendor, because, similarly, it makes them look good.

Customer-facing gets exaggerated. Anything risk and compliance related gets downplayed or gets concealed. Again, anti-money laundering, cybersecurity — they're not going to brag about it. But I will tell you right now, our estimate — this is maybe a year old now, but I'm sure it still holds — our estimate is that something like four or five times more dollars are being invested in compliance and risk related applications in big banks than in anything customer-facing. But if you look at the banks' press release volume, and you look at the preponderance of those things that are kind of customer service, customer experience marketing, you'd think it's completely the opposite, but it's absolutely not.

Press releases are not market research. Market research is grinding it out with individual human beings, talking to customers of individual vendors, looking at the relative funding of different vendors that are servicing different capabilities, and really getting a sense of where the actual traction is going. In the banking world, customer-facing is not happening nearly as much as people think, and fraud and compliance-related functions are astronomically better funded than anybody reading Twitter or reading press releases would guess. So, there you have it. The lens of incentives in banking — that's the five-minute version.

[00:19:47] AVH: I like it. I mean, it's a really interesting mix. When you work with your clients, let's say a large bank, is it often that you get those fluffy, meaningless requests? They're like, "Yeah, we want to do, like, an AI thing. Can you recommend the best chatbot engine?" Or are they coming to you more for the real stuff and having their innovation lab take care of the fluffy chatbots?

[00:20:20] DF: Well, as a market research firm, we do a good deal of work with innovation strategy leaders. Those are often the people that would consume the kinds of research products that we produce. We have a product called Emerj Plus, which is kind of a very light-level access and kind of search application for some of our existing content. It's often those kinds of people, and the folks that purchase at a higher level in the enterprise are also innovation strategy people. It's a mix of both, to be honest.

Some of the projects that we'll field will essentially be, "Hey! We're looking to transform customer experience. We believe that voice and chat are critical, but we don't know the roadmap or the realistic timelines. We don't know how other banks or other insurance companies our size or even bigger than us have actually built that out. Which of them have hit dead ends? Which of them have hit gold? We want to be able to actually fund research to see those realistic roadmaps. We want to know the sand traps and we want to know what is and isn't possible." We will get projects along those lines. Sometimes, though, a good deal of the time in market research, and there's nothing inherently wrong with this — it's just part of the industry.

Somebody in the company already knows what they want to do, and they work with a market research firm to get data so that they can grab the slides that they want and make a convincing argument to their boss. My job is not to do anything other than objective research here, and my researchers and our process — it's not bending reality to make chatbots a higher-ROI application than they are. But there are times where — I can think of an example of a company that had a very large retail presence. It's sort of a four-billion-dollar business, mostly in healthcare, but with a lot of physical retail locations, and they wanted to do some in-store computer vision stuff. There was enthusiasm around this. They did take some recommendations around what to pilot on their logistics and supply chain side of things, which they had asked us for, from our opportunity landscape research.

But we really did not — we did not say very nice things about the viability of these in-person vision applications that they wanted us to investigate, because in all frankness, they're outlandishly nascent. I think it would require a lot more stores and a lot more in-house talent than these folks had. And frankly, the use case just wasn't very strong. It just was not a very big financial lift as far as we could tell. But I happen to know that they took whatever parts of that they thought were interesting. They took the vendors that ranked highest in that and kind of showed those parts to their boss.

As a market research firm, all that we can do is lay out the objective info. Sometimes folks are really going to look through that lens of reality and do their darndest to push forward on what's really going to deliver results. Sometimes they've already got their enthusiasms, and maybe they think it's in line with the business's needs, and I'm not going to judge. Maybe sometimes it's just their own personal enthusiasm and their own career muscle. They want to do something cool, and they're going to use the research for those reasons.

Our responsibility — what lets me sleep at night is knowing that, when we produce a research product, we're producing something objective, and we're providing guidance that they can put to their own best use, hopefully. Fingers crossed. That's the reality of our industry.

[00:23:19] AVH: Yeah. I think it's like that across many, many industries. Let's dig deeper. Let's jump into an example. Let's go back to the financial industry and to one of the real use cases, not the fluffy chatbots. Let's say you get hired by a company and they ask you to come up with a good anti-money laundering system. What are the factors that you look at in vendors so that you can say, "Okay, they are legit"? Do you talk to their other clients? Do you look at their funding? What kind of other factors are there to evaluate a good AI vendor? Because I feel like, being in the FinTech industry — I don't know much about AI, but I just feel like now, every company that has the slightest one-plus-one-equals-two algorithm calls itself an AI or machine learning company. How do you get through all the nonsense? How do you cut through the BS and decide who is for real and who's not?

[00:24:33] DF: Well, primary research is a really big part of this. Market research involves kind of primary and secondary work. Secondary research is looking at the data that is out in the world and being able to collect, collate, label and make sense of that information. Primary research is getting on the phone and asking hard questions to leaders of vendor companies, user companies, et cetera. It is the Rolodex on the primary research side which is why people generally do business with us. They know who we can get on the phone, and that's why they work with us.

That said, secondary is also very important. We have two scores that we draw from secondary sources, that serve as some kind of baseline and undergird some of the primary individual interviews that we run after the fact. One of those is the relative skill and talent score. When it comes to AI — hey, we do AI — I'll give you a couple of rules of thumb. Actually, we have a whole article about this, how to cut through AI hype. Somebody can type that into Google with the word E-M-E-R-J, so E-M-E-R-J and then how to cut through AI hype.

We actually have some of these rules of thumb that we've published. Look at whoever is heading up artificial intelligence at the company — and we're talking about vendors right now, so we're not talking about US Bank, we're talking about a company that's selling to US Bank. For those firms, if whoever is heading up AI has a bachelor's degree in physics from the University of Wisconsin, and they worked as a consultant at a little hokey-pokey IT firm for five years, and now all of a sudden they're at this company and they're the head of AI — that's not such a good sign. We score based on academic pedigree, and this means degrees in the hard sciences, computer science. It's not always AI overtly, but it's experience there.

Then also, in terms of career expertise, did they spend time with a marquee, larger AI firm? Were they working on recommendation engines at Amazon for three years before they jumped over to this company? That's worth more than a degree sometimes, so there are ways of scoring and ways of weighing that. Being able to look across leadership and get a sense of, are we strong or not? Companies that score a flat one out of five in that category almost certainly have less going on with AI than whatever their homepage actually says. There are more rules of thumb than that, but that's one.

But at the end of the day, how much of what they're doing is AI or not is only somewhat important. What's more important is, can they deliver results or not? This comes down to secondary research around who they've worked with and what the results of those engagements have been, and also talking to them and talking to their clients. If we're going to be conducting research and they want to be featured in that research, then if they have clients that things have worked out well for, they'll do their darndest to try to pull those folks onto the phone and let us ask some questions around what was challenging about deploying this, what the end result was, and what kind of resources were involved. Grinding through that is what in-depth market research for a user company would actually look like.

We’ve got a secondary research base, which is a great place to start, looking at the companies they’ve worked with. We have a general level-of-adoption score, again, kind of a one through five. We have a skill-and-talent score, one through five. We have a number of those other scores, but then it really is going to be the in-person interview that will help paint a clear picture and allow us to have the confidence to say, “You should go with these two vendors and talk to them. You should not even talk to these other four, just don’t even get on the phone. Absolutely not worth your time.” It takes some heavy lifting to get to those conclusions.
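The scoring approach Dan describes, several one-through-five category scores rolled up into an overall read on a vendor, could be sketched roughly like this. The category names, weights, and cutoff below are illustrative assumptions for the sketch, not Emerj’s actual rubric:

```python
# Illustrative sketch of a vendor-scoring rubric: several 1-5 category
# scores are combined into a weighted overall score. Category names,
# weights, and the cutoff are hypothetical, not Emerj's methodology.

WEIGHTS = {
    "skill_and_talent": 0.40,  # academic pedigree + marquee AI experience
    "adoption": 0.35,          # evidence of real client deployments
    "results": 0.25,           # verifiable outcomes from past engagements
}

def overall_score(scores: dict) -> float:
    """Weighted average of 1-5 category scores."""
    for name, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be between 1 and 5")
    return sum(WEIGHTS[name] * value for name, value in scores.items())

def shortlist(vendors: dict, cutoff: float = 3.0) -> list:
    """Return vendors worth a phone call, best first; drop the rest."""
    ranked = sorted(vendors, key=lambda v: overall_score(vendors[v]),
                    reverse=True)
    return [v for v in ranked if overall_score(vendors[v]) >= cutoff]

# A vendor scoring a flat 1-2 across categories falls below the cutoff
# and, per the interview, isn't even worth getting on the phone with.
candidates = {
    "vendor_a": {"skill_and_talent": 5, "adoption": 4, "results": 4},
    "vendor_b": {"skill_and_talent": 1, "adoption": 2, "results": 2},
}
```

The point of the sketch is only the shape of the process: secondary-research scores produce a shortlist, and the in-person interviews then do the real discrimination among the vendors that survive the cut.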

[00:27:51] AVH: Then when you do those interviews, I mean, you have done this for many, many years now. Do you also have, like, the crazy PhD data scientist on your team who says, “Well, actually, this guy just talks nonsense and this doesn’t make sense”? Or do you just get a feeling from talking to so many people, right?

[00:28:12] DF: Yeah. The bulk of what we produce is intended to be put in front of people who cut big fat checks, not people who write code. Our audience is a check-cutting audience. They’re not a pecking-semicolons-into-keyboards audience, so you understand, that’s who we talk to. That said, there are times, like when we did a project on computer vision, where we need to pull in advisors. We have formal advisors. One of my formal advisors is the head of AI at a company called HubSpot, one of our rare technology unicorns here in the Boston area. Another is an NLP PhD who’s based in Spain, who’s worked on all kinds of stuff in ecommerce and other domains.

If we need to assess, product by product, certain elements of what it is that they’re actually selling, certain elements of whether these claims make sense, certain elements of whether what they said in these interviews actually lines up with what reality looks like, boots on the ground, we can pull those folks in. But a lot of the time, we’re talking about things that won’t involve them talking about support vector machines and linear algebra. They’ll involve talking about the relative amount of data infrastructure overhaul that they had to do for a certain project, or the purported return on investment that a certain project brought in, or something along those lines. They’re often going to be non-technical topics we need to drill down into. We’ve definitely got folks we can pull in, but generally speaking, we’re yanking out: what is the use case, what’s the data used, what were the resources spent, what’s the return on investment? Because the people we’re writing for just don’t give a crap if it’s monkeys or if it’s code. It doesn’t matter. It only matters that you spent X and you made what. That’s the only thing that matters. That’s the audience that we write for.

But yeah, you can tell when people are spinning up BS for sure. Literally twice a year, I’ll record a podcast that will just never be aired, because I’ll just be like, “That was entirely BS.” Two times a year, I’ll end a recording and I’ll say, “Damn, that was a mistake.”

[00:30:03] AVH: What do you then say to those people? Do you have those conversations? Do you say, “Hey! Sorry mate, I’ve got to tell you, this is not going to air,” or?

[00:30:12] DF: Yeah. I mean, normally, I’ll wrap up the interview kind of quickly and I might just say, “Hey! We’ve really got to consider whether we can get this up, based on how much insight we have. If this does go live, I’m happy to be in touch, or what have you.” There certainly have been instances where we’re seven minutes in and it’s like, “Hey! I just don’t know if this series is really exactly the right fit.” I mean, I’m still going to be polite. I think they’re probably well intended. But when someone is spinning up what AI can do, or spinning up stories about where they’ve seen AI deployed where there’s really no meat undergirding it whatsoever, and it becomes kind of clear, yeah, they won’t see the light of day. That’s what matters for our listeners.

[00:30:49] AVH: Okay. I want to spin up one more side story and then I want to go into this topic that’s been coming up over and over, the ROI, hitting goals. But just out of pure curiosity, what’s the craziest thing somebody told you, like, “Yeah, this is where I’ve seen AI” or “This is what AI will do,” where you’re like, “Oh, come on!”? Not like, we all love those futurists. I had this conversation with [inaudible 00:31:13], who’s very well researched in AI, but that was like, “Yeah, this is where I would love to see AI. This is what I would like to happen, under the premise that someday, this could be really cool.” But what’s something where people said, “Yeah! AI is going to clone your grandmother”?

[00:31:30] DF: Yeah. One of the wildest claims that we’ve heard, I mean, you know, I don’t have any singular, really wacky, wild tale to go off of here. The things that tend to get under my skin are general claims of why a certain product is better than anything else that’s ever happened before, like it’s some transcendent technological leap over what Google is doing, or what Microsoft is doing, and what any other AI researchers in the world are doing. There’s an episode in kind of the chat interface space where they’re talking about some symbolic approach to language and why Google could never do this and Amazon could never do it. Using words like never, extremely bloviated, product-specific claims of betterment and superiority, those are usually signals of somebody blowing hot air, somebody who’s using the podcast as a marketing opportunity, which I don’t really let people do. I’m pretty hard on people who get into marketing mode on my show.

The other one is claims around the generalizability of the product. Let me give you an idea here. If a company says something like, “Well, it could work for any size business,” or “This application could work in any industry,” that’s essentially a telltale sign that they have no real traction, no real customers. We do a really great job at screening these people before they get on the phone now. But every now and again, you see a company that’s really well-funded and, as it turns out, it’s just hubbub. They’ve got good investors, and they’ve got smart people in the leadership team, but it’s straight hubbub. There is nothing going on in terms of actual traction with customers.

One of the telltales of that is grand [inaudible 00:33:10] of the generalizability of the technology. “Oh, yeah, well, you could also use this in insurance like this. In logistics, it’s really the same technology. You could also use it in logistics and do this.” Anybody that has experience applying AI realizes that, actually, the subject matter expertise of who the buyer is, what workflows we’re impacting, what IT systems we’re working into, what the specifics are of the data we’re sucking into the system, what the stakeholders care about in this particular industry, that stuff is actually 85% of a good deployment or a bad deployment.

If you have zero experience and you just have a technical degree, then you’ll simply say the technology works everywhere. But if you’ve been bloodied and bruised by the harsh, horrendous reality of the market, because this stuff is tough, man. There’s nothing easy about bringing AI into the life of the enterprise. I’m not joking. There’s nothing easy about it. If you’ve actually got that scar tissue, you never talk about it like that. You dial in on specifics. You dial in on the impact on workflows. You dial into the particular sectors and workflows you can impact. Even if you’ve got a broader platform, you end up being able to talk about these particular areas of work and actually add value.

Folks who bloviate about their specific product and why it’s better than anything Google’s ever done, and folks who talk about how you can plug it in anywhere and it’s always going to work because it’s the same base technology. Those are the two where it’s like, “Okay! We’ve got to wrap this up. We’ve got to bring this thing to a close.”

[00:34:32] AVH: The only thing that comes to mind is, Amen, brother. I’ve been in FinTech for too long that I [inaudible 00:34:40], it’s not only AI.

[00:34:43] DF: I’m sure, yeah. I’m sure.

[00:34:44] AVH: Tech, FinTech in general. As you said, if people tell you it works everywhere and they have, like, one exact use case, it’s a bit like the curve of ignorance versus confidence. You know the chart. The less you know, the higher your confidence is. Then at one point, when you actually start to learn, your confidence plummets, and you’re in the conversation going, “Actually, this is a 0.1 mm different use case than what we’ve done before. I think it can work, but I’m really not sure, because the last use case was such a pain.” Then you hit, like, rock bottom, and you’ve got to build yourself up from there, when you’re just like, “Okay. Well.” I mean, please correct me, but from my experience, no two use cases are exactly the same. They’re mostly kind of related, but you have to rebuild everything a bit from scratch every time. So when you say, “Well, without understanding the use case, you can do it,” that’s just mostly not true.

[00:35:50] DF: Yeah. The nuanced, contextual understanding of the workflows you’re impacting, the stakeholders you have to get to say yes, the users you have to get to use it, that is 85% of the value. Luckily, we see more and more companies being founded where they have some grounded understanding off the get-go, as opposed to four or five years ago, when you could just have an AI degree and raise a bunch of money with zero context on any workflow, which would never get you anywhere anyway. I know you wanted to touch on ROI quickly before we wrap up today.

[00:36:18] AVH: Exactly. Let’s talk about that, because I think it’s really interesting. In the end, this is the one thing every bank should do. How can I define ROI? How do you define ROI? And how can you find products that do provide ROI?

[00:36:33] DF: Yeah. This is an extremely, extremely important topic. A couple of points here. One, in my opinion, if I’m short on time with you: we have a lot of articles from very, very smart people. In fact, in one of our recent pieces, if people are on emerj.com and they search for the word opportunity in the search bar, there’s an article with the former head of AI at Slack. Slack was bought for around $30 billion. This guy headed up AI there for four or five years. Very, very smart, a Stanford PhD with a tremendous amount of rich business experience. He talks about his process of finding AI opportunities within particular parts of the business. Lots of really cool resources and opportunities.

But let’s talk about ROI. More than even finding the opportunities, more than that is actually understanding how to think about ROI within an enterprise for artificial intelligence. We break this down into three kinds of ROI. Again, if people go on Google and you type in, the three kinds of AI ROI Emerj, E-M-E-R-J, they’ll find this article and it’s pretty nice infographic. But the basic idea is this, I’ll give you the three.

The first is super obvious to everybody; I don’t even have to say it. Measurable return on investment. Normally, this is financial. How much more money do we make? How much more money do we save? Maybe we’re saving time. Okay, that’s saving time. Maybe we’re saving time and improving the customers’ experience. Okay, that’s a customer service score. Sometimes it’s a customer service score or some proxy like that, a user satisfaction score. But primarily, measurable ROI is money made, money saved, or something close to that.

The second is strategic return on investment. This means that this project is bringing us closer to a strategic mandate. We call these strategic anchors when it comes to selling AI and communicating its value. A strategic anchor could be a three-to-five-year goal that the business has. A strategic anchor could be the digital transformation vision that we want to move closer to. We know we want to start having 50% of our sales in ecommerce, so we know we want to improve our loyalty. That might be a part of our digital transformation journey that we haven’t done yet. It might even be a current technology thrust, something that’s already getting a lot of funding and is already getting a big push in the business. The question we ask ourselves is, “Is this AI project in line with the strategic mandate? Is it helping to drive value toward something long-term?”

I’ll tell you the reason that’s critical is because, most AI projects are not going to plug in and make money on day one. They’re going to plug in and we’re going to learn all kinds of horrible things around the state of our data. We’re going to learn a lot of hard lessons around data scientists, we need to work with subject matter experts. We’re going to learn how we weren’t really collecting the features within our fraud data to actually detect which of these kinds of payments are going to be fraud because we were never really tracking this particular time element in these two databases, whatever the case may be. Lots of hard lessons are going to come up.

Now, there’s learning there and there’s value to that. I’m going to get to that monetarily. But because ROI is not push button and because AI is a capability we need to build over time, it is extremely advisable that we discern the ROI of our AI projects and we at least align our aspirations for the ROI of AI project to, “Is this bringing us closer to a strategic mandate that you leadership actually cares about?” 

The third kind is what we refer to as capability ROI. As far as I know, we’re the only people who beat this drum. Capability ROI implies building a stronger level of AI maturity through the project. This might be jarring and disturbing for people to hear, but at the end of the day, say a bank randomly picks an AI project. Hopefully it’s something a little bit better than random, my fingers are crossed for that. But when they pick an AI project, they should always hold it accountable to some kind of measurable ROI, so that they can prove some kind of result. Absolutely. I’d never advocate against that. But they should also ensure that they retain the lessons learned. I’ll explain what some of these lessons learned are.

If people type in E-M-E-R-J and then “critical capabilities,” we have another infographic about what AI maturity actually implies, to [inaudible 00:40:16] critical capabilities. Some of these lessons learned could be: what did we learn about our data and data infrastructure? We might have learned that our European and US data silos for X kind of customer data are almost irreconcilable, and that they really need to be reconciled, but golly, they’re not, and this is an area we need to improve.

We may have learned a new way to harmonize certain elements of our data to make it something that we can now feed to algorithms. Before, certain things with last names, or addresses, or whatever the case may be, were very hard to harmonize. At the end of this project, we now have a way to harmonize that. We have a better data soil for future projects. Even if this fraud project only worked out okay, well, yes, but we also learned these things about our data, and we have a better data soil to grow future projects in. Here are other things that we can learn. We can learn how subject matter experts and data scientists can work together. We’re going to hopefully use some best practices; we’ve written about a bunch of them on Emerj. But we’re going to learn, okay, if we’re doing a customer-service-related project, we actually need a full-blown, designated customer service leader who we pull out of a functional customer service job and have them just be embedded with the data science team.

That person reality-checks the data science team’s assumptions, because the data scientists don’t know anything about customer service. They help determine how well the algorithm is being trained, help pull in other resources, and pull in other perspectives from the customer service department, so it’s not always a foreigner from data science knocking on the door; it’s one of their own leaders. We’re going to learn about new team dynamics, which inevitably come up with AI. If we retain those learnings, if we improve our data infrastructure, if we improve our runbooks for how teams are going to work together, if we improve our ability to assess our data and assess the cost of an AI project, we can retain those learnings. Often, that will be the highest ROI we can hope for from our early projects.

We often cannot just plug it in and make $50 million more next year. That’s actually quite unrealistic. But it’s rather realistic to do our damnedest to hold projects accountable to measurable ROI, while ensuring that we retain learnings. Long-term transformation from this technology requires capability ROI to be something that we frame and put in place. Again, “the three kinds of AI ROI Emerj” is the search for this infographic, but I can’t beat the drum enough. Achieving ROI means thinking about it in a mature way, not as plug-and-play, surface-level technology, which is never ultimately going to transform the enterprise. That’s the most important thing that I could share here as we wrap up.
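The three-kinds-of-ROI framework Dan lays out lends itself to a simple per-project record: one measurable result, the strategic anchors the project advances, and the retained learnings that constitute capability ROI. The field names and the example fraud project below are illustrative assumptions for the sketch, not Emerj’s template:

```python
# Hypothetical sketch of tracking the three kinds of AI ROI for one
# project, per the framework in the interview. Field names and the
# example values are illustrative, not Emerj's actual template.

from dataclasses import dataclass, field

@dataclass
class AIProjectROI:
    name: str
    # 1. Measurable ROI: money made/saved, or a proxy like a CSAT score
    measurable: dict = field(default_factory=dict)
    # 2. Strategic ROI: which strategic anchors the project advances
    strategic_anchors: list = field(default_factory=list)
    # 3. Capability ROI: retained learnings (data, team dynamics, costing)
    learnings: list = field(default_factory=list)

    def summary(self) -> str:
        """One-line status for the check-cutting audience."""
        return (f"{self.name}: measurable={self.measurable}, "
                f"anchors={len(self.strategic_anchors)}, "
                f"learnings={len(self.learnings)}")

fraud = AIProjectROI(
    name="fraud-detection pilot",
    measurable={"false_positives_reduced_pct": 12},
    strategic_anchors=["digital transformation: 50% of sales via ecommerce"],
    learnings=[
        "EU/US customer-data silos need reconciling",
        "embed a fraud SME with the data science team",
    ],
)
```

The design point the sketch captures is that the learnings list is a first-class output, not an afterthought: for an early project, it is often the largest part of the return.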

[00:42:37] AVH: It really is important. I mean, especially with such a nascent technology, you can’t just assume you will get it right in the beginning, but it’s good to define what you want to get out of it, right? Not just start doing it and then afterwards look at your KPIs and go, “Oh! This has been a great success, just because I want it to be a great success.” Don’t say, “I need to make $50 million,” because it’s just bogus that you will do that on your first project. But still be aware: I want to at least have, say, a team score of 50 afterwards, or have these learnings captured, stuff like a 20-page document, something like that.

Now, maybe you could spend two more minutes before we wrap up. You mentioned a lot of resources. Are there any other resources or anything that you can point us to, to learn more about AI?

[00:43:26] DF: Yeah. In terms of places to start in the financial services space, we have a series of cheat sheets that are industry-focused. One of them is our AI in Financial Services executive cheat sheet. This is basically a short distillation of key use cases and also key terminology, in business-people language, for understanding the impact of AI in fin serv. There’s probably no better four-or-five-page doc that we have than this. It’s at emerj.com/fin1, and people can download that cheat sheet there. I think that in the financial services space, that’s a super useful resource. Otherwise, they can just go on emerj.com and, right on the homepage, type any keyword they want into the search bar. We’ve got literally thousands of interviews and articles with AI experts about the impacts and use cases of these technologies, and that could be a really good place for people to start as well.

[00:44:21] AVH: Awesome. Dan, thanks so much for your time. I’ll let you get back to your day. It’s been really interesting and I think we’ve got a lot of future reading material and a lot of the things that I want to touch on, you mentioned where people or myself we can dig deeper, so thanks a lot.

[00:44:37] DF: Of course, brother. Thank you for having me.

[END OF INTERVIEW]

[00:44:41] ANNOUNCER: Thank you for listening to the Wall Street Lab podcast. For the show notes and much more, visit us at www.thewallstreetlab.com to see what we’re up to before anyone else. Subscribe to our newsletter on our website and follow us on Facebook and Twitter.

[00:45:12] AVH: Disclaimer: Information contained in this podcast constitutes the opinions of individuals and should not be treated as investment, tax, financial, or legal advice. We take no responsibility for the accuracy of any statements made in this podcast. This podcast is for informational and educational purposes only. It does not contain an offer to sell or buy any sort of financial product and should not be treated as an advertisement for such. Any copying, distribution, or reproduction of this podcast without the prior permission of the creators of the podcast is strictly prohibited.

[END]