AI in Marketing: Are Marketers Ready To Get Return on AI? – B2B Marketing Expo 2019 Recap

A recent research study, the 2019 Data Science and Machine Learning Market Study Report, found that 40% of marketing and sales teams say data science (encompassing artificial intelligence and machine learning) is critical to their success as a department. The most important reason cited was finding new revenue opportunities.

Importance Of Data Science and Machine Learning by Function

We wanted to learn more. How are they using, or planning to use, AI to make their teams successful? What are some ways they currently use AI? How far along are they in their AI journey? To find out, we flew to Los Angeles to attend B2B Marketing Expo California 2019.

Ople at B2B Marketing Expo 2019

The two-day conference brought together marketing professionals from around the world and offered many interesting sessions, including one on how to get Return on AI, presented by our founder and CEO, Pedro Alves. The most valuable experience, however, was learning from our conversations with the marketers.


Is AI Hype Real?

From our research, similar to the study shared above, we have found that many reports state that AI is crucial for businesses, especially marketing departments. However, most of them also focus on readily available services, like chatbots and operational automation. If you are familiar with artificial intelligence, you probably know that AI is capable of much more than that.

Artificial intelligence is, at its core, a machine or program that mimics human intelligence: it reacts to changes in its environment with the goal of accomplishing some task. There are countless applications of AI, from simple chatbots to more sophisticated product recommendation engines and robotic process automation (RPA).

As part of our market research on the floor, we asked around to find out: “Do marketers understand AI well enough to look for more analytical, data-focused AI software like Ople?” Surprisingly, there were hundreds of marketers interested in learning more about how our software could help them.


What are the key challenges?

The next step was to learn more about the specific challenges marketers are facing that can be solved with AI. To ask more targeted questions, we prepared a handful of possible use cases to start conversations. As we mentioned above, finding new revenue opportunities was one of the key reasons to implement AI. Without a doubt, the majority of marketers were interested in finding better ways to connect with the right target audience for growth. For example: “How do you optimize ad retargeting?”, “How do you optimize email campaign conversions?”, and “What are the chances of a customer buying more?”


Are they ready?

When marketers showed interest in the use cases, the next questions we asked centered around readiness, with the main question being: “Now that you’ve seen the demo, do you have the necessary data to start using AI in similar experiments?” This question, which will resonate with other AI companies, is the most important qualifying question. AI is only as good as the data it is trained on. Simply put, if there is no data, there is no AI. Throughout the two-day conference, 20% of marketers we spoke to said they had data available, 5% wanted to learn how to collect the necessary data, in 15% of cases the conference attendee wasn’t the right person to discuss their company’s data, and 60% showed interest in implementing AI but weren’t really sure what to do. Although only a quarter of the audience was ready to implement AI, that is a larger share than we saw in conversations throughout the rest of the year.

B2B Marketing Expo - Are Marketers Ready for AI

Once AI is in place, how do you get Return on AI?

What we also wanted to do at the conference was help marketers understand how they can see positive returns after implementing artificial intelligence in their work. Perhaps the most important piece of education we provided was how to get a return on an AI investment. If you want to get your RoAI now, watch the full video (or read the TLDR version below the video).

TLDR:

  1. AI can help every business.
  2. AI is hard because of a lack of communication and the wrong allocation of resources - a.k.a. the Data Science Technical Chasm.
  3. Data Scientists should focus on strategic problem formulation - translating a business problem into a data science problem.
  4. Software, such as Ople, can help align all stakeholders.
  5. Find small wins, not big moonshots.

If you are interested in learning more about how Ople can help you get Return on AI now, contact us today!

Transcribed Script

So just a little bit on myself. I’ve been doing AI for about 18 years in a lot of different industries; I worked in genomics, proteomics, insurance, healthcare, fashion, retail, computer vision, text, consulted for Fortune 50 companies, and then companies with just three people. All the way down to the beginning of a startup, and all the way up to big massive companies. I saw a lot of the failures and successes of implementing AI in companies. And unfortunately, a lot more failures than successes, right? That is what took me down the path of starting the startup that I’m in now, which is Ople; which is about what the problems are with implementing AI and getting an actual return on investment. Most people, most companies, believe it or not, don’t even try to measure return on investment. They just know it’s a lost cause. They just pour money into it without ever even trying to measure something. That’s why I said it’s more elusive than Bigfoot, the actual return on investment for AI, right.

So these are just the six points that I’m going to address: why AI, centralizing or not, building or buying, shooting for the moon, from model to money, and then the two biggest hurdles. Actually, three out of these six topics I did a whole panel on at VentureBeat, where we had experts talking about them, and I was helping moderate and finding out the answers to these questions. So first, why AI? It’s hot right now. Everybody’s talking about it. I see some smiles. This is an old meme, it’s like the original meme, it’s a classic, right. So anyway, it’s hot right now, right? CEOs want it, even if they don’t know what it is or understand it. There’s tons of funding, right? Everyone sees the promises that it can deliver, right? Well, at least in their minds they see the promises. If you’re the person that’s now in charge of AI - the CEO looks around, and sometimes it’s the CTO, sometimes the CIO, a director of machine learning; there are a million different titles for the person responsible for this kind of innovation at a company - this is how you’re feeling right now, because that’s what’s going to happen. The CEO is just going to dump a ton of money on some big AI initiative, and for a while they’re going to be feeling just like this. So that’s the why AI: the CEO wants to do it, and the person responsible for doing it is pretty happy right now.

So the next question is, should you centralize it or not, right? Let me first explain the two approaches. The first approach is you create a center of excellence, an AI team, a data science team, a machine learning type team, and it’s one core team that then services all the other teams around the company. The other approach is to forget about that: every team, every organization in the company should have their own team of data scientists, or individual data scientists. The centralized methodology is the most common one. They both have problems and, you know, advantages. With centralizing, it’s easier to hire talent, because it’s much easier to say we have this cool AI team; you’re going to get the better talent there, it’s going to be easier to hire. But it’s low in volume: it’s one team, so it’s not going to produce a ton of volume to service all the other teams around it. On the communication side, they’re good at intra-team communication, meaning all the data scientists are on the same team, so they’ll talk to each other really well, adhere to the same type of methodology, and improve how they do things. Inter-team communication is bad, meaning the data scientists don’t belong to these other teams around them, so they don’t communicate well with them, and something is lost there. On the other side: it’s bad for hiring, because it’s harder to hire good talent by saying you’re going to work on a sales team or a marketing team; it’s not as enticing to a data scientist as a whole AI team. It’s good for volume, because every team has their own, so they can try to produce more. On the communication side, it’s the opposite. The data scientists have bad intra-team communication, because they’re all isolated, one on each team, but there’s good inter-team communication between the data scientists and the teams that want data science - the sales teams and the marketing teams - because these people are embedded in there.

So I did a panel on this at VentureBeat, and I had four people from really big companies and startups, and I was given the task of mitigating the fight that was about to ensue between centralized versus decentralized. I was all prepared and had all these questions; there was zero disagreement between the panelists. None of them chose centralized, and none of them chose decentralized. All of them said, well, you need to do a hybrid approach. I’m going to address the hybrid approach in a second, but the bottom line is neither method works, so they end up having to do a hybrid. By the way, smaller companies can’t really try a hybrid approach because it’s too costly.

So let me talk about how this plays out. You have the central data science team serving all these other teams, and usually there’ll be two teams that have the highest priority; they’ll get the attention of the data science team, and those are the green ones, they’re happy, and the other ones are in red. Quick fact: if you’re third in line to the data science team, you will never be seen by them, because it’s not like a queue that moves through the line. The top two teams with priority will have new projects every quarter. So if you’re third in line, you’ll literally never get the attention of the data science team. I’ve seen that play out this way every single time. With the decentralized approach, the unhappiness is due to the difficulty of hiring. It’s going to be a while before they can hire people, and when they do, the quality is going to be lower.
So there are going to be a few people that are medium happy. The actual reality is this: nobody is really green and happy, they’re kind of yellow. The two teams that are getting the attention of the data science team are upset about the volume - how many projects they can get from that team per year. And they’re also unhappy because there’s a lack of communication: the data scientists don’t really understand the businesses they’re servicing, so there’s miscommunication, which generates projects that are kind of unusable. And then on the other side, what happens is the people that did hire either realize a year later that the people they hired are actually not good, because they don’t have technical people who can evaluate who’s technical enough to do the job, or the people they hired are not producing enough because of where they’re spending their time. So again, this is why there’s this hybrid approach that most companies take. And by the way, the reason for the hybrid approach is so that they can solve the communication problem. What they do is have the centralized team build platforms that the embedded people can use. That way, the embedded people have good communication with their teams, and through the platform - software, usually - the data scientists across the different teams have communication, because they’re all using the same platform. So it’s communication through software, basically, not direct communication; they’re forced to use the same methodologies because the software forces them to do so. Please interrupt me if you have any questions at any point.

Moving forward to build versus buy. This was another panel that I did, and we exited the panel with a decision tree of questions you should ask; if you go down the tree, it tells you whether to buy or build. And this is about that platform, the thing your data scientists should be using in order to get a uniform output quality, so that you’re not risking hiring people that are not as good. Because if they’re using a platform that automates this stuff, then you de-risk having a bad person that doesn’t know what they’re doing when they’re tuning neural networks, etc. - something that’s a little more technical. So the first question you ask is: okay, I need to know if I’m going to build or buy. First question: can AI help? If your answer is no, you’re wrong, ask again. Go back to the top of the tree. It can always help, literally. And I’m going to run through an example of how even if you sell candy at the side of the street at a stoplight, with a little box hanging from your neck, you can get AI to work for you. I’ll run through that example later. So you finally realize, yes it can help, which is the right answer. Then: are you AI? A lot of people say yes when I ask that question. Oh yes, we’re AI. No. Not do you use AI - does your product heavily rely on AI? Are you AI, meaning that the sole purpose of your software, of your company, of your product is AI? Not even Spotify, Netflix, or Google answer yes to this question, because they do search or music listening; that’s their product. They use AI heavily in the product, but that’s different. If your startup builds computer vision models to help cars drive, but you don’t build the cars, you just build the AI platform that drives the cars - that’s a yes, I am AI. Very few people should actually answer yes. If yes, then build, because if you’re not building, and that’s all you are, you have no differentiator; there’s no reason anybody would value your company at anything. If the answer is no, then you ask the question: can software help? And you need to look at what’s available. If the answer really is no, then I would say build, but only if you’ve looked at enough of your projects. Because maybe you’re thinking of this one project that software can’t help with, but there are probably 99 projects that software can help with. So do those 99 projects first. The majority of people are going to land in this bucket: yes, they can find software that can help with the projects they’re wanting to do. Because there’s plenty of low hanging fruit, etc.
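
For readers who want the decision tree spelled out, here is a minimal sketch that restates the questions from the talk as code. The function name and its boolean inputs are hypothetical, just a way to make the logic explicit; the talk does not prescribe any particular implementation.

```python
def build_or_buy(can_ai_help: bool, we_are_an_ai_company: bool,
                 existing_software_can_help: bool) -> str:
    """Hypothetical restatement of the build-vs-buy decision tree from the talk."""
    if not can_ai_help:
        # "If your answer is no, you're wrong, ask again." AI can always help somewhere.
        return "Ask again - go back to the top of the tree."
    if we_are_an_ai_company:
        # If AI *is* the product, it has to be built in-house; it is the differentiator.
        return "Build"
    if existing_software_can_help:
        # Most teams land here: plenty of low-hanging fruit that off-the-shelf tools cover.
        return "Buy"
    # Only build after confirming software really can't cover your other projects.
    return "Build (but only after doing the projects software can handle)"


print(build_or_buy(can_ai_help=True, we_are_an_ai_company=False,
                   existing_software_can_help=True))  # -> "Buy"
```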

Next, the moonshot conundrum. Most companies end up doing these crazy moonshot projects, and I’ll give you the example. So you sell candy, right? Side of the street, the light goes red, you have your little box hanging from your neck, you have a bunch of candy, and you go sell candy to people driving cars. Can AI help? Yes. People are visual buyers; if things look neat and enticing, they’ll buy more often. So the way you arrange your candies on your little cardboard box, or whatever it is, can increase your sales. That’s the problem formulation: they formulate a problem. Then grab data. How do you grab data? Arrange the candies in different ways, sell them over different tries, and measure how much you sold. Now you have data, easy. Then you build a model that says, given this arrangement of candies, what’s the probability that I’m going to make this many dollars. Now you have a model that’s helping you make more money, really simple. Where most companies fail in this process is that they’ll start with that idea, and before they execute, somebody’s going to say, well yeah, but this is really cool, think of all the things we could do, because every person driving a car probably has a different preference. So we could really personalize it. So what we need is to build these AR glasses and put them on the vendor’s face, so when they’re looking down the road it has computer vision recognition, scans the faces of the people, sends their pictures to the cloud, builds a model for that particular person, and figures out, for this person, how you should arrange the candies. That message gets sent back to the box, which has nanorobots that automatically rearrange the candies in that particular configuration for that person. I’m sure they can also think of a way to include drones somehow, and 3D printing. And it becomes this massive project that involves robotics, computer vision, augmented reality, machine learning, and they’re like, yeah, let’s do it! Ten years later, a hundred million dollars later, they have nothing. The reason for that is that everybody knows, or feels, that AI is so difficult and expensive that it’s not worth doing a small project. It’s just not. So then they fall into this fallacy of: if it’s that expensive and hard, I must do this moonshot project. A big project. When in reality there’s plenty of low hanging fruit that can be done easily and cheaply, and they just don’t realize that. That’s where you should start, because there’s plenty of impact in really simple projects that AI can do for you. So that’s what I recommend.
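
To make the candy example concrete, here is a minimal sketch of the "small win" version of the project: a simple regression that predicts revenue from how the tray is arranged. The features, numbers, and choice of model are invented purely for illustration; the talk only describes collecting arrangement-versus-sales data and fitting a model to it.

```python
# Minimal sketch of the candy-arrangement example, assuming scikit-learn is available.
# Each row is one selling session: how the tray was arranged and what it earned.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical features: rows in the tray, color variety, bestsellers in front (1/0).
X = np.array([
    [2, 3, 1],
    [3, 5, 1],
    [1, 2, 0],
    [4, 6, 1],
    [2, 4, 0],
    [3, 3, 1],
])
y = np.array([18.0, 27.5, 9.0, 31.0, 14.5, 22.0])  # dollars earned per session (made up)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Score a couple of candidate arrangements and try the one the model expects to earn more.
candidates = np.array([[3, 6, 1], [2, 2, 0]])
print(model.predict(candidates))
```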

Now, when actually making the decision of, okay, I’m going to do a project, it’s low hanging fruit, the question then is: how do you measure the impact of a project? I have these model-to-money coefficients, the three different ways that you should measure how AI impacts your company. The first is time. What I mean by time is: how fast do you get a return? For example, ad retargeting is pretty fast. Let’s say you have a model, and it’s really accurate. It predicts exactly whom you should retarget and that those people are going to click on your ads. You wait an hour, and if you have enough volume, you’ll see that the rate of clicking on your ads goes up. The time it takes to see that the model was successful is sometimes measured in hours. But now let’s say you have a mail campaign - actual mail, physical letters, right? - and you want AI to help you figure out which letters to send to which people. That’s a slow project. It takes months for this project to go through, for you to send the letters out, for you to get responses. It’s a similar thing if you’re doing preventive maintenance on big engines. Those engines break once every three years. If you’re doing predictive maintenance, you’re not going to see that you’re making money on this for years at a time, sometimes. So deciding on something that has a quick turnaround time is always helpful.

The second point is determinism. Determinism is: if the model is accurate, how is it going to impact your bottom line? Is it deterministic or not? For example, you are trading stocks. Wall Street - if you could predict whether a stock is going to go up or down, and the model is actually accurate, you’re going to make money. Nothing else can change that, if it’s actually accurate. It’s completely deterministic. If you can predict the numbers in the lottery, you’ll make money. Now, an example of non-deterministic: let’s say you’re selling shirts online and you want to reduce the number of times people return shirts to you, because it costs a lot of money, mail back and forth. You say a big reason people return shirts is that they buy the wrong size; then we have to exchange it, and the cost is on us. So let’s build a model that predicts the sizing, and when the person is about to hit order, the model runs and says: you know what, given what you’ve bought before, given this brand and style, this shirt runs big, you should probably buy a medium instead of a large. And then the person buys the right size. It’s not so deterministic - why? First of all, the person might not do what you suggested. So even if the model is one hundred percent accurate, if the UI is not presented the right way the person might go: no, I don’t trust AI, I trust me, I’m still going to buy what I want. Secondly, even if they buy it, they might have a bias against it. Let’s say they bought the shirt that they chose and it’s just a little bit big; they might be able to live with that. But now they bought a shirt that they didn’t want to buy - they were going to buy a large, you told them to buy a medium, and it’s just a little bit tight. They might be like: I knew it, that stupid AI, I was going to buy a large, and that stupid AI made me buy a medium, I’m going to return this. So you don’t know how people are going to react. There are all these other elements that make it less deterministic. Obviously you want to choose something that’s a little more deterministic, but there’s a big spectrum there. And then finally, the impact.
You could have a model that’s a hundred percent accurate - a marvel, a miracle - and it makes your company ten thousand dollars a year. Or you could have a model that’s half a percent more accurate than random, and you’re using it to play blackjack, and you could make fifty million dollars in a year, and it’s just half a percent more accurate. By the way, most people that count cards are roughly within one percent more accurate than a regular player, but that’s enough of a margin to actually make money. So that’s the impact. It’s not just how accurate it is, it’s how much money it is going to make for you. And then finally, the biggest hurdles I see in AI. The first one is finding people, which I alluded to before. I’ve interviewed over 300 data scientists, and I think a great data scientist is a unicorn. It is a person who is a software engineer, machine learning scientist, mathematician, statistician, a good communicator, and who understands the business. They exist, but they’re rare. They’re always going to be rare. It’s like the early days of aviation. The Wright brothers - they flew the planes, they built the planes. You needed a Wright brother to fly a plane back then. You can’t build United Airlines that way, because you can’t train and hire a hundred thousand Wright brothers; the requirements are too high. When the field of aviation advanced, the requirements for flying a plane got lower, and then you could mass produce pilots. We’re still in that transition for AI now.

Most people think finding that unicorn is going to be this easy - the needle in the haystack. In reality, it’s actually really, really difficult. People spend one to two years finding a team to hire. And then they spend the next one to two years figuring out they hired the wrong people, firing them, and starting over. That’s the normal path. I guarantee you, every time I talk to people they say, “yeah, we just went through that, how did you know?” Yeah, because you’re not the exception, you’re the norm, which is really sad. And part of that is, again, the discrepancy between supply and demand because of how hard it is to become a data scientist, which, by the way, is something that has to change. We can’t keep requiring people that technical to do the job - to fly the planes, if you will. The second problem is the technical chasm. A regular data science project starts on the left: the business user does the problem formulation. I want to reduce returns - the shirt sizing thing. They then have to turn it into a data science problem, which is: even though I want to reduce returns, I don’t want to predict returns. I don’t want to build a model that predicts returns, I want to predict shirt sizing. And then they create a data set that gets handed to the data scientists, who go through a bunch of really technical stuff I spent most of my life doing. And then somebody on the other side has to use it.

So there’s an interesting story here, which is where the data scientists see success and where the business people see success. Like I said, I’ve interviewed plenty of data scientists, and I’ve asked them: tell me about a project that was successful. They explain the project - they stay in the middle, the technical part, because they want to impress me technically - and then they say it was a super successful project, and I ask why. And they say, well, state of the art was eighty percent accurate, I got eighty point one. It’s awesome. And I ask, is anybody at that company using that model? Almost every time, the answer is: I don’t know. You’re calling your project successful, but you don’t know if anyone in the company is even using it. Here’s the newsflash: if that model’s not being used, it’s making zero predictions per year, which means it’s making zero correct predictions per year, which means it’s zero percent accurate, not eighty point one percent accurate. And they just don’t realize that.

At the company I have now, I actually hired the number one data scientist in the world. Undeniably number one ranked, famous, gives talks all the time, nobody can beat him. And I did a webinar with him about what percentage of his projects were successful. He started to answer, and I said: wait, first let’s define success. I told him the story, and I said, I don’t care how accurate it was, I want to know: did it impact the business? And he said, the vast minority. This is the best data scientist in the world, and the vast minority of his projects were successful. And here you have a sea of data scientists trying to be like him. They’re trying to learn more of this technical stuff, and it’s pointless. Almost every project dies in the white areas. Either the problem didn’t get formulated correctly - meaning someone said, I want to reduce returns, and they turn to the data science team and say, go build me a model that predicts returns, and the team goes and builds it without the communication I mentioned. Then, when they put the model in production, somebody orders the product, the model runs and says: yeah, that person’s going to return it. And? There’s nothing you can do. That model is useless. You can predict returns, great, but the person’s still going to return the shirt. You think it’s a joke, but this exact project has happened. I saw it happen, and it’s like: how did you not think this through before you finished the three-month project? Nobody talked. Communication - that’s where the projects die. The other half of projects die on the other end. Let’s say they did communicate: wait, stop, you’re not going to build a model to predict returns, let’s predict shirt sizing. It dies at the end because, again, of a lack of communication: somebody on the other side doesn’t trust the model, doesn’t understand the model, or doesn’t know what to do with the output. It’s that simple. I’ve built models where the person on the other end says, I’m not going to use it. Why? I don’t trust it. Is there any measurement for why you don’t trust it? No, I just don’t trust it. Gut feeling. And that kills projects; you’d be surprised how many times it does. And it can be solved only with communication and understanding. It’s human stuff, not technical stuff, that kills the majority of projects.
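
A minimal sketch of the reframing described above: the raw business ask ("reduce returns") only becomes useful when the target is something you can act on, such as the size a customer should be advised to order. The table, column names, and classifier choice are assumptions for illustration only; the point is simply which column becomes the label.

```python
# Sketch of the shirt-sizing reframing, assuming pandas/scikit-learn and a hypothetical
# orders table. The actionable target is the size the customer ended up keeping,
# not a yes/no "will this order be returned" flag that nobody can do anything about.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

orders = pd.DataFrame({
    "brand":              ["A", "A", "B", "B", "C", "C"],
    "style":              ["slim", "regular", "slim", "regular", "slim", "regular"],
    "customer_height_cm": [170, 182, 175, 190, 168, 185],
    "size_ordered":       ["L", "L", "M", "XL", "M", "L"],
    "size_kept":          ["M", "L", "M", "L", "S", "L"],  # what they ended up keeping
})

features = pd.get_dummies(
    orders[["brand", "style", "customer_height_cm", "size_ordered"]]
)
target = orders["size_kept"]  # the label that lets you suggest a size at checkout

model = GradientBoostingClassifier().fit(features, target)
print(model.predict(features.head(1)))  # suggested size for the first order's profile
```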

So this, at the end of the day, is how people view putting money into AI. Not the first picture, where you’re swimming in money. And CEOs have that face; they know they’re just putting money in the fire, most times. And they’re not even looking for something to solve that problem, because AI is important enough that they have to spend the money, at the very least for PR reasons. Part of this can be solved by bridging this technical chasm with software. I think if you do that, you solve both of the hurdles I mentioned. One: if the entire middle part is automated by software, the data scientists will have no choice but to spend more time on the beginning and the end, because there’s nothing else for them to do. That will become their job: helping people formulate problems and helping people understand and know what to do with the output. That’s a hundred percent of their job, not the middle. Secondly, because the software is doing the middle, it solves the hiring problem as well: people don’t have to learn all that, which means it’s easier to train them, which means there’s a bigger supply for the demand, and now you can hire these people.

*Answers a question from the audience* I think there are companies - so there are different ways you can solve this chasm in the middle. Way number one is consulting: you hire a consulting firm and say, I have project X, go build that project, give me the stuff. Solution number two is pre-built models. That one is becoming less and less popular because there are a lot of hurdles for it, but there are still some around: let’s say we have a generic model that predicts X, and you just plug your stuff in. That’s probably the least effective. The third one is software platforms that allow a less technical data scientist - somebody that just understands the beginning and the end, somebody that understands the business - to help formulate problems, while the software automates the entire middle part. That’s actually where I live. Yeah, so that’s what I think there.

I am a big fan of Dr. Seuss, so I wrote something that quickly and succinctly summarizes all six points.

One – Why AI, you ask? Because in its promises CEOs bask. Two – If in the decision to centralize you’re deadlocked, it doesn’t matter either way you go, a solution you will not concoct. Three – For build or buy an easy decision there is: buy, and be thankful, for easy decisions are not common in this biz. Four – Attempting to shoot the moon might result in a shot foot; look for the low hanging fruit to obtain great output. Five – AI success equals business success; with model to money all should obsess. And last – Yes, hurdles are there, but that is why of Ople you should be aware.

And that’s that.

*Speaker opens it up to questions*
Question: I come from Sales, but I want to become a Data Scientist, or use AI in my role. Where do I begin?
Answer: Yeah, so the data scientist role is going to change into what I call the data scientist 2.0. They’re going to be way less technical. I’ve seen all the courses out there - there are six-week courses, six-month courses, six-year courses. It’s not something you can just dive into. The more you think you understand, the more dangerous you’ll be, in a bad way. Not dangerous like, oh yeah, he’s dangerous. Because things like Keras and TensorFlow make it so easy that people think there’s not a lot to it. It’s like taking an F-16 fighter and replacing that super complicated panel with Tonka toy buttons - a yellow button and a red button - and then a two-year-old goes, I could fly an F-16. It’s crazy dangerous, but it looks like they can fly because there are only three buttons and they’re all colorful and bright. And I think that’s what Keras does. It’s really scary, because people don’t understand the math and science behind what’s happening, so there are so many little things that can happen that make them think this model is a hundred percent accurate when it’s actually not, or that this model’s not accurate when it actually is. And the intricacies - I spent 12 years in school, and after that close to a decade in the industry, and I’m still learning. It’s a lot of material, a lot of material. That’s why I said it’s just not feasible, period. It’s also a waste. Today, I’ve learned enough, and any data scientist that fits that category has learned enough, that they’re like jet engine engineers: they can invent jet engines. But they’re being paid to fly planes. They’re happy - they’re the only people in the entire equation that are happy, because they’re getting big salaries, they’re getting ten job offers a day, they can do whatever they want. So they’re smiling, right? They don’t care that they’re flying planes, but it’s a waste of money for everybody. It’s a waste of their effort and time. The only thing that people who know that much technical stuff should be doing is inventing the next generation of jet engines, not flying planes.

So data scientists 2.0 are the people that can fly planes. The things they need to learn are things like: how do you think about problems that can be solved with AI? Problem formulation, right? How do you approach data? How do you match what the business wants with, oh yeah, I know where that data is, this is what I’m going to use, these are the features, that’s what I’m going to try to predict, and this is how I’m going to explain to the business people why this is valuable. That beginning and end - those are the things you need to learn, because what I’ve seen already is that a person who knows that beginning and end, powered by something that automates the middle, will generate five to twenty times more projects than a person like me who spent twelve years in school. And then it becomes a no-brainer: wait, that person’s giving me two projects a year, they love to do research, and most of those projects fail because they don’t even talk to the business people enough. Or this person understands the business, they know what they’re doing, and they can do twenty projects a year. It’s that simple.

Question: Will AI replace Human Intuition?
Answer: I think it can today, the way I would define intuition. Intuition is a gut feeling about why you’re guessing something. Let’s say you sell shoes at a store in person, and somebody walks in, and you immediately look that person up and down and say, I know what shoes I’m going to show that person first. It feels like intuition, but it really isn’t. Your brain is processing what they look like, how tall they are, how expensive their clothes look, whether they’re wearing a certain watch or glasses, their style, how they walk, how they carry themselves. All of that feels like intuition, but it’s really not: you just have a mental model that’s hard to transcribe to paper, and you call it intuition. Now get that same data by taking pictures of everybody that walks into your store, build a data set of which shoes they ended up buying, which shoes got shown and which ones didn’t, and train a model to predict, given what they look like, which shoes they’ll buy - that model is going to learn that exact same intuition. And that’s machine learning today. I could build you a model that does exactly that: given a single picture of a person, predict which shoes I think they’re going to buy.
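
As a rough illustration of that "learned intuition", here is a minimal sketch, assuming the store photos have already been turned into fixed-length numeric feature vectors by some image embedding step; the data, dimensions, and classifier choice are all hypothetical and only stand in for the idea described in the answer.

```python
# Sketch: predicting which shoe a walk-in customer is likely to buy from an image embedding.
# Assumes photos have already been converted into numeric feature vectors elsewhere.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_customers, embedding_dim = 200, 64

X = rng.normal(size=(n_customers, embedding_dim))                   # stand-in image embeddings
shoes = rng.choice(["runner", "dress", "boot"], size=n_customers)   # shoes they ended up buying

clf = LogisticRegression(max_iter=1000).fit(X, shoes)

new_customer = rng.normal(size=(1, embedding_dim))  # embedding of a new walk-in's photo
print(clf.predict(new_customer))                    # first shoe to show that person
```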

Question: Is Ople.ai real? And what differentiates your company from its competitors?
Answer: So the first part: is it real? A lot of them are not. I’m not referring to the one you mentioned - DataRobot is real, I think, very real - but there are a lot of little ones that show up in the news a year later because they just had people cranking the machine manually in the backend. That does happen. But I think it’s a matter of ease of use, and how much of it is automated. We have technical ways we differentiate ourselves, meaning some of the stuff we invented, like learning-to-learn-to-learn capabilities - AI that builds AI that builds AI - which I think is a really big differentiator. But from a product perspective, the biggest differentiators are ease of use and the breadth of things we serve. If you look at that process, a lot of companies pick one thing to do really well. SigOpt optimizes - they do hyperparameter optimization. Some companies, all they do is productionize your model; some companies, all they do is algorithm selection. They try to find one small niche and do that. We try to do everything, right? We’re the only ones doing proper automated cleaning of the data for machine learning purposes, that I know of. And then the entire pipeline: from the moment you have an idea, all you need is the data set, and you click five times, I believe - the only time you use your keyboard in our software is to name your project - and then you have an output, without having to make a technical decision. With some of our competitors, a lot of technical stuff is shown to you, and you have to make technical decisions. I’ve talked to people that use that software, and they say, yeah, we get the output, but we still have to build our own models; we use it as step one in research. They run it through one of those automated systems, the one you mentioned, and they say it gives them clues about which models to go build based on the results, and then they go build them by hand. Because those systems don’t actually hyperparameter tune the models for your data set; it’s pre-boxed and prebuilt. So you still need a technical person, and if you don’t have one, you’re going to be losing in a couple of different places. And then part of the breadth is: can you handle different types of data, different types of problems - supervised, unsupervised, unstructured data, image, vision? Can you really handle the full gamut? Because what I don’t want to see is companies saying, oh yeah, I got this piece of software, but for half my projects I can’t use it because it doesn’t do X, Y, and Z. I want to minimize that X, Y, Z, so it’s: oh no, we can do ninety-nine point five percent of all our projects, and eventually a hundred. So that, I think, is a big differentiator.
