Artificial Intelligence: Ople’s Pedro Alves on Learning to Learn to Learn, Big Data and Neural Networks

Our founder and CEO, Pedro Alves, had the opportunity to sit down for an interview with The Kamla Show to talk about Ople’s novel approach to Artificial Intelligence. Pedro and Kamla discussed the future of AI, as well as its current state and limitations.


Pedro is driven to make AI something that is easy, cheap, and ubiquitous. The goal is to make AI accessible to every employee working in an organization and applicable to every aspect of the business.

While working as a Data Scientist, Pedro found that many of the tasks he was performing could be delegated to and automated by a machine. He felt that his time and energy would be better spent on a more strategic and developmental level rather than on the time-consuming processes a machine could learn to automate.

The analogy he used for the current state of AI was the early days of aviation. When the airplane was first invented, the Wright brothers were both the engineers and the pilots. Today, you wouldn't want a technically trained jet engine engineer also flying the plane. Flying can be done by a trained pilot, leaving the engineer free to focus solely on inventing, designing, and developing the mechanics of the plane. It's not cost effective to have an overpaid jet engine engineer doing what a pilot can do.

Pedro finds Data Scientists today in a similar predicament. They are losing valuable time performing tasks that could be automated with AI, neural networks, and deep learning. Without AI automation, Data Scientists are effectively bottlenecked, kept from reaching their highest potential. Identifying this need in the field of data science is what inspired him to create Ople as a tool to massively empower Data Scientists.


Ople’s technology is based on the concept of AI building AI, in a process Pedro calls "learning to learn to learn."

The concept of "learning" in machine learning is that an algorithm, such as a random forest or a neural network, will "learn" how to map a set of inputs to a set of outputs. To begin the learning process, you feed data into various algorithms with different sets of parameters and let them run overnight. The next morning, you see how well they performed and, based on those results, choose the next set of parameters to run the following night. This continuous tweaking is the first stage of learning, and it is often a daily routine for data scientists.
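That first stage can be sketched in a few lines of Python. Everything below (the toy dataset, the gradient-descent model, the candidate learning rates) is invented for illustration; it is not Ople's code, just a minimal picture of training the same algorithm under several hyperparameter settings and comparing the results:

```python
# Toy illustration of the first stage of "learning": the same algorithm is
# trained with several hyperparameter settings, and the results are compared
# to decide what to try next. All names and numbers here are made up.

def train(lr, epochs, data):
    """Fit y = w*x + b by gradient descent; return the final mean squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in data:
            err = (w * x + b) - y
            dw += 2 * err * x / len(data)
            db += 2 * err / len(data)
        w -= lr * dw
        b -= lr * db
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # noiseless line y = 2x + 1

# "Run overnight" with a few candidate learning rates, then compare scores.
results = {lr: train(lr, epochs=200, data=data) for lr in (0.001, 0.01, 0.05)}
best_lr = min(results, key=results.get)
print(best_lr, results[best_lr])
```

Comparing the scores and choosing the next settings to try is exactly the step a data scientist repeats each morning.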

The second stage, "learning to learn," was first introduced in a paper called "Learning to Learn." The paper introduced the concept of "hyperparameter optimization," in which a second model watches the first stage of learning and predicts the next set of parameters to try. In essence, it "learns" how to improve "machine learning."

When Pedro read the paper, he realized that he was instinctively doing hyperparameter optimization every day. He also found out that he was getting better at hyperparameter optimization (“learning to learn”) with each consecutive project, regardless of the problem or industry. He himself was “learning to learn to learn.” Pedro was inspired by the huge impact and tremendous value of the higher level of learning that could be achieved with AI, so he founded Ople.


Data Scientists are currently expected to be an amalgam of a great mathematician, statistician, machine learning scientist, software engineer, and good communicator. In other words, a unicorn. This is neither a practical nor a sustainable approach for the industry; there are not enough Data Scientist unicorns in the market to provide the support the AI industry needs. Pedro's learning-to-learn-to-learn AI technology streamlines the role Data Scientists fill, making their jobs easier and more accessible.

You can catch the entire interview with The Kamla Show here, or read the transcript below.

 

Begin Transcript

Kamla: Hello, I'm Kamla. My guest today is Pedro Alves. He's the CEO of a Silicon Valley startup called Ople. They deal with artificial intelligence and data mining, and the way Pedro puts it is, he wants to make AI easy, cheap and ubiquitous. Welcome to the show, Pedro.

Pedro: Thank you.

Kamla: How did you get Ople started? Because you worked at a bunch of companies, and they're very interesting companies. One is Banjo, where you were the Chief Data Science Officer, and then you worked for Sentient. The founders of Sentient also contributed towards Siri. Because you deal with AI, and AI in some ways is about decision making and the branching of how you come to a decision. That's why I was asking you that question. What is it that you learned at Banjo, what is it that you learned at Sentient, and what is it that you're bringing to Ople, which you started in 2017? Why did you want to start a company?

Pedro: My first job in the industry, I was already doing this. I was a data scientist; I was building models; I was handling data. I saw where I was needed-- the company had about 15,000 employees, and I saw the necessity for a person like me. But, at the same time, for a lot of the places I was spending my time, I thought, "They're overpaying me." It's a nice problem to have-- not always. Actually, no, you don't want to be overpaid. It's not a pretty day when they realize that they overpaid. But, I don't mean in general for the position. I mean, for some of those tasks. And I thought a machine needs to take over these tasks.

A person that goes through the training that I've been through, or other data scientists have been through, needs to spend their time and their brain power in places that are more valuable. And so, I immediately started thinking, "Okay, how do you automate and make this job easier?" Because it's the equivalent of requiring a jet engine engineer to fly a plane. It's a waste, right? You'd be overpaying. You want him building new engines and developing new technologies; you don't want him applying the technologies he invented.

Kamla: But then you have to create models where he can test that engine.

Pedro: Yes. And it's two completely separate positions. A pilot and a person that invents and designs engines, and they're both needed. But when you have just the same person having to do both, the industry gets blocked by that. And that's one of the blockages or limiting factors of AI today. That's the state of things today, I believe.

Kamla: Okay, so that's how you got the idea to start your company Ople.

Pedro: Yes. I've been thinking about it for many, many years, about how do you, like you said, turn AI into something that's easy, cheap and ubiquitous--

Kamla: I didn't say, that's what you--

Pedro: Yes, yes. You quoted me in the beginning; now I'm quoting you quoting me. Yes.

Kamla: Yeah. So you're very good at learning to learn because that's what your company does, it is--

Pedro: Learning to learn to learn.

Kamla: Okay, meta-learning?

Pedro: Right. So this is meta, meta-learning.

Kamla: Oh, it's meta, meta-learning? Okay, explain to us what it is.

Pedro: You have an algorithm, a machine learning model. If you want, I can explain a little more what a deep-layered neural network is. I don't know if I need to go there?

Kamla: Yeah, you can.

Pedro: So a lot of people talk about neural networks, and it's actually really simple to explain. Imagine that you have this neural network, which is neurons connected--

Kamla: But it is mimicking your brain, the neuron--

Pedro: I don't believe it's mimicking the brain. I think our understanding of the brain today is like our understanding of the universe a thousand years ago. I think we can't even understand the brain right now. So I don't like that analogy.

It's a network of connected neurons, if you will. Imagine I was trying to teach a neural network to decide, "Is this a picture of a rhino or an elephant?" You have a row of people, right here - five people next to each other - and I tell one of them, "Look, when you see a picture that has the ear of an elephant, you raise your hand." And this person, when they see the trunk of an elephant, and this person when they see the horn of a rhino. Then in the row behind, I say to a person, "You, when you see the ear and the trunk of an elephant, you raise your hand," because you're the head of the elephant, and so forth. Then the row behind them--

Kamla: So there's clustering?

Pedro: It's not clustering; you're learning in layers. The first layer learns simple, smaller things. The next layer learns from the previous. Then, the last layer, I'll tell the person, "If you see the person that raises their hand for the head of an elephant and the body of the elephant, you raise your hand for elephant." So the first row is the only row looking at the picture. The second row is just looking at the people in the first row, and the third row, looking at the second row.

How the algorithm learns is, at first I'll tell you, "Raise your hand when you see the ear of an elephant." But you've never seen the ear of an elephant, and I'm not telling you what it looks like. You could have randomly raised your hand. I show you a picture, you literally, randomly, raise your hand, and then it causes a chain reaction of them raising their hand and them raising their hand. At the very end, if the person that raised their hand was the one for elephant, and the picture was an elephant, I give that person a piece of candy and say, "Good job, you're right!" He tells the next row, "Hey, good job, here's some of the candy." He tells you, "Good job."

And if it wasn't, I'll slap his hand and say, "Nope, that was a rhino," and he will slap their hands, and she'll slap your hand. That's how they learn. That's basically how neural networks and the algorithm works and how you teach it. You keep showing pictures; they'll keep randomly doing things until you incentivize them positively or negatively and then they learn. So that's learning, so going back to the question--

Kamla: So is it repetition, pattern recognition, all of that coming into play there?

Pedro: Yes. There's a lot of math into how that candy is being given or the slap on the hand happens. That's actually mathematically done. But basically, that's it.

We were talking about learning to learn to learn, that's level one of learning. You show a bunch of pictures, you slap some hands, and the algorithm learns, right? Done.

Now, maybe it only got to 90% accuracy telling the difference between a rhino and an elephant. That's not good enough. Maybe we need to have more people. Maybe a person for the feet of an elephant-- we didn't have that. So you add more people, more neurons; those are called the parameters of this network, and there's an infinite set of parameters you could choose from. You could have many, many layers, not just 3; 10, 50, 100.

So how do you know which parameters to choose? That learning to learn, the second learning, is a learning on how these algorithms are learning. I can watch you learn and say, "Yeah, you got 90%." I added another person; it got to 91%. I add another person, 92%. Okay, every time I add a person seems to get a little better. I'm learning how you're learning. Then I'll choose the parameters based on that until I find a nice set of parameters that works really well.

Until recently, people used to do that by hand. Data scientists would run 10 algorithms overnight, come back the next morning, see how they did and choose the next set of parameters. So, learning to learn is when you automate that. You have a machine that watches how the algorithms learn, and it learns to choose the parameters. That's why it's called learning to learn, because it's learning how the algorithm is learning. When I saw that, I was excited about it, because it was--

Kamla: Where did you see it?

Pedro: In a paper. There's a paper from Google called Learning to Learn. I believe it's probably about three years old now, maybe a little older. Yeah, right around there. I saw that and I thought, "Awesome." That's a little bit of what I was thinking of automating what I've been doing manually. But, when I looked at it, my first thought was, that's not exactly what I'm doing because, given this learning to learn framework, it is within a project.

So the elephant versus rhino project, the algorithm is going to learn. The machine is going to learn how it learns and choose the parameters. When you do a new project it starts from scratch; it's going to start learning to learn again. I thought if that's mimicking what I was doing, how come every time I did a new project, in a new industry, in a new field, I was better at it? And I was faster at it. I was growing as a data scientist. If I was getting better at learning to learn, that means that I was learning to learn to learn, right? So, if I was learning to learn to learn, I can teach a machine to learn to learn to learn. That's what that third learning is; it's something that carries over from project to project, and there's something that is generalizable enough that allows you to get better at this whole process. That's what I was trying to get at with one of the technologies behind Ople.

Kamla: So what you explained was, you said scientists set up models, and then they may set up 10 experiments or models, they'll come the next day and see what it is, and you talked about parameters. So those would be variables, independent, dependent variables, how they interact-- I'm just trying to understand from a lay person's perspective. So what you're doing is you're tweaking those variables to see which parameter works. Did I get that right?

Pedro: Yeah.

Kamla: Okay, so now that you've learned this, how do you know you've got the right information? Because the data has to be good. Do you have the right data? Do you have good data? Those are the questions you should be asking when you teach a machine to learn.

Pedro: Yes, I don't think data is ever going to be perfect or clean.

Kamla: Or correct.

Pedro: Or correct. There's going to be mistakes in the data for a lot of reasons. Some of them are in how it's stored; the software itself might be buggy. Some of it is human-caused mistakes in the data. I think it's unreasonable to expect that a piece of software can only work with perfect data, because no data scientist will ever get a perfect piece of data. I believe the expectation for the software should be the same as the expectation for hiring a great data scientist, which is that the data is not going to be great.

Kamla: I think I was being the devil's advocate, because when you're handling the data, sometimes you can work out why your results are not coming out right. You may do a stepwise regression, you may do other things and see what works. But if the data itself is flawed in some way, and you don't catch it in the first iteration-- that's where I'm coming from.

Pedro: Okay, so there's a couple of things there. One of them is the question of how do you know that this model that was built on this data is really working, because just telling me an accuracy number doesn't necessarily mean it's working. You want to know that when new data comes along, it's not going to make some crazy decision, right? If it's a model that is going to determine how many milligrams of a drug to give a patient, you want to make sure that it didn't learn that if the person is wearing polka dotted shoes it should give more of the drug, because clearly, that's wrong, right?

There's a paper that came out, also a few years old, but it was very interesting; it highlights that. There's this data set called ImageNet, and it's for detecting objects within images; it's computer vision. It has 1,000 different types of objects, and some model was perfectly capable of telling the difference between a Siberian husky and a wolf. They are very similar looking animals. They said, "Okay, let's understand why. Let's dig into the model and understand why and how the model could tell the difference between the husky and the wolf." They saw that it wasn't looking at the animal to make that distinction at all. It was looking at the background. If the background was snow, it was a husky. If it was woods, it was a wolf, because the entire data set consisted of these animals in those environments--

Kamla: That's an outlier.

Pedro: Right. So what happens when that animal is in a vet clinic table being operated on? It will have no idea if it's a wolf or a Siberian husky. It doesn't even understand; it wasn't looking at the animal at all for telling the difference. That's a problem. So one way of tackling that problem is if you say, "You know what, I've already trained this model with every picture ever taken in history, and every picture that will ever be taken." So even though it might be overfitting, it doesn't matter because the model is only going to execute on pictures it's seen, period. That is never the case. Right? So you can't do that.

What you can do is through what's called model transparency or model explainability. That's what that paper was hinting at. You need to understand why the model is making the decisions it's making so that you can feel comfortable that even though it hasn't seen every possible picture and variation of a husky, if you see that ah, look, there's these minute differences in the corner of the eye or the nose, then you'll feel more comfortable that when it sees a crazy new picture in a new environment, you know that it will know because you believe that the how and why it's operating is correct. It learned a real thing.

Kamla: So what your company does is B2B then, your customers are businesses or enterprises?

Pedro: Yes.

Kamla: Right, so you founded it in 2017?

Pedro: Correct.

Kamla: And you've already raised 10 million?

Pedro: Yes.

Kamla: Okay. You raised 8 million in 2018, and you said after raising money, you had a little celebration and you went home and you were stressed out.

Pedro: Yes.

Kamla: Because usually, people are very happy when they raise money. You know, you should have had a caipirinha. But why are you stressed?

Pedro: So, we raised on a Friday, we went out for lunch, celebrated with the company. It was great. I came home. The whole ecstatic feeling had already died off by the time I came home Friday. Saturday and Sunday, my wife kept asking me, "You look really depressed. What's wrong? You seem depressed." And I kept trying to think, am I depressed? I don't feel depressed, but something is off and it's making her-- you know, she can read me really well, so I trusted her reading.

Then by the end of the day, Sunday, I realized what it was. I wasn't depressed. I was confused and worried that I didn't know what to be worried about, because I had been worried about raising money. And that's always a big worry. It was on my mind for so many months that I felt relief. But then, that triggered a responsive worry, because the relief meant I wasn't worried. And I knew I should be worried about a lot of things-- we're a startup. So, the first thing I did, Sunday night and then Monday morning, was put together a list of all the things I need to be worried about now, because it's going to be even harder than before, now that we've gotten this 8 million dollars. And that was it. Once I did that, then I was back to being worried, but worried about the right things, not just worried that I wasn't worried about anything.

Kamla: And then your venture capitalist, one of your VC guys showed up.

Pedro: Yes. That same week, he showed up, and we had a nice first talk of, "I just invested this money, these are the things that I expect, this is the onboarding." Fantastic conversation; we were very open with each other. But he went through that speech about how a lot of CEOs get too excited, and they'll spend months in this like--

Kamla: Basking in the--

Pedro: In a cloud in heaven, and "I want to make sure that you're not doing that. Look, if you want to raise a Series B and you start backtracking from all the targets that you need to hit, you should have started four months ago to raise the B, so you're already four months behind."

I told him, "We're on the same page. On Monday, I was already there. But thank you for bringing that up. I'm glad that we're thinking very alike."

Kamla: So are you already worrying about the next round of funding?

Pedro: Always.

Kamla: Always. Okay, before we go any further, what is the definition of a data scientist? I mean, that's such a hot new skill set. Wherever you turn people want to become a data scientist.

Pedro: Yes, it is a very exciting field.

Kamla: Are you a data scientist?

Pedro: I believe so, yes.

Kamla: Okay. So what's a data scientist?

Pedro: It's this amalgam right now of being a mathematician, statistician, machine learning scientist, software engineer, good communicator--

Kamla: Too big, too big!

Pedro: Yeah, I had to take a big breath to even say it.

Kamla: That's a unicorn.

Pedro: Yes, I believe that's one of the biggest problems. The early days of aviation had the Wright Brothers building the planes and flying the planes. They were the pilots-- you can't build United Airlines when it requires you to train and hire a million Wright Brothers. You can't have a million Wright Brothers. I think, obviously, the field of aviation advanced to a point that the requirements for becoming a pilot got lowered and more reasonable. And then you can start mass producing pilots or different types of professionals.

I think that's where we are today with AI, data science, etc. It's that transition; the requirements are still too high and unreasonable. There are people that can do it, but it's never going to be in the masses. You're promising AI to be the new electricity? Well, if it's the new electricity, it must be able to touch everyone in every company, and it'll never do that when there's only a small select few that can do it really well, right?

Kamla: So it's not a utility still?

Pedro: It's not there yet.

Kamla: Okay. So my next question is about Elon Musk. He has a lot to say about AI, and he's a little worried. He talks about how with AI - especially what you're doing, learning to learn to learn - the machines are learning to learn. He's a little worried that we don't understand how these machines learn and how they think. What is your response to that? Should we be worried?

Pedro: Sure, why not? In those examples, the machines don't understand what they're doing. The AI of today doesn't have any real understanding beyond the paper towel dispenser in the bathroom, whether it's the one that can simulate human conversation and spit out text, or the one that can tell the difference between objects, or the one that can play Go; it's just numbers. It doesn't really understand, which is a frightening thing. One of the things that I think about this is, first, the level of AI today is not where people think it is. As far as fear, the fear comes from a misunderstanding that AI is way ahead of where it actually is today. So right there, already, there's time. To build something that's truly intelligent, you need to understand intelligence, and we don't even understand intelligence.

So, the idea that we're going to build a truly general AI - it's called Artificial General Intelligence - I think we're really far away from that. But, the fear of smarter and smarter and smarter, I like to counter that with machines that inherently make decisions that might affect the life or death of humans, and I don't mean advanced machines. I mean a bear trap, an old cartoonish bear trap. That's a machine, and it could certainly kill someone if somebody steps there, right? Would you rather have a bear trap that has less intelligence? Meaning, step, it snaps. Or one that's more intelligent, that says, "Wait a minute, this isn't a bear." Or, "This is a person," or "Maybe I don't need to snap that hard," or "Maybe I shouldn't." That's a safer machine; you make it more intelligent and it's actually going to be safer, right? The same thing with a landmine or any of these things that are making decisions: the more they know, the better.

Now, the same thing with a car, right? The car that's self-driving right now making the decision do I turn left, do I turn right, it doesn't understand the value of human life. Yet, we're letting it choose if it's going to run somebody over or not. To me, that's frightening that it's not smarter. That is frightening. The fact that it's making life or death decisions, and it doesn't understand it's just numbers and it says, "Turn left, turn right," because this algorithm says probability is better here.

Kamla: So a self-driving car is a little bit more worrisome for you?

Pedro: Well, I guess it's a more concrete example because it's a life or death situation. It's more immediate, people see it every day, and it doesn't understand. If it understood the value of human life and it had more intelligence behind it, that would actually make me feel more comfortable, because it would have some kind of sympathy. Then it goes into the conversation of, can we train machines to have empathy and sympathy? That would also make me feel more comfortable, because if they can relate to and understand human value, there's a lesser chance of harm, and there's the whole protection of ourselves and our future. I think that's probably a better way, because if we try to put in rules and constrain the intelligence, and it truly does one day become that intelligent, it will be so beyond our level of intelligence that we won't be able to outsmart it.

Kamla: Does it worry you that we could reach that point? Is that a possibility?

Pedro: I think we're very far away from that. But, that's why I said we shouldn't count on being able to outsmart it, because everybody's pushing to make it smarter than any human has ever been. There's no way we're going to outsmart it. If it's brought up from a place of, can we think of how a machine would understand sympathy and empathy, then that's a little better, because if it has those, then there's less of a chance of us needing to outsmart it. It's basically growing up like a human would, right? When you have humans that have serious problems doing things that would put them in jail, you see that a lot of times, they have less ability to feel empathy and sympathy, right?

Kamla: So that's what is missing. Let me find out. When did you first get introduced to AI? How did you encounter AI?

Pedro: AI was in undergrad, probably my second or third year. I took a course that had algorithms and then AI, and then probably three more courses as an undergrad.

Kamla: And you were hooked.

Pedro: Yes. I already knew of it; I just hadn't approached it from a technical perspective. From a non-technical perspective, yes. Even from movies and things since I was little, the whole idea of machines that are intelligent was very cool and interesting.

Kamla: And you wanted to be a mad scientist?

Pedro: Mad scientist? Still do! That hasn't changed since I was about five or six.

Kamla: That's what you told your grandma, right?

Pedro: Right.

Kamla: This is growing up in Brazil?

Pedro: Yeah.

Kamla: Okay. So how did you come here to the US then?

Pedro: For college. When I finished high school in Brazil--

Kamla: You wanted to study here?

Pedro: Right.

Kamla: Why?

Pedro: Just a lot of really great universities, and the leading research; I think that was the idea.

Kamla: So you thought this was the place? But then you came just a month before schools were going to open, without a seat anywhere. You were not registered at any school. And you ended up in the Midwest? Why?

Pedro: Why the Midwest? Well--

Kamla: People from Brazil, go to New York. I'm just wondering why the Midwest, you know, the middle of snow?

Pedro: Yes, that's true. I wanted to be close to family. We had family in two or three places in the US, and I wanted to see which place had some great university options. And that was one of them.

Kamla: So that's how you landed there. So then tell us, how did you start studying? Because you arrived just a month before schools opened.

Pedro: Right. So, I wanted to go to the University of Notre Dame; they were already closed for applications. Holy Cross College is a community college - actually, they're a full college now - across the street from Notre Dame. Fantastic place, great people. So I enrolled there, and I wanted to transfer into the engineering program at Notre Dame. I was told, "Look, if you're coming from Holy Cross College into Notre Dame, it'll be easier if you transfer into Arts and Letters and then spend the year and then transfer internally within Notre Dame," which meant losing a year, and I really didn't want to do that.

So I said, "What do I need to do?" And they said, "Well, you need to take the exact same curriculum for the first two semesters before you transfer to Notre Dame." There's a partnership program between St. Mary's - that's an all-girls school - Notre Dame, Holy Cross College, and Indiana University South Bend. I ended up taking courses at four different universities in the same semester in order to do that. I was driving a lot.

Kamla: I can imagine.

Pedro: Yeah. I can say that in that first year I attended a Catholic university, an all-girls university, a community college, and a state university, IUSB-- I attended all four of those my first year.

Kamla: And you got all your grades with the requirements.

Pedro: I was able to transfer into the engineering program.

Kamla: So then you graduated from Notre Dame?

Pedro: Yes.

Kamla: And then you went and did your masters where?

Pedro: Indiana University.

Kamla: And then you went to Yale?

Pedro: Yes.

Kamla: Why Yale?

Pedro: I visited a lot. I think I visited 11 different schools.

Kamla: So this time you visited all the schools you wanted to go to?

Pedro: Oh, yes. I visited, I think, 11 different schools to make a decision. I liked the program. I liked the professor a lot. I thought that was the best opportunity for me to have a very unique type of Ph.D. program for what I wanted.

Kamla: What was unique about the Ph.D. program?

Pedro: The uniqueness was more with the professor; the program was fantastic, and I liked that. The professor was just out of this world-- how he runs things, so much to learn, besides the academic part, on the management side, because he has such a big lab. I mean, our best year in that lab, I think the professor published 42 papers, academic papers at the top publications, Science and Nature and Genome Research. 42 is--

Kamla: A lot

Pedro: A lot. That's like a lifetime of a professor. And that was in a year. I think the other years he did 35, 33. I was attracted to that, that level of efficiency and so those two aspects--

Kamla: You liked the intensity?

Pedro: The intensity, the stress, the pressure, I love; it pushes you to do more than what you otherwise would if you felt comfortable. Also, the ability to manage that, the ability to actually get work done at that rate and that level, was inspiring to me. And I wanted to learn from him.

Kamla: And you're already married at this time?

Pedro: Yes, I got married after undergrad, right before my master's.

Kamla: You were married and you're in a Ph.D. program. This is in bioinformatics?

Pedro: Computational biology.

Kamla: Okay. And then your wife says, "Let's have a kid." And you freak out?

Pedro: Yes, a little bit. We had talked about it. We were going to have babies and kids, and I loved that idea. I just didn't know when, exactly, and she brought that up. We had been married for two years, and I was in a Ph.D. program. I thought, I'm a student; I'm not making a big salary. I have to finish my Ph.D. The two big things in my head were one, financially, I'm not ready. I'm not a provider yet. And secondly, will I be able to finish my Ph.D.? Will I be able to get all this work done? I had this very long argument, all these reasons why not. I think it took her about 30 seconds to a minute to change my mind. She said, "Look, it's always going to be hard having a kid. It'll never be easy. And there's always going to be something going on in our lives. We're never going to be 'Oh, my goodness, there's nothing that I'm doing now, let's have a kid.' It's going to be your first job, then your first promotion. And then your first house that you buy." It made perfect sense. It played in my head, "Yeah, she's right." The arguments I had just used, I'd be able to use in perpetuity till the day we die, so it was a flawed argument, because it was incompatible with the idea that I want to have kids. So I said, "Okay, you're absolutely correct. Let's do it."

Kamla: So you had a kid and not only one, but now you have four kids?

Pedro: Four kids, yes.

Kamla: And the fourth kid was born just before you were running out of your money at the startup.

Pedro: Yes, it was stressful. Isla was born, I think, within two weeks of when the money we started the company with was going to run out. Yes.

Kamla: So you like to live on the edge?

Pedro: Yeah, I guess. I guess that's not my edge. But for some people, that's the edge.

Kamla: How does your wife handle that?

Pedro: Beautifully, spectacularly. She is wonderful.

Kamla: She does not freak out?

Pedro: Well, everybody does. If I wasn't freaking out either I'd be worried about my sanity, but it's the right kind of freaking out and then working through it, and thinking through it.

Kamla: Okay, my final question is those around startups, they usually have problems doing business calls at home, they end up sitting in the car and making calls. Have you done that?

Pedro: No, I've done it from home. But there will be two kids next to me, and there'll be a kid in my arms, and it's intense sometimes.

Kamla: You don't go into the car and do the calls?

Pedro: No. Sometimes, I'll find a room that's empty. But a lot of times, inevitably, they'll find me, they'll be like, "Daddy's here!" I can manage, it's fine.

Kamla: Pedro, muito obrigado for doing this interview. Thank you so much.

Pedro: Thank you. This was a lot of fun.

Kamla: Thank you for watching. If you missed any of our episodes, you can watch them on our website and join me next week for another new conversation. Until then, goodbye.

End Transcript
