Mark Cuban Is Dead Wrong

The recent excitement about AI is undeniable. In every industry, there is room for AI to shine and bring tremendous value, and we now believe that the technology has evolved enough to deliver.

The number of startups working in this space (or at least claiming to) is massive, and the amount of noise is equally daunting.

Mark Cuban has recently made a statement claiming that the world’s first trillionaire will be someone who masters AI. My biggest point of disagreement with that statement is with the word “someone”.

I believe that, when done right, AI can beat humans at virtually any task, as recent examples such as the one below have shown:

The Washington Post - Google’s AlphaGo beats the world’s best Go player — again

I have always tried to frame problems for machines as close to reality as possible, while being careful not to introduce human bias about how to solve them. Both of these aspects matter when using AI. If the problem is not framed correctly, it will not be solved correctly. If human bias is baked into the problem formulation, you restrict the AI's ability to find the true best possible solution, which might be remarkably different from one a human would find.

Problem framing is a topic I plan to write a dedicated article about, as I think it is important and relevant enough to our field to deserve one.

“I am telling you, the world’s first trillionaires are going to come from somebody who masters AI…” Mark Cuban

Mark Cuban: The world’s first trillionaire will be an artificial intelligence entrepreneur

My point is that the world's first trillionaire will be the human who creates the first AI that can master AI. One of the tricks here is, again, the formulation of the problem: what does it mean to master AI? I do not believe it is simple hyperparameter optimization, as discussed in articles such as this:

Artificial intelligence may soon be able to build more AI

Most of the articles on this topic, also referred to as AutoML, focus on either tuning algorithms or configuring them. Going beyond choosing parameters and neural network architectures, one paper innovated by automatically generating the optimization functions used in backpropagation. This type of thinking is simply not innovative enough. We are still constraining AI too much with our own biases and beliefs. To achieve a true breakthrough, we need to think very carefully about what it is we are actually doing when we solve problems, build AI models, and develop new algorithms. This exercise is not easy, but it allows us to step back and remove our artificially created constraints. I have been working on this and will soon be writing more articles on the topic.
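To make concrete what "simple hyperparameter optimization" looks like in practice, here is a minimal sketch using scikit-learn's RandomizedSearchCV. The dataset, model, and search ranges are my own illustrative choices, not anything from the articles above or from Ople.ai; the point is that a human still hand-picks the search space, which is exactly the kind of built-in bias being discussed.

```python
# Minimal sketch of "simple hyperparameter optimization": random search over a
# human-chosen search space. Dataset, model, and ranges are illustrative only.
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

# The search space below is picked by a human: the kind of constraint an AI
# that truly "masters AI" would not be limited to.
param_distributions = {
    "n_estimators": randint(50, 500),
    "max_depth": randint(2, 20),
    "min_samples_leaf": randint(1, 10),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=25,   # try 25 random configurations
    cv=5,        # score each with 5-fold cross-validation
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```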

Back to the issue at hand: the AI wars will be won by AI built by AI, and by whoever builds the builder. We at Ople.ai are working towards this and have seen promising results so far. We have been working directly with some of the world's top data scientists to compare their work and results on several datasets against our AI-building AI. We will be launching a product focused on the AI community soon (in 2018) related to these efforts. Having AI beat people at chess, Go, Pac-Man, or any other game is cute, but we are setting ourselves up to have AI beat people at AI.

This is an exciting field, and we will be adding more to the science and the discussion shortly. In the meantime, if you are a kickass data scientist and would like to compete against our AI, or perhaps join the team, let me know.

Remember, in the game of AI, AI will eventually (very soon) win 10 times out of 10.