Panel Discussion – Deep Learning: Where Do We Go From Here?

Last Tuesday, our founder & CEO Pedro Alves was invited to speak on the panel “Deep Learning: Where Do We Go From Here?”, moderated by Adam Vogel of SignalFire. Pedro discussed the current state, limitations, and future of deep learning with fellow panelists Tom Annau, Walid Saba, and Erin LeDell, in front of an audience of data scientists, machine learning engineers, and AI researchers.

The panel started by acknowledging the current limitations of deep learning. Tom shared slides showing the kinds of mistakes deep learning models make, such as reading a stop sign as a 45 mph speed-limit sign. The sign in question had been altered with stickers designed to look like ordinary graffiti, and those small physical perturbations were enough to fool the classifier (see the paper “Robust Physical-World Attacks on Deep Learning Visual Classification” by Eykholt et al. for more detail).

[Image: A stop sign altered with sticker “graffiti” that fools a classifier]
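The paper’s attack uses printed stickers in the physical world, but the underlying idea, a small targeted perturbation that flips a classifier’s output, is easy to sketch digitally. Below is a minimal fast gradient sign method (FGSM) sketch in PyTorch; FGSM is our illustrative stand-in, not the paper’s sticker algorithm, and `model`, `image`, and `true_label` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """One-step fast gradient sign attack on a batch of images.

    `model` is a placeholder for any differentiable classifier that
    returns logits; `image` is a float tensor scaled to [0, 1].
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation this small is often invisible to a human, yet it can flip the predicted class; the stop-sign stickers exploit the same brittleness at physical scale.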

Pedro shared an example that highlighted the importance of model transparency. A few years ago, researchers tested how well a deep learning model could tell wolves from huskies (the well-known experiment from the LIME paper, “Why Should I Trust You?” by Ribeiro et al.). The results looked phenomenal: the model identified the animal with striking accuracy. But the underlying question was, “how is it making its predictions?” By looking inside the black box, the researchers found that the model was not looking at the animals at all. It was looking at the background. If the background was snow, it output wolf; if the background was woods, it output husky. In the authors’ words, the model had learned to be a snow detector.

[Image: Husky vs. wolf example]

What if the animal were lying on a vet’s table? The model would not be able to identify it correctly. So what caused this? The data. Pedro continued by noting that deep learning relies heavily on its training data. Within the boundaries set by that data, a model performs well; outside those boundaries, as with a husky swimming in a lake, it behaves unpredictably, which is worrying depending on where the model is deployed.
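To make “looking inside the black box” concrete, here is a minimal sketch of that kind of probe using LIME, the explanation technique from the study above. `model` and `image` are illustrative placeholders, not code from the paper.

```python
import numpy as np
from lime.lime_image import LimeImageExplainer
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # LIME passes in a batch of perturbed copies of the image and
    # expects class probabilities back, shape (n_images, n_classes).
    # `model` is a placeholder for any trained image classifier.
    return model.predict(np.asarray(images))

explainer = LimeImageExplainer()
explanation = explainer.explain_instance(
    image,             # the photo to explain, as an RGB numpy array
    predict_fn,
    top_labels=2,      # e.g. husky and wolf
    num_samples=1000,  # perturbed samples used to fit the local surrogate
)

# Highlight the superpixels that drove the top prediction. In the
# wolf/husky study, this kind of view exposed the snowy background,
# not the animal, as the model's real evidence.
top = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    top, positive_only=True, num_features=5, hide_rest=False
)
highlighted = mark_boundaries(img / 255.0, mask)
```

The highlighted regions are a human-readable answer to “why did the model say wolf?”, which is exactly the kind of transparency Pedro went on to call for.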

When the discussion moved on to the next topic, “Where are we with AI tools that democratize data science?”, Pedro re-emphasized model transparency. Specifically, he asserted that for AI to be more widely accepted, we need human-readable model transparency: something that lets anyone look at a model and understand why it makes the predictions it does. AI models are ultimately used by decision-makers who are not familiar with data science. If those professionals cannot understand a model, they will have a hard time trusting it, and they will not use it to solve their business challenges.

Walid, the most senior AI expert on the panel, shared how AI has evolved since the 1980s. In terms of core technology, we are still using the same foundations of deep learning; what has changed is that we now have the processing power and tooling to analyze data and try new things, like adding more layers to a model to increase accuracy. Walid also cautioned that we should be mindful of the difference between neural networks and the actual human brain. He argued that the human mind performs multiple kinds of reasoning in parallel - probabilistic, logical, inductive, and deductive - and that we are still far from mimicking the human mind with computer algorithms.

Pedro built on Walid’s point and asked why we are even trying to mimic the human brain. We do not understand how our own brains work, so how can we build an algorithm that mimics them? Pedro’s theory is that technology does not advance when we try to copy nature; it advances when we are inspired by nature. For example, planes don’t flap their wings. Do planes fly faster than birds? Yes. Using AI merely to mimic what a human does, then, artificially limits what AI can accomplish.

The panel faced one final question: “Where do we go from here?” Tom and Walid revisited the shortcomings of neural networks; the biggest gap they saw was models being limited by their training data. Pedro agreed that this is a problem, but added that the biggest gap he sees is getting AI implemented in real businesses. Shortcomings in the data may narrow the scope of problems models can solve, but if models never make it into production, there is no point trying to bridge that gap. We first need to close the gap between building AI and running it in production, he argued, and start solving real cases, so that more companies invest in and use AI.

The graph below is the example he used:

[Image: The AI Gap]

Walid followed up on Pedro’s comment and added that closing the ROI gap may mean giving business professionals more tools. Understanding the current state of AI and providing tools that accelerate data scientists, such as removing the need for manual feature engineering and hyperparameter tuning, would be a huge win for AI. Audience members worried about AI taking over jobs, but Walid and Pedro explained that while AI may take over repetitive tasks, the result would be an increase in human productivity.
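As one concrete illustration of that kind of accelerator, here is a minimal sketch of automated hyperparameter search using scikit-learn’s RandomizedSearchCV. The library, model, and dataset are our choices for illustration, not tools the panel prescribed.

```python
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Instead of hand-tuning, sample 20 random configurations and let
# cross-validation pick the best one.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 500),
        "max_depth": randint(2, 20),
    },
    n_iter=20,   # configurations to try
    cv=5,        # 5-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

A business user never sees this loop; the point is that the repetitive tuning work happens automatically, freeing the data scientist for higher-value decisions.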

The session ran well over its scheduled time, but the panel discussion and the audience’s questions were practical and helpful. If you would like to hear more of Pedro’s views on the current state of AI and how we can bridge the gap, contact us.
