Three Questions that Keep Me Up at Night

A Google interview candidate recently asked me: “What are three big science questions that keep you up at night?” This was a great question because one’s answer reveals so much about one’s intellectual interests – here are mine:

Q1: Can we imitate “thinking” from only observing behavior?

Suppose you have a large fleet of autonomous vehicles with human operators driving them around diverse road conditions. We can observe the decisions made by the human, and attempt to use imitation learning algorithms to map robot observations to the steering decisions that the human would take.
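To make the setup concrete, here is a minimal behavioral cloning sketch in PyTorch. The observation and action dimensions, network shape, and the dummy batch are placeholders standing in for encoded camera features and logged human steering, not a real pipeline:

```python
# Minimal behavioral cloning sketch: regress the human's action from the
# robot's observation. All dimensions and data below are placeholders.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 512, 2  # e.g. encoded camera features -> (steering, throttle)

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def train_step(obs_batch, human_action_batch):
    """One supervised step on logged (observation, human action) pairs."""
    pred = policy(obs_batch)
    loss = nn.functional.mse_loss(pred, human_action_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for fleet logs.
print(train_step(torch.randn(64, OBS_DIM), torch.randn(64, ACT_DIM)))
```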

However, we can’t observe what the homunculus is thinking directly. Humans read road text and other signage to interpret what they should and should not do. Humans plan more carefully when doing tricky maneuvers (parallel parking). Humans feel rage and drowsiness and translate those feelings into behavior.

Let’s suppose we have a large car fleet and the dataset is so massive that data comes in faster than we can train on it. If we train a powerful black-box function approximator to learn the mapping from robot observation to human behavior [1], and we use interactive data-collection techniques like DAgger to combat the compounding errors that arise when the learner drifts off the human’s state distribution, is that enough to acquire these latent information-processing capabilities? Can the car learn to think like a human, and to what extent?
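For reference, DAgger itself is a simple loop: roll out the current policy, ask the human what they would have done in the states the policy actually visited, and retrain on the aggregated corrections. A schematic sketch, where `env`, `expert_action`, and `fit` are hypothetical stand-ins rather than a real simulator or labeling interface:

```python
# Schematic DAgger (Ross et al., 2011). `env`, `expert_action`, and `fit`
# are hypothetical stand-ins; in the driving setting the "expert" is the
# human operator being imitated.
def dagger(policy, env, expert_action, fit, n_iters=10, horizon=1000):
    dataset = []  # aggregated (observation, expert action) pairs
    for _ in range(n_iters):
        obs = env.reset()
        for _ in range(horizon):
            act = policy(obs)                          # the learner drives...
            dataset.append((obs, expert_action(obs)))  # ...the expert labels
            obs, done = env.step(act)
            if done:
                break
        policy = fit(policy, dataset)  # supervised re-fit on the aggregate
    return policy
```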

Inferring low-dimensional unobserved states from behavior is a well-studied technique in statistical modeling. In recent years, meta-reinforcement learning algorithms have increased the capability of agents to change their behavior in the presence of new information. However, no one has applied this principle at the scale and complexity of “human-level thinking and reasoning variables”. If we use basic black-box function approximators (ConvNets, ResNets, Transformers, etc.), will that be enough? Or will it still fail even with a million lifetimes’ worth of driving data?
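One concrete way to pose the question: train a recurrent policy purely to predict the human’s actions, so that whatever weather, signage, or intent information the task requires has nowhere to live except the hidden state. A rough sketch with placeholder dimensions and dummy data in place of real driving logs:

```python
# Latent state from behavior alone: the GRU hidden state is the only memory
# available, and it is shaped purely by action prediction.
# Dimensions and data are placeholders.
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    def __init__(self, obs_dim=512, act_dim=2, latent_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, latent_dim, batch_first=True)
        self.head = nn.Linear(latent_dim, act_dim)

    def forward(self, obs_seq):                # obs_seq: (batch, time, obs_dim)
        latents, _ = self.rnn(obs_seq)         # (batch, time, latent_dim)
        return self.head(latents), latents

model = RecurrentPolicy()
obs_seq = torch.randn(8, 100, 512)             # dummy driving sequences
human_actions = torch.randn(8, 100, 2)
pred_actions, latents = model(obs_seq)
nn.functional.mse_loss(pred_actions, human_actions).backward()
# The open question: do `latents` come to encode anything like reasoning,
# or only shallow statistics of the observation stream?
```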

In other words, can simply predicting human behavior lead to a model that can learn to think like a human?

One cannot draw a hard line between “thinking” and “pattern matching”, but loosely speaking I’d want to see such learned latent variables reflect basic deductive and inductive reasoning capabilities. For example, a logical proposition formulated as a steering problem: “Turn left if it is raining; right otherwise”.
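A probe for that particular proposition might look like the sketch below, where `policy` and `render_scene` are hypothetical stand-ins and the two rendered scenes differ only in whether it is raining:

```python
# Hypothetical probe: does the learned policy implement
# "turn left if raining, right otherwise"?
# `policy(obs)` is assumed to return a scalar steering command
# (negative = left, positive = right); `render_scene` is a stand-in renderer.
def probe_rain_rule(policy, render_scene, n_trials=100):
    correct = 0
    for _ in range(n_trials):
        for raining, want_left in [(True, True), (False, False)]:
            obs = render_scene(raining=raining)
            turned_left = policy(obs) < 0
            correct += int(turned_left == want_left)
    return correct / (2 * n_trials)  # 1.0 = rule followed perfectly
```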

This could also be addressed via other high-data environments:


What do you think?
