Supported by Greenhaven Road Capital, finding value off the beaten path.
Andrew Ng has quite the résumé for talking about AI: he built the AI team at Baidu, cofounded Coursera, and led Google Brain. His November 2017 talk on the Artificial Intelligence Channel provides an overview.
We’re talking about AI now because we have enough data, passable algorithms, and people programming the two.
AI is a tool, like email and spreadsheets, and it’s changing how we work. Ng gave this rule of thumb: “anything a typical person can do with less than a second of thinking we can probably now automate.” Ajay Agrawal said much the same: “AI doesn’t do workflow, it does tasks.” At Goldman Sachs, for example, they figured out that an IPO has 146 distinct steps.
Ng gave the example of a security guard’s tasks: notice, identify, categorize, respond, repeat. AI can do some of those things, so maybe it will replace security guards. Jobs have always been about outcomes. People don’t want quarter-inch drill bits, they want quarter-inch holes. Job security will be about using new technology to create those holes.
And people adopt new tools to adapt all the time. Blackboards became smartboards, books became digital, and grade books went online, but teachers still teach.
To date, machines have vanquished only one occupation: elevator operator. Hal Varian said, “Automation generally eliminates dull, tedious, and repetitive tasks.” AI, like other technologies, will make people better at their jobs. Accountants used to count; computers used to compute.
Pedro Domingos told Shane Parrish, “People sometimes think the easiest jobs to automate are the blue collar ones but our experience is the opposite. It’s often white collar jobs that are easier to automate.” Computers don’t miss the gorilla inserted into lung scans.
Robots are not coming for our jobs but they will change them. Paul Daugherty, author of the book Human + Machine, said, “The skills we see increasing in importance in the human plus machine age are creativity, reasoning, and socio-emotional intelligence.”
In Average is Over, Tyler Cowen proposes that income divergence will continue based on this qualifying question: can you work well with machines? The have and have-not trends “stem from some fairly basic and hard-to-reverse forces: the increasing productivity of intelligent machines, economic globalization, and the split of modern economies into both very stagnant sectors and some very dynamic sectors.”
If this were a movie about the changing economy AI could be a villain from central casting.
Ng compares AI to electricity, noting it will be ubiquitous thanks to more data and better models. Data accumulation is like the advice on planting a tree: the best time was thirty years ago, the next best time is today. That’s what Google did.
GOOG-411 began in 2007 as a directory information service. Google economist Hal Varian said the call-in service “learned how to recognize voices, learned how to recognize accents, learned all these different things in this very limited domain of directory information. That was enough to get started and now I think we have one of the best voice recognition systems in the world.”
We’ve been saying ‘Hey Google’ for over a decade.
Today, 2018, algorithms are less important than the data and people are more important than both. Varian said, “These days the scarcest factor by far is expertise.” Rory Sutherland worried that there are more data sets than mathematicians competent enough to handle them. But this prioritization may be shifting.
In a talk about his book AI Superpowers, Kai-Fu Lee suggests a shift of AI preeminence from the United States to China. The U.S. has the leading researchers but China has the best data, thanks in part to the number of people and the environment they live in. Mobile payments are one example, Lee explained, “People’s spending patterns are so much more valuable than their clicking patterns.”
Ng noted that more than 10% of Google and Baidu searches come through voice “because voice recognition is finally accurate enough.” Product adoption or abandonment is largely driven by whether something is ‘good enough’, an idea at the heart of Disruption Theory.
AI, like other tools, can improve the way people work but will never be perfect.
Data is the new electricity. Err, I mean oil, data is the new oil! Oops! Should have said gold, yes, data is the new gold! Wait, that’s not it either? Analogies can help but data is different.
Hal Varian said, “Some people say ‘data is the new oil’. My response is, they have one thing in common. To be useful they have to be refined. A barrel of oil isn’t worth much but turn it into gasoline, kerosene, or hydrocarbons and it’s worth something. It’s the same thing with data…(but data is non-rival)…It’s a mistake to talk about data ownership because it’s too narrow a concept.”
This is a divergence in AI opinion. People like Varian think data ownership is the wrong perspective. People like Ng believe data ownership is the only way to earn a competitive advantage: “algorithms from a company point of view are for the most part not defensible.”
Ng teaches his Stanford students about a virtuous loop where data leads to a product that leads to users who generate more data. 🔁 And, “after a period of time, you might have enough data to yourself have a defensible business.”
For example, Blue River Technology uses “cameras, computers, and artificial intelligence to allow Ag machines to see every plant in a field.”
Ng concluded his presentation with one lesson from his internet days: selling things online does not make you an internet company. “What defines the internet company is whether or not you’ve architected your organization to leverage internet capabilities to do the things that the internet allows you to do really well.”
Things like A/B testing, short product cycles, and bottom-up decision making. Internet companies had to “push decision making down from the CEO to the engineers and product managers because the internet products and uses are so complicated that a lot of knowledge about what needs to be done lives only in the heads of the engineers and product managers.”
In other words, it has to be a decentralized command. Pedro Domingos said, “Machine learning is computers programming themselves instead of having to be programmed by us…In general terms, tell them what you want them to do and let them figure out by themselves how to do it.”
The challenge then is to figure out what it means to be an AI company.
h/t Patrick O’Shaughnessy on Twitter.