Avi Goldfarb & Ajay Agrawal

Supported by Greenhaven Road Capital, finding value off the beaten path.

Avi Goldfarb and Ajay Agrawal spoke at Google about Prediction Machines. They wrote the book because they noticed more AI startups in Toronto and more AI investing in the Bay. “We got to see both the flood of companies coming into our lab in Toronto and the excitement that was starting to percolate in the Bay Area.” They had a “that’s interesting” moment.

Like other AI presentations, Goldfarb and Agrawal articulate the nuance. “The reason we’re talking about AI in 2018 and not in 2008 is not because of C3PO or Terminator technology, it’s because of advances in machine learning.” What is machine learning? Better, faster, and cheaper predictions.

Donning economists’ lenses – like Hal Varian did 15 years prior – reveals this framework:

1/ When something gets cheaper, we use more of it.
2/ When something gets cheaper, we use less of its substitutes.
3/ When something gets cheaper, we use more of its complements.

If the price of coffee drops, we buy more coffee (1), the value of tea drops (2), but the value of sugar and cream increases (3). Now, what if that happens to something like arithmetic?
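Here is a rough way to see those three effects in code – a toy sketch with made-up elasticity numbers, not anything from the talk or the book; the sign pattern is the point:

```python
# Toy illustration of own-price, substitute, and complement effects.
# Elasticity values are invented; only the signs matter here.

def demand_change_pct(elasticity, price_change_pct):
    """Percent change in quantity demanded for a given percent price change."""
    return elasticity * price_change_pct

coffee_price_change = -50  # coffee gets 50% cheaper

print(demand_change_pct(-1.2, coffee_price_change))  # coffee itself: +60% (use more of it)
print(demand_change_pct(+0.4, coffee_price_change))  # tea, a substitute: -20% (use less)
print(demand_change_pct(-0.3, coffee_price_change))  # cream, a complement: +15% (use more)
```

Swap coffee for prediction and the same sign pattern tells you where cheap machine prediction should raise value and where it should lower it.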

“Your computer does one thing, arithmetic. But once arithmetic is cheap enough we find all sorts of opportunities for arithmetic that we might not have thought of before.”

Like?

“Photography used to be a chemistry problem, but as arithmetic became cheap we transitioned to an arithmetic-based solution.”

And?

Like accountants, who used to specialize in addition but now focus more on inquisition. “There are still accountants because it turned out the people who are best positioned to do all the arithmetic were also the best positioned to understand what to do when a machine did the arithmetic for them.”

Humans are great at finding new ways to use new tools. Cheap coffee has knock-on effects. Cheap arithmetic has knock-on effects. Cheap predictions have knock-on effects. And we are making predictions all the time about loans, tumors, behaviors, and more.

Goldfarb wants people to ask, “What are the core complements to prediction? What are the cream and sugar that become more valuable when coffee is cheap?” He thinks it’s decision making. “Prediction is not decision making, it is a component of decision making.” Other components are data, actions, and judgment. These are the tasks humans are better at than machines, whereas machines are better at the arithmetic.
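A minimal sketch of that split, with invented payoff numbers: the machine supplies the prediction, a human supplies the judgment about what each outcome is worth, and the decision falls out of the two.

```python
# Prediction + judgment -> decision. All numbers are invented for illustration.

p_rain = 0.3  # the machine's prediction: probability of rain today

# Judgment: how much each action is worth under each outcome
payoffs = {
    "take umbrella":  {"rain": -1,  "no rain": -1},  # small hassle either way
    "leave umbrella": {"rain": -10, "no rain": 0},   # getting soaked is costly
}

def expected_value(action):
    return p_rain * payoffs[action]["rain"] + (1 - p_rain) * payoffs[action]["no rain"]

best_action = max(payoffs, key=expected_value)
print(best_action)  # "take umbrella" under these payoffs, even at only a 30% chance of rain
```

Change the payoffs (the judgment) and the same prediction can lead to a different decision, which is the point of keeping the two separate.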

Around 17:30 Agrawal takes the lectern and provides a helpful framework. Like Paul Daugherty, Hal Varian, and Nicholas Christakis pointed out, people are excited but perplexed about AI and Machine Learning. It’s Dilbert’s boss suggesting, “we 3d print a blockchain and html it into a Bitcoin.” Daugherty said, “The problem a lot of business executives have is ‘What do you do with it all?'”

Agrawal points us in the right direction. For starters, think small. “AI doesn’t do workflow, it does tasks.” At Goldman Sachs, they figured out that an IPO has 146 distinct steps. “When organizations show up and say where do I even start, this (The AI canvas) is a coarse description of how we start.”

This canvas is simple. “The key point here is that there are senior-level people who have never written a line of code but can sit down and start filling these things out.”
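One way to picture the canvas is as a form with a handful of labeled boxes per task. This is a hypothetical rendering – the field names roughly follow the canvas as described in the book, and the churn example is invented:

```python
# A lightweight, hypothetical rendering of one AI canvas entry for a single task.

from dataclasses import dataclass

@dataclass
class AICanvasEntry:
    prediction: str     # what gets predicted
    judgment: str       # how we weigh the different outcomes and errors
    action: str         # what we do once we have the prediction
    outcome: str        # how we measure whether it worked
    input_data: str     # data available at prediction time
    training_data: str  # data used to train the model
    feedback_data: str  # data collected afterward to improve the model

churn = AICanvasEntry(
    prediction="Which customers will cancel next month",
    judgment="Cost of a retention offer vs. value of a saved customer",
    action="Send a targeted retention offer",
    outcome="Churn rate among contacted customers",
    input_data="Recent usage and billing history",
    training_data="Historical usage with cancellation labels",
    feedback_data="Whether contacted customers renewed",
)
```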

The big effects will come, Agrawal said, when the returns from AI create cascades. “A common vernacular for this type of phenomenon is disruption.” For example, right now Amazon’s business model is shopping-then-shipping, but what if their AI becomes good enough that their model flips to shipping-then-shopping? (26:10)

This may not need to happen at perfect prediction, just some good enough level. “There’s some number, it doesn’t have to be a Spinal Tap level of prediction accuracy, but there is some number where when they get to that level of prediction accuracy that someone at Amazon says ‘We’re good enough at predicting what people want, why are we waiting for them to order it, let’s just ship it.'”

“Amazon becomes transformational when the recommendations get so good that they no longer have to have the same business model as the Sears catalog.”
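To make the “some number” concrete, here is a back-of-the-envelope sketch with invented costs. It just asks: at what hit rate does shipping before the order beat waiting for it?

```python
# Back-of-the-envelope: when does ship-then-shop beat shop-then-ship?
# All costs and the baseline are invented for illustration.

margin = 20.0           # profit on an item the customer keeps
return_cost = 8.0       # cost of taking back an unwanted item
baseline_profit = 12.0  # expected profit per customer under shop-then-ship

def ship_then_shop_profit(hit_rate):
    """Expected profit per predictively shipped item at a given hit rate."""
    return hit_rate * margin - (1 - hit_rate) * return_cost

for hit_rate in (0.5, 0.6, 0.7, 0.8, 0.9):
    verdict = "switch" if ship_then_shop_profit(hit_rate) > baseline_profit else "wait"
    print(f"{hit_rate:.0%}: {verdict}")
# Under these made-up numbers the flip happens around 80% accuracy, well short of perfection.
```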

But betting on predictions will only work – like any other strategy – in the right culture. “The allocation of scarce resources is what makes something a strategy.”

Like marketers and designers, AI researchers face the problem of selling this to the boss. Researcher Grant McCracken said, “When a senior manager says, ‘Fine, that’s what you think, where are the numbers?’ And the best we can do is say, ‘Just trust us.’ It’s like, yeah right, ‘I’m not trusting my career, my children’s opportunity to go to college on your impression. Where’s the data?’ By data, they don’t mean, I did an ethnography in someone’s kitchen. They mean, please could I have some numbers.”

Numbers let people appear rational but we don’t have a (good) rational answer for what the killer app for AI is. Agrawal said, “I think our barrier is imagining all the things that we’re going to do.” To reimagine is one of Paul Daugherty’s key points too.

“The person who asks good questions of their data has a higher return to that part of their skill set.” Seth Godin said there are two things we should teach in school: leadership and how to solve interesting problems. With some imaginative intelligence, AI can help with the latter.

Thanks for reading.
