In this podcast, Seth discusses what artificial intelligence is and what responsibilities we have as the creators of this technology.
What is AI anyway? Seth’s definition of artificial intelligence is everything a computer cannot do yet. This means the definition of AI changes over time as humanity keeps moving the goalposts. For example, the ultra-intelligent “AI” that beat chess grandmasters decades ago is now just a fancy computerized toy.
What is increasingly clear is that AI is more about tasks than about intelligence. Any problem AI can solve comes with boundaries, and AI can operate only within those boundaries because they carry a finite number of outcomes.
Computers are bad at handling the unexpected. When a computer encounters something that falls outside those boundaries, it has difficulty handling the exception.
When we give a computer a bounded set of inputs and a known variety of decisions and outputs, sooner or later it is going to get better at dealing with them than a human. In other words, computers are good at doing tasks, manipulating data from known sources, and handling a finite set of decisions.
If our job consists of doing a task, AI will eventually take it over. When our jobs involve a finite number of data sources and decisions, AI will eventually do them faster and cheaper.
Our opportunity is to figure out how to move from tasks to projects: projects that involve organizing complex tasks in the face of a changing world. It turns out that there are lots of things AI cannot do but people can, and vice versa. Chief among the things on the human side of the ledger is our ability to show up with emotional labor when it is required.
Learning to do that emotional labor will require us to take responsibility. One problem we have with AI is that we avoid taking responsibility for what AI does. When AI does tasks and exhibits bias, we are at a loss to do the hard work necessary to make the situation right for humans. When we fail to do good AI work, it is easy in our culture to blame the system.
AI is a machine that was built by a person to be used by a person. When AI makes a mistake, we are responsible for its actions. The mistake we are making is getting all excited about how all the jobs are going to disappear and worrying about where the evil AI is coming from.
When we build a system that we are inclined to trust, even though it has bad or no judgment, the responsibility is on us. Judgment is the last frontier for AI. Right now, AI does tasks, and it does them without knowing why it is doing them. We may think it is our job to give computers a consciousness, but that is not going to happen.
On the other hand, it is our job to own the outputs of what we are teaching these machines to do.