Automating Wisdom

Jul 14, 2024

Generated by ChatGPT

As the story goes, Croesus, the King of Lydia, sought the wisdom of the Oracle of Delphi before attacking the Persian Empire. She responded, "If you cross the river, a great empire will be destroyed."

Croesus, of course, failed to understand that it was his empire she was speaking of.

Was the Oracle's answer wise or merely probabilistic? In war, there are three outcomes: one side loses, the other loses, or the two reach a stalemate. Her answer covered both losing outcomes at once; it's the sort of answer that wouldn't require much wisdom. She played the odds and was right. Our current AI systems could give the same answer: the result of math, not wisdom.

And yet, her answer could've been wise. It contained the sort of wisdom acquired through experience. The wisdom to know that when two forces collide, some damage is bound to occur - a practical wisdom. Had Croesus not been foolish, he might have asked the Oracle, "Well, which empire are you talking about?" It's the sort of question you would only know to ask if you knew the story of Croesus or had the experience yourself.

The more we use AI systems, the more we will find ourselves in the King's position: seeking the wisdom of our AI oracles and relying on them to guide our actions. We will also depend on these oracles to make wise decisions without our input. Here's the interesting question: Could we build wisdom into our AI systems? Could we ensure the advice they give is wise and not just probabilistic? Would they point out our blind spots, our gaps in experience, knowledge, and sound judgment? Could they make wise decisions on their own? What would that AI look like?

There is no universal definition of wisdom. Most modern attempts include some combination of knowledge, experience, and good judgment. This seems like a good place to start for a practical system.

Knowledge

Of the three, we are closest to achieving knowledge. Machines have moved past data and information as their only output. Today, AIs have knowledge, or can at least approximate knowledge in a sufficiently useful way. They can take tests, write essays, make art, reason at a rudimentary level, and much more. They can seemingly understand, respond, and adjust. In the short time ChatGPT has been around, it has already made significant leaps in knowledge. Much of the work in AI today will continue to advance the knowledge component.

Experience

A more complex question is how to give a machine experience.

Indirect experience, gleaned through the experience of others, could work much like knowledge. An AI could mine history for patterns as a substitute for lived experience, the same way we can internalize the experience of King Croesus without going through it ourselves.

Direct experience by an AI is more complex. Imagine a system in London that monitors CCTV footage. While observing interactions, it notices people saying to each other, "That's wicked." The reaction isn't negative, as it would expect, but positive and well received. The machine has experienced slang, perhaps not in the strictest sense, yet the encounter can still teach it something about the world. A toy sketch of that kind of update follows.
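One way to picture this is as an online update to the system's expectations. The sketch below is purely illustrative, assuming a made-up prior, a made-up reaction scale, and a simple blending rule; it does not describe any real surveillance or language system.

```python
# Toy sketch: revising an expected sentiment after observing real reactions.
# The prior value, the reaction scale, and the blending rule are all
# assumptions made for illustration only.

PRIOR = {"that's wicked": -0.8}       # prior belief: "wicked" sounds negative
observations = {"that's wicked": []}  # reactions seen in the world, -1..+1

def observe(phrase: str, reaction: float) -> None:
    """Record one observed listener reaction to the phrase."""
    observations[phrase].append(reaction)

def expected_sentiment(phrase: str) -> float:
    """Blend the prior with observed reactions; data gradually outweighs the prior."""
    obs = observations[phrase]
    if not obs:
        return PRIOR[phrase]
    observed_mean = sum(obs) / len(obs)
    return (PRIOR[phrase] + len(obs) * observed_mean) / (len(obs) + 1)

# The system keeps seeing smiles and laughter follow "that's wicked".
for reaction in [0.9, 0.8, 1.0, 0.7]:
    observe("that's wicked", reaction)

print(expected_sentiment("that's wicked"))  # now clearly positive (about 0.52)
```

The point is not the specific formula; it is only that repeated firsthand observations can gradually override what the system was originally told.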

Good Judgment

The most complex element of building wisdom will be good judgment. Good judgment cannot be reduced to a set of rules; wise people know when to follow a rule and when to break it. Instead, an AI must have a moral philosophy, a set of values, that guides its decision-making.

Imagine an AI product built to replace traditional media. It can generate news, entertainment, and other content. It can respond to comments and interact with people. If its goal is to optimize engagement, we end up with polarizing content, half-truths, and sensationalism, not at all different from our current media landscape. What saves an AI from this race to the bottom is a set of values. A value system that prioritizes truth, for example, would treat accuracy as more important than views or clicks. A rough sketch of that difference follows.
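To make the contrast concrete, here is a minimal, hypothetical sketch. The field names, weights, and the 0.5 truthfulness floor are assumptions invented for this illustration; they do not describe any existing product or ranking algorithm.

```python
# Toy sketch: ranking content by values rather than engagement alone.
# The field names, weights, and threshold are assumptions made for this
# illustration; they do not describe any existing product.

from dataclasses import dataclass

@dataclass
class ContentItem:
    text: str
    predicted_engagement: float  # expected clicks/shares, normalized to 0..1
    truthfulness: float          # e.g. output of a fact-checking model, 0..1

def engagement_only_score(item: ContentItem) -> float:
    """The race to the bottom: rank purely on expected engagement."""
    return item.predicted_engagement

def value_guided_score(item: ContentItem) -> float:
    """Rank with truth weighted above engagement, with a hard floor on accuracy."""
    if item.truthfulness < 0.5:  # assumed minimum: don't promote likely falsehoods
        return 0.0
    return 0.8 * item.truthfulness + 0.2 * item.predicted_engagement

items = [
    ContentItem("Sensational half-truth", predicted_engagement=0.9, truthfulness=0.3),
    ContentItem("Careful, accurate report", predicted_engagement=0.4, truthfulness=0.95),
]

print(max(items, key=engagement_only_score).text)  # the half-truth wins
print(max(items, key=value_guided_score).text)     # the accurate report wins
```

Notice that the interesting decisions are the weights and the threshold themselves, which is precisely the question of who gets to decide what the values should be.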

Who, then, decides what this moral system will be? What values should an AI system have? Without guiding values, money will come to dominate the direction of the technology. It will optimize for economic value, even when the impact on society, democracy, and well-being is negative. This is where philosophers, researchers, and world leaders need to be most engaged. Knowledge and experience are, in many ways, technical challenges. A moral system, though, defines the sort of future we want with AI.

What if King Croesus had had an AI with wisdom? He might have gotten the same answer the Oracle gave. But such a system would also have asked: Have you heard this story, or that one? Have you considered this blind spot? Have you thought through the consequences of your decision? It could have helped him avoid an unwise decision. And if the system had to decide on its own, he could trust it to decide wisely. The real magic of such an AI would be in what it can learn about wisdom at scale. After millions of decisions guided by wisdom, it could look back at its history to see whether there is more wisdom to be learned. Surely, the answer would be yes.

© 2023 Jaafar Mothafer.