Can AI Get Mad Cow Disease?
Going dark on AI.
(This post follows on the thoughts expressed in the "The End of Your Knowledge Worker Profession Approaches" and "Thoughts on AI and the Future of Agile Coaching" posts.)
We assuage our fears by telling ourselves that AI in the form of software like ChatGPT is a tool. It will make good writers better. It will make good coders better. Maybe so. It will definitely make bad writers even worse, bad coders even sloppier, lazy lawyers (and lawmakers) more harmful, and deluded nobodies more dangerous. Tools don't care. Tools just do what the mind behind the grip guides them to do. So what about that mind?
Q: What do you fear most about the prospects of AGI in the world?
A: It’s the same thing I fear most about humanity. Our very nature. Our desire to evolve ourselves. Our willingness to burn everything to the ground. Our rage. Our patience with the unthinkable. Our inability to assess risk.
Michael Bowen, "Fear Itself"
I wrote previously on how I thought AI might impact my little corner of the world. I also mentioned the vibe I get that AI's potential exceeds the scale of what is generally considered innovative by an order of magnitude. But "potential" is a tricky thing.
Extending these thoughts and reflecting on a lot of articles by very smart people on the topic of AI ethics, it's easy to conclude it may not matter much whether AI masters programming or technical coaching. If some of the concerns I've read come true, AI will devise a programming language of its own, one likely to be undecipherable, perhaps even undetectable, by any single human or team of experts. This thought is unsettling, but it's a FUD-infused straw man, not a fact. It is a fact, though, that predictions are difficult, especially about the future. So I'm not predicting, just speculating.
Personally, I'm not concerned with adapting to the changes. I've had to do that before under much more trying circumstances. What concerns me is whether we're on the threshold of software technology emerging as a digital equivalent to thermonuclear power for petty tyrants. An article I read recently by Niall Ferguson reflects this concern:
"I still remember vividly, when I was researching my book Doom, [Eliezer Yudkowsky's] warning that someone might unwittingly create an AI that turns against us — “for example,” I suggested, “because we tell it to halt climate change and it concludes that annihilating Homo sapiens is the optimal solution.” It was Yudkowsky who some years ago proposed a modified Moore’s law: Every 18 months, the minimum IQ necessary to destroy the world drops by one point."
AI and killer machines don't build themselves. Not yet, anyway, and I believe it will be easier to see the wisdom of not going down that path than it will be to see the danger of leveraging AI in ways rife with negative unintended consequences. Before any of this can happen, there is a lot of intentional physical work that regular human beings have to do. And this is where things will get interesting.
The future of AI is in the heads of a small group of extremely intelligent people driven by a phenomenally cosmic-sized hubris and not enough System 2 neurons dedicated to asking "What if we're wrong?" This group flies in the stratosphere, lifted by expertly articulated good intentions and firm in the belief that a digital solution must be found for sloppy and messy human behavior (e.g., "self-driving" automobiles). Their high-altitude, low-oxygen view has left them out of touch with the fact that the human experience is analog. Some things are best left sloppy and messy. This is also the land of commercial AI or, as some people are calling it, "Big AI." Their incentives will be driven by the usual suspects: secrecy, rent seeking, and hobbling any competition. I'll call these folks the AI+ tribe.
There is another group, smaller perhaps but equally intelligent, driven by a phenomenally cosmic-sized sense of self-importance and tainted righteousness. It's this group, driven more by System 1 and the human nature called out by Michael Bowen in the opening quote, that will apply AI to selfish and short-sighted ends. I'll call them the AI- tribe.
(The AI+ and AI- tribes are similar to what Marc Andreessen refers to as "Baptists" and "Bootleggers.")
As I see it, the danger with AI isn't its capability or how it accomplishes what it does. Enabled as it is by hubristic technical experts, the danger is how it will be used by people in positions of power or authority, particularly by those who fully recognize its utility while having little understanding of, or care for, how it does what it does (e.g., "policing for profit" or, more insidiously, "regulation enforcement for profit"). Enabled as they are likely to become, the AI- tribe will have to expend little effort attracting a sizable portion of Milgram's 65 percenters - the majority who, in his obedience experiments, followed instructions to the end - to carry out their instructions.
As it's currently evolving, AI is like digital fire without the ubiquitous access to methods for creating or using it. Most of us will be shut out from the warmth and wealth it generates while being shut inside the walls of the social constraints it enables. We need only examine the trajectory started by a simple search engine twenty years ago to where it's landed today.
Searching for Information
Such a simple concept, and yet control of that simple need in the digital world has created vast wealth for an extremely small percentage of the population and for a single company. Where simple search algorithms started with good intentions and vision, what we have today is massive information asymmetry and suspect quality.
I'm not advocating for a redistribution of wealth. And I don't begrudge that a few right-place-right-time-right-intelligence people became astronomically wealthy as a result of their hard work and willingness to take risks. We've all benefited from how easy Google has made searching for information. And yet Google's early motto, "Don't be evil," has undergone an interesting evolution as the company grew and began, perhaps unavoidably, working on projects and with organizations that more and more people personally viewed as "evil."
The point to be made - indeed, the lesson from more than a few examples over the past 200 years - is that what's coming on-line is far more pervasive in its ability to control the flow of information and manipulate social constructs. As with the 15th-century printing press and the prevailing fears that never came true, there is no shortage of fear and doomsaying about AI's imagined effects. Unlike the printing press, though, the technology is available only to a self-selected few.
The Monkey in the Machine
Seated within each and every human working on or near AI is a creepy little homunculus being pushed and pulled by a mini cosmos of cognitive biases, conflicting beliefs, untested values, indoctrinated programming, and irritable bowels fueled by processed food. Every single one of them firm in the expectation they occupy the center of the universe. Each one confident they know what's best...for other people. When I look at the level of maturity that has been manufactured by public and higher education factories over the past 30 years, I can find ample evidence to support Yudkowsky's modified Moore's law and Bowen's AI fear. Perhaps without intending to, our failing education system has produced the AI- tribal warriors - bad at math and science, ignorant of history and its myriad contexts, and imbued with a thuggish sense of entitlement to things they did not build.
That's spooky enough. Scanning the far greater number of people - each with the same homunculus squatting inside their heads - who will use and abuse AI in a zillion ways the AI Luminaries and Clerisy never imagined, it's easy to feel genuine fear. The AI+ tribe will not see the AI- tribe coming.
And then there's AI itself. It's a truism that software is never released; it escapes. Pandora has opened the AI box and let slip the dogs of bits and bytes. As systems become more and more complex, they are increasingly vulnerable to attack in unforeseen ways. AI has already been shown to be susceptible to prompt injection, data poisoning, and model collapse (where AI trains on AI-generated content). These and many other yet-to-be-discovered vulnerabilities risk devolving the benefits of AI into some sort of Promethean cyber-troglodyte. (To be fair, this, too, is a FUD-soaked straw man. Or straw-troglodyte.)
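Model collapse, at least, is easy to demonstrate in miniature. The sketch below is a toy illustration of the idea, not any real training pipeline: a simple Gaussian "model" is repeatedly fit to data and then asked to generate the next generation's data from its own output. With nothing but model-generated samples to learn from, the distribution's spread steadily withers away.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data drawn from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=20)
stds = [data.std(ddof=1)]

for _ in range(200):
    # Fit a trivial model (a Gaussian) to the current data...
    mu, sigma = data.mean(), data.std(ddof=1)
    # ...then replace the data entirely with the model's own samples.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    stds.append(data.std(ddof=1))

print(f"std of generation 0:   {stds[0]:.3f}")
print(f"std of generation 200: {stds[-1]:.3f}")
```

Each fit-and-resample step loses a little of the original distribution's tails, so across generations the variance collapses toward zero. The same dynamic, in far messier form, is what worries people about large models ingesting a web increasingly full of their own output.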
While these and many other undiscovered vulnerabilities make such systems less reliable and accurate, it's likely the perception among ignorant users will be that the AI system delivers "The Truth." If people have willfully followed directions from Google/Apple maps right into mud pits, lakes, trees, steps, and runways - trusting their phones rather than the senses Nature spent millions of years tailoring for life on Planet Earth - they will most certainly trust what streams out of more advanced AI systems. We love convenience and low-effort answers.
No Checks, No Balances