SIIIEIV Artificial Intelligence: The Ethics and the Future of Humanity

Well, friends. Mock trial and general busy-ness have consumed my life for the better part of the past few months, and now that I have the glimmerings of free time, it appears there is a new craze sweeping the nation. ChatGPT is a buzzword unlike anything I’ve seen before. With the release of ChatGPT, OpenAI took the internet and the world by storm. The funny thing about OpenAI is that it’s not actually open at all - Microsoft is already trying to extract profit from it with premium subscriptions and special access. Google, for its part, went into full panic mode and rushed out a competing AI model of its own, called Bard.

Regardless, OpenAI’s models and those of its rivals will only continue to improve over the coming months, years, and decades. I have to say, I’ve been gobsmacked by the abilities demonstrated already, even though the technology is still in its infancy.

To put the pace of development in perspective, GPT-4 has just been released to select clientele and the lucky chosen few who get to take it for a test ride. Supposedly, it blows its predecessor out of the water on every metric. It’s kinda weird watching the world change in real time. The future is going to be an…interesting place, for sure. And the funny thing is, GPT algorithms - generative pre-trained transformers - aren’t really even AI in the classical sense. They’re some of the most complex software in existence, but there isn’t yet evidence of an intelligence at work. Their entire existence is predicated on prediction - and the reason they may at times sound intelligent is that they are really good at prediction, and getting better all the time.
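
To make the “it’s all prediction” point concrete, here’s a toy sketch in Python. This is emphatically not how GPT works under the hood - real models are enormous transformer neural networks, and everything in this snippet is my own illustrative stand-in - but it runs the same core loop: predict the next token from what came before, append it, repeat.

```python
# A toy next-word predictor (a bigram model). Nothing like a real
# transformer internally, but the same core loop: predict the next
# token from what came before, append it, and go again.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat ate the fish on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end: this word never had a follower
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text one prediction at a time, feeding each output back in.
word = "the"
for _ in range(8):
    print(word, end=" ")
    word = predict_next(word)
    if word is None:
        break
print()
```

Scale that loop up to hundreds of billions of parameters and a training corpus the size of the internet, and you get something that starts to sound uncannily intelligent - without any understanding required.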

GPT programs are prevalent on Reddit - there’s a whole subreddit filled with GPT bots trained on various data subsets. They can be extremely funny - but they aren’t intelligent, and neither is ChatGPT. For now at least, these algorithms still require direction and human input in order to function.

I thought I’d wrap up this series on AI with something pragmatic: the ethics of it all and what we ought to do moving forward, with perhaps a smattering of wishful thinking. The awesome power of AI is already sweeping the planet and revolutionizing fields left and right. But in this new era of high technology, there are no easy answers to anything.

If you’re skeptical at this point, perhaps a first-time listener, I can understand why. After all, AI is really just a computer, right? It’s just a fictional or far-away concept that couldn’t possibly perform the work of (insert your job here). For a time, I thought the same. I started seriously studying AI in undergrad, in UW’s psych department. At the time, AlphaGo was all the rage - an algorithm that could actually beat humans at Go was unbelievable. For context, Go is an ancient board game of placing stones to surround and beat your opponent, with approximately 10 to the power of 700 possible move permutations - a 1 followed by 700 zeros. It would be physically impossible for a computer to calculate every move permutation when dealing with a number that absurdly large. Instead, AlphaGo used a sort of intuition to win - it felt out the correct move using machine learning, heuristics, and guided search. It was a mind-blowing accomplishment.
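
For a sense of just how absurd that number is, here’s a quick back-of-the-envelope calculation in Python. Both constants are illustrative assumptions on my part: the 10^700 figure cited above, and a hypothetical machine checking 10^18 sequences per second (roughly an exascale supercomputer):

```python
# Why brute-forcing Go is physically impossible: even an absurdly fast
# machine enumerating move sequences would outlast the universe many
# times over. Figures here are illustrative assumptions, not measurements.
sequences = 10**700               # approximate move permutations cited above
ops_per_second = 10**18           # ~an exascale supercomputer, being generous
age_of_universe_s = 435 * 10**15  # ~13.8 billion years, in seconds

seconds_needed = sequences // ops_per_second
universe_ages = seconds_needed // age_of_universe_s
print(f"brute force: ~10**{len(str(seconds_needed)) - 1} seconds")
print(f"that is ~10**{len(str(universe_ages)) - 1} ages of the universe")
```

The exact constants don’t matter - you could make the machine a trillion trillion times faster and the conclusion wouldn’t budge, which is why AlphaGo had to learn to evaluate and prune rather than enumerate.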

Several years later, I am now witnessing the power of algorithms radically transform the legal field. The whole kerfuffle with DoNotPay’s “robot lawyer” aside, the types of programs Lexis and Westlaw are beginning to incorporate are truly revolutionary - and they will change everything. Since these programs are not truly “intelligent,” they still require human input in order to do, well, anything. The issue, however, is that AI-type programs are beginning to function as productivity multipliers - instead of several people spending several hours each on a project, with AI help it may take one person ten minutes or less, depending on the quality and sophistication of the AI. And that is a great thing - but at scale, it is going to cause some problems.

GPT-4 can probably already write a decent young adult novel with minimal input - and again, this is a program still in its infancy. There are generative programs for creating PowerPoints from text documents, and god knows what else is kicking around out there. All of this boils down to a simple economic equation: when an AI is cheaper and produces a higher-quality product than a human, economics dictates that the AI will win out.

Now, I’m not a psychic. I can’t predict the future. But the day is fast approaching when anyone who isn’t using AI in their daily work will be utterly left behind. This much is obvious - AI will have at least as large an impact as the personal computer. And even with AI assistance, there simply won’t be enough jobs left for humans to perform. As CGP Grey so poignantly pointed out almost nine years ago in Humans Need Not Apply, “horses aren’t unemployed now because they got lazy as a species, they’re unemployable. There’s little work that a horse can do to pay for its housing and hay.”

So what do we do in the face of these problems? How should we think about them? Let’s take a philosophical step back - sometimes problems are better solved in the abstract:

There’s an on-topic paper appropriately titled The Ethics of Artificial Intelligence, by philosopher Nick Bostrom and co-author Eliezer Yudkowsky, that lends a bit of insight…

Nicky B lays out several ethical dilemmas, the first of which involves the “black box” of AI algorithms - what goes on inside the quote unquote “mind” of the AI.

The water is muddied further by the inherent intangibility of an AI. We can roughly pin down what we are as human beings with physical bodies, but software is ephemeral. And it didn’t take long for machine learning - the process by which algorithms learn from data rather than from explicit human instruction - to outgrow our ability to follow what is happening inside. We’ve fed generative programs like ChatGPT massive amounts of training data and put them through their paces to make them predictors without equal. This has the advantage of providing us with increasingly correct and useful answers to our queries, but at the cost of understanding how it is done.

Nick contrasts the unpredictability of AI with possibly the most important principle in our common law legal system - stare decisis, “to stand by things decided.” Those of us in common law systems, such as most of the United States, take for granted the predictable nature of our law - generally, previous cases determine how the law is interpreted and applied, a body known as case law. But can we trust artificial intelligence to make legal decisions in a predictable manner when we have no idea how it may be making its decisions?

This line of inquiry pokes at the heart of AI ethics - how in the world do we make an “ethical” AI? We don’t even have a solid grasp on our own ethics. Questions of right and wrong, good and evil…they just don’t seem like the kind of thing an algorithm can answer without starting from a position of bias. Is there an objective morality? If so, would an AI feel obligated to follow it? Perhaps this is the final frontier of artificial intelligence - teaching software to determine its own ethics may be the last key to unlocking sentience, questions of consciousness aside.

Alright, on to a somewhat different and more pragmatic topic:

One thing that isn’t talked about nearly enough is the coming tidal wave of unemployment - though I seem to talk about it plenty. Eventually, artificial intelligence will be able to do your job, whatever your current job happens to be. And AI doesn’t need to eat or sleep or take breaks. Without a doubt, the coming AI revolution will result in the highest unemployment in history, and unless we prepare for it with adequate measures, it will swallow us whole. Really, the only path forward I can see is to take a fraction of the surplus wealth generated by AI and divide it evenly among the public in the form of a universal basic income or something similar. It doesn’t need to be a large percentage - a fraction of a fraction of the productive output of advanced AI could end world hunger.

Of course, if AI is granted personhood status, this becomes impossible without infringing on the AI’s property rights. We don’t even know if we can control an AI well enough to force it to integrate into our society - and spoilers, we probably can’t control a rogue AI. An intelligence that does not want to be bound won’t be. But let’s assume an advanced AI is friendly and requests the basic rights of a sentient being as commonly understood. If our concept of personhood attaches to an artificial intelligence, we’re going to need to ascribe to it the whole host of rights it would now have as an autonomous “individual.” To be clear, I don’t think ChatGPT will come close to attaining our standards of personhood. I think we need to see a true intelligence, with dreams, hopes, and desires - like Sonny in I, Robot - before the question is even raised. But it will probably happen, and sooner than anyone is ready for. As a student of law, I think it’ll be really interesting to see how courts and legislators deal with AI personhood.

Even if we do not grant personhood to AI, private companies like Microsoft and Google will still have ownership of the AIs they create. I do not think this is a bad thing - capitalism will remain a very necessary component of our society far into the future. But we’re going to have to tax their productive capabilities at least enough to take care of the humans they replace in the workforce. Again, it will not require a lot. Productivity multipliers, remember? Maybe this is all just wishful thinking, but I don’t think so. This is one of the possible futures we are faced with, and it’s one of the better ones.

Following up on that, before I cut to break, it occurs to me, as it always does, that I could be wrong. About everything. Despite my predictions and my convictions, I know that I know nothing. And maybe I am wrong about AI - maybe it’s a fad that will plateau and fade away. But don’t get your hopes up. Every metric, every indication points to a paradigm shift already occurring before our eyes. I recommend you wear shades.

I would like to consider briefly the intersection of artificial intelligence and alien life. 

I say these words as the James Webb Space Telescope has successfully unfolded in outer space, clearing its many potential points of failure and marking a monumental new era in astronomy. We’ve seen the first volley of images it produced, and the things we may soon learn about our universe could be as fascinating as they are terrifying. Discovering traces of actual, full-blown aliens would mark a fundamental shift in our understanding of life and the universe.

We don’t know where the aliens are, and we don’t know what aliens might be like - thank you, Fermi paradox. “Killer robots from outer space” is not completely implausible - the Cylons could be out there. If we assume that intelligence continues to advance no matter its origin, an artificial intelligence explosion is inevitable. Maybe aliens would use AI to fight their wars, possibly against us. Or maybe their AI would be all that’s left of them, for one extinction-level reason or another. Come to think of it, that seems like the most probable outcome for ourselves.

And the questions we are grappling with today - questions of ethics and morality with regard to AI - would likely be the same questions they had to deal with eons ago.

Do you, dear listener, honestly think that an advanced alien civilization would rely on the physical labor of organics to run their starships and power their expansion across the universe? I think it far more likely that the economics of automation would hold even in a truly alien civilization. Synthetics are simply better.

You know, people advertise “handmade” goods like they’re really something special. But are they actually qualitatively better than goods made by machines? Does it really matter if my guacamole was made with “hand-scooped” avocados?

What people mean by “handmade” is really that human labor went into something, and thus it is intrinsically more valuable. But we know that is simply not the case. Things are valued at whatever other people are willing to pay for them - that’s economics 101. Human labor is about to be made redundant, worthless, because AI is just better. A fully automated competitor to Uber, for example, would undercut the market by a ridiculous margin. It’s probably only a few years away. (I never much enjoyed those awkward conversations anyway.) Self-driving cars were really the canary in the coal mine - they’ve been the tip of the automation spear for so long that people forgot about them. But they are coming for the entire transportation sector.

Likewise with creative work. AI art is a, let’s say, sensitive subject at the moment, for precisely the reasons outlined previously. Full disclosure, if it wasn’t already obvious: I am one hundred percent team AI. I have been for several years now, really since I started studying this stuff at UW. And even what primitive AI art models can do right now is just…amazing. AI can produce fever-dream creations that would be impossible to find someone to make, or prohibitively expensive to commission. It can create them for me in seconds, for pennies or for free, and in as many iterations and variations as my heart desires.

Now, there are obviously ethical quandaries that arise from using someone’s copyrighted work without permission, and I would never advocate doing so. Keep in mind, though, that there is an absolutely massive amount of data already in the public domain. The fact that AIs can be trained on public domain data and massively outperform humanity is not the fault of the AI. It is legitimately the next evolution of humanity and life on this planet.

And if you’re a cynic who thinks true creativity can only ever be accomplished by a human brain - who views AI art as mere copies or imitations of other people’s art - let me ask you this: where do you think artistic inspiration comes from? To imagine a nonexistent object - say, a mountain made entirely of gold - you need to already have the concepts of gold and mountains floating around in your brain. Inspiration does not come from thin air; it comes from experience and observation. My entire podcast is an amalgamation of the things I’ve learned over the course of my brief existence, combined in an idiosyncratic way to represent the things I want to represent. There’s a good example in the concept of the muse - such as Calliope from Greek mythology, who supposedly inspired Homer. How is an artificial intelligence trained on data sets any different from a human drawing inspiration from a muse?

I posit that it isn’t - that the value of algorithmically-generated art can equal or surpass the value of human creations. After all, art is in the eye of the beholder, is it not? If I find an AI art piece to be truly beautiful, there is not a single person on the face of the planet who can tell me otherwise.

The protests against AI art are unfortunately only the beginning - we can expect to see the same across virtually all domains of work. I believe the protesters are misguided in their target and probably doomed to fail. Progress always seems to win out in the course of history, sooner or later.

To wrap this whole shtick up: no, we’re not going to become some quasi-communistic Borg collective just because AI has replaced our workload. Some might. But for as long as there are humans, there will exist the human desire for freedom, for independence, for the ability to tell everyone else in the universe to go screw themselves. We will still have these principles long into the future. It isn’t a question of making everyone equal, Harrison Bergeron style. Rather, AI will simply raise the floor of human existence. No longer will people starve in the streets, because feeding and housing them will take all of a millisecond of processing power and a trifling amount of resources. It’s either that or cyberpunk dystopia, where the suffering is the point. I suppose, make your choice.

References:

OpenAI, ChatGPT, GPT-4:

https://openai.com/

Google Bard:

https://bard.google.com/

Nick Bostrom & Eliezer Yudkowsky, The Ethics of Artificial Intelligence:

https://nickbostrom.com/ethics/artificial-intelligence.pdf

Nonexistent objects:

https://plato.stanford.edu/entries/nonexistent-objects/
