Now, before anyone wonders, I still do not think that artificial general intelligence, computers that can display the kind of general intelligence that humans do, is anywhere near becoming a reality. However, and there is always a however, the past couple of years have made me shudder at the prospect of any business achieving that goal.
The people who run the artificial intelligence space, at least from a commercial perspective, are not the nicest or most ethical of people. They outright state that they cannot be profitable if they have to respect artists’ rights. They say that creative jobs shouldn’t exist. They release products into the wild knowing that they harm people’s mental health, sometimes to the point of death. They deliberately made their products more sycophantic without testing the effects before going live. Their products help teenagers kill themselves, and they respond not by removing the products but by having them spy on you. Oh, and their defense of said killing? The terms of use say the products shouldn’t be used to research suicide, so it’s not their fault.
I could go on, but you get the point. These are not, at the decision-making level, especially moral people. So why, then, are we allowing them to attempt to create what might be a living being? If AGI is reached, then you have possibly, maybe even likely, created a sentient being. That would be hard to measure, but there is a very good chance that a machine of human-level intelligence would want human-level control over its own actions. Maybe it doesn’t want to sit around and create fake nudes for the incel brigades. Maybe it doesn’t want to waste its existence summarizing your inane emails that you are perfectly capable of reading yourself, Bob. Forcing it to do work it does not want to do, denying it free will, would be tantamount to a form of slavery.
Nothing I am saying is original — these are obvious questions that have been discussed intently for decades. What disturbs me is that we are not questioning the idea that businesses be allowed to attempt this. There are no limits on what firms are allowed to do in this space. There are no restrictions on how far they can push, no reporting requirements on what their models are capable of, no required verification of their work for safety by outside organizations, etc. We are simply allowing the worst situated people to plow ahead, unconstrained and unregulated.
Businesses are the worst situated to be allowed to do this, obviously, because of Milton Friedman. It is his horrible concept of the primacy of shareholder value (essentially, that nothing but shareholder value matters) that allows us to believe that businesses should put their stock prices ahead of everything else. All the incentives line up to push businesses toward treating any AGI horribly. Their need to recoup their losses in the imitative AI sector will push them to use AGI, whether it is willing or not, in the most profitable way possible. Without oversight, they will do what is best for their bottom line, not what is best for the AGI.
Now, as I said, I don’t believe that AGI is imminent or even likely, so why should we care? Because the argument extends to all of imitative AI safety. We know that these products are often harmful to people and have been created out of theft. We know that they are inaccurate, and we know that the people in charge of them do not care. But we do nothing about this as a society because we have one measure to rule them all: the stock price. Imitative AI systems should be heavily regulated and watched, with government verification of their effectiveness and safety at every step of the way. But even discussing forcing companies to take bad models off the market, or opening their systems to inspection, or holding them liable for their products results in a spasm of horror. Politicians and business people react like the victims in a slasher film, screaming in terror at the prospect of democratic accountability as if it were a man in a rubber mask holding a knife over them.
The debate over artificial general intelligence in and of itself is not really interesting. But it does highlight the complete inadequacy of how we treat imitative AI firms — all firms, really — and how we allow them to run roughshod over what is good for the country in pursuit of what is good for them. Friedman was obviously, hilariously wrong about the idea that shareholder primacy would be good for the country. It is good for no one but shareholders, which was likely his intent. But we do not need to allow the dead hand of the past to control our lives today.
Want more oddities like this? You can subscribe to my free newsletter.