Thought-provoking column by Scott Adams.
http://blog.dilbert.com/post/153301052341/working-for-the-machines
My initial comments are in [brackets]:
Working for the Machines
Today I see in the news that Google is trying to dehypnotize potential ISIS recruits by manipulating what content they see when they try to search for pro-ISIS stuff. That’s mind control. And it works.
Meanwhile, Facebook is trying to have it both ways by insisting that advertising on their platform is effective while claiming the tsunami of fake news articles about the election – which outnumbered legitimate stories – had no impact on the election. But either way, it’s mind control. Because ads work.
Mind control also takes the form of A-B testing, which is standard practice at most tech companies. That involves rapidly testing up to thousands of variations of an ad until they know the most effective way to manipulate consumers. In other words, mind control. And it works.
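Mechanically, an A-B test is just a statistical comparison between two versions of the same thing. A minimal sketch of the idea (the function name and all numbers are invented for illustration, not drawn from any particular platform) using a two-proportion z-test, the kind of significance check an ad platform might run to decide which variation "works":

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: does variation B convert significantly
    better than variation A? Returns (B_wins, two_sided_p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return (p_b > p_a and p_value < alpha), p_value

# Hypothetical numbers: ad A converts 200 of 1000 viewers, ad B 260 of 1000.
b_wins, p = ab_test(200, 1000, 260, 1000)
```

At scale, a platform repeats this comparison across thousands of variations and keeps whichever version wins, which is the "rapid testing" the column describes.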
Twitter is allegedly “shadowbanning” some users – including me – because they don’t like how I might be persuading people. Shadowbanning means limiting how many of my followers see my content. That’s mind control, and it works. The fewer people who see what I tweet, the fewer I can influence.
In those four examples we can see that technology companies have already replaced some portion of human decision-making. [This is just another form of affecting human decision-making by controlling information, the way the USSR banned Bibles and many regimes censor books, movies, etc. It's what 1984 described. It's the reason anti-religion activists insisted on removing religion from public schools, etc. Nothing new here.]
Eventually machines will replace ALL of your decisions. [This is what is new. Even the USSR could not successfully coerce 100% of the population to make the "right" decisions.]
How’s that possible?
It’s possible because machines make better decisions than humans. Or they will. Consider your health-monitoring wristband. Someday it will tell you when you need to eat and what to eat. It will tell you when you are dehydrated and suggest that you take a drink. It will tell you the best time to exercise, and it will “train” you to do so, with rewards. In the short run, you will see your machines as making helpful suggestions. But once you learn that the machines always make good suggestions – and you do not – you will start taking the machines’ suggestions simply because it is easier. [This could be, but this is just a continuation of a long trend. Laws exist to help people make better decisions; e.g., don't kill someone else, and don't run a red light. Education does the same. And religion (usually). Most human progress consists of developing and using tools, both material and software, that help us make better decisions.]
I would argue that your political choices are already largely determined by Facebook, Google, Twitter and the other media companies. It feels exactly like free will to you, but it isn’t. [Except people using those media companies end up with opposite political choices; i.e., identical inputs, but different outcomes = free will.] And someday soon our technology will tell us how to eat, when to sleep, when to sip water, when to exercise, and even who to date. Once married, technology will tell you the best time of the month for procreation. It might even clear your calendar by rescheduling your day. [But laws, education, traditions, etc. have been guiding or nudging us forever.]
The inevitable conclusion of all of these forces is that machines will someday make all of our important decisions. We are probably less than ten years away from that. [Assistance in making life-sustaining choices is different in kind from decisions about life objectives and priorities.]
Losing your free will to machines might sound scary. But you never had free will in the first place. It was always an illusion. When the machines take over our important decisions we will do the same thing we do now – we will imagine that we are making the decisions on our own. Today our important decisions are made with emotions, and rationalized after the fact. We incorrectly call this process “thinking.” In the near future, our machines will make our daily decisions using Big Data and whatever they know about us as individuals to maximize our outcomes. You’ll like that future because the machines will make better decisions than you, and you’ll have better quality of life. [Except beyond material sustenance, quality of life is subjectively measured by one's preferences, and the machines can't determine our preferences.]
In the new world ahead, you will be the robot – albeit a moist one. The machines will be doing the thinking and making the decisions. You will simply do what they program you to do. Like a robot. And all of that will happen before Artificial Intelligence is popular. In terms of capability, all the machines need in order to take over for human decision-making is lots of relevant data, body monitor sensors, and some pattern recognition software. We’re almost there.




