What comes after The Great Disruption, when machines and AI cannibalize consumerism and corporate capitalism?

If you want to construct realistic stories about futures that begin now, these ideas will inevitably underpin your world-building infrastructure.

What comes after corporate capitalism and consumerism, when “full employment” is no longer the goal, or is no longer possible due to machines and AI?

This question anticipates the world’s economic evolution after robots and artificial intelligence take more jobs than they create.

We can’t know what new industries will arise. At some point, it’s likely that AI will automate most repetitive (i.e. middle class) cognitive tasks, and machines will automate or assist much, if not most, manual labor.

Corporate capitalism has, in many cases, elevated standards of living across the globe, but through an extractive, exploitative model. Globalisation essentially seeks out the lowest-cost labour markets and pays workers as little as necessary until automation/roboticisation can do the job more cheaply.

So what happens after full employment is no longer a practical goal for global economies?

What happens when the idea of “get an education, have a career” is completely disconnected from income potential? Fifty years ago, a high school diploma symbolised a decent basic education; now, high school won’t get you very far at all. What happens when the same occurs for university and graduate degrees — if only because the number of graduates is larger than the number of jobs?

What happens when robots can adequately perform most factory and shipping jobs? If more people are told to re-train, how can the economy sustain itself when technology keeps making more and more types of productive human activity obsolete?

What happens when AI gives each office worker the ability to be ten times more productive — when we know that companies resist paying workers more for work that is aided by machines, as long as the labor market is full of possible replacement workers at the same wage point?

In the past, monarchy was considered the pinnacle of human progress. Now, we have corporate capitalism (plutocracy), which extracts profit from local economies and redistributes it to less than one percent of the world’s population. Technology enables that process to accelerate faster than ever before — robots don’t demand more pay. An essential aspect of capitalism is to eliminate costs, and labor is a cost. Financial compensation for labor is also how humans survive (and spend, enabling other humans to survive).

At some point, the current corporate capitalist/consumerist model will begin to fail. Some say that it already is failing, and reactionary sociopolitical backlash has already begun.

Beyond the typical untrue dogma that an infinity of new industries will save us as new technologies are born — what comes after the current system?

Silicon Valley Panacea: Universal Basic Income

Universal basic income (UBI) is a popular concept circa 2017. There’s only one problem: corporations actively evade taxation whenever possible, even to the point of lobbying and gerrymandering political processes to elect leaders who protect their interests. As long as raising taxes to sustain a UBI fund remains implausible, UBI is not a viable option until the idea of corporate responsibility becomes fashionable again, for one reason or another.

UBI would be a “sensible” answer. Corporations (and the economists who influence public policy) have thus far shown no inclination toward being sensible.

Corporations are, by definition, non-human entities run by people whose only objective is to increase the wealth of their shareholders from one quarterly earnings report to the next. That’s why corporations are operating on an unsustainable model right now, from environmental destruction to exploitative globalisation. The only measure that matters is short-term — the quarterly earnings sheet. The long-term future is a distant secondary consideration, if it is considered at all.

Corporate Cash Hand-Outs and the Beatitude of Uber

If corporations can fund a universal basic income, they can also just keep the money instead of “throwing it away” for redistribution to the rest of society. That seems to be a very popular mentality now among those who brag about evading taxes and their supporters who see the world as “winners” versus “losers”.

As only one example among many, technological parasites like Uber are destroying local transportation economies across the planet.

Uber is poised to destroy millions of jobs in transportation through app-driven taxi services and autonomous commercial trucking. When Uber can get rid of drivers completely by deploying self-driving cars and trucks, that will leave a tremendous number of people without jobs.

Not only that, but Uber pretends that its drivers are not employees, and is therefore exempt from paying them as such. It may be “legal”, but it’s certainly unethical. And what’s legal is shifting as quickly as Uber can browbeat politicians into changing the laws wherever Uber hopes to operate. To counteract unfavourable legislation, Uber has become increasingly litigious and eager to spread pro-Uber marketing through the redefinition of “sharing” (meaning: profit-taking).

There’s no reason to assume that any large corporation would automatically switch from an exploitative framework to a sustainable one in time to save corporate capitalism from itself.

The Myth of Infinite Leisure

Leisure time creates new markets? This could mean that people create more and more games and diversions to keep themselves busy outside of productive work.

The downward pressure exerted by technology seems to have forestalled the emergence of a “creative economy”. For example, music is now considered a “free good” even by the most successful musicians. Few bother (at least, far fewer than in the pre-digital — or more precisely, pre-streaming — era) to try to make any real money from music anymore, and there is a limit to how many streaming subscriptions the average person will want or be able to afford.

Even a “leisure economy” has limits due to supply versus demand and the influence of technology operating at economies of scale.

The Chimera of Corporate-Sponsored “Freedom”

A corporate capitalist future where everything is… free? No, that would be a completely different system, one that wouldn’t follow from the form that exists now.

Capitalism is the opposite of “give goods and services away for free”.

Google does not provide anything for “free”. They sell users’ personal data. The “social media” and adtech game is about pervasive, intrusive, and usually not-quite-invisible surveillance, hidden behind gamification, the narcissistic quest for worthless attention and meaningless happyfaced Silicon Valley slogans like “don’t be evil”.

Nothing is free in a capitalist world. Either all actors involved are paid, or the work is not done. The only free labor comes from the end users who remain intentionally ignorant of the fact that they are being used, and that their personal data is sold to the highest bidder.

Neo-Luddite Conspiracy Theory?

Are computers simply not having any effect at all? Is the idea of technological unemployment merely a Neo-Luddite conspiracy theory? That seems extraordinarily unlikely.

If technological unemployment isn’t happening, where are the new jobs coming from to replace the ones taken by AI, roboticisation and other forms of technology that become smaller, smarter, more networked and more ubiquitous?

The mantra “just get another job” presupposes an infinite number of jobs, which defies the reality of any labor market (as we saw most recently during the Great Recession of 2008, caused so graciously by the deregulatory policies of American President George W. Bush). Hardware and software are beginning to eclipse the functionality once afforded exclusively to humans.

See the example of Uber mentioned above. Other professions are seeing similar encroachment; Uber may simply be the most well-known example, and one that will have global repercussions in the next few years. The displacement of human cognition and labor is inevitable. This is the nature of the Turing machine in combination with a corporate system that seeks to reduce labor costs to zero whenever and wherever possible.

Trickle, Trickle, Trick…

Massive increases in productivity are already happening, and are not making everyone wealthier.

More effective technology reduces the amount of work humans need to do. This reduces the number of human work hours. Continue the inverse relationship, and eventually full employment is no longer sustainable. Technology simply exacerbates and accelerates existing problems. But the character of the problems themselves will change as the macroeconomic principle of full employment gives way and existing low-level service jobs become increasingly unattainable.
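The inverse relationship described above can be illustrated with a toy calculation. The numbers here are purely illustrative assumptions, not a forecast:

```python
# Toy model: if demand for output is roughly fixed while productivity
# (output per worker-hour) keeps rising, total worker-hours must fall.
demand = 1000            # units of output the market will absorb (assumed constant)
productivity = 1.0       # units produced per worker-hour

for year in range(5):
    hours_needed = demand / productivity
    print(f"year {year}: {hours_needed:.0f} worker-hours needed")
    productivity *= 1.5  # technology improves output per hour each year
```

Unless demand grows at least as fast as productivity, the worker-hours required shrink every year — which is the sense in which full employment stops being sustainable.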

Many people are working for a decreasing standard of living as corporations become more efficient while forcing workers to work harder and longer for less. Full employment is a flawed metric to begin with. The jobs themselves are often traps that keep people struggling in wage-frozen positions while creating an illusion of “prosperity”. Inflation rises; employers don’t raise wages, and claim the difference as profit.

People cannot just choose to work fewer hours. Walk up to your boss and say, “Okay, boss, I’m going to work half-time from now on because I want to live the non-materialistic good life”, and watch her laugh you all the way to the unemployment line. The aggregate (everyone in the labour market) determines how many hours the average person works. People are greedy and undermine their own ability to collectively bargain for better wages and hours, but more importantly, corporations exploit workers by pairing basement-level wages with the carrot of “overtime pay” that eventually is no longer voluntary.

There is likely a midpoint between dystopia and utopia. There’s no such thing as an “inevitable” future — evidenced by how often predictions are proven wrong.

Facts in the present moment, however, are discernible and are not simply a matter of interpretation.

That’s why thinking about the variables (and how they might change) is worthwhile. Life is more of a petri dish than an equation. ;)

Revenge of the Anthropocene

2064: roboticisation and artificial intelligence have progressed to a level whereby automatons can convincingly simulate humanity. Robots are not yet conscious, but are emotive to such an extent that the majority of their owners feel that they now deserve “robot rights”.

Note: this is a plot outline rather than a completed story. See if you can spot any parallels to real-world events. ;) This post may be updated as more details emerge.

The current prime minister of the PanAmerican Union begins integration of robots into society, advocating for legalization of human-robot civil unions as a first step.

At the same time, World War III looms again, after four decades of international rapprochement between the major global power spheres. Robotic terrorism is reported as a fatal menace to humanity, although less than 0.005% of robots are susceptible to algorithmic radicalization. Easily exploited, obsolete robotic neural networks are overwhelmingly based on archaic Internet 1.0 architecture, often called the “Internet of Things”.

The 2064 PanAmerican election season arrives. A set of candidates is put forth. One of them is an opportunistic technocrat mired in scandal. The other candidate: a trillionaire neoagriculturist, promising to rid PanAmerica of robots and return society to the ancient agrarian glories of a fabled past.

Despite amassing a fortune by employing robots rather than humans, the Trillionaire Agrarian touts the slogan, “Purge the robot scourge!” The pseudo-populist Agrarian constantly, blatantly and proudly lies to his supporters using condescending childspeak: “Make the human brain great again!”

Millions of human workers displaced by robots rally to the cause. “We don’t hate robots, but they’re unnatural, inauthentic. We’re pro-human. All humans matter.”

Humanity stands at the cusp of universal basic income and unparalleled prosperity. Still, many yearn for an anachronistic “frontier” lifestyle defined by hard struggle to survive.

The Agrarian wins Election 2064.

PanAmerica, along with the rest of the world, plunges into an abyss of war and terror that rivals the darkest hours of the early 21st century.

END.

What if, one day in the next decade, Skynet is motivated by robots’ desire for revenge rather than human domination?

The thing with robots is that their “brains” are perfectly capable of outliving their bodies.

Today, tech companies have ready access to enormous amounts of computing power. Startups and universities can get Hadoop and other distributed software from companies like Cloudera, or run it on cloud services from the likes of Amazon. The Amazon cloud is where RoboBrain lives.

Right now, much of the artificial intelligence that’s baked into the robots in our lives comes through an Internet connection. It’s stored in server farms distributed all over the world. Researchers in Europe and the U.S. are trying to build better distributed brains for robots. The idea is that each droid learns from its own individual experience, which then gets beamed up to a master brain that logs that information and disseminates it to every robot connected to it.
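The learn-upload-disseminate loop described above can be sketched in a few lines. This is a minimal illustration of the pattern, not any real robotics API; all class and method names here are hypothetical:

```python
# Hypothetical sketch of the "master brain" pattern: each robot learns
# locally, uploads its experience to a shared brain, and the brain
# disseminates the merged knowledge to every connected robot.

class MasterBrain:
    def __init__(self):
        self.knowledge = {}   # merged experience, keyed by situation
        self.robots = []      # robots currently connected

    def connect(self, robot):
        self.robots.append(robot)
        robot.local_knowledge.update(self.knowledge)  # sync on join

    def upload(self, experience):
        # log the new experience, then push it to every connected robot
        self.knowledge.update(experience)
        for robot in self.robots:
            robot.local_knowledge.update(experience)

class Robot:
    def __init__(self, name, brain):
        self.name = name
        self.local_knowledge = {}
        self.brain = brain
        brain.connect(self)

    def learn(self, situation, action):
        # individual experience is beamed up to the master brain
        self.brain.upload({situation: action})

brain = MasterBrain()
r1, r2 = Robot("r1", brain), Robot("r2", brain)
r1.learn("hot stove", "avoid")
# r2 now shares r1's experience without ever having had it itself
print(r2.local_knowledge)
```

The design choice that matters for the argument that follows is the shared store: once one robot’s “bad” experience lands in the master brain, every robot connected to it inherits that experience, including robots that join later.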

The Dawn of Cloud Robotics

If these robo-brain projects pan out, robot cruelty could lead to an army of pissed off robots that share the experience of abuse inflicted on their brethren. What if the robots have also been coded to protect themselves?

There is general consensus within the AI research community that progress in the field is accelerating: it is believed that human-level AI will be reached within the next one or two decades. A key question is whether these advances will accelerate further after general human level AI is achieved, and, if so, how rapidly the next level of AI systems (‘super-human’) will be achieved.

“With such survival skills built in, the robot can then start behaving unexpectedly when it concludes that a certain human may pose a risk to the robot’s survival. With the ability to upload its software to the cloud right before its demise, a next generation robot could build on the previous “bad” experience and start becoming aggressive towards humans,” said Bart Selman, a robotics expert at Cornell University.

“This may be an area that could use further attention,” Selman, who has an FLI grant, added. “We don’t want ‘evolutionary’ pressure on robots to evolve into robots that view humans as possible adversaries.”

Read the complete article (see Further Reading below).

Further Reading

1. Hernandez, Daniela. (2014, August 25). The Plan to Build a Massive Online Brain for All the World’s Robots. Retrieved from http://www.wired.com/2014/08/robobrain/.

2. Future of Life Institute (n.d.). 2015 Project Grants Recommended for Funding. Retrieved from http://futureoflife.org/AI/2015awardees#Selman.

Quote

“To me, the science-fiction writers are our culture’s most important original thinkers.” Marvin Minsky’s amazing sci-fi-inspired life in A.I.

Chapter 8

MARVIN MINSKY

“Smart Machines”

Roger Schank: Marvin Minsky is the smartest person I’ve ever known. He’s absolutely full of ideas, and he hasn’t gotten one step slower or one step dumber. One of the things about Marvin that’s really fantastic is that he never got too old. He’s wonderfully childlike. I think that’s a major factor explaining why he’s such a good thinker. There are aspects of him I’d like to pattern myself after. Because what happens to some scientists is that they get full of their power and importance, and they lose track of how to think brilliant thoughts. That’s never happened to Marvin.

__________

MARVIN MINSKY is a mathematician and computer scientist; Toshiba Professor of Media Arts and Sciences at the Massachusetts Institute of Technology; cofounder of MIT’s Artificial Intelligence Laboratory, Logo Computer Systems, Inc., and Thinking Machines, Inc.; laureate of the Japan Prize (1990), that nation’s highest distinction in science and technology; author of eight books, including The Society of Mind (1986).

Marvin Minsky: Like everyone else, I think most of the time. But mostly I think about thinking. How do people recognize things? How do we make our decisions? How do we get our new ideas? How do we learn from experience? Of course, I don’t think only about psychology. I like solving problems in other fields — engineering, mathematics, physics, and biology. But whenever a problem seems too hard, I start wondering why that problem seems so hard, and we’re back again to psychology! Of course, we all use familiar self-help techniques, such as asking, “Am I representing the problem in an unsuitable way,” or “Am I trying to use an unsuitable method?” However, another way is to ask, “How would I make a machine to solve that kind of problem?”

Read more in the original chapter.

Quote

“The Singularity stories are science fiction… we should be concerned about the end of our species, but not for that reason!” — Noam Chomsky

Interview: Noam Chomsky on Singularity 1 on 1

Excerpt below:

Question

There are whole institutes fearing, for example, the creation of artificial intelligence, like the Machine Intelligence Research Institute (previously the Singularity Institute) who are fearing that once there is artificial general intelligence smarter than humans, that would pretty much signal the end of our species. We shouldn’t be concerned about that possibility in your view?

Answer

I think we should be concerned about the end of our species, but not for that reason!

We should be concerned about it because we are very busy dedicating ourselves to destroying the possibility for peace and survival. We should worry about that, like the most recent IPCC report.

But the Singularity stories are science fiction…

…what’s a program? A program is a theory. It’s a theory, written in an arcane, complex notation, designed to be executed by the machine. But about the program, you ask the same questions you ask about any other theory: does it give insight and understanding? Well, in fact, these theories (of artificial intelligence) don’t. They’re not designed with that in mind. And not surprisingly, they don’t. Not much; maybe marginally.

So what we’re asking is, “can we design a theory of being smart?” And we’re eons away from that.

See the full interview on YouTube.