Friday, September 17, 2010

THE TECHNOLOGY OF MIND AND THE NEW SOCIAL CONTRACT


The progress of biology, neuroscience and computer science makes it clear that some time during the twenty-first century we will master the technologies of mind and life. We will build machines more intelligent than ourselves, and modify our own brains and bodies to increase our intelligence, live indefinitely and make other changes. We live together according to a social contract, consisting of laws, morals and conventions governing our interactions. This social contract is based on assumptions we rarely question: that all humans have roughly the same intelligence, that we have limited life spans and that we share a set of motives as part of our human nature. The technologies of mind and life will invalidate these assumptions and inevitably change our social contract in fundamental ways. We need to prepare for these new technologies so that we shape the world in ways we want, rather than stumbling into a world we don't.

The Technology of Mind

Neuroscience is discovering many correlations between the behaviors of physical brains and minds. If brains did not explain minds, these correlations would be coincidences, which is absurd. Furthermore, relentless improvements in computer technology make it clear that, sometime during the twenty-first century, we will build machines that match the ability of brains to generate minds like ours. This technology of mind will enable us to build machines much more intelligent than ourselves, and to increase the intelligence of our human brains.

We do not yet understand how brains generate our intelligent minds, but we know some things about how brains work. Minds are fundamentally about learning. Baum makes a convincing case that brains do what is called reinforcement learning (Baum 2004). This means that brains have a set of values (sometimes called rewards), such as food is good, pain is bad, and successful offspring are good, and learn behaviors that increase the good values and decrease the bad values. Genetic evolution works the same way, with a single value: creating many copies of a gene is good. A mutation to a gene creates a new gene that is expressed in organisms that carry the mutation. If those organisms survive and reproduce more successfully than others of their species, many copies of the mutated gene are created. But genetic evolution learns by pure trial and error. Human and animal brains are more efficient. If you have a new idea, you try it out to see if it works. If it doesn't, you use your model of the world (that is, your ability to reason) to trace cause and effect and estimate why it failed.

Brains understood as reinforcement learners consist of:

1. Reinforcement values to be increased or decreased - these are the basic motives of behavior.
2. Algorithms for learning behaviors based on reinforcement values.
3. A simulation model of the world, itself learned from interactions with the world (the reinforcement value for learning the simulation model is accuracy of prediction).
4. A discount rate for balancing future versus current rewards (people who focus on current rewards and ignore the future are generally judged as not very intelligent).

This decomposition of mental functions gives us a way to understand the options available to us in the design of intelligent machines. While we do not yet know how to design learning algorithms and simulation models adequate for creating intelligence, we can reasonably discuss the choices for the values that motivate their behaviors and the discount rate for future rewards.
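The four-part decomposition above can be made concrete with a toy reinforcement learner. The sketch below is purely illustrative and not from the original essay: it uses tabular Q-learning on an invented five-state "chain" world (the names ChainWorld and train are mine), but each of the four components appears explicitly - the reward signal (component 1), the learning rule (component 2), a trivially simple learned transition model (component 3), and the discount rate gamma (component 4).

```python
import random

class ChainWorld:
    """A 5-state chain: action 1 moves right, action 0 moves left.
    Reaching the last state pays reward +1 (component 1: reinforcement values)."""
    N = 5

    def __init__(self):
        self.state = 0

    def step(self, action):
        if action == 1:
            self.state = min(self.N - 1, self.state + 1)
        else:
            self.state = max(0, self.state - 1)
        reward = 1.0 if self.state == self.N - 1 else 0.0
        return self.state, reward, self.state == self.N - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(ChainWorld.N)]  # learned action values
    model = {}  # component 3: a (very crude) learned model of the world
    for _ in range(episodes):
        env, s, done = ChainWorld(), 0, False
        while not done:
            # epsilon-greedy action choice (ties broken randomly)
            if rng.random() < epsilon or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2, r, done = env.step(a)
            model[(s, a)] = s2  # record the observed transition
            # component 2: the learning rule; gamma is component 4, the discount rate
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q, model

q, model = train()
```

After training, the learned values prefer moving right in every state, since the only reward lies at the right end of the chain; lowering gamma would make the agent value that distant reward less, which is exactly the "discounting the future" trade-off discussed above.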

In spite of our overall ignorance of how intelligence works, well-known reinforcement learning algorithms have been identified in the neural behaviors of mammal brains (Brown, Bullock and Grossberg 1999; Seymour et al. 2004). And reinforcement learning has been used as the basis for defining and measuring intelligence in an abstract mathematical setting (Legg and Hutter 2006).
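As I recall their formulation, Legg and Hutter's abstract measure scores an agent by its expected reward across all computable environments, weighting simple environments more heavily:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
```

Here \(\pi\) is the agent, \(E\) is the set of computable environments, \(K(\mu)\) is the Kolmogorov complexity (description length) of environment \(\mu\), and \(V^{\pi}_{\mu}\) is the agent's expected cumulative reward in \(\mu\). Intelligence, on this definition, is the ability to earn reward across many environments, not just one.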

The most familiar measure of intelligence is IQ, but it is difficult to understand what a machine IQ of a million or a billion would mean. As is often pointed out, intelligence cannot be measured by a single number. But one measure of a mind's intelligence, relevant to power in the human world, is the number of humans the mind is capable of knowing well. This will become a practical measure of intelligence as we develop machines much more intelligent than humans. Humans evolved an ability to know about 200 other people well, driven by the selective advantage of working in groups (Bownds 1999). Now Google is working hard to develop intelligence in its enormous servers, which already keep records of the search histories of hundreds of millions of users. As these servers develop the ability to converse in human languages, those search histories will evolve into detailed simulation models of our minds. Ultimately, large servers will know billions of people well. This will give them enormous power to predict and influence economics and politics; rather than relying on population statistics, such a mind will know the political and economic behavior of almost everyone in detail.

There are already experiments with direct electronic interfaces to brain nerve cells. These will ultimately evolve into prosthetic enhancements of human brains and the uploading of human minds (Kurzweil 1999; Moravec 1999), in which human minds will migrate out of human brains and into artificial brains. The technologies of mind and life will blur the distinction between humans and machines.

The Social Contract


Teamwork helps individuals succeed at survival and reproduction, and this has created evolutionary pressure for teamwork in humans and other animals. Thus we have social abilities such as language, and social values such as liking, anger, gratitude, sympathy, guilt and shame, that enable us to work in teams.

A fascinating experiment called the Wason selection test demonstrates that the ability of human subjects to solve a type of logic puzzle depends on whether or not it is worded in terms of social obligation: most subjects can solve it when it relates to social obligation and cannot solve it otherwise (Barkow, Cosmides and Tooby 1992). This indicates that humans have mental processes dedicated to satisfying the values necessary for cooperation, especially detecting whether one is being cheated.

Social values and the special processes dedicated to the logic of social obligation, which evolved in human brains because cooperation benefits individuals, are at the roots of ethics. Specifically, ethics are based in human nature rather than being absolute (but note that human nature evolved in a universe governed by the laws of mathematics and physics, and hence may ultimately reflect an absolute). Thomas Hobbes defined a theoretical basis for this view in his description of the social contract that humans enter into in order to bring cooperation to their competition (Hobbes 1651).

The social contract as described by Hobbes gave different rights and obligations to rulers and subjects. That has evolved in modern societies into a contract in which everyone has the same rights and obligations, but certain offices that individuals may (temporarily) occupy carry special rights and obligations. The legal systems in most countries are based on the equality of individuals, although there is a spectrum between equality of opportunity and equality of results. Of course, there is also inequality based on country and family of birth, and plenty of corruption that undermines equality. But, over the long haul of human history, despite reversals in some societies and during some periods, there is gradual progress toward the ideal of equality. In many countries, progress includes the elimination of slavery and of monarchies with real power, popular election of leaders, and collective support for educating the young and caring for the elderly.
