The researchers developed a software engine, called Entropica, and gave it models of a number of situations in which it could demonstrate behaviors that greatly resemble intelligence. They patterned many of these exercises after classic animal intelligence tests.
"It actually self-determines what its own objective is," said Wissner-Gross. "This [artificial intelligence] does not require the explicit specification of a goal, unlike essentially any other [artificial intelligence]."

Entropica's intelligent behavior emerges from the "physical process of trying to capture as many future histories as possible," said Wissner-Gross. Future histories represent the complete set of possible future outcomes available to a system at any given moment. Wissner-Gross calls the concept at the center of the research "causal entropic forces." These forces motivate intelligent behavior by encouraging a system to preserve as many future histories as possible. For example, in the cart-and-rod exercise, Entropica controls the cart to keep the rod upright. Allowing the rod to fall would drastically reduce the number of remaining future histories — in other words, lower the entropy of the cart-and-rod system. Keeping the rod upright maximizes the entropy, maintaining all the future histories that can begin from that state, including those in which the cart eventually lets the rod fall.

"The universe exists in the present state that it has right now. It can go off in lots of different directions. My proposal is that intelligence is a process that attempts to capture future histories," said Wissner-Gross.
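The idea of steering toward states that keep the most future histories open can be sketched in a few lines of code. The toy below is not Entropica; it is a purely illustrative stand-in, with every name and parameter invented here. A one-dimensional walker estimates, via random rollouts, how diverse its reachable end states are under each candidate action, and picks the action whose simulated futures are most diverse:

```python
import random
from collections import Counter
from math import log

# Hypothetical illustration (not Entropica): a walker on positions 0..10
# prefers actions that keep the widest spread of future end states open,
# a crude stand-in for "capturing as many future histories as possible".

POSITIONS = range(0, 11)
ACTIONS = (-1, 0, +1)

def step(pos, action):
    """Move, clamping at the walls of the interval."""
    return max(0, min(10, pos + action))

def future_entropy(pos, action, horizon=5, samples=200):
    """Shannon entropy of end positions over random rollouts."""
    ends = Counter()
    for _ in range(samples):
        p = step(pos, action)
        for _ in range(horizon - 1):
            p = step(p, random.choice(ACTIONS))
        ends[p] += 1
    total = sum(ends.values())
    return -sum(c / total * log(c / total) for c in ends.values())

def choose_action(pos):
    """Pick the action whose sampled futures are most diverse."""
    return max(ACTIONS, key=lambda a: future_entropy(pos, a))

# Rollouts are stochastic; starting at a wall, the walker typically
# moves toward the interior, where more futures remain open.
print(choose_action(0))
```

The same logic underlies the cart-and-rod case: states near a wall (or with the rod fallen) close off future histories, so an entropy-seeking controller avoids them.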
The new research was inspired by cutting-edge developments in many other disciplines. Some cosmologists have suggested that certain fundamental constants in nature have the values they do because otherwise humans would not be able to observe the universe. Advanced computer software can now compete with the best human players in chess and the strategy-based game called Go. The researchers even drew from what is known as the cognitive niche theory, which explains how intelligence can become an ecological niche and thereby influence natural selection. The proposal requires that a system be able to process information and predict future histories very quickly in order for it to exhibit intelligent behavior. Wissner-Gross suggested that the new findings fit well within an argument linking the origin of intelligence to natural selection and Darwinian evolution -- that nothing besides the laws of nature is needed to explain intelligence.
To the best of our knowledge, these tool use puzzle and social cooperation puzzle results represent the first successful completion of such standard animal cognition tests using only a simple physical process. The remarkable spontaneous emergence of these sophisticated behaviors from such a simple physical process suggests that causal entropic forces might be used as the basis for a general—and potentially universal—thermodynamic model for adaptive behavior. Namely, adaptive behavior might emerge more generally in open thermodynamic systems as a result of physical agents acting with some or all of the systems’ degrees of freedom so as to maximize the overall diversity of accessible future paths of their worlds (causal entropic forcing). In particular, physical agents driven by causal entropic forces might be viewed from a Darwinian perspective as competing to consume future histories, just as biological replicators compete to consume instantaneous material resources. In practice, such agents might estimate causal entropic forces through internal Monte Carlo sampling of future histories generated from learned models of their world. Such behavior would then ensure their uniform aptitude for adaptiveness to future change due to interactions with the environment, conferring a potential survival advantage, to the extent permitted by their strength (parametrized by a causal path temperature, T_c) and their ability to anticipate the future (parametrized by a causal time horizon, τ).
Consistent with this model, nontrivial behaviors were found to arise in all four example systems when (i) the characteristic energy of the forcing (k_B T_c) was larger than the characteristic energy of the system’s internal dynamics (e.g., for the cart and pole example, the energy required to lift a downward-hanging pole), and (ii) the causal horizon was longer than the characteristic time scale of the system’s internal dynamics (e.g., for the cart and pole example, the time the pole would need to swing through a semicircle due to gravity).
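With invented numbers (not taken from the paper), these two conditions reduce to a few lines of arithmetic. The pole mass, length, forcing energy, and horizon below are all assumptions chosen only to make the comparison concrete:

```python
from math import pi, sqrt

# Hypothetical numbers illustrating the two conditions for the
# cart-and-pole example: (i) the forcing energy k_B*T_c exceeds the
# energy needed to lift a downward-hanging pole, and (ii) the causal
# horizon tau exceeds the pole's characteristic swing time.

m, L, g = 0.1, 0.5, 9.81       # assumed pole mass (kg), length (m), gravity (m/s^2)
k_B_Tc = 2.0                   # assumed characteristic forcing energy (J)
tau = 5.0                      # assumed causal time horizon (s)

lift_energy = m * g * 2 * L    # raising the pole tip by its full diameter, 2L
swing_time = pi * sqrt(L / g)  # half-period of a simple pendulum, as a proxy
                               # for swinging through a semicircle

condition_i = k_B_Tc > lift_energy   # (i) energy condition
condition_ii = tau > swing_time      # (ii) time-scale condition

print(lift_energy, swing_time)
print(condition_i and condition_ii)  # prints True for these numbers
```

For these assumed values the lift energy is about 0.98 J and the swing time about 0.71 s, so both conditions hold and the model would predict nontrivial behavior.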
These results have broad physical relevance. In condensed matter physics, our results suggest a novel means for driving physical systems toward self-organized criticality. In particle theory, they suggest a natural generalization of entropic gravity. In econophysics, they suggest a novel physical definition for wealth based on causal entropy [28,29]. In cosmology, they suggest a path entropy-based refinement to current horizon entropy-based anthropic selection principles that might better cope with black hole horizons. Finally, in biophysics, they suggest new physical measures for the behavioral adaptiveness and sophistication of systems ranging from biomolecular configurations to planetary ecosystems [2,3].
In conclusion, we have explicitly proposed a novel physical connection between adaptive behavior and entropy maximization, based on a causal generalization of entropic forces. We have examined in detail the effect of such causal entropic forces for the general case of a classical mechanical system partially connected to a heat reservoir, and for the specific cases of a variety of simple example systems. We found that some of these systems exhibited sophisticated spontaneous behaviors associated with the human ‘‘cognitive niche,’’ including tool use and social cooperation, suggesting a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.
Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization. In cosmology, the causal entropic principle for anthropic selection has used the maximization of entropy production in causally connected space-time regions as a thermodynamic proxy for intelligent observer concentrations in the prediction of cosmological parameters. In geoscience, entropy production maximization has been proposed as a unifying principle for nonequilibrium processes underlying planetary development and the emergence of life [2–4]. In computer science, maximum entropy methods have been used for inference in situations with dynamically revealed information, and strategy algorithms have even started to beat human opponents for the first time at historically challenging high look-ahead depth and branching factor games like Go by maximizing accessible future game states. However, despite these insights, no formal physical relationship between intelligence and entropy maximization has yet been established. In this Letter, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we show can spontaneously induce remarkably sophisticated behaviors associated with the human ‘‘cognitive niche,’’ including tool use and social cooperation, in simple physical systems. Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.
In a recent study, a group of psychologists decided to see if this differential reaction is simply behavioral, or if it actually goes deeper, to the level of brain performance. The researchers measured response-locked event-related potentials (ERPs)—electric neural signals that result from either an internal or external event—in the brains of college students as they took part in a simple flanker task. The students were shown a string of five letters and asked to quickly identify the middle letter. The letters could be congruent—for instance, MMMMM—or they might be incongruent—for example, MMNMM.
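As a concrete illustration of the paradigm (a sketch, not the study's actual materials), congruent and incongruent flanker strings can be generated like this:

```python
import random

# Illustrative sketch of flanker-task stimuli (not the study's code):
# five-letter strings whose middle (target) letter either matches the
# flanking letters (congruent) or differs from them (incongruent).

def make_trial(congruent, letters=("M", "N")):
    flanker, other = random.sample(letters, 2)
    target = flanker if congruent else other
    return flanker * 2 + target + flanker * 2

def correct_response(stimulus):
    # The participant's task: report the middle letter.
    return stimulus[2]

print(make_trial(True))    # e.g. MMMMM
print(make_trial(False))   # e.g. MMNMM or NNMNN
```

Incongruent trials are harder because the flankers prime the competing response, which is what makes the task a reliable generator of occasional errors.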
While performance accuracy was generally high, around 91 percent, the specific task parameters were hard enough that everyone made some mistakes. But where individuals differed was in how both they—and, crucially, their brains—responded to the mistakes. Those who had an incremental mindset (i.e., believed that intelligence was fluid) performed better following error trials than those who had an entity mindset (i.e., believed intelligence was fixed). Moreover, the stronger the incremental mindset, the larger the error-positivity ERP on error trials relative to correct trials. And the larger the error-positivity amplitude on error trials, the more accurate the post-error performance.
So what exactly does that mean? From the data, it seems that a growth mindset, whereby you believe that intelligence can improve, lends itself to a more adaptive response to mistakes—not just behaviorally but neurally. The more someone believes in improvement, the larger the amplitude of a brain signal that reflects a conscious allocation of attention to errors. And the larger that neural signal, the better the subsequent performance. That mediation suggests that individuals with an incremental theory of intelligence may actually have better self-monitoring and control systems on a very basic neural level: their brains are better at monitoring their own, self-generated errors and at adjusting their behavior accordingly. It’s a story of improved online error awareness—of noticing mistakes as they happen, and correcting for them immediately.
Women who are given examples of females successful in scientific and technical fields don’t experience the negative performance effects on math tests. College students exposed to Dweck’s theories of intelligence—specifically, the incremental theory—have higher grades and identify more with the academic process at the end of the semester. In one study, minority students who wrote about the personal significance of a self-defining value (such as family relationships or musical interests) three to five times during the school year had a GPA that was 0.24 grade points higher over the course of two years than those who wrote about neutral topics—and low-achieving African Americans showed improvements of 0.41 points on average. Moreover, the rate of remediation dropped from 18 percent to 5 percent.
As noted earlier, mitochondrial degradation is a primary culprit in dwindling muscle mass. But recent evidence indicates that exercise can slow down this effect. According to Mark Tarnopolsky, a professor of pediatrics and medicine at McMaster University in Hamilton, Ontario, resistance training activates a muscle stem cell called a satellite cell. In a physiological process known as ‘gene shifting,' these new cells cause the mitochondria to rejuvenate. Tarnopolsky claims that after six months of twice-weekly strength training, the biochemical, physiological and genetic signature of older muscles is "turned back" by a factor of 15 to 20 years. That's significant — to say the least.
Studies involving middle-aged athletes indicate that high intensity exercise protects people at the chromosomal level as well. It appears that exercise stimulates the production of telomerase, an enzyme that allows for the ongoing maintenance of genetic information and cellular integrity. Exercise also triggers the production of antioxidants, which boost the health of the body in general.
And indeed, other studies are successfully linking athleticism to longevity. A recent analysis published in Deutsches Ärzteblatt International of more than 900,000 athletes (ranging in age from 20 to 79) showed that no significant age-related decline in performance appeared before the age of 55. And revealingly, even beyond that age the decline was surprisingly slow; in the 65 to 69 group, a quarter of the athletes performed above average among the 20 to 54 year-old group.
Essentially, exercise helps the body regenerate itself. This likely explains why older athletes are less susceptible to age-related illnesses than their sedentary counterparts. Moreover, ongoing exercise has been shown to preserve lean tissue, even during rapid and substantial weight loss. It also helps to maintain strength and mobility, which can significantly reduce risk of injury and stave off health problems that would otherwise linger.
Even more remarkable is how resistance training can stave off cognitive decline — which is arguably just as important as physical well-being. In a study led by Teresa Liu-Ambrose of the University of British Columbia, women between the ages of 70 and 80 who were experiencing mild cognitive impairment were put through 60-minute classes two times per week for 26 weeks. They used a pressurized air system (for resistance) and free weights, and were told to perform various sets of exercises with variable loads. The results were remarkable: Lifting weights improved memory and staved off the effects of dementia. It also improved the seniors' attention span and ability to resolve conflicts.
[T]here are some common animal behaviors that seem to favor the development of intelligence, behaviors that might lead to brainy beasts on many worlds. Social interaction is one of them. If you're an animal that hangs out with others, then there's clearly an advantage in being smart enough to work out the intentions of the guy sitting next to you (before he takes your mate or your meal). And if you're clever enough to outwit the other members of your social circle, you'll probably have enhanced opportunity to breed..., thus passing on your superior intelligence. ... Nature—whether on our planet or some alien world—will stumble into increased IQ sooner or later.
The fundamental hypothesis of genetic epistemology is that there is a parallelism between the progress made in the logical and rational organization of knowledge and the corresponding formative psychological processes. With that hypothesis, the most fruitful, most obvious field of study would be the reconstituting of human history—the history of human thinking in prehistoric man. Unfortunately, we are not very well informed in the psychology of primitive man, but there are children all around us, and it is in studying children that we have the best chance of studying the development of logical knowledge, physical knowledge, and so forth.
Artificial intelligence is based on the assumption that the mind can be described as some kind of formal system manipulating symbols that stand for things in the world. Thus it doesn't matter what the brain is made of, or what it uses for tokens in the great game of thinking. Using an equivalent set of tokens and rules, we can do thinking with a digital computer, just as we can play chess using cups, salt and pepper shakers, knives, forks, and spoons. Using the right software, one system (the mind) can be mapped onto the other (the computer).
Men give me some credit for genius. All the genius I have lies in this: When I have a subject in hand, I study it profoundly. Day and night it is before me. I explore it in all its bearings. My mind becomes pervaded with it. Then the effort which I have made is what people are pleased to call the fruit of genius. It is the fruit of labor and thought.
I know what you're thinking: we're smarter than bacteria.
No doubt about it, we're smarter than every other living creature that ever walked, crawled, or slithered on Earth. But how smart is that? We cook our food. We compose poetry and music. We do art and science. We're good at math. Even if you're bad at math, you're probably much better at it than the smartest chimpanzee, whose genetic identity varies in only trifling ways from ours. Try as they might, primatologists will never get a chimpanzee to learn the multiplication table or do long division.
If small genetic differences between us and our fellow apes account for our vast difference in intelligence, maybe that difference in intelligence is not so vast after all.
Imagine a life-form whose brainpower is to ours as ours is to a chimpanzee's. To such a species, our highest mental achievements would be trivial. Their toddlers, instead of learning their ABCs on Sesame Street, would learn multivariable calculus on Boolean Boulevard. Our most complex theorems, our deepest philosophies, the cherished works of our most creative artists, would be projects their schoolkids bring home for Mom and Dad to display on the refrigerator door. These creatures would study Stephen Hawking (who occupies the same endowed professorship once held by Newton at the University of Cambridge) because he's slightly more clever than other humans, owing to his ability to do theoretical astrophysics and other rudimentary calculations in his head.
If a huge genetic gap separated us from our closest relative in the animal kingdom, we could justifiably celebrate our brilliance. We might be entitled to walk around thinking we're distant and distinct from our fellow creatures. But no such gap exists. Instead, we are one with the rest of nature, fitting neither above nor below, but within.