March 24, 2021
Across the thousands of different languages spoken by humans, the way we use words to represent different colors is remarkably consistent. For example, many languages have two distinct words for red and orange, but no language has many distinct, commonly used words for various shades of orange. (Of course, if you visit a paint store, you’ll see dozens of esoteric names for different shades of orange. But these are rarely used in daily conversation.)
Using mathematical tools, linguistic researchers have shown that this consistency in color naming arises because humans optimize language to balance the need for accurate communication against a general biological drive to minimize effort. Having extra color words — cantaloupe or burnt sienna, for example — adds complexity without significantly improving how effectively people communicate with each other.
Facebook AI has now shown that cutting-edge AI systems behave similarly. When two artificial neural networks are tasked with creating a way to communicate with each other about what colors they see, they develop systems that balance complexity and accuracy much as people do.
We also found that in order for the color “language” used by these neural networks to be an optimal solution, it must use discrete symbols rather than continuous sounds. This leads to a fascinating speculation about how we communicate. Is it possible that our languages can be optimally structured only if they are made up of discrete symbols rather than, say, continuous whistling?
We built two neural networks, a Speaker and a Listener, and tasked them with playing the “communication game” illustrated below. In each round of the game, the Speaker sees one color chip from a continuous color space and then produces a symbol (which can be considered a “word”). The Listener sees the same color chip but also a different one, known as a distractor.
The Listener receives the word produced by the Speaker and then tries to point to the correct color chip. Initially, the Speaker produces words at random, but eventually these naturally come to denote areas of the color space. We repeated this experiment many times while varying the difficulty of the task by making target and distractor chips more similar or less so. These variations produced a number of different color-naming “vocabularies.”
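The mechanics of one round of this game can be sketched in a few lines of NumPy. Everything here (the linear Speaker and Listener, the vocabulary size, RGB-style color vectors, the similarity-based pointing rule) is an illustrative assumption, not Facebook AI's actual architecture:

```python
# Minimal sketch of one round of the Speaker-Listener color game.
# All model details here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
V = 8          # vocabulary size: number of discrete "words"
D = 3          # colors represented as 3-d vectors (e.g., RGB in [0, 1])

# Speaker: a linear map from a color chip to logits over words.
W_speak = rng.normal(scale=0.1, size=(V, D))
# Listener: one embedding per word; it points at the chip whose color
# is most similar to the received word's embedding.
W_listen = rng.normal(scale=0.1, size=(V, D))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def play_round(target, distractor):
    """Speaker names the target chip; Listener tries to point at it."""
    p_word = softmax(W_speak @ target)
    word = int(rng.choice(V, p=p_word))     # a discrete symbol
    chips = np.stack([target, distractor])
    scores = chips @ W_listen[word]         # Listener's preference per chip
    guess = int(np.argmax(scores))
    reward = 1.0 if guess == 0 else 0.0     # chip 0 is the target
    return word, guess, reward

target = rng.uniform(size=D)
distractor = rng.uniform(size=D)
word, guess, reward = play_round(target, distractor)
print(word, guess, reward)
```

At initialization the Speaker's words are effectively random, as described above; training (e.g., by rewarding correct guesses) is what gradually makes words denote stable regions of color space.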
At the end of training, we analyzed these vocabularies and consistently found that the neural networks developed color terms with properties similar to those of human languages. In particular, when organizing the resulting systems according to quantitative measures of complexity and accuracy—as we do in the chart below—we find that the distribution of the neural-network languages is virtually identical to that of real human languages. Moreover, both types of languages are near the boundary that formally defines the set of possible optimal balances between complexity and accuracy (the black line in the figure).
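The post does not spell out the quantitative measures. In this literature, a standard choice (an assumption on our part, not necessarily Facebook AI's exact formula) is to measure a naming system's complexity as the mutual information between color chips and the words used for them. A toy sketch under that assumption:

```python
# Toy sketch: complexity of a discrete naming system measured as the
# mutual information I(C; W) between chips C and words W. This measure
# is borrowed from the efficiency literature as an assumption; the post
# does not specify the exact formulas used in the study.
import numpy as np

def complexity_bits(p_chip, q_word_given_chip):
    """I(C; W) in bits, given a chip prior p(c) and naming policy q(w | c)."""
    joint = p_chip[:, None] * q_word_given_chip      # p(c, w)
    p_word = joint.sum(axis=0)                       # marginal p(w)
    indep = p_chip[:, None] * p_word[None, :]        # p(c) p(w)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / indep, 1.0)
    return float((joint * np.log2(ratio)).sum())

# Four equally likely chips, two words: chips {0, 1} -> word 0, {2, 3} -> word 1.
p_chip = np.full(4, 0.25)
q = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
complexity = complexity_bits(p_chip, q)
print(complexity)  # 1.0 bit: the system makes exactly one binary distinction
```

A richer vocabulary raises this number, while accuracy reflects how well the Listener can recover the intended chip; the optimal frontier traces the best achievable accuracy at each complexity level.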
In further experiments, we removed various components of the simulation. We found that, crucially, when we allowed neural networks to communicate through continuous symbols instead of discrete ones, the optimal trade-off between complexity and accuracy no longer emerged. The networks still succeeded at the communication game, but their systems became highly inefficient.
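One intuition for why a continuous channel undermines efficiency (our illustration, not the study's analysis): a continuous message can simply be the color vector itself, which solves the game perfectly while encoding arbitrarily fine distinctions, the opposite of a compact vocabulary.

```python
# Illustration (an assumption, not the paper's experiment): with a
# continuous channel, the Speaker can transmit the target color directly,
# and the Listener wins by nearest-neighbor matching with no categories.
import numpy as np

rng = np.random.default_rng(1)

def play_continuous(target, distractor):
    message = target  # identity "encoding": maximally accurate, maximally complex
    chips = np.stack([target, distractor])
    guess = int(np.argmin(np.linalg.norm(chips - message, axis=1)))
    return guess

wins = 0
for _ in range(100):
    t, d = rng.uniform(size=3), rng.uniform(size=3)
    wins += (play_continuous(t, d) == 0)
print(wins)  # 100: every round succeeds without forming any color terms
```

Discrete symbols rule out this degenerate strategy, which may be part of why the efficient trade-off emerges only in the discrete setting.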
Language is perhaps humanity’s most distinctive feature, but we still have a poor understanding of many of its core properties. Our study shows that advanced AI models, such as the ones developed at Facebook, are useful not only for practical applications, but also as experimental tools to answer scientific questions about human language (and cognition in general).
Recent research in linguistics and cognitive science points to the fact that language is a highly efficient system—but how did it evolve to that state, and why? By studying and dissecting computational models that mimic natural behavior, as in our research, we can shed light on the precise conditions under which such an efficient communication system is likely to arise in nature.
The results are also exciting from the perspective of building AI systems that are able to communicate with us through natural language, as they show that neural networks trained to collaborate on a common task can develop communication systems that share core properties of human language.
Communicating artificial neural networks develop efficient color-naming systems