Excerpt from Scholarpedia’s article about Affective Computing

Affective Computing is computing that relates to, arises from, or deliberately influences emotion and other affective phenomena. While the terms "emotion" and "affect" are often used interchangeably, there is general acceptance that affect is the broader term, including such states as "interest," which may be either positive or negative. Emotion is a narrower term, a type of affect that is usually negative or positive – including negative states such as "anger" and positive states such as "joy."

Affective Computing addresses the broader sense of the two terms, and contributes to Artificial Intelligence, Pattern Recognition, Machine Learning, Human-Computer Interaction, Cognitive and Affective Sciences, Neuroeconomics, and many other areas where technology is used to detect, recognize, measure, model, simulate, communicate, elicit, handle, or otherwise understand and directly influence emotion and other affective phenomena.

Research in Affective Computing can be organized into five areas, although these are not mutually exclusive:

  1. Technology for sending affective information – displaying or otherwise portraying an affective state, or mediating the expression or communication of emotion, e.g., “modulate graphics, pitch, font, word choice, or physical movements of the robot to make it look happy to see its master” or “help this non-speaking person to control a voice output device with expression”;
  2. Technology for receiving and interpreting affective information – sensing, recognizing, modeling and predicting emotional and affective states, e.g., “The customer looks and sounds happy” or “If this machine chooses to ignore the customer’s frustration now, it might make the customer angrier”;
  3. Methods for computers to respond intelligently and respectfully to handle perceived affective information, e.g., “Strategy 1: Change the voice to sound subdued and humble in response to this person who is upset”;
  4. Computational mechanisms that synthesize or simulate internal emotions, as if the machine had its own emotions, e.g., implementing regulatory and biasing functions known to be present in humans, such as searching for a new strategy when in a state akin to being "frustrated" or choosing a broader, more creative search when in a state akin to being in a "good mood";
  5. Social, ethical, and philosophical issues related to the development and deployment of affective computing technologies, e.g., how should emotional data be treated, say compared to medical or personal preference data, and when (if ever) can one accurately say that a technology has feelings?
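Areas 2 and 3 can be illustrated with a toy rule-based sketch. Everything here is an illustrative assumption – the feature names, thresholds, and strategy table are hypothetical, not drawn from any real system:

```python
# Toy sketch of affect sensing (area 2) and a respectful response (area 3).
# Features, thresholds, and responses are illustrative assumptions only.

def estimate_affect(speech_pitch_var: float, typing_errors: int) -> str:
    """Map crude interaction features to a coarse affect label."""
    if speech_pitch_var > 0.7 and typing_errors > 5:
        return "frustrated"
    if speech_pitch_var < 0.3:
        return "calm"
    return "neutral"

# Strategy table: change system behavior based on the perceived affect,
# e.g., sounding subdued and apologetic toward a frustrated user.
RESPONSES = {
    "frustrated": "I'm sorry this is taking so long. Let me try a simpler path.",
    "calm": "Great, let's continue.",
    "neutral": "Okay.",
}

def respond(speech_pitch_var: float, typing_errors: int) -> str:
    """Choose a response strategy for the estimated affective state."""
    return RESPONSES[estimate_affect(speech_pitch_var, typing_errors)]
```

A real system would replace the hand-set thresholds with a trained classifier over many modalities, but the detect-then-choose-a-strategy structure is the same.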

Affective Computing faces many challenges, likely to require decades of effort, before researchers might succeed in building comprehensive computational models of emotion; nonetheless, there are already useful spin-offs of its application in the commercial world. For example, over 400 million US dollars were spent in 2006 on call center speech analytics software, including software that automatically detects whether customers sound upset so that those calls can be flagged to learn how to handle them better. While current affective pattern analysis tools do not perfectly detect states such as "upset", they are useful in helping detect a smaller subset of "potentially upset" cases for a person to search. Affective computing can thus aid businesses in boosting understanding of how to improve customer service, even while the computer does not have any comprehensive model for understanding customer emotion.

One of the challenges in Affective Computing research is how to deal robustly with naturally occurring affective information, which is usually not in either a pure or static form. Real-world affective data changes continuously, taking on meaning over time and in context. A smile followed by a headshake and raised eyebrows can have a different meaning than the same smile followed by a series of head nods and direct gaze. Affective technology is built to recognize complex affective-cognitive states by jointly analyzing head and facial movements and tracking how they change over time such as when a look of interest morphs into one of concentration, then into confusion, and perhaps from there into frustration or anger and other mixed feelings. Moreover, such temporal trajectories interact with gestures and with social and cultural circumstances, and with relationships where prior expectations and norms have been established. A person might express their pleasure very differently around children than around professional colleagues, and quite differently when meeting with a colleague in front of customers, than when meeting later with that same colleague over karaoke. While many basic facial expressions occur similarly across cultures, the rules for when they are displayed change with cultural, social, and relational circumstances. Decoding how and when these changes occur is part of making technology smart about handling and helping with affective communication.
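One common first step toward tracking how expressions change over time is temporal smoothing of frame-by-frame estimates, so a single mislabeled frame does not flip the interpreted state. The sketch below is a minimal illustration (the labels and window size are assumptions, not any particular system's design):

```python
from collections import Counter

def smooth_labels(frame_labels, window=5):
    """Majority-vote each frame's label over a centered sliding window.

    frame_labels: per-frame expression labels from some upstream recognizer
    (hypothetical here); window: odd window size for the vote.
    """
    half = window // 2
    smoothed = []
    for i in range(len(frame_labels)):
        lo, hi = max(0, i - half), min(len(frame_labels), i + half + 1)
        smoothed.append(Counter(frame_labels[lo:hi]).most_common(1)[0][0])
    return smoothed

# A single spurious "confusion" frame inside a run of "interest" is voted away:
seq = ["interest"] * 4 + ["confusion"] + ["interest"] * 4
```

Richer models (e.g., hidden Markov or other sequence models) additionally learn which transitions – such as interest to confusion to frustration – are plausible trajectories rather than noise.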

Displaying, communicating, or mediating expression of affect

Technology can easily give the appearance of having emotion without having the components that traditionally accompany biological emotion. For years, Apple Macintosh computers displayed a smile when booting successfully, and a sad expression when not booting successfully, even though the computer had no underlying feelings of happiness or sadness. Artists can masterfully craft robotic dogs, animated characters, and other technologies to look, sound, and behave as if they have emotions. Technology that sends affective information – portraying affect through some modality – is easy to build. However, the hardest challenge in real-time interaction is figuring out when to communicate which emotion. Without understanding social display rules and other important cues about the interaction context, technology is quite likely to irritate people with its emotional outputs. For example, the Microsoft Windows operating system used to play a triumphant tune when the system booted, which fit the mood well when a new machine booted successfully. However, when a person experienced the triumphant tune right after having to reboot because of a system crash, this jolly tone was annoying. Interestingly, rebooting a Mac and encountering its smile does not usually have the same irritating effect; in fact, people are commonly seen to smile after they have made a mistake and are trying to redeem themselves and still appear likeable.

With affect-communicating technology, people who rely on text-to-speech devices can be given the choice to have some of their affective parameters (e.g., typing pressure, heart rate, skin conductance, or some combination of these) automatically modulate their synthetic speech output. For example, physiological arousal might be used to modulate pitch or loudness with a single on/off switch controlled by the typist, as opposed to having to annotate each word and phrase directly, which would be arduous. Sometimes affective technologies can be used to help people who are non-speaking, non-typing, and unable to express emotion through the usual nonverbal channels. Technology can sense physiological or other parameters and map these to an output that the disabled person can control; these might be used to communicate states such as "I'm very calm" or "I'm overloaded" or "I like this."
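The arousal-to-prosody mapping described above might be sketched as follows. The normalized arousal range, baseline pitch, and 50% scaling factor are illustrative assumptions, not parameters of any real device:

```python
def modulated_pitch(base_pitch_hz: float, arousal: float, enabled: bool = True) -> float:
    """Scale synthetic-speech pitch by physiological arousal.

    arousal is assumed normalized to [0, 1] (e.g., derived from skin
    conductance); the single `enabled` flag is the typist's on/off switch,
    so no per-word annotation is needed.
    """
    if not enabled:
        return base_pitch_hz
    arousal = min(max(arousal, 0.0), 1.0)          # clamp to the assumed range
    return base_pitch_hz * (1.0 + 0.5 * arousal)   # up to +50% pitch at full arousal
```

The same structure could drive loudness or speaking rate instead of pitch; the key design point is the coarse, user-controlled switch rather than fine-grained manual markup.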

Sensing, recognizing, modeling, and predicting affective state

Emotion researchers have traditionally used questionnaires, human observation, and physiological sensing to gather data for assessing emotional state. Affective Computing expands these options, enabling new kinds of real-time, automatic, mobile, and even less obtrusive measurements, giving technology the ability to read affective cues from complex patterns that include tone of voice, language or text, facial expressions, posture, gestures, autonomic nervous system measures, and whatever combinations of modalities that people are comfortable with having sensed.

Advances in affective technologies allow for more natural sensing, measurement, and modeling of emotion outside the laboratory. For example, small wearable sensors, cameras, and microphones can measure affective information in a social or other interactive setting without having to interrupt the interaction to ask "How do you feel right now?" Today people usually have to fill out questionnaires to express their feelings, and these are problematic because of levels of cognitive bias that interfere – e.g., "How do I convert my 10 different feelings about this to one number?" and "What do I feel like I should say that I felt, that they want to hear?" Feelings are not simply discrete cognitions with a static label: they have dynamic forms that can change throughout an experience. Questionnaires are either interruptive (to capture the feelings of the moment, which is likely to include frustration if frequently interrupted) or are filled out after the event, and then are more affected by the feelings at the moment of filling them out, which is not usually the moment the questionnaire is asking about. In contrast, affect sensing technology provides opportunities to measure and communicate dynamically changing emotion while an experience is happening, without interrupting it. New affect-sensing methods improve the ecological validity of sampling affective information for a variety of scientific purposes.

Validity is a special challenge in emotion research because emotions change with what is truly meaningful and significant to a person, and a laboratory experiment is rarely as meaningful and significant to a participant as are things in the person’s real life. Technologies that sample data from real-life natural experiences increase the likelihood of developing scientific theories of emotion that fit real life.

Responding intelligently and respectfully to perceived emotions

When a person reveals affective information, the recipient can choose ways to respond that may be helpful or harmful. For example, if a person lets a computer (or robot or agent) know that its action is annoying then the computer could try to recognize its gaffe and take steps to not repeat the annoyance. The robot or agent could issue an acknowledgement of the frustration it has caused, and perhaps even apologize, and see if this helps undo some of the annoyance. Sometimes it might be appropriate for a robot or agent to display an empathetic or caring response. While some people object to a computer expressing feelings when it does not actually have them, it is possible for a computer to come across as empathizing, and for it to appear caring without it pretending to feel anything a person feels. Studies suggest that computer-provided empathy can reduce frustration and stress and can impact perceptions of caring; these perceptions matter greatly in applications ranging from customer service to education and health-care.

Synthesizing and simulating emotions

Emotion-like mechanisms inside a machine can perform functions that may or may not appear emotional. For example, an emotion model within the Hasbro/iRobot toy doll My Real Baby evaluates inputs and causes the doll's facial expressions and vocalizations to change, making the doll appear to have emotions. Thus, we say an internal emotion model synthesizes elements of emotion; that is, it creates an internal state that is capable of triggering the outward appearance of having an emotion. A model may alternatively not trigger any outward appearance, but may only change what happens inside. In neither case do the results imply that the doll has emotions like a person does: it is simply synthesizing some of the components of processes that might be involved in animal or human emotion. In fact, it might synthesize many aspects of emotion without ever synthesizing anything that approximates true human feelings. Today, scientists do not know how to give computers the rich kinds of feelings and experiences of emotion that people have.
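The distinction between an internal synthesized state and its optional outward display can be sketched with a toy model. The state variable, decay rate, and expression mapping below are illustrative assumptions, not the actual My Real Baby model:

```python
class ToyEmotionModel:
    """Minimal internal 'emotion' state: evaluates inputs, may trigger a display.

    This synthesizes one component of emotion (an internal evaluative state
    that biases outward behavior); it implies nothing about the machine
    actually feeling anything.
    """

    def __init__(self, decay: float = 0.9):
        self.valence = 0.0   # internal state in [-1, 1]; exists even if never shown
        self.decay = decay   # old state fades as new appraisals arrive

    def evaluate(self, stimulus: float) -> None:
        """Fold a new appraisal (e.g., +1 for petting, -1 for dropping) into the state."""
        self.valence = max(-1.0, min(1.0, self.decay * self.valence + stimulus))

    def expression(self) -> str:
        """Optional outward display; the internal state changes either way."""
        if self.valence > 0.3:
            return "smile"
        if self.valence < -0.3:
            return "frown"
        return "neutral"
```

Note that the internal state could just as well bias internal behavior only (e.g., switching search strategies when "frustrated") with `expression()` never called – the same mechanism, with no outward appearance at all.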

Social, ethical, and philosophical issues

Affective technologies enable a wide variety of interesting new and beneficial advances; however, technological power to sense, measure, monitor, communicate, influence, and manipulate emotion could also be used for harmful or otherwise undesirable purposes. Any new technological capability raises social, ethical, and philosophical questions, and the fifth area of affective computing research attempts to address these with respect to the new capabilities this technology brings.

Given technology that senses and interprets affective information, how can you protect the privacy of people who do not want their information sensed? This research question is a cousin to historical questions governing the use of lie detectors, or polygraphs, which typically sense physiological stress that can accompany a person's effort to deceive. The US government currently restricts the use of polygraphy in the workplace; however, the same government is actively funding research to develop technology to recognize people's emotions in public places, including technology that would operate without people knowing that they are being sensed (e.g., through remote thermography, laser Doppler vibrometry, and other detection techniques that work at a distance in order to try to detect potential terrorists in airports and other large public areas). Note that researchers in most government-funded universities and research institutions are prevented by institutional review boards from sensing information from people without getting their informed consent. The deployment of affect-sensing technology that is used without people's consent violates the wishes of many individuals; as such, it not only violates the fundamental principle of affective computing research to respect affective preference, but it also violates current standard ethical practice. While it is possible that affect sensing might be acceptable publicly in some places (e.g., with a robot's vision system, where such vision would be expected), there are other places where it might be unwelcome.

The sensing of people's affective data, with the possibility of its storage and real-time or delayed transmission to others, raises many questions concerning the use and possible misuse of affective information. For example, a driver might like to have her car navigation system sense and adjust its voice to her mood, which can increase driving safety; however, she might wish to prevent the data from being given to her insurance company, which might raise the price of her policy if it finds out she frequently gets behind the wheel when angry. People may also object to being pummeled with ads or product promotions just because they showed interest (and the computer recognized it) when they were glancing at a billboard. Great care must be taken to respect people's wishes about what is and is not sensed, stored, or shared. Designers of affective computing systems need to clearly communicate whether collected information could be associated with a person's identity, who, if anybody, it might be shared with, and what benefits and harms might occur from sharing this information.

Technology that "has emotional state" raises philosophical questions about what it means to have feelings. While computers have been built to have mechanisms inspired by biological functions of emotion, these mechanisms, to date, remain different from feelings in the sense that a person experiences them. When (if ever) can one accurately say that a robot or a piece of software has feelings, in the same sense that we talk about human feelings?

Affective computing researchers, while addressing technical challenges of making systems that can send, sense, intelligently handle and simulate affective information, need to not fall prey to the common scientific tendency to make something just because it can be done. An ever-present challenge is to work together with people from diverse backgrounds, accepting and giving constructive criticism on new findings, and seeking public input to steadily discern what should be done in developing technology to improve human experience.

Emotion Technology – Roz Picard at TEDxSF

Rosalind Picard

Rosalind Picard is Co-founder, Chief Scientist, and Chairman at Affectiva

Director of Affective Computing Research, Co-Director, Autism Communication Technology Initiative, Co-Director, Things That Think Consortium.
MIT Media Lab.

@rosalindpicard


HOMO SAPIENS 3.0

Inexorably, our body is already technology.
We send telepathic messages to Twitter with neural interfaces.
In the future, our bodies will change so that we can travel to the stars.
We are mutants, we are cyborgs.

More intelligence, more fun, stronger bodies, greater control over emotions, more life. Of course! Science versus ignorance.

Cognitive amplification.
Ultra-intelligence.
Extension of the senses.
The indefinite prolongation of life.
Sex in zero gravity.
Built-in universal machine translation.
Access to altered states of consciousness. Escape velocity.
Space as a destination. Personal neuromorphic engineering.
Artificial gills.
Enhanced posthuman bodies. Homo Sapiens version 3.0.
Operational download of manuals for piloting helicopters.
Expanding our living universe into virtual realities.
Becoming software.
Built-in nuclear batteries.
Lives with greater reach.
A better world, a Cyborg world.

SERVANDO CARBALLAR AND ALEJANDRO SACRISTÁN

By LVDLC