
Affective Manifolds: Modeling Machine’s Mind to Like, Dislike, Enjoy, Suffer, Worry, Fear, and Feel Like A Human

Benyamin Ghojogh
Abstract

After the development of many machine learning and manifold learning algorithms, it may be a good time to put them together to build a powerful mind for a machine. In this work, we propose affective manifolds as components of a machine's mind. Every affective manifold models a characteristic group of the mind and contains multiple states. We define the machine's mind as a set of affective manifolds. We use a learning model to map input signals to the embedding space of an affective manifold; using this mapping, a machine or robot takes an input signal and can react to it emotionally. We use deep metric learning with a Siamese network and propose a loss function for affective manifold learning. We define margins between states based on psychological and philosophical studies. Using triplets of instances, we train the network to minimize the variance of every state and to achieve the desired distances between states. We show that affective manifolds can have various applications in machine-machine and human-machine interactions. Simulations are also provided to verify the proposed method. It is possible to have as many affective manifolds as required in a machine's mind; the more affective manifolds the mind has, the more realistic and effective it can become. This paper opens the door; we invite researchers from various fields of science to propose more affective manifolds to be inserted into the machine's mind.

Keywords: affective computing, affective manifold, machine’s emotion, robot’s emotion, computer-computer interaction, human-computer interaction, manifold learning, dimensionality reduction

1 Introduction

Now that many machine learning algorithms have been proposed, it is time to combine them and put them together to build a powerful machine's mind. On the one hand, this makes particular sense when we notice that a human is a complex machine, so it may be worth attempting to model the mind of this machine. On the other hand, the development of technology and science encourages us to improve machines. We never know; machines may coexist with humans, as intelligent semi-human robots, in the near future.

Modeling a machine's mind is related to affective computing. Affective computing [14] generally refers to two broad research areas: (1) emotion recognition from signals such as facial images, and (2) how a machine feels and has emotions. The concept of a machine's mind and emotions, addressed in this paper, lies in the latter research area of affective computing. Moreover, a machine's feelings can be used in human-computer and computer-computer interactions; this will be discussed further in Section 4.

In this paper, we propose and define the concept of affective manifolds for modeling the mind of a machine or robot. The proposed tool allows a machine to possess a mind and feel like a human, which it needs in order to seem realistic to humans. A machine's mind can be considered a set of affective manifolds (this will be defined formally in Section 2) and can contain as many affective manifolds as required. The more affective manifolds researchers gradually design, the better and more complete the machine's mind can become; eventually, it can become complex enough to resemble a human's mind. Consequently, we invite researchers to design more and better affective manifolds to insert into the machine's mind.

The remainder of this paper is organized as follows. We define and introduce the machine’s mind and affective manifolds, as well as some examples, in Section 2. One possible machine learning algorithm for learning the affective manifolds is proposed in Section 3. Section 4 introduces some possible applications of the proposed affective manifolds. Simulations are provided in Section 5. Finally, Section 6 concludes the paper and enumerates the possible future directions.

2 Machine’s Mind and Affective Manifolds

2.1 Modeling Alive Machine

By imitating any living creature, we can model a machine. For example, we can have vegetable-like, animal-like, and human-like machines, which can play the roles of robotic plants, animals, and humans. These machines have some advantages over biological organisms. For example, a living vegetable-like machine does not wilt, and a living animal-like machine, such as a robot dog, does not die and can be taught very quickly.

A living human-like machine is more robust to damage and can work in dangerous environments. Moreover, it does not get tired from working, although, as we will show in this paper, we can add the ability to get tired or feel pain to machines. It is possible to have multiple types of human-like machines with different abilities, where worker robots do not possess the ability to feel pain and tiredness but living robots have these characteristics so they can live among humans. Nonetheless, this may raise the ethical dilemmas of racism/distinction among robots and between humans and robots, which can be serious problems in the future. This reminds us that affective computing usually faces ethical challenges [15].

2.2 Affective Manifolds

Diffusion MRI and fMRI experiments [9] have shown that different parts of the brain are activated by various mood characteristics, such as joy, suffering, love, and fear. Accordingly, we can define the mind as a set of affective manifolds, each of which corresponds to a group of characteristics of the mind. This models the different parts of the machine's mind. The affective manifolds play the role of latent variables for the characteristics of mind. Some example affective manifolds will be mentioned in Section 2.4.

Definition 1 (Affective Manifold)

The characteristics of mind can be grouped into multiple categories. An affective manifold $\mathcal{M}$ is a manifold corresponding to a characteristic group. The machine learns this manifold in order to distinguish between the states of that characteristic group. If $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ correspond to two different states $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ in a characteristic group, represented by the affective manifold $\mathcal{M}$, they are expected to fall far from each other on the manifold, compared to instances of the same state:

$(\boldsymbol{x}_{1}\in\mathcal{S}_{1}\in\mathcal{M})\wedge(\boldsymbol{x}_{2},\boldsymbol{x}_{3}\in\mathcal{S}_{2}\in\mathcal{M})\wedge(\mathcal{S}_{1}\cap\mathcal{S}_{2}=\varnothing)$
$\implies\mathbb{E}\big[\|\boldsymbol{x}_{2}-\boldsymbol{x}_{3}\|_{2}\big]\ll\mathbb{E}\big[\|\boldsymbol{x}_{2}-\boldsymbol{x}_{1}\|_{2}\big], \quad (1)$

where $\mathbb{E}[\cdot]$ denotes the expected value. The affective manifold corresponding to a characteristic group is a collection of $s$ disjoint states:

$\mathcal{M} := \bigcup_{i=1}^{s} \mathcal{S}_{i}, \quad (2)$
$\mathcal{S}_{i}\cap\mathcal{S}_{j}=\varnothing, \quad \forall i,j\in\{1,\dots,s\},\ i\neq j, \quad (3)$

where it is theoretically possible to have $s=\infty$. In other words, an affective manifold is partitioned into its states.

An example affective manifold is the love manifold, containing the states "love", "like", "dislike", and "hate". More detailed examples of affective manifolds will be provided in Section 2.4.
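To make Definition 1 concrete, the following minimal Python sketch (function and variable names are hypothetical) estimates the mean intra-state and inter-state distances of Eq. (1) from labeled points on a manifold; on a well-learned affective manifold, the former should be much smaller than the latter.

```python
# A minimal sketch checking the property in Eq. (1): the mean intra-state
# distance should be much smaller than the mean inter-state distance.
import numpy as np

def mean_intra_inter_distances(embeddings, states):
    """embeddings: (n, p) points on the manifold; states: length-n labels."""
    intra, inter = [], []
    n = len(states)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            (intra if states[i] == states[j] else inter).append(d)
    return np.mean(intra), np.mean(inter)

# Example: two well-separated states, e.g., "like" around 0, "dislike" around 5.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
s = np.array([0] * 20 + [1] * 20)
d_intra, d_inter = mean_intra_inter_distances(x, s)
print(d_intra, d_inter)  # d_intra should be far smaller than d_inter
```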

Definition 2 (Affective Subspace)

If we use a linear method for learning an affective manifold, the affective manifold is linear. In this case, the affective manifold is reduced to an affective subspace. The affective subspace, which is a special case of the affective manifold, can be modeled as the linear column space of a projection matrix $\boldsymbol{U}\in\mathbb{R}^{d\times p}$ from a $d$-dimensional input space to a $p$-dimensional subspace, where $p\leq d$. Therefore, the affective subspace belongs to the Grassmannian manifold $\mathcal{G}(p,d)$, which is the space of all $p$-dimensional linear subspaces of the $d$-dimensional vector space. The Grassmannian manifold $\mathcal{G}(p,d)$ can be seen as the quotient space of the Stiefel manifold $\mathcal{S}t(p,d)$ [1]:

$\mathcal{G}(p,d) := \mathcal{S}t(p,d)/\mathcal{S}t(p,p), \quad (4)$

where the Stiefel manifold is defined as the set of orthogonal matrices:

$\mathcal{S}t(p,d) := \{\boldsymbol{U}\in\mathbb{R}^{d\times p}\,|\,\boldsymbol{U}^{\top}\boldsymbol{U}=\boldsymbol{I}\}. \quad (5)$

Any linear dimensionality reduction method, such as Fisher discriminant analysis [7], can be used to learn an affective subspace.
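As a concrete illustration, the following minimal Python sketch (dimensions hypothetical) constructs a point on the Stiefel manifold of Eq. (5) via QR orthogonalization and uses its column space as an affective subspace; this is one simple way to obtain a valid projection matrix, not the only one.

```python
# A minimal sketch of producing a point on the Stiefel manifold St(p, d) and
# verifying the orthogonality condition of Eq. (5); the column space of U is
# then an affective subspace, i.e., a point on the Grassmannian G(p, d).
import numpy as np

d, p = 10, 2                            # input and subspace dimensionalities
rng = np.random.default_rng(0)
A = rng.standard_normal((d, p))
U, _ = np.linalg.qr(A)                  # QR gives orthonormal columns: U in St(p, d)
assert np.allclose(U.T @ U, np.eye(p))  # Stiefel condition of Eq. (5)

x = rng.standard_normal(d)              # a d-dimensional input signal
y = U.T @ x                             # its p-dimensional affective-subspace embedding
```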

2.3 Machine’s Mind

Definition 3 (Machine’s Mind)

The characteristics of mind, in terms of mood, can be grouped into multiple categories. Let $\mathcal{M}_{i}$ denote the $i$-th manifold, corresponding to the $i$-th characteristic group. The machine's mind, denoted by $\mathcal{Q}$, is a set of $q$ affective manifolds:

$\mathcal{Q} := \bigcup_{i=1}^{q} \mathcal{M}_{i}, \quad (6)$

where every affective manifold is responsible for a characteristic group, and it is theoretically possible to have $q=\infty$.

An illustration of the parts of a machine's mind is shown in Fig. 1. Eq. (6) simulates the human brain, different parts of which are activated for different tasks, according to diffusion MRI and fMRI experiments [9].

Remark 4

In a machine's mind, it is possible to have overlapping affective manifolds, where some states are shared between characteristic groups. Therefore, for two states from two affective manifolds, i.e., $\mathcal{S}_{1}\in\mathcal{M}_{1}$ and $\mathcal{S}_{2}\in\mathcal{M}_{2}$, we can have either $\mathcal{S}_{1}\cap\mathcal{S}_{2}=\varnothing$ or $\mathcal{S}_{1}\cap\mathcal{S}_{2}\neq\varnothing$. This means we can have either $\mathcal{M}_{1}\cap\mathcal{M}_{2}=\varnothing$ or $\mathcal{M}_{1}\cap\mathcal{M}_{2}\neq\varnothing$ for every two affective manifolds $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ in a machine's mind.

An example of two overlapping affective manifolds is where one manifold contains the states "frown", "neutral lips", "smile", and "laugh", and the other manifold includes the states "feared", "worried", "enjoy", and "laugh". The state "laugh" is shared between the two characteristic groups of these affective manifolds.

Figure 1: The pipeline of sensing the signal by the machine/robot and reacting emotionally to it.

2.4 Example Affective Manifolds

We can define various affective manifolds; as many as required can be defined and used. Some example affective manifolds are provided in the following.

2.4.1 Affective Love Manifold

An example affective manifold is the affective love manifold to model the loving characteristic group. This manifold can include the states “hate”, “dislike”, “like”, and “love”. This enables the machine to love, like, dislike, and hate different situations, objects, humans, or other machines.

2.4.2 Affective Emotion Manifold

Another example affective manifold is the affective emotion manifold to model the emotion characteristic group. This manifold can include the states “angry”, “sad”, “neutral”, “glad”, and “excited”. This enables the machine to have emotions and feelings like a human.

2.4.3 Affective Joy Manifold

Another example for affective manifolds is the affective joy manifold containing the states “suffered”, “feared”, “worried”, “enjoying”, “relaxed”, and “bored”. This enables the machine to have different states/levels of joy in different circumstances.

For two reasons, it may be plausible to insert the ability of feeling pain or suffering into the mind of a machine. At first glance, pain may seem a negative characteristic which is not desired in an ideal machine. However, biology has shown us that pain has helped humans and other living creatures survive by recognizing and avoiding danger, sickness, and injuries; likewise, pain can prevent robots from being destroyed. Another benefit of pain/suffering is that it gives meaning to happiness; if all feelings were happiness, there would be no concept of happiness at all. In other words, pain and suffering, as well as happiness and joy, give living creatures a mixture of feelings for life. For these reasons, giving machines the ability to feel pain may be plausible.

It is also noteworthy that adversarial attacks [3] and/or computer viruses [10] can model disease for machines. Anti-virus software can be developed for defeating a machine's diseases; developing defense algorithms for robustness to adversarial attacks can also defeat them.

2.4.4 Affective Belief Manifold

Another example of affective manifolds is the affective belief manifold, where the machine believes in one or several of the existing belief systems. Like humans, the machine may change its beliefs gradually over time. The belief manifold may have $b$ existing belief systems, an agnosticism state, and an atheism state. It is also possible to have other belief systems and to associate agnosticism and atheism with some sets of beliefs. The number of belief systems in the machine's mind may be infinite. The affective belief manifold can enable machines to have various beliefs like humans.

3 Affective Manifold Learning

We can use any dimensionality reduction and manifold learning model, such as FDA [7], locally linear embedding [16, 5], or deep metric learning [6], for learning every affective manifold. It is also possible to use mathematical techniques such as differential geometry [11] or Riemannian optimization [1] to learn the affective manifolds. In this section, we provide one of the possible algorithms for affective manifold learning.

3.1 Dimensionality of the Affective Manifold

We can use different learning methods for different manifolds depending on their complexities. The dimensionality of every affective manifold or subspace, denoted by $p$, depends on the complexity of its characteristic group: the more complex the characteristic group, the larger the required dimensionality of the manifold. Both the complexity of the states and the number of states have an impact on this complexity:

  • The more states a characteristic group has, the more complex the affective manifold is, so a larger dimensionality is required for the affective manifold.

  • The more complex the states of a characteristic group are, the more complex the affective manifold is, so a larger dimensionality is required for the affective manifold.

3.2 The Learning Model

The goal is to learn a nonlinear mapping $f$, with learnable parameters $\theta$, from a $d$-dimensional input space to a $p$-dimensional embedding space:

$f(\theta): \mathbb{R}^{d}\rightarrow\mathbb{R}^{p}. \quad (7)$

The $d$-dimensional data can be any signal captured by any sensor of the machine/robot. The input signal stimulates one of the states in the affective manifold, in the same way that an input signal to the human brain stimulates a state in the human mind (for example, we can become sad by looking at a picture or listening to a piece of music). The input signals are measured by the sensors of the machine, which can model the five senses of a human. For example, the input signals can be images observed by the cameras (eyes) of a robot, sound signals heard by its microphones, or any other signals. The $p$-dimensional embedding space is the affective manifold. Therefore, the mapping (7) maps the input signal to the affective manifold in the machine's mind. The pipeline of mapping the input signals to the affective manifolds in the machine's mind is illustrated in Fig. 1.

We can learn an affective manifold using deep metric learning [6]. One useful network for deep metric learning is the Siamese network [2], containing two or three sub-networks sharing their weights. The mapping $f$ can be a Siamese neural network whose weights are the learnable parameters $\theta$.
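A minimal PyTorch sketch of such a Siamese setup is given below; the layer sizes are hypothetical placeholders, and the point is only that a single backbone with shared parameters $\theta$ embeds all three instances of a triplet, as in Eq. (7).

```python
# A minimal sketch of the mapping f in Eq. (7) used as a Siamese network:
# one backbone with shared weights embeds anchor, positive, and negative.
import torch
import torch.nn as nn

class SiameseBackbone(nn.Module):
    def __init__(self, d, p):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(d, 128), nn.PReLU(),
            nn.Linear(128, p),          # p-dimensional affective manifold
        )

    def forward(self, x_a, x_p, x_n):
        # Weight sharing: the same f (same theta) maps all three inputs.
        return self.f(x_a), self.f(x_p), self.f(x_n)

net = SiameseBackbone(d=784, p=2)
x = torch.randn(32, 784)
z_a, z_p, z_n = net(x, x, x)            # three (32, 2) embeddings on the manifold
```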

3.3 Margins Between the States

Based on psychological experiments, philosophical studies, and biological facts, we can define the desired relative distances (or margins) between the states of an affective manifold. For this, we define the relative margins between the states in the embedding space of the manifold. Note that the actual distances are not important; only the relative distances matter. Scaling the actual distances up or down can only affect the speed of convergence of training. In addition to the scale of distances, rotation and mirroring are also unimportant in manifold learning because the relative distances are preserved under these transformations.

Figure 2 depicts an example of the desired relative distances between the states of an affective manifold. Section 5 will provide examples of the relative distances of states in simulations. Note that defining the margins between the states of an affective manifold requires empirical social experiments, biological facts, psychological studies, or philosophical arguments. Some psychological and philosophical analyses used for determining the margins will be discussed in Section 5.

If an affective manifold has $s$ states, we define a symmetric margin matrix $\boldsymbol{M}\in\mathbb{R}_{+}^{s\times s}$ whose elements are the desired margins between the states. Let $m_{i,j}$ denote the $(i,j)$-th element of the matrix $\boldsymbol{M}$, i.e., the desired margin between the $i$-th and the $j$-th states of the affective manifold.
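As a concrete illustration, the following minimal Python sketch builds a symmetric margin matrix from pairwise margins, using the linear love-manifold margins of Section 5.2.1 as example values.

```python
# A minimal sketch building the symmetric margin matrix M of Section 3.3
# from pairwise margins (values taken from the linear love manifold).
import numpy as np

states = ["hate", "dislike", "like", "love"]
pair_margins = {("hate", "dislike"): 1.0, ("hate", "like"): 2.0,
                ("hate", "love"): 3.0, ("dislike", "like"): 1.0,
                ("dislike", "love"): 2.0, ("like", "love"): 1.0}

s = len(states)
M = np.zeros((s, s))
for (a, b), m in pair_margins.items():
    i, j = states.index(a), states.index(b)
    M[i, j] = M[j, i] = m          # enforce symmetry: m_ij = m_ji
```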

3.4 Triplets

For learning an affective manifold, we learn an embedding space where the variance of instances within every state is minimized and the inter-state distances are fixed to the desired margins. For this goal, we prepare mini-batches of $d$-dimensional anchor-positive-negative triplets. In every triplet, the anchor and positive instances belong to the same state, while the negative instance is from another state.

In every mini-batch, let the $j$-th $d$-dimensional anchor, positive, and negative instances be denoted by $\boldsymbol{x}^{j}_{a}\in\mathbb{R}^{d}$, $\boldsymbol{x}^{j}_{p}\in\mathbb{R}^{d}$, and $\boldsymbol{x}^{j}_{n}\in\mathbb{R}^{d}$, respectively. If the batch size is denoted by $b$, every mini-batch is $\{(\boldsymbol{x}^{j}_{a},\boldsymbol{x}^{j}_{p},\boldsymbol{x}^{j}_{n})\}_{j=1}^{b}$. The embeddings of the triplets on the affective manifold are $f(\boldsymbol{x}^{j}_{a})\in\mathbb{R}^{p}$, $f(\boldsymbol{x}^{j}_{p})\in\mathbb{R}^{p}$, and $f(\boldsymbol{x}^{j}_{n})\in\mathbb{R}^{p}$. We denote the state of an instance $\boldsymbol{x}$ by $s(\boldsymbol{x})$; hence, the states of a triplet are $s(\boldsymbol{x}^{j}_{a})$, $s(\boldsymbol{x}^{j}_{p})$, and $s(\boldsymbol{x}^{j}_{n})$.
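The following minimal Python sketch (the sampling scheme is one simple choice among many, and the names are hypothetical) prepares such a mini-batch of triplets from state-labeled signals.

```python
# A minimal sketch of preparing one mini-batch of anchor-positive-negative
# triplets from state-labeled training signals.
import numpy as np

def sample_triplet_batch(X, states, b, rng):
    """X: (n, d) signals; states: length-n labels; b: batch size."""
    anchors, positives, negatives = [], [], []
    for _ in range(b):
        i = rng.integers(len(X))                  # anchor index
        same = np.flatnonzero(states == states[i])
        diff = np.flatnonzero(states != states[i])
        anchors.append(X[i])
        positives.append(X[rng.choice(same)])     # same state as anchor
        negatives.append(X[rng.choice(diff)])     # different state
    return map(np.stack, (anchors, positives, negatives))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 784))
states = rng.integers(0, 4, size=100)             # 4 hypothetical states
x_a, x_p, x_n = sample_triplet_batch(X, states, b=32, rng=rng)
```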

3.5 Training the Affective Manifold

Inspired by the triplet loss [18] and the contrastive loss [8], we propose the following loss function for training an affective manifold:

$\theta := \arg\min_{\theta}\mathcal{L} = \arg\min_{\theta}\big(\lambda_{p}\mathcal{L}_{p}+\lambda_{n}\mathcal{L}_{n}\big), \quad (8)$

where $\lambda_{p}>0$ and $\lambda_{n}>0$ are weighting parameters controlling the relative importance of the positive loss $\mathcal{L}_{p}$ and the negative loss $\mathcal{L}_{n}$. The functions $\mathcal{L}_{p}$ and $\mathcal{L}_{n}$ are the loss functions for the positive and negative pairs:

$\mathcal{L}_{p} := \frac{1}{b}\sum_{j=1}^{b}\big\|f(\boldsymbol{x}^{j}_{a})-f(\boldsymbol{x}^{j}_{p})\big\|_{2}^{2}, \quad (9)$
$\mathcal{L}_{n} := \frac{1}{b}\sum_{j=1}^{b}\Big(\big\|f(\boldsymbol{x}^{j}_{a})-f(\boldsymbol{x}^{j}_{n})\big\|_{2}-m_{s(\boldsymbol{x}^{j}_{a}),s(\boldsymbol{x}^{j}_{n})}\Big)^{2}, \quad (10)$

where $\|\cdot\|_{2}$ denotes the $\ell_{2}$ norm.

The loss function (9) is the mean squared distance between the anchor and positive instances of the triplets in a mini-batch. The loss function (10) is the mean squared difference between the desired margin and the current distance between the anchor and negative instances of the triplets in a mini-batch. Therefore, on the one hand, the overall loss function (8) reduces the distances between instances of every state so that the variance of every state is reduced. On the other hand, it makes the distance between every two states equal to the desired margin between them.
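A minimal PyTorch sketch of the loss in Eqs. (8)-(10) is given below; the function name and the way margins are gathered are illustrative choices, not a fixed implementation.

```python
# A minimal sketch of Eqs. (8)-(10): z_a, z_p, z_n are the (b, p) embeddings
# of a triplet mini-batch; margins is the (b,) vector of desired margins
# m_{s(x_a), s(x_n)} looked up from the margin matrix M.
import torch

def affective_loss(z_a, z_p, z_n, margins, lambda_p=1.0, lambda_n=1.0):
    # Eq. (9): pull anchor and positive together (shrinks state variance).
    loss_p = ((z_a - z_p).norm(dim=1) ** 2).mean()
    # Eq. (10): push the anchor-negative distance toward the desired margin.
    loss_n = (((z_a - z_n).norm(dim=1) - margins) ** 2).mean()
    return lambda_p * loss_p + lambda_n * loss_n   # Eq. (8)

# Margins can be gathered from M by the triplet's integer state labels:
# margins = M[s_a, s_n]
```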

It is noteworthy that every affective manifold of the machine can change gradually over time using transfer learning [20]. For this, we further train the already-trained network using new triplets from new input signals. This transfer learning simulates the machine experiencing new things and is inspired by the gradual change of a human's mind and characteristics over time through experience.
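A minimal sketch of this transfer-learning step follows, reusing the SiameseBackbone and affective_loss sketches above; the checkpoint name, optimizer, and learning rate are hypothetical assumptions.

```python
# A minimal sketch of the transfer-learning step: continue training the
# already-trained backbone on triplets drawn from new input signals, so the
# affective manifold drifts gradually as the machine "experiences" new things.
import torch

net = SiameseBackbone(d=784, p=2)
# net.load_state_dict(torch.load("affective_manifold.pt"))  # hypothetical checkpoint
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)     # small lr for fine-tuning

x_a, x_p, x_n = (torch.randn(32, 784) for _ in range(3))    # stand-in new triplets
margins = torch.ones(32)                                    # stand-in desired margins
z_a, z_p, z_n = net(x_a, x_p, x_n)
loss = affective_loss(z_a, z_p, z_n, margins)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```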

3.6 Inferring the State of an Input Signal

For every $d$-dimensional input signal, the machine shall take one of the states of an affective manifold. After training the network, we feed the input $\boldsymbol{x}\in\mathbb{R}^{d}$ to the network and obtain its embedding $f(\boldsymbol{x})\in\mathbb{R}^{p}$ at the output of the network. Various approaches can be used for inferring the state of the input signal in the embedding space; one possible approach is explained in the following. Let the mean of the embeddings of the training instances in the $i$-th state be denoted by $\boldsymbol{s}_{i}\in\mathbb{R}^{p}$. The state of the input signal can be determined as:

$s(\boldsymbol{x}) := \arg\min_{i\in\{1,\dots,s\}}\|f(\boldsymbol{x})-\boldsymbol{s}_{i}\|_{2}. \quad (11)$

Using other distance metrics, such as the Mahalanobis distance or other norms, is also possible. By inferring the state, the machine's mind becomes able to take a state of that characteristic group in reaction to the input signal. For example, the machine becomes happy or sad by looking at a specific picture or listening to a specific sentence or piece of music.
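A minimal numpy sketch of the inference rule in Eq. (11) is given below, with hypothetical state means.

```python
# A minimal sketch of Eq. (11): embed the input signal, then assign it to
# the state whose training-embedding mean is nearest.
import numpy as np

def infer_state(z, state_means):
    """z: (p,) embedding of the input; state_means: (s, p) means s_i."""
    distances = np.linalg.norm(state_means - z, axis=1)
    return int(np.argmin(distances))    # index of the nearest state mean

state_means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])  # hypothetical s_i
z = np.array([2.6, 0.4])                # f(x) for some input signal x
print(infer_state(z, state_means))      # -> 1
```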

4 Some Applications of Affective Manifolds

Affective manifolds can have various applications. Some example applications are provided in the following.

4.1 Machine-Machine Interaction

If two machines have minds containing affective manifolds, they can interact with each other and react emotionally. For example, they can talk to one another, see each other, and react to the chat or the body language of the other machine (if the machines are implemented in robots). In this sense, two machines (or robots) can interact with each other similarly to how two humans interact, if they have a sufficient number of affective manifolds in their minds. For instance, two machines can fall in love with each other, dislike one another, or get angry at each other (e.g., if one of them verbally insults the other and the insulted machine gets sad).

4.2 Human-Machine Interaction

If a machine or robot has a mind with affective manifolds, it can interact with humans or other living creatures. This also opens the gate for forming positive and negative emotions between human and machine. For example, the robot and human can interact and become happy, become sad, or even fall in love with each other. Sci-fi movies have tried to show the possibility of such situations. For example, in the movie "Her", directed by Spike Jonze, a man falls in love with the operating system of his computer. Another example is the movie "Ex Machina", directed by Alex Garland, in which a human falls in love with a robot.

Any positive and negative emotions and feedback can happen between human and machine. It is possible to keep these emotions under control if the number of states in the affective manifolds is controlled (for example, if we remove some controversial states from the machine's mind). Forming emotions between human and machine may open the gate for them to even fall in love with each other [17]; some researchers have gone a step further and raised the possibility of marriage between human and robot in the future [13]. Obviously, this raises serious ethical challenges which need to be addressed [19]; however, we should note that affective computing has always faced ethical difficulties [15].

4.3 Human-Human Interaction

Affective manifolds can also be useful for human-human interaction. This might seem strange, as there is no machine on either side of this interaction. However, note that it is possible to model a human's mind as a machine's mind. This simplifies (reduces) the complex human mind to a simpler mind, but with a sufficiently good imitation. The modeled mind can produce almost the same behavior as the human's mind if it has enough affective manifolds with enough states and is trained well enough.

An example usage of this application is in matchmaking for dating apps. For example, the behavior of the human users, based on their characteristics and their likes/dislikes, can be used to train an affective manifold with the states "match" ("like") and "non-match" ("dislike") for every user. Then, for each user, the characteristics of the other users are fed as input signals to the learning model to see where every other user falls on the affective manifold. The user with the closest position in the "match" ("like") state will be the best match for that user (see Eq. (11)).

5 Simulations

In this section, we provide some simulation examples of affective manifold learning. The code for the simulations in this paper can be found at the following link: https://github.com/bghojogh/Affective-Manifold.

5.1 Setting of Simulations

We used a Siamese network for affective manifold learning. The backbone network included two convolutional layers followed by two fully connected layers. Parametric Rectified Linear Unit (PReLU) activation functions, max pooling, and dropout were used in the network. The batch size was set to $b=32$, and the embedding dimensionality of the affective manifold was set to $p=2$ for better visualization of the results. We set $\lambda_{p}=\lambda_{n}=1$ to have equal weights in the loss function (8). For every experiment, we trained the network for ten epochs. For the sake of simulation, we used the MNIST dataset [12] as the input signal, where every digit is associated with one of the states.
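For illustration, a minimal PyTorch backbone consistent with this description (two convolutional layers, two fully connected layers, PReLU, max pooling, and dropout, for MNIST-sized inputs) could look like the following sketch; the exact channel widths and kernel sizes are assumptions, not necessarily those of the released code.

```python
# A minimal sketch of a backbone matching the described architecture,
# mapping a (batch, 1, 28, 28) MNIST image to a p = 2 dimensional embedding.
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5), nn.PReLU(), nn.MaxPool2d(2),   # 28 -> 24 -> 12
    nn.Conv2d(32, 64, kernel_size=5), nn.PReLU(), nn.MaxPool2d(2),  # 12 -> 8 -> 4
    nn.Flatten(), nn.Dropout(0.5),
    nn.Linear(64 * 4 * 4, 256), nn.PReLU(),
    nn.Linear(256, 2),             # p = 2 for visualization of the manifold
)
```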

5.2 Affective Love Manifold

The affective love manifold can have the states "hate", "dislike", "like", and "love". Based on two different psychological perspectives, we can define the margins either linearly or nonlinearly.

5.2.1 Linear Affective Love Manifold

At first glance, we can say that "hate" is worse than "dislike", "dislike" is worse than "like", and "like" is worse than "love". Therefore, we can place the states on a line in a linear relationship, as illustrated in Fig. 2-a. Hence, if "hate", "dislike", "like", and "love" are the first to fourth states, the relative margins can be defined as:

$\boldsymbol{M} := \begin{bmatrix} 0 & 1 & 2 & 3 \\ 1 & 0 & 1 & 2 \\ 2 & 1 & 0 & 1 \\ 3 & 2 & 1 & 0 \end{bmatrix}. \quad (12)$
Figure 2: The desired margins in the (a) linear and (b) nonlinear versions of the love manifold. The numbers on the lines are the desired distances, and the number with a colored background is the desired angle in degrees.

5.2.2 Nonlinear Affective Love Manifold

A deeper investigation into love-related emotions shows that the relations between the states "hate", "dislike", "like", and "love" are not necessarily linear. Love is not that simple. In one of her Hercule Poirot novels, Agatha Christie aptly says [4]:

“Love can be a very frightening thing. That is why most great love stories are tragedies.”

Agatha Christie, “Death on the Nile”, 1937.

Humans have observed, in their daily lives, that love is not as stable as liking; it can, unfortunately, suddenly convert to hate. Therefore, we can define the margins specified in Fig. 2-b, where the states "hate" and "love" are slightly curved toward one another. In this case, we can calculate the relative margins based on the desired angles and relative positions of the states:

$\boldsymbol{M} := \begin{bmatrix} 0 & 1 & 1.848 & 2.404 \\ 1 & 0 & 1 & 1.848 \\ 1.848 & 1 & 0 & 1 \\ 2.404 & 1.848 & 1 & 0 \end{bmatrix}, \quad (13)$

where “hate”, “dislike”, “like”, and “love” are the first to fourth states.
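As an illustration of how such margins can be derived, the following minimal numpy sketch computes a symmetric margin matrix as the pairwise distances between desired 2D positions of the states; the coordinates below are hypothetical stand-ins for the geometry of Fig. 2-b, chosen only to illustrate the computation, not the figure's exact values.

```python
# A minimal sketch deriving a margin matrix from desired 2D positions of the
# states: the margins are simply the pairwise Euclidean distances.
import numpy as np

positions = np.array([[0.0, 0.0],    # "hate"
                      [1.0, 0.0],    # "dislike"
                      [1.7, 0.7],    # "like" (curved off the line)
                      [1.7, 1.7]])   # "love" (curved toward "hate")
diff = positions[:, None, :] - positions[None, :, :]
M = np.linalg.norm(diff, axis=2)     # symmetric matrix of pairwise distances
print(np.round(M, 3))
```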

The trained embedding space for the nonlinear affective love manifold can be seen in Fig. 3-a. As this figure shows, the states are correctly learned to follow the desired margins determined in Eq. (13) and shown in Fig. 2-b. Note that, in manifold learning, the relative distances are important and the overall scale, rotation, and mirroring are not important.

Figure 3: The trained embedding spaces of the (a) nonlinear affective love manifold and (b) affective joy manifold.

5.3 Affective Joy Manifold

The affective joy manifold models the level of joy in the machine's mind. This manifold can contain the states "suffered", "feared", "worried", "enjoying", "relaxed", and "bored". One possible way, but not the only way, to define the margins between these states is illustrated in Fig. 4. As this figure shows, suffering is the opposite of relaxing and being bored in terms of comfort. We set the state "feared" at equal distances from "suffered" and "worried"; however, "worried" is placed closer to "feared" than to "suffered" because suffering is more severe. Among the negative joy levels, the state "worried" is the closest to the positive state "enjoying". For the neutral joy levels, "relaxed" and "bored", we define equal margins from the state "enjoying".

The reason why we put the positive state "enjoying" between the negative states (i.e., "suffered", "feared", and "worried") and the neutral states (i.e., "relaxed" and "bored") is explained in the following. Previously, humans thought that suffering and joy were the two opposite ends of the spectrum of joy in life. However, philosophical investigations into joy and the meaning of life have shown that enjoying is merely a short period of time, between suffering and boredom, in every experience of life. A human starts from the suffering of not having something, strives and obtains it, enjoys it for some short time, and then becomes bored of it; this cycle then starts again for obtaining something else. In fact, boredom, and not enjoyment, is the opposite of suffering in life. This was initially contemplated by Gautama Buddha. In modern philosophy, Arthur Schopenhauer addressed this concept:

“Life swings like a pendulum backward and forward between pain and boredom.”

Arthur Schopenhauer, 1788–1860.

According to the above explanations and Fig. 4, we can calculate the relative margins based on the desired angles and relative positions of the states:

$\boldsymbol{M} := \begin{bmatrix} 0 & 1 & 1.414 & 2.414 & 3.318 & 3.318 \\ 1 & 0 & 1 & 1.788 & 2.573 & 2.761 \\ 1.414 & 1 & 0 & 1 & 1.932 & 1.932 \\ 2.414 & 1.788 & 1 & 0 & 1 & 1 \\ 3.318 & 2.573 & 1.932 & 1 & 0 & 1 \\ 3.318 & 2.761 & 1.932 & 1 & 1 & 0 \end{bmatrix}, \quad (14)$

where “suffered”, “feared”, “worried”, “enjoying”, “relaxed”, and “bored” are the first to sixth states.

The trained embedding space for the affective joy manifold is illustrated in Fig. 3-b. As this figure shows, the states are correctly learned to follow the desired margins determined in Eq. (14) and depicted in Fig. 4. Again, note that the relative distances are important and the overall scale, rotation, and mirroring do not matter.

Figure 4: The desired margins in the joy manifold. The numbers on the lines are the desired distances, and the numbers with colored backgrounds are the desired angles in degrees.

6 Conclusion and Future Directions

In this paper, we proposed affective manifolds for the machine's mind. The machine's mind is a set of affective manifolds, and every affective manifold is a collection of states. Each affective manifold corresponds to a characteristic group of mind. We enumerated various examples and applications of affective manifolds.

This work opens a door to designing machine learning and manifold learning models for learning various affective manifolds as characteristics of mind. In this work, we proposed a loss function, with the desired margins between states, for deep metric learning in affective manifold learning. Other loss functions and/or other machine learning models can be used for affective manifold learning.

It is also possible to learn the affective manifolds using differential geometry and Riemannian optimization. We invite researchers from various fields of science to propose more affective manifolds to include in the machine’s mind. In addition, because of the concept of mind in affective manifolds, it is possible to use psychology and psychoanalysis for affective manifolds in the machine’s mind.

References

  • [1] P-A Absil, Robert Mahony, and Rodolphe Sepulchre. Optimization algorithms on matrix manifolds. Princeton University Press, 2009.
  • [2] Jane Bromley, James W Bentz, Léon Bottou, Isabelle Guyon, Yann LeCun, Cliff Moore, Eduard Säckinger, and Roopak Shah. Signature verification using a “Siamese” time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669–688, 1993.
  • [3] Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, and Debdeep Mukhopadhyay. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069, 2018.
  • [4] Agatha Christie. Death on the Nile. Collins London, 1937.
  • [5] Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, and Mark Crowley. Locally linear embedding and its variants: Tutorial and survey. arXiv preprint arXiv:2011.10925, 2020.
  • [6] Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, and Mark Crowley. Spectral, probabilistic, and deep metric learning: Tutorial and survey. arXiv preprint arXiv:2201.09267, 2022.
  • [7] Benyamin Ghojogh, Fakhri Karray, and Mark Crowley. Fisher and kernel Fisher discriminant analysis: Tutorial. arXiv preprint arXiv:1906.09436, 2019.
  • [8] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), volume 2, pages 1735–1742. IEEE, 2006.
  • [9] Derek K Jones. Diffusion MRI. Oxford University Press, 2010.
  • [10] Jeffrey O Kephart and Steve R White. Measuring and modeling computer virus prevalence. In Proceedings 1993 IEEE Computer Society Symposium on Research in Security and Privacy, pages 2–15. IEEE, 1993.
  • [11] Wolfgang Kühnel. Differential geometry, volume 77. American Mathematical Soc., 2015.
  • [12] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • [13] David Levy. Love and sex with robots: The evolution of human-robot relationships. HarperCollins, 2007.
  • [14] Rosalind W Picard. Affective computing. MIT press, 2000.
  • [15] Rosalind W Picard. Affective computing: challenges. International Journal of Human-Computer Studies, 59(1-2):55–64, 2003.
  • [16] Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
  • [17] Hooman Aghaebrahimi Samani, Adrian David Cheok, Foo Wui Ngiap, Arjun Nagpal, and Mingde Qiu. Towards a formulation of love in human-robot interaction. In 19th International Symposium in Robot and Human Interactive Communication, pages 94–99. IEEE, 2010.
  • [18] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815–823, 2015.
  • [19] John P Sullins. Robots, love, and sex: the ethics of building a love machine. IEEE transactions on affective computing, 3(4):398–409, 2012.
  • [20] Karl Weiss, Taghi M Khoshgoftaar, and DingDing Wang. A survey of transfer learning. Journal of Big data, 3(1):1–40, 2016.