
The Least Restriction for Offline Reinforcement Learning

Zizhou Su
Beijing, P. R. China
[email protected]
Abstract

Many practical applications of reinforcement learning (RL) constrain the agent to learn from a fixed offline dataset of logged interactions that has already been gathered, with no possibility of further data collection. However, commonly used off-policy RL algorithms, such as the Deep Q Network and the Deep Deterministic Policy Gradient, are incapable of learning without data correlated to the distribution under the current policy, making them ineffective in this offline setting. As a first step towards useful offline RL algorithms, we analyze the cause of instability in standard off-policy RL algorithms: the bootstrapping error. The key to avoiding this error is ensuring that the agent's actions do not fall outside the fixed offline dataset. Based on this consideration, a new offline RL framework, the Least Restriction (LR), is proposed in this paper. The LR regards selecting an action as taking a sample from a probability distribution. It imposes only a slight limit on action selection, which not only keeps actions inside the offline dataset but also removes the unnecessary restrictions of earlier approaches (e.g. Batch-Constrained Deep Q-Learning). In future work, we will demonstrate that the LR can learn robustly from different offline datasets, including random and suboptimal demonstrations, on a range of practical control tasks.

1 Introduction

One of the main reasons behind the success of deep supervised learning [28] is the availability of large and diverse datasets such as ImageNet [20] for training expressive deep neural networks. By contrast, almost all RL algorithms assume that the agent interacts with an online environment (i.e. a real-world environment or an artificial simulator) [14]. In this way, the agent collects its own experience for training the actor network and the critic network. Unfortunately, active data collection in the real world (autonomous driving [15], healthcare [16], etc.) is expensive and unsafe. Moreover, building a high-fidelity simulator is not easy either.

Offline RL [14, 15] concerns the problem of learning a policy from a fixed dataset of trajectories, without any further interaction with the environment. This setting could leverage the vast amount of existing logged interactions for real-world decision-making problems, like robotics [17], recommender systems [18], and dialogue [19]. The effective use of such datasets would not only make real-world RL more practical, but would also enable better generalization by incorporating diverse prior experience.

In offline RL, an agent does not receive any new corrective feedback from the online environment, and it needs to generalize from the fixed dataset to a new online environment during evaluation. In principle, off-policy RL algorithms could learn from data collected by any (unknown) policy. Nonetheless, recent work [26, 27] presents the discouraging view that standard off-policy deep RL algorithms diverge or otherwise yield poor performance in the offline setting.

The rest of this paper is organized as follows: Section 2 explains the background of offline RL. Section 3 analyzes why online algorithms fail on offline datasets. Then, Section 4 describes a new offline RL framework, the LR, in detail. Finally, Section 5 gives a summary discussion.

2 Background

In this section, some necessary background for offline RL is introduced.

2.1 Online Reinforcement Learning

An interactive environment in RL is typically modeled as a Markov Decision Process (MDP) [24] $\langle\mathcal{S},\mathcal{A},R,P,\gamma\rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $R(s,a)$ is the reward function, $P(s'|s,a)$ is the transition distribution, and $\gamma\in[0,1]$ is the discount factor. A stochastic policy $\pi(\cdot|s)$ maps each state $s\in\mathcal{S}$ to a probability distribution (density) over actions.

For an agent following the policy $\pi$, the action-value function, denoted $Q^{\pi}(s,a)$, is defined as the expectation of cumulative discounted future rewards, i.e.:

$Q^{\pi}(s,a):=\mathbb{E}\big[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})\big]$  (1)
where $s_{0}=s$, $a_{0}=a$, $s_{t}\sim P(\cdot|s_{t-1},a_{t-1})$, $a_{t}\sim\pi(\cdot|s_{t})$.

The goal of RL is to find an optimal policy $\pi^{*}$ that attains the maximum expected return, i.e. $Q^{\pi^{*}}(s,a)\geq Q^{\pi}(s,a)$ for all $\pi,s,a$. The Bellman optimality equation characterizes the optimal policy in terms of the optimal Q-values, denoted $Q^{*}=Q^{\pi^{*}}$, via:

$Q^{*}(s,a)=\mathbb{E}[R(s,a)]+\gamma\,\mathbb{E}_{s'\sim P}\big[\max_{a'\in\mathcal{A}}Q^{*}(s',a')\big]$  (2)

The optimal policy $\pi^{*}$ can be obtained by the Q-Learning algorithm [23], via iterating the Bellman optimality operator $\mathcal{T}$, defined as:

$(\mathcal{T}\hat{Q})(s,a):=R(s,a)+\gamma\,\mathbb{E}_{s'\sim P}\big[\max_{a'}\hat{Q}(s',a')\big]$  (3)
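To make the operator concrete, here is a minimal tabular sketch of one application of $\mathcal{T}$ (a hypothetical illustration; the array layout, the toy MDP, and the function name are our own assumptions, not part of the original text):

import numpy as np

def bellman_optimality_backup(Q, R, P, gamma):
    # Q: (S, A) current estimate; R: (S, A) expected rewards;
    # P: (S, A, S) transition probabilities; gamma: discount factor.
    # E_{s'~P}[ max_{a'} Q(s', a') ] for every (s, a) pair:
    next_value = P @ Q.max(axis=1)      # shape (S, A)
    return R + gamma * next_value       # (T Q)(s, a), as in Equation (3)

# Toy check on a random 4-state, 2-action MDP (hypothetical numbers).
rng = np.random.default_rng(0)
S, A = 4, 2
R = rng.random((S, A))
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
Q = np.zeros((S, A))
for _ in range(500):
    Q = bellman_optimality_backup(Q, R, P, 0.9)   # converges toward Q* for a finite MDP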

For large and complex state spaces, the Q-values can be approximated by neural networks, e.g. the Deep Q Network (DQN) [22]. The DQN optimizes the Q-network's parameters $\theta$ by minimizing the mean squared Bellman error $\mathbb{E}_{\nu}[(Q-\mathcal{T}\hat{Q})^{2}]$, where $\nu$ is the state occupancy measure under the behavior policy.
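As an illustration of this objective, the following sketch computes the mean squared Bellman error on a sampled mini-batch; the network and batch names (q_net, target_net, batch) are placeholders rather than a specific library's API:

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma):
    # batch: dict of tensors with keys 's', 'a', 'r', 's_next', 'done'
    # q_net, target_net: modules mapping states to Q-values of shape (N, |A|)
    q_sa = q_net(batch['s']).gather(1, batch['a'].long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target: r + gamma * max_{a'} Q_target(s', a'), cf. Equation (3)
        max_next_q = target_net(batch['s_next']).max(dim=1).values
        target = batch['r'] + gamma * (1.0 - batch['done']) * max_next_q
    # Mean squared Bellman error on the sampled mini-batch
    return F.mse_loss(q_sa, target)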

In a continuous action space, the maximization $\max_{a'}Q(s',a')$ is generally intractable. In this case, actor-critic methods [21] are commonly used, where action selection is performed by another policy network $\pi(s;\theta_{\pi})$, called the actor, updated following the Deterministic Policy Gradient Theorem [20]:

$\theta_{\pi}\leftarrow\operatorname{argmax}_{\theta_{\pi}}\mathbb{E}\big[Q(s,\pi(s;\theta_{\pi});\theta_{Q})\big]$  (4)

which corresponds to learning an approximation to the maximum of $Q(s,a;\theta_{Q})$ by propagating the gradient through both $\pi$ and $Q$. When combined with the DQN to learn $Q(s,a;\theta_{Q})$, this algorithm is referred to as the Deep Deterministic Policy Gradient (DDPG) [20].
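A minimal sketch of this actor update, assuming generic actor/critic modules and an optimizer named actor_opt (all illustrative placeholders):

import torch

def ddpg_actor_update(actor, critic, actor_opt, states):
    # One gradient step on Equation (4): maximize Q(s, pi(s)) w.r.t. the actor.
    actor_opt.zero_grad()
    loss = -critic(states, actor(states)).mean()   # negate to turn maximization into a loss
    loss.backward()                                # gradient flows through critic and actor
    actor_opt.step()
    return loss.item()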

2.2 Offline Reinforcement Learning

Modern off-policy deep RL algorithms (as discussed above) perform remarkably well on common benchmarks, such as the Atari 2600 games [12] and the continuous-control MuJoCo tasks [13]. Such off-policy RL algorithms are considered "online", because they alternate between optimizing a policy and using that policy to collect more data. Typically, these algorithms keep a sliding window of the most recent experiences in a finite replay buffer, throwing away stale data to incorporate fresher, more "on-policy" experiences.

Offline RL, in contrast to online RL, describes the fully off-policy setting of learning from a fixed dataset of experiences, without any further interaction with the environment. We advocate the use of offline RL to help isolate an RL algorithm's ability to "exploit" experience and generalize from its ability to "explore" effectively. The offline RL setting removes design choices related to the replay buffer and exploration, and is therefore easier to experiment with and reproduce than the typical online setting.

Figure 1: Online on-policy RL

Figure 2: Online off-policy RL

Figure 3: Offline RL

Figures 1, 2 and 3 illustrate online on-policy RL, online off-policy RL and offline RL, respectively. In on-policy RL, the learned policy $\pi_{k}$ is updated with streaming data collected by $\pi_{k}$ itself. In the classic off-policy setting, the agent's experience is appended to an experience buffer $\mathcal{D}$, and each new policy $\pi_{k}$ collects additional data, such that $\mathcal{D}$ is composed of samples from $\pi_{0},\pi_{1},\dots,\pi_{k}$; all of this data is used to train the updated policy $\pi_{k+1}$. In contrast, offline RL employs a dataset $\mathcal{D}$ collected by some (potentially unknown) behavior policy $\beta$. The dataset is collected once and is not altered during training, which makes it feasible to use large previously collected datasets. The training process does not interact with the MDP at all, and the policy is only deployed after being fully trained.
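The practical difference is easy to see in a training loop: an offline agent only ever samples from the fixed dataset and never calls the environment. The sketch below assumes a dataset stored as NumPy arrays and an agent object with an update method; both names are placeholders, not a specific library's API:

import numpy as np

def train_offline(agent, dataset, num_steps, batch_size=256, seed=0):
    # dataset: dict of NumPy arrays 's', 'a', 'r', 's_next' collected once by beta.
    # The loop never calls env.step() and never appends to the dataset.
    rng = np.random.default_rng(seed)
    n = len(dataset['s'])
    for _ in range(num_steps):
        idx = rng.integers(0, n, size=batch_size)
        batch = {k: v[idx] for k, v in dataset.items()}   # fixed <s, a, r, s'> transitions
        agent.update(batch)                               # any off-policy update rule
    return agent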

Offline RL is considered challenging due to the distribution mismatch between the current policy $\pi$ and the offline data collection policy $\beta$: when the policy being learned takes a different action than the data collection policy, we do not know the reward it would have received.

In this situation, policy constraint methods are the most common approach to offline RL. Consider the action-value function $Q(s,a)$ being iterated via the following update:

$Q(s,a)\leftarrow R(s,a)+\gamma Q(s',a'), \quad a'\sim\pi(\cdot|s')$  (5)

These methods ensure, explicitly or implicitly, that the distribution over actions under which the target value is computed, $\pi(\cdot|s')$, is "close" to the behavior distribution $\beta(\cdot|s')$ of the collected dataset.

For instance, Batch-Constrained Deep Q-Learning (BCQ) forces $\pi$ to stay close to $\beta$ by training a Variational Auto-Encoder (VAE) [9] to fit the latent probability distribution of the fixed dataset. Furthermore, Bootstrapping Error Accumulation Reduction (BEAR) [2] constrains the policy by shrinking the Maximum Mean Discrepancy (MMD) [25] between the unknown behavior policy $\beta$ and the learned policy $\pi$. These constraints are a sufficient condition for offline RL, but not a necessary one; the reason is discussed in the next section.
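For reference, a BEAR-style MMD penalty can be estimated as in the following sketch, where the Gaussian kernel bandwidth sigma and the tensor shapes are illustrative assumptions rather than BEAR's exact configuration:

import torch

def gaussian_mmd(x, y, sigma=1.0):
    # x: actions sampled from the learned policy pi(.|s), shape (n, action_dim)
    # y: actions from the dataset (behavior policy beta), shape (m, action_dim)
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    # Squared MMD estimate; BEAR-style methods add a penalty of this form to the policy loss.
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()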

3 The Key for Offline Reinforcement Learning

Off-policy RL algorithms that use a critic to estimate the action-value function $Q(s,a)$ almost always fail to learn from a fixed offline dataset. [1] and [2] demonstrate that this failure is not caused by a lack of $\langle s,a,r,s'\rangle$ transition records, but by the error from bootstrapping $Q(s,a)$. The source of this instability can be understood by examining the form of the update equations for $Q(s,a)$. Although minimizing the mean squared error corresponds to a supervised regression problem, the targets for this regression are themselves derived from the current estimate $Q(s',a')$: the targets are calculated by maximizing the learned $Q(s',a')$ with respect to the action $a'$ at the next state $s'$. However, the $Q(s,a)$ estimator is only reliable on inputs from the same distribution as its training set.

Consider Equations (2) and (3) again. To estimate the value of $Q(s,a)$, all the values $Q(s',a'),\,\forall a'\in\mathcal{A}$ must be estimated reliably. To estimate the value of $Q(s',a')$, all the values $Q(s'',a'')$ must be estimated reliably, and so on. Ultimately, the correct value of $Q(s_{\text{next-to-last}},a_{\text{last}})$ is needed. In the online setting, the agent can easily move from $\langle s,a,r,s'\rangle$ to $\langle s',a',r',s''\rangle$, then from $\langle s',a',r',s''\rangle$ to $\langle s'',a'',r'',s'''\rangle$, and so on, ending at the final transition $\langle s_{\text{next-to-last}},a_{\text{last}},r_{\text{last}},s_{\text{final}}\rangle$.

Unfortunately, the offline dataset is fixed, with no opportunity to collect new data. Hence, in the offline setting, when moving from $\langle s,a,r,s'\rangle$ to $\langle s',a',r',s''\rangle$, many actions $a'\in\mathcal{A}$ are not in the dataset. These actions, which are missing or appear only a few times, are defined as Rare Actions $a_{few}$ in this paper.

In other words, at the state $s'$, it is unknown what reward $r'$ would be received and which next state $s''$ would be reached by selecting the action $a'_{few}$, i.e. $\langle s',a'_{few},?,?\rangle$. The value of $Q(s',a'_{few})$ therefore keeps its initial random value. As a result, naively maximizing $Q(s',a'),\,\forall a'\in\mathcal{A}$ usually absorbs this wrong value, and the error spreads through the Bellman backup of Equation (2). Naturally, based on a critic with a wrong $Q(s,a)$ (see Equation (4)), the actor cannot perform optimally.
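A toy numerical illustration of this failure mode, with made-up values: the unconstrained max over $Q(s',\cdot)$ can latch onto the randomly initialized value of a Rare Action, while a max restricted to in-dataset actions does not.

import numpy as np

rng = np.random.default_rng(1)
Q_next = rng.normal(size=5)                          # randomly initialized Q(s', .) over 5 actions
seen = np.array([True, True, False, False, True])    # which actions actually occur at s' in the dataset

unconstrained_target = Q_next.max()                  # may latch onto a Rare Action's random value
constrained_target = Q_next[seen].max()              # max restricted to in-dataset actions
print(unconstrained_target, constrained_target)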

Overall, the key to offline RL is preventing the Rare Actions $a_{few}$ from interfering with the action-value $Q(s,a)$ backups. Previous works [4], [11], [12] explicitly constrain the learned policy $\pi$ to stay close to the behavior policy $\beta$, similarly to behavior cloning. While this is enough to ensure that actions lie in the fixed dataset with high probability, it is overly restrictive. For example, if the behavior policy is close to a uniform distribution, the learned policy will also behave almost randomly, resulting in poor performance even when the data is quite sufficient. The next section proposes a more flexible framework for offline RL.

4 The Least Restriction for Offline Reinforcement Learning

In standard online RL, there is an interesting phenomenon: the off-policy algorithms [6, 7, 8], whose behavior policy (also called the exploration policy) $\beta$ differs from the learned policy (also called the target policy) $\pi$, still succeed in training optimal policies. The reason is that although $\pi$ is not identical to $\beta$, they are quite close, differing only by a random noise or an $\epsilon$-greedy rule.

Accordingly, most state-of-the-art offline RL methods force $\pi$ to stay near $\beta$. The earliest offline RL method, BCQ, trains a VAE to simulate the $\beta$ distribution; when the action-value function $Q(s,a)$ is updated, the action selection depends on this VAE.

Besides directly making $\pi$ close to $\beta$, later researchers restrict the "distance" between the $\pi$ distribution and the $\beta$ distribution. Under a state $s$, a Rare Action $a_{few}$ corresponds to a point with low probability density under $\beta(a|s)$. Avoiding Rare Actions means that the learned policy $\pi(a|s)$ has positive density only where the density of the behavior policy $\beta(a|s)$ exceeds a threshold, i.e.:

$\forall a,\enspace \beta(a|s)\leq\epsilon \Longrightarrow \pi(a|s)=0$  (6)
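A literal reading of Equation (6) is just a density-threshold mask over candidate actions; the sketch below assumes the behavior densities $\beta(a|s)$ are already available as an array (in practice they must be estimated):

import numpy as np

def supported_actions(beta_density, epsilon):
    # beta_density: array of estimated beta(a|s) values for candidate actions at state s.
    # Returns a mask of actions the learned policy is allowed to place mass on (Equation (6)).
    return beta_density > epsilon

# Example: with epsilon = 0.05, only the first two candidates survive the filter.
print(supported_actions(np.array([0.40, 0.30, 0.04, 0.01]), 0.05))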

Based on the above analysis, we propose a new offline RL framework, the Least Restriction (LR) Framework, which can be combined with almost any online algorithm.

The LR Framework first trains a Generative Adversarial Network (GAN) [10] to simulate the dataset collected by $\beta$. For simplicity, the GAN only considers the state-action pair $(s,a)$. After training, the generator of the GAN closely matches $\beta$, and the discriminator provides a confidence degree of whether an $(s,a)$ pair belongs to the dataset.
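A minimal sketch of this pretraining step, fitting a GAN to concatenated $(s,a)$ vectors; the network sizes, latent dimension, and optimizer settings are illustrative choices rather than the paper's specification:

import torch
import torch.nn as nn

state_dim, action_dim, latent_dim = 17, 6, 32   # illustrative sizes

gen = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                    nn.Linear(256, state_dim + action_dim))
dis = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                    nn.Linear(256, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(dis.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(real_sa):
    # real_sa: mini-batch of concatenated (s, a) pairs sampled from the fixed dataset.
    n = real_sa.size(0)
    fake_sa = gen(torch.randn(n, latent_dim))
    # Discriminator step: real pairs -> 1, generated pairs -> 0.
    d_opt.zero_grad()
    d_loss = bce(dis(real_sa), torch.ones(n, 1)) + bce(dis(fake_sa.detach()), torch.zeros(n, 1))
    d_loss.backward()
    d_opt.step()
    # Generator step: try to fool the discriminator.
    g_opt.zero_grad()
    g_loss = bce(dis(fake_sa), torch.ones(n, 1))
    g_loss.backward()
    g_opt.step()
    # After training, dis(cat(s, a)) serves as the confidence that (s, a) lies in the dataset.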

In fact, offline RL algorithms do not differ dramatically from online ones. Any online algorithm can be turned into a corresponding offline one by dropping the procedures of interacting with the environment and updating the experience buffer.

The only procedures the LR adds to the primary off-policy algorithm concern action selection. When the action-value function is iterated (i.e. $Q(s,a)\leftarrow R+\gamma Q(s',a')$), an action $a'$ (under the state $s'$) has been selected by the primary algorithm. Following Equation (6), the density of $a'$ under $\beta(a'|s')$ has to be larger than the threshold. So the selected pair $(s',a')$ is sent to the discriminator of the GAN, which returns a confidence degree for this pair. If this degree is below the given threshold, then $a'$ is perturbed by a random noise $\mathcal{N}$ (a zero-mean Gaussian noise), repeatedly, until the confidence degree of $(s',a'+\mathcal{N})$ exceeds the threshold. Afterwards, the pair $(s',a'+\mathcal{N})$ instead of $(s',a')$ is used to iterate $Q(s,a)$. The remaining procedures are the same as in the primary algorithm.
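The correction step can be written as a small rejection loop around the pretrained discriminator. The sketch below follows the accumulation rule $a'\leftarrow a'+\mathcal{N}$ of Algorithm 1; the max_tries cap is an extra safeguard of ours, not part of the original procedure:

import torch

def correct_action(dis, s, a, threshold, sigma, max_tries=100):
    # dis: pretrained GAN discriminator taking a concatenated (s, a) tensor,
    #      returning a confidence in [0, 1]; s, a: single state / proposed action tensors.
    candidate = a.clone()
    for _ in range(max_tries):                    # cap is our safeguard, not in Algorithm 1
        if dis(torch.cat([s, candidate], dim=-1)).item() >= threshold:
            return candidate                      # confident enough: accept
        candidate = candidate + sigma * torch.randn_like(candidate)   # a' <- a' + N
    return candidate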

Almost any off-policy online algorithm with an action-value function $Q(s,a)$ (with or without a policy actor) can be combined with our LR Framework. In this section, the DDPG serves as an example. The offline LR-DDPG algorithm is summarized in Algorithm 1.

Algorithm 1 The offline LR-DDPG algorithm

Input: fixed dataset $\mathcal{D}$, horizon $T$, target network update rate $\tau$, mini-batch size $N$, discount factor $\gamma$, the threshold $p$, the standard deviation $\sigma$ of the Gaussian noise $\mathcal{N}$

Pretrain: train the GAN, obtaining the discriminator $Dis$

Initialize the DDPG's networks: the Q-networks $Q(\cdot|\theta_{Q})$, $Q'(\cdot|\theta_{Q'})$ and the policy networks $\pi(\cdot|\theta_{\pi})$, $\pi'(\cdot|\theta_{\pi'})$
Set $\theta_{Q'}\leftarrow\theta_{Q}$, $\theta_{\pi'}\leftarrow\theta_{\pi}$

for $t=1$ to $T$ do

   Randomly sample a mini-batch of $N$ transitions $\langle s_{t},a_{t},r_{t},s_{t+1}\rangle$ from $\mathcal{D}$
   Update the Q-network:
   Set $a'_{t}=\pi'(s_{t+1};\theta_{\pi'})$
  
   while $Dis(s_{t+1},a'_{t})<p$ do
     $a'_{t}\leftarrow a'_{t}+\mathcal{N}$
   end while
  
   Set $y_{t}=r_{t}+\gamma Q'(s_{t+1},a'_{t};\theta_{Q'})$, using the corrected action $a'_{t}$
   Update $\theta_{Q}$ by minimizing the loss function:
   $L_{Q}=\frac{1}{N}\sum_{t}[y_{t}-Q(s_{t},a_{t};\theta_{Q})]^{2}$

   Update the policy network using the sampled gradient:
   $\nabla_{\theta_{\pi}}J\approx\frac{1}{N}\sum_{t}\nabla_{a}Q(s,a;\theta_{Q})|_{s=s_{t},a=\pi(s_{t})}\,\nabla_{\theta_{\pi}}\pi(s;\theta_{\pi})|_{s_{t}}$
  
   Update the target networks:
   $\theta_{Q'}\leftarrow\tau\theta_{Q'}+(1-\tau)\theta_{Q}$
   $\theta_{\pi'}\leftarrow\tau\theta_{\pi'}+(1-\tau)\theta_{\pi}$

end for
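For completeness, one inner iteration of Algorithm 1 might look as follows in code. This is a sketch under several assumptions: the networks, optimizers, and pretrained discriminator are built elsewhere, the critic is called as q(s, a), the correction loop is capped for safety, and the target-network update follows the $\tau\theta'+(1-\tau)\theta$ rule exactly as written above.

import torch
import torch.nn.functional as F

def lr_ddpg_update(batch, q, q_targ, pi, pi_targ, dis, q_opt, pi_opt,
                   gamma=0.99, tau=0.995, p=0.5, sigma=0.1):
    # batch: dict of tensors 's', 'a', 'r' (shape (N, 1)), 's_next'.
    # q, q_targ: critics called as q(s, a); pi, pi_targ: actors called as pi(s).
    # dis: pretrained discriminator over concatenated (s, a) vectors.
    s, a, r, s_next = batch['s'], batch['a'], batch['r'], batch['s_next']

    # Target action from the target actor, corrected by the LR confidence test.
    with torch.no_grad():
        a_next = pi_targ(s_next)
        conf = dis(torch.cat([s_next, a_next], dim=-1))
        for _ in range(100):                      # capped loop; Algorithm 1 loops until accepted
            low = conf < p
            if not low.any():
                break
            a_next = torch.where(low, a_next + sigma * torch.randn_like(a_next), a_next)
            conf = dis(torch.cat([s_next, a_next], dim=-1))
        y = r + gamma * q_targ(s_next, a_next)

    # Critic update: mean squared Bellman error against the corrected target.
    q_opt.zero_grad()
    q_loss = F.mse_loss(q(s, a), y)
    q_loss.backward()
    q_opt.step()

    # Actor update: deterministic policy gradient through the critic.
    pi_opt.zero_grad()
    pi_loss = -q(s, pi(s)).mean()
    pi_loss.backward()
    pi_opt.step()

    # Target networks: theta' <- tau * theta' + (1 - tau) * theta, as written in Algorithm 1.
    with torch.no_grad():
        for targ, src in ((q_targ, q), (pi_targ, pi)):
            for p_t, p_s in zip(targ.parameters(), src.parameters()):
                p_t.mul_(tau).add_((1.0 - tau) * p_s)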

5 Discussion

The goal of our work is to study offline reinforcement learning with fixed datasets. We first analyze how error propagates in standard off-policy RL algorithms: it stems from the use of Rare Actions to compute the target values in the Bellman backup. This naturally leads to the conclusion that the key to offline RL is avoiding the selection of Rare Actions. Armed with this insight, we develop a framework for mitigating the effect of Rare Actions, which we call the LR. The LR constrains the backup to use actions that have non-negligible support under the data distribution, without being overly strict in constraining the learned policy. The LR Framework thus keeps a balance between training convergence and optimal learning. It can be combined with almost any off-policy RL algorithm that has an action-value function $Q(s,a)$ (with or without a policy actor). Hence, the framework proposed in this paper has the significant advantage that it can conveniently transform a newly proposed online RL algorithm into an offline one. This may help offline RL develop more quickly by absorbing advances in online RL algorithms.

References

  • [1] S. Fujimoto, D. Meger, D. Precup, Off-Policy Deep Reinforcement Learning without Exploration, in: Proceedings of International Conference on Machine Learning, Milan, Italy, June 2019, 257–265.
  • [2] A. Kumar, J. Fu, G. Tucker, S. Levine, Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction, in: Proceedings of Conference and Workshop on Neural Information Processing Systems, Vancouver, Canada, December 2019, 749–768.
  • [3] D. Pathak, P. Agrawal, A.A. Efros, T. Darrell, Curiosity-driven Exploration by Self-supervised Prediction, in: Proceedings of International Conference on Machine Learning, Sydney, Australia, June 2017, 442–454.
  • [4] R. Agarwal, D. Schuurmans, M. Norouzi, An Optimistic Perspective on Offline Reinforcement Learning, ArXiv (2020).
  • [5] S. Levine, A, Kumar, G. Tucker, J. Fu, Offline Reinforcement Learning: Tutorial, Review and Perspectives on Open Problems, ArXiv (2020).
  • [6] J.M. Mendel, R.I John, Type-2 Fuzzy Sets Made Simple, IEEE Trans. Fuzzy Syst., 10(2002) 117–127.
  • [7] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning-1, Informat. Sci. 8(1975) 199–249.
  • [8] D. Wu, J.M. Mendel, Designing Practical Interval Type-2 Fuzzy Logic Systems Made Simple, in: Proceedings of IEEE FUZZ Conference, Beijing, China, July 2014, 800–807.
  • [9] J.M. Mendel, Type-2 Fuzzy Sets and Systems: An Overview, IEEE Computational Intelligence Magazine 2 (2007) 20–29.
  • [10] J.M. Mendel, Advances in type-2 fuzzy sets and systems, Informat. Sci. 177(2007) 84–110.
  • [11] S. Miller, C. Wagner, J.M. Garibaldi, S. Appleby, Constructing General Type-2 Fuzzy Sets from interval-valued data, in: Proceedings of IEEE FUZZ Conference, Brisbane, Australia, June 2012, 357–365.
  • [12] J.M. Mendel, H. Wu, Type-2 fuzzistics for symmetric interval type-2 fuzzy sets: part 2, inverse problems, IEEE Trans. Fuzzy Syst. 15 (2007) 301–307.
  • [13] J.M. Mendel, Computing with words and its relationships with fuzzistics, Informat. Sci. 177(2007) 988–1006.
  • [14] H.M. Hersch, A. Caramazza, A fuzzy set approach to modifiers and vagueness in natural languages, J. Exp. Psychol. 105 (1976) 254–276.
  • [15] J. Lawry, An alternative to computing with words, Int. J. Uncertainty, Fuzziness Knowledge-Based Syst. 9 (Suppl.) (2001) 3–16.
  • [16] J.M. Mendel, Computing With Words, When Words Can Mean Different Things to Different People, in: Proceedings of Third International ICSC Symposium on Fuzzy Logic and Applications, Rochester University, Rochester, NY, 1999, 126–159.
  • [17] J.M. Mendel, Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions, Prentice-Hall, Upper Saddle River, NJ, 2001.
  • [18] J.M. Mendel, Fuzzy sets for words: a new beginning, in: Proc. IEEE Int. Conf. Fuzzy Systems, St. Louis, MO, 2003, 37–42.
  • [19] L.A. Zadeh, Fuzzy Sets, Inform. Control , 8(1965) 338–353.
  • [20] L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning-1, Inform. Sci. 8 (1975) 199–249.
  • [21] L.A. Zadeh, Fuzzy logic = computing with words, IEEE Trans. on Fuzzy Syst. 4 (1996) 103–111.
  • [22] G.J. Klir, B. Yuan, Fuzzy sets and fuzzy logic: theory and applications. Prentice-Hall, 1994.
  • [23] F. Liu, J.M. Mendel, Encoding Words Into Interval Type-2 Fuzzy Sets: Using an Interval Approach. IEEE Trans. Fuzzy Syst. 6(2008) 1503–1521.
  • [24] D. Wu, J.M. Mendel, S. Coupland, Enhanced interval approach for encoding words into interval type-2 fuzzy sets and its convergence analysis. IEEE Trans. Fuzzy Syst. 3(2012) 499–513.
  • [25] J.M. Mendel, Historical reflections and new positions on perceptual computing, Fuzzy Optimization & Decision Making. 4(2009) 325–335.
  • [26] J.M. Mendel, Computing with words and its relationships with fuzzistics, Informat. Sci., 177(2007) 988–1006.
  • [27] J.C. Bezdek, K. Ludmila, Fuzzy Pattern Recognition, Wiley, 1999.
  • [28] J.L. Castro, J.J. Castro-Schez, J.M. Zurita, Use of a fuzzy machine learning technique in the knowledge acquisition process, Fuzzy Sets and Systems 3 (2001) 307–320.