Reinforcement learning with Takagi-Sugeno-Kang fuzzy systems

Eric Zander, Ben van Oostendorp, Barnabas Bede

Complex Engineering Systems ›› 2023, Vol. 3 ›› Issue (2): 9 | DOI: 10.20517/ces.2023.11

Research Article

Abstract

We propose reinforcement learning (RL) architectures for producing performant Takagi-Sugeno-Kang (TSK) fuzzy systems. The first employs an actor-critic algorithm to optimize existing TSK systems; this approach is evaluated in the context of the Explainable Fuzzy Challenge (XFC) 2022. The second applies Deep Q-Network (DQN) principles to the Adaptive Network-based Fuzzy Inference System (ANFIS) and is evaluated in the CartPole environment, where it achieves performance comparable to that of a traditional DQN. In both applications, TSK systems optimized via RL performed well in testing. Moreover, the discussion and experimental results highlight the value of exploring the intersection of RL and fuzzy logic for producing explainable systems.
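As a concrete illustration of the second architecture, the following is a minimal sketch, not the authors' implementation, of a zero-order TSK fuzzy system used as the Q-value approximator of a DQN-style agent for CartPole. The rule count, Gaussian membership functions, and PyTorch framing are assumptions made for illustration only.

```python
# Sketch only: a zero-order TSK fuzzy system as a Q-value approximator (ANFIS/DQN spirit).
# Rule count, membership functions, and all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class TSKQNetwork(nn.Module):
    """Zero-order Takagi-Sugeno-Kang fuzzy system mapping CartPole states to Q-values."""

    def __init__(self, state_dim=4, n_actions=2, n_rules=16):
        super().__init__()
        # Gaussian membership functions: one center and width per rule and input dimension.
        self.centers = nn.Parameter(torch.randn(n_rules, state_dim))
        self.log_widths = nn.Parameter(torch.zeros(n_rules, state_dim))
        # Rule consequents: one constant Q-value per rule and action (zero-order TSK).
        self.consequents = nn.Parameter(torch.zeros(n_rules, n_actions))

    def forward(self, state):                         # state: (batch, state_dim)
        diff = state.unsqueeze(1) - self.centers      # (batch, n_rules, state_dim)
        widths = self.log_widths.exp()
        # Product of per-dimension Gaussian memberships -> rule firing strengths.
        firing = torch.exp(-0.5 * ((diff / widths) ** 2).sum(dim=-1))
        weights = firing / (firing.sum(dim=1, keepdim=True) + 1e-8)  # normalized firing
        # Weighted average of rule consequents gives the Q-value for each action.
        return weights @ self.consequents             # (batch, n_actions)


# Usage: this module can stand in for the MLP of a standard DQN agent
# (epsilon-greedy exploration, replay buffer, target network, TD-error loss).
q_net = TSKQNetwork()
q_values = q_net(torch.randn(1, 4))                   # a stand-in CartPole observation
action = q_values.argmax(dim=1).item()
```

Because each rule's consequent is a constant vector of Q-values, the learned policy can be inspected rule by rule, which relates to the explainability motivation described in the abstract.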

Keywords

Explainable AI / Fuzzy systems / Takagi-Sugeno-Kang fuzzy systems / Adaptive neuro-fuzzy inference systems / Reinforcement learning

Cite this article

Eric Zander, Ben van Oostendorp, Barnabas Bede. Reinforcement learning with Takagi-Sugeno-Kang fuzzy systems. Complex Engineering Systems, 2023, 3(2): 9. DOI: 10.20517/ces.2023.11
