• [Fuzzy Neural Network] Design of a fuzzy-neural-network controller based on Simulink


    1. Software Version

    MATLAB 2010b

    2. Overview of Fuzzy Neural Network Theory

            Fuzzy control is built on expert experience, which limits it considerably. An artificial neural network, by contrast, can approximate an arbitrarily complex time-varying nonlinear system, processes information in a parallel, distributed fashion, and can learn and adapt to uncertain systems. A neural network can therefore give a fuzzy controller the ability to learn, while fuzzy logic can help initialize the neural network and speed up its training. The basic architecture of such a network is as follows:

          The network has five layers: layer 1 is the input layer, layer 2 the fuzzification layer, layer 3 the fuzzy-inference layer, layer 4 the normalization layer, and layer 5 the defuzzification (output) layer.

          Layer 1 is the input layer. It contains two nodes, one per input signal, and simply passes its inputs through, so its input-output relation is O_i^(1) = x_i, i = 1, 2.

            Layer 2 represents the linguistic values of the input variables, usually n fuzzy sets per input. Its job is to compute the degree to which each input component belongs to each linguistic-value fuzzy set, i.e. the membership degrees that the inference stage operates on. With Gaussian membership functions the layer output is μ_ij = exp(-(x_i - c_ij)² / σ_ij²), where x_i has the same meaning as in the layer-1 description and c_ij, σ_ij are the center and width of the j-th fuzzy set of input i.

    Layer 3, the fuzzy-inference layer, is the key layer: each node stands for one fuzzy rule, and the node's output is that rule's firing strength, computed as the product of the incoming membership degrees, α_j = ∏_i μ_ij.

     

    Layer 4 is the normalization layer, which follows the Mamdani rule form; its output is the normalized firing strength ᾱ_j = α_j / Σ_k α_k.

    Layer 5 is the defuzzification layer of the fuzzy neural network, which converts the fuzzy inference result into a crisp output.
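As a concrete illustration, the five-layer forward pass described above can be sketched in NumPy. This is a minimal sketch, not the Simulink implementation; all parameter values below (`c`, `sigma`, `theta`) are invented for the example:

```python
import numpy as np

def fnn_forward(x, c, sigma, theta):
    """Five-layer fuzzy-neural-network forward pass (Mamdani-style).
    x     : (n,)   crisp inputs            (layer 1: pass-through)
    c     : (n,m)  Gaussian centers        (layer 2: fuzzification)
    sigma : (n,m)  Gaussian widths
    theta : (m,)   rule consequents        (layer 5: defuzzification)
    """
    # Layer 2: membership degree of each input in each of the m fuzzy sets
    mu = np.exp(-((x[:, None] - c) ** 2) / sigma ** 2)
    # Layer 3: firing strength of each rule = product of its memberships
    alpha = np.prod(mu, axis=0)
    # Layer 4: normalized firing strengths
    alpha_bar = alpha / np.sum(alpha)
    # Layer 5: weighted sum of rule consequents -> crisp output
    return float(alpha_bar @ theta)

# Example with 2 inputs and 3 rules (all values are illustrative)
x = np.array([0.2, -0.1])
c = np.array([[-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0]])
sigma = np.full((2, 3), 0.5)
theta = np.array([-1.0, 0.0, 1.0])
y = fnn_forward(x, c, sigma, theta)
```

The output is always a convex combination of the rule consequents, so it stays inside the range spanned by `theta`.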

    3. Simulink Modeling of the Algorithm

            To measure the performance change introduced by the FNN controller, we simulate both the model with the FNN controller and the model without it. The simulation results are shown below:

            Structure of the model without the FNN controller:

    Its simulation result is shown below:

    Structure of the FNN controller model:

        Its simulation result is shown below:

    The first segment of each run is the training phase and the remainder is the actual output. To highlight the final performance, the final outputs of the two models are compared; the comparison is shown below:

       The simulation results show that the range of the PID output is greatly reduced, so the control performance is further improved.

    The speed-regulation model with the TS controller; its final simulation result is shown below:

        The results show that with the FNN controller the PID output fluctuates only within a very small range, and the overall system performance is improved by about 80%. This indicates that the system with the fuzzy neural network achieves higher performance and better stability.

    4. Selected Code

    S-function of the Mamdani fuzzy controller

    function [out,Xt,str,ts] = Sfunc_fnn_Mamdani(t,Xt,u,flag,Learn_rate,coff,lamda,Number_signal_in,Number_Fuzzy_rules,x0,T_samples)
    % Input definitions
    % t,Xt,u,flag        : standard S-function arguments
    % Learn_rate         : learning rate
    % coff               : momentum coefficient for the first-layer parameter updates
    % lamda              : forgetting factor of the recursive least-squares update
    % Number_signal_in   : number of input signals
    % Number_Fuzzy_rules : number of fuzzy rules
    % T_samples          : block sample time
    % Number of input signals
    Number_inport = Number_signal_in;
    % Total width of the input port: system input x, error input e, and the training flag
    ninps = Number_inport+1+1;
    NumRules = Number_Fuzzy_rules;
    Num_out1 = 3*Number_signal_in*Number_Fuzzy_rules + ((Number_signal_in+1)*NumRules)^2 + (Number_signal_in+1)*NumRules;
    Num_out2 = 3*Number_signal_in*Number_Fuzzy_rules + (Number_signal_in+1)*NumRules;
    % S-function step 1: initialization
    if flag == 0
        out = [0,Num_out1+Num_out2,1+Num_out1+Num_out2,ninps,0,1,1];
        str = [];
        ts = T_samples;
        Xt = x0;
    % S-function step 2: state (parameter-learning) update
    elseif flag == 2
        % Unpack the three external signals: input x, error input e, and the training flag
        x = u(1:Number_inport);                    % input x
        e = u(Number_inport+1:Number_inport+1);    % error input e
        learning = u(Number_inport+1+1);           % training flag
        % When the flag is 1 the network is in its learning state
        if learning == 1
            Feedfor_phase2;  % external script (not shown) that restores the learned parameters from Xt
            % Forward pass through the network layers during learning
            % Layer 1 (fuzzification):
            In1 = x*ones(1,Number_Fuzzy_rules);
            Out1 = 1./(1 + (abs((In1-mean1)./sigma1)).^(2*b1));
            % Layer 2 (rule firing strengths):
            precond = Out1';
            Out2 = prod(Out1)';
            S_2 = sum(Out2);
            % Layer 3 (normalization):
            if S_2~=0
                Out3 = Out2'./S_2;
            else
                Out3 = zeros(1,NumRules);
            end
            % Layer 4:
            Aux1 = [x; 1]*Out3;
            % Regressor vector for training
            a = reshape(Aux1,(Number_signal_in+1)*NumRules,1);
            % Consequent-parameter learning (recursive least squares)
            P = (1./lamda).*(P - P*a*a'*P./(lamda+a'*P*a));
            ThetaL4 = ThetaL4 + P*a.*e;
            ThetaL4_mat = reshape(ThetaL4,Number_signal_in+1,NumRules);
            % Error back-propagation
            e3 = [x' 1]*ThetaL4_mat.*e;
            denom = S_2*S_2;
            % Propagate the error through the normalization layer
            Theta32 = zeros(NumRules,NumRules);
            if denom~=0
                for k1=1:NumRules
                    for k2=1:NumRules
                        if k1==k2
                            Theta32(k1,k2) = ((S_2-Out2(k2))./denom).*e3(k2);
                        else
                            Theta32(k1,k2) = -(Out2(k2)./denom).*e3(k2);
                        end
                    end
                end
            end
            e2 = sum(Theta32,2);
            % Propagate the error back to layer 1
            Q = zeros(Number_signal_in,Number_Fuzzy_rules,NumRules);
            for i=1:Number_signal_in
                for j=1:Number_Fuzzy_rules
                    for k=1:NumRules
                        if Out1(i,j)== precond(k,i) && Out1(i,j)~=0
                            Q(i,j,k) = (Out2(k)./Out1(i,j)).*e2(k);
                        else
                            Q(i,j,k) = 0;
                        end
                    end
                end
            end
            Theta21 = sum(Q,3);
            % Adaptive adjustment of the membership-function parameters
            % (skipped when an input coincides with a center, to avoid division by zero)
            if isempty(find(In1==mean1))
                deltamean1 = Theta21.*(2*b1./(In1-mean1)).*Out1.*(1-Out1);
                deltab1 = Theta21.*(-2).*log(abs((In1-mean1)./sigma1)).*Out1.*(1-Out1);
                deltasigma1 = Theta21.*(2*b1./sigma1).*Out1.*(1-Out1);
                dmean1 = Learn_rate*deltamean1 + coff*dmean1;
                mean1 = mean1 + dmean1;
                dsigma1 = Learn_rate*deltasigma1 + coff*dsigma1;
                sigma1 = sigma1 + dsigma1;
                db1 = Learn_rate*deltab1 + coff*db1;
                b1 = b1 + db1;
                % Keep the membership-function centers sorted in ascending order
                for m=1:Number_Fuzzy_rules-1
                    if ~isempty(find(mean1(:,m)>mean1(:,m+1)))
                        for i=1:Number_signal_in
                            [mean1(i,:) index1] = sort(mean1(i,:));
                            sigma1(i,:) = sigma1(i,index1);
                            b1(i,:) = b1(i,index1);
                        end
                    end
                end
            end
            % Learning finished: pack the updated parameters back into the state vector
            Xt = [reshape(mean1,Number_signal_in*Number_Fuzzy_rules,1); ...
                  reshape(sigma1,Number_signal_in*Number_Fuzzy_rules,1); ...
                  reshape(b1,Number_signal_in*Number_Fuzzy_rules,1); ...
                  reshape(P,((Number_signal_in+1)*NumRules)^2,1); ThetaL4; ...
                  reshape(dmean1,Number_signal_in*Number_Fuzzy_rules,1); ...
                  reshape(dsigma1,Number_signal_in*Number_Fuzzy_rules,1); ...
                  reshape(db1,Number_signal_in*Number_Fuzzy_rules,1); dThetaL4;];
        end
        out=Xt;
    % S-function step 3: compute the block output (pure forward pass)
    elseif flag == 3
        Feedfor_phase;  % external script (not shown) that restores the learned parameters from Xt
        % Data flow through each layer of the fuzzy neural network
        % Layer 1
        x = u(1:Number_inport);
        In1 = x*ones(1,Number_Fuzzy_rules);                   % layer-1 input
        Out1 = 1./(1 + (abs((In1-mean1)./sigma1)).^(2*b1));   % layer-1 output; this membership function can be replaced
        % Layer 2
        precond = Out1';
        Out2 = prod(Out1)';
        S_2 = sum(Out2);   % sum of firing strengths
        % Layer 3
        if S_2~=0
            Out3 = Out2'./S_2;   % normalization keeps the later computations well-scaled
        else
            Out3 = zeros(1,NumRules);
        end
        % Layer 4
        Aux1 = [x; 1]*Out3;
        a = reshape(Aux1,(Number_signal_in+1)*NumRules,1);   % regressor for the control output
        % Layer 5: final result
        outact = a'*ThetaL4;
        % Final output
        out = [outact;Xt];
    else
        out = [];
    end
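The consequent-parameter step in the S-function above is a recursive least-squares (RLS) update with forgetting factor `lamda`. The same update rule can be sketched standalone in NumPy (the helper name `rls_step` and the synthetic data are mine, for illustration only):

```python
import numpy as np

def rls_step(P, theta, a, e, lamda=0.99):
    """One recursive-least-squares update, as used for ThetaL4 in the S-function.
    P     : (d,d) inverse-covariance estimate
    theta : (d,)  consequent parameters
    a     : (d,)  regressor (normalized firing strengths times [x; 1])
    e     : float tracking error
    """
    Pa = P @ a
    # Forgetting-factor RLS covariance update (mirrors the MATLAB expression)
    P_new = (P - np.outer(Pa, Pa) / (lamda + a @ Pa)) / lamda
    theta_new = theta + P_new @ a * e
    return P_new, theta_new

# Fit theta to a noiseless synthetic linear target y = a . [2, -1, 0.5]
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0, 0.5])
P = np.eye(3) * 1e6          # large initial P, as in the S-function's initialization
theta = np.zeros(3)
for _ in range(200):
    a = rng.standard_normal(3)
    e = a @ true_theta - a @ theta   # error between target and current model
    P, theta = rls_step(P, theta, a, e)
```

On noiseless data the estimate converges to the true parameters within a few steps; the forgetting factor lets the filter track slowly drifting parameters at the cost of a noisier estimate.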

    S-function of the TS fuzzy controller

    function [out,Xt,str,ts] = Sfunc_fnn_TS(t,Xt,u,flag,Learn_rate,coffa,lamda,r,vigilance,coffb,arate,Number_signal_in,Number_Fuzzy_rules,x0,Xmins,Data_range,T_samples)
    % Input definitions
    % t,Xt,u,flag        : standard S-function arguments
    % Learn_rate         : learning rate
    % coffa              : momentum coefficient for the premise-parameter updates
    % lamda              : forgetting factor of the recursive least-squares update
    % r                  : slope of the trapezoidal membership functions
    % vigilance          : vigilance threshold for adaptive rule generation
    % coffb              : choice parameter in the best-matching-rule search
    % arate              : learning rate of the rule-weight update
    % Number_signal_in   : number of input signals
    % Number_Fuzzy_rules : number of fuzzy rules
    % Xmins, Data_range  : offset and range used to normalize the inputs to [0, 1]
    % T_samples          : block sample time
    Data_in_numbers = Number_signal_in;
    Data_out_numbers = 1;
    % Total width of the input port: system input x, error input e, and the training flag
    ninps = Data_in_numbers+Data_out_numbers+1;
    Number_Fuzzy_rules2 = Number_Fuzzy_rules;
    Num_out1 = 2*Number_signal_in*Number_Fuzzy_rules + ((Number_signal_in+1)*Number_Fuzzy_rules2)^2 + (Number_signal_in+1)*Number_Fuzzy_rules2 + 1;
    Num_out2 = 2*Number_signal_in*Number_Fuzzy_rules + (Number_signal_in+1)*Number_Fuzzy_rules2;
    % S-function step 1: initialization
    if flag == 0
        out = [0,Num_out1+Num_out2,1+Num_out1+Num_out2,ninps,0,1,1];
        str = [];
        ts = T_samples;
        Xt = x0;
    % S-function step 2: state (parameter-learning) update
    elseif flag == 2
        x1 = (u(1:Data_in_numbers) - Xmins)./Data_range;            % normalize the inputs
        x = [ x1; ones(Data_in_numbers,1) - x1];                    % complement coding
        e = u(Data_in_numbers+1:Data_in_numbers+Data_out_numbers);  % error input e
        learning = u(Data_in_numbers+Data_out_numbers+1);           % training flag
        % When the flag is 1 the network is in its learning state
        if learning == 1
            NumRules = Xt(1);
            NumInTerms = NumRules;
            Feedfor_phase;  % external script (not shown) that restores the learned parameters from Xt
            % Search for the best-matching rule node
            New_nodess = 0;
            reass = 0;
            Rst_nodes = [];
            rdy_nodes = [];
            while reass == 0 && NumInTerms
                % Match score of each node
                N = size(w_a,2);
                node_tmp = x * ones(1,N);
                A_AND_w = min(node_tmp,w_a);
                Sa = sum(abs(A_AND_w));
                Ta = Sa ./ (coffb + sum(abs(w_a)));
                % Zero out nodes that have already been reset
                Ta(Rst_nodes) = zeros(1,length(Rst_nodes));
                Ta(rdy_nodes) = zeros(1,length(rdy_nodes));
                [Tamax,J] = max(Ta);
                w_J = w_a(:,J);
                xa = min(x,w_J);
                % Vigilance test of the best node
                if sum(abs(xa))./Number_signal_in >= vigilance
                    reass = 1;
                    w_a(:,J) = arate*xa + (1-arate)*w_a(:,J);
                elseif sum(abs(xa))/Number_signal_in < vigilance
                    reass = 0;
                    Rst_nodes = [Rst_nodes J];
                end
                % If every node failed the test, create a new rule node
                if length(Rst_nodes)== N || length(rdy_nodes)== N
                    w_a = [w_a x];
                    New_nodess = 1;
                    reass = 0;
                end
            end
            % Node update
            u2 = w_a(1:Number_signal_in,:);
            v2 = 1 - w_a(Number_signal_in+1:2*Number_signal_in,:);
            NumInTerms = size(u2,2);
            NumRules = NumInTerms;
            if New_nodess == 1
                % Grow the consequent parameters and covariance for the new rule
                ThetaL5 = [ThetaL5; zeros(Number_signal_in+1,1)];
                dThetaL5 = [dThetaL5; zeros(Number_signal_in+1,1)];
                P = [ P zeros((Number_signal_in+1)*(NumRules-1),Number_signal_in+1);
                      zeros(Number_signal_in+1,(Number_signal_in+1)*(NumRules-1)) 1e6*eye(Number_signal_in+1); ];
                du2 = [du2 zeros(Number_signal_in,1)];
                dv2 = [dv2 zeros(Number_signal_in,1)];
            end
            % Layer 2 (trapezoidal memberships):
            x1_tmp = x1;
            x1_tmp2 = x1_tmp*ones(1,NumInTerms);
            Out2 = 1 - check(x1_tmp2-v2,r) - check(u2-x1_tmp2,r);
            % Layer 3 (rule firing strengths):
            Out3 = prod(Out2);
            S_3 = sum(Out3);
            % Layer 4 (normalization):
            if S_3~=0
                Out4 = Out3/S_3;
            else
                Out4 = zeros(1,NumRules);
            end
            Aux1 = [x1_tmp; 1]*Out4;
            a = reshape(Aux1,(Number_signal_in+1)*NumRules,1);
            % Layer 5: consequent-parameter learning (recursive least squares)
            P = (1./lamda).*(P - P*a*a'*P./(lamda+a'*P*a));
            ThetaL5 = ThetaL5 + P*a.*e;
            ThetaL5_tmp = reshape(ThetaL5,Number_signal_in+1,NumRules);
            % Error back-propagation
            % Layer 4:
            e4 = [x1_tmp' 1]*ThetaL5_tmp.*e;
            denom = S_3*S_3;
            % Layer 3:
            Theta43 = zeros(NumRules,NumRules);
            if denom~=0
                for k1=1:NumRules
                    for k2=1:NumRules
                        if k1==k2
                            Theta43(k1,k2) = ((S_3-Out3(k2))./denom).*e4(k2);
                        else
                            Theta43(k1,k2) = -(Out3(k2)./denom).*e4(k2);
                        end
                    end
                end
            end
            e3 = sum(Theta43,2);
            % Layer 2
            Q = zeros(Number_signal_in,NumInTerms,NumRules);
            for i=1:Number_signal_in
                for j=1:NumInTerms
                    for k=1:NumRules
                        if j==k && Out2(i,j)~=0
                            Q(i,j,k) = (Out3(k)./Out2(i,j)).*e3(k);
                        else
                            Q(i,j,k) = 0;
                        end
                    end
                end
            end
            Thetass = sum(Q,3);
            Thetavv = zeros(Number_signal_in,NumInTerms);
            Thetauu = zeros(Number_signal_in,NumInTerms);
            for i=1:Number_signal_in
                for j=1:NumInTerms
                    if ((Out2(i)-v2(i,j))*r>=0) && ((Out2(i)-v2(i,j))*r<=1)
                        Thetavv(i,j) = r;
                    end
                    if ((u2(i,j)-Out2(i))*r>=0) && ((u2(i,j)-Out2(i))*r<=1)
                        Thetauu(i,j) = -r;
                    end
                end
            end
            % Premise-parameter update from the learning result
            e3_tmp = (e3*ones(1,Number_signal_in))';
            du2 = Learn_rate*Thetavv.*e3_tmp.*Thetass + coffa*du2;
            dv2 = Learn_rate*Thetauu.*e3_tmp.*Thetass + coffa*dv2;
            v2 = v2 + du2;
            u2 = u2 + dv2;
            % Keep u2 <= v2 by swapping crossed bounds
            if ~isempty(find(u2>v2))
                for i=1:Number_signal_in
                    for j=1:NumInTerms
                        if u2(i,j) > v2(i,j)
                            temp = v2(i,j);
                            v2(i,j) = u2(i,j);
                            u2(i,j) = temp;
                        end
                    end
                end
            end
            % Clip the bounds back into [0, 1]
            if ~isempty(find(u2<0)) || ~isempty(find(v2>1))
                for i=1:Number_signal_in
                    for j=1:NumInTerms
                        if u2(i,j) < 0
                            u2(i,j) = 0;
                        end
                        if v2(i,j) > 1
                            v2(i,j) = 1;
                        end
                    end
                end
            end
            % Rebuild w_a from the updated bounds
            w_a = [u2; 1-v2];
            % Learning finished: pack the updated parameters back into the state vector
            Xt1 = [NumRules; reshape(w_a,2*Number_signal_in*NumInTerms,1); ...
                   reshape(P,((Number_signal_in+1)*NumRules)^2,1); ThetaL5; ...
                   reshape(du2,Number_signal_in*NumInTerms,1); ...
                   reshape(dv2,Number_signal_in*NumInTerms,1); dThetaL5;];
            ns1 = size(Xt1,1);
            Xt = [Xt1; zeros(Num_out1+Num_out2-ns1,1)];
        end
        out=Xt;
    % S-function step 3: compute the block output (pure forward pass)
    elseif flag == 3
        NumRules = Xt(1);
        NumInTerms = NumRules;
        Feedfor_phase;  % external script (not shown) that restores the learned parameters from Xt
        u2 = w_a(1:Number_signal_in,:);
        v2 = 1 - w_a(Number_signal_in+1:2*Number_signal_in,:);
        % Layer 1 output (input normalization)
        x1 = (u(1:Data_in_numbers) - Xmins)./Data_range;
        % Layer 2 output
        x1_tmp = x1;
        x1_tmp2 = x1_tmp*ones(1,NumInTerms);
        Out2 = 1 - check(x1_tmp2-v2,r) - check(u2-x1_tmp2,r);
        % Layer 3 output
        Out3 = prod(Out2);
        S_3 = sum(Out3);
        % Layer 4 output
        if S_3~=0
            Out4 = Out3/S_3;
        else
            Out4 = zeros(1,NumRules);
        end
        % Layer 5 output
        Aux1 = [x1_tmp; 1]*Out4;
        a = reshape(Aux1,(Number_signal_in+1)*NumRules,1);
        outact = a'*ThetaL5;
        out = [outact;Xt];
    else
        out = [];
    end

    function y = check(s,r)
    % Elementwise ramp: clip s*r to the interval [0, 1]
    rows = size(s,1);
    columns = size(s,2);
    y = zeros(rows,columns);
    for i=1:rows
        for j=1:columns
            if s(i,j).*r>1
                y(i,j) = 1;
            elseif 0 <= s(i,j).*r && s(i,j).*r <= 1
                y(i,j) = s(i,j).*r;
            elseif s(i,j).*r<0
                y(i,j) = 0;
            end
        end
    end
    return
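The `check` helper above is an elementwise ramp: it clips `s*r` to the interval [0, 1]. Combined as `1 - check(x - v, r) - check(u - x, r)`, it yields the flat-topped (trapezoid-like) membership function that layer 2 of the TS network uses. A NumPy sketch of the same pair of functions (the core bounds `u`, `v` and slope `r` below are illustrative values, not taken from the model):

```python
import numpy as np

def check(s, r):
    """Elementwise ramp: clip s*r to [0, 1] (mirrors the MATLAB helper)."""
    return np.clip(s * r, 0.0, 1.0)

def trapezoid_membership(x, u, v, r):
    """Flat-topped membership: 1 on the core [u, v], sloping to 0 with slope r outside."""
    return 1.0 - check(x - v, r) - check(u - x, r)

# Membership equals 1 inside the core [0.3, 0.7] and falls off with slope r = 5
x = np.linspace(0.0, 1.0, 11)
mu = trapezoid_membership(x, u=0.3, v=0.7, r=5.0)
```

Larger `r` makes the shoulders steeper; as `r` grows the membership approaches a crisp interval indicator on [u, v].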


  • Original article: https://blog.csdn.net/ccsss22/article/details/125903974