• [Improved Extreme Learning Machine] Bidirectional Extreme Learning Machine (B-ELM) (Matlab Implementation)


    📝 Personal homepage: 研学社的博客

    💥💥💞💞 Welcome to this blog ❤️❤️💥💥

    🏆 Blogger's strength: 🌞🌞🌞 The content of this blog strives to be carefully reasoned and logically clear, for the reader's convenience.

    ⛳️ Motto: On a journey of a hundred miles, ninety is only the halfway point.

    Contents

    💥1 Overview

    📚2 Results

    🎉3 References

    🌈4 Matlab Implementation

    💥1 Overview

    Literature source:

    Clearly, the learning efficiency and learning speed of neural networks usually fall far short of what is required, and this has long been a major bottleneck for many applications. Recently, Huang proposed a simple and efficient learning method called the extreme learning machine (ELM), showing that the training time of neural networks can be reduced by a factor of up to a thousand compared with some conventional methods. However, an open question in ELM research is whether the number of hidden nodes can be reduced further without compromising learning performance. This brief proposes a new learning algorithm, called the bidirectional extreme learning machine (B-ELM), in which some of the hidden nodes are not selected at random. In theory, the algorithm tends to drive the network output error toward zero at a very early stage of learning. Moreover, we identify a relationship in the proposed B-ELM between the network output error and the network output weights. Simulation results show that the proposed method is tens to hundreds of times faster than other incremental ELM algorithms.
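    To make the mechanism concrete: incremental ELM variants grow the network one hidden node at a time, shrinking the residual error at each step, and B-ELM adds its nodes in odd/even pairs. The recurrences below are a summary sketch based on the abstract and on the code that follows, not formulas quoted verbatim from the paper:

    $$f_n(\mathbf{x}) = f_{n-1}(\mathbf{x}) + \beta_n h_n(\mathbf{x}), \qquad e_n = e_{n-1} - \beta_n h_n.$$

    For an odd node ($L=2n-1$) the hidden parameters are drawn at random and $\beta_{2n-1}$ is fitted to the residual by least squares; for an even node ($L=2n$) the desired hidden output is computed from the residual through the previous output weight, $\hat{h}_{2n} = e_{2n-1}\,\beta_{2n-1}^{\dagger}$, after which the input weights are recovered by inverting the activation function. This is the relation between the network output error and the network output weights mentioned above.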

    📚2 Results

    Partial code:

    load fried.mat;                                     %   Friedman benchmark dataset
    data=fried';
    data=mapminmax(data);                               %   scale every attribute to [-1,1]
    data(11,:)=(data(11,:)/2)+0.5;                      %   rescale the target to [0,1]
    data=[data(11,:); data(1:10,:)]';                   %   move the target to the first column

    % NOTE: the excerpt omits the two enclosing loop headers; from the loop
    % variables and the two matching "end" statements they are assumed to be a
    % sweep over the node-pair count kkk and repeated random trials rnd.
    for kkk=1:MaxNodePairs                              %   (upper bound assumed)
        for rnd=1:NumTrials                             %   (upper bound assumed)
            rand_sequence=randperm(size(data,1));
            temp_data=data;
            data=temp_data(rand_sequence, :);           %   shuffle the samples
            Training=data(1:20768,:);
            Testing=data(20769:40768,:);

            [train_time, test_time, train_accuracy11, test_accuracy11]=B_ELM(Training,Testing,0,1,'sig',kkk);

            D_ELM_test(rnd,1)=test_accuracy11;
            D_ELM_train(rnd,1)=train_accuracy11;
            D_ELM_train_time(rnd,1)=train_time;
        end

        DD_ELM_learn_time(kkk)=mean(D_ELM_train_time);
        DD_ELM_train_accuracy(kkk)=mean(D_ELM_train);
        DD_ELM_test_accuracy(kkk)=mean(D_ELM_test);
    end
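    The driver expects a B_ELM function whose body is excerpted next. A minimal sketch of the header that the call above implies (argument names are inferred from the call and from the identifiers in the body, not taken from the author's original file):

    % Hypothetical header, inferred from B_ELM(Training,Testing,0,1,'sig',kkk):
    function [TrainingTime, TestingTime, TrainingAccuracy, TestingAccuracy] = ...
        B_ELM(TrainingData_File, TestingData_File, Elm_Type, ...
              NumberofHiddenNeurons, ActivationFunction, kkk)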

    %%%%%%%%%%% Load training dataset (first column: target, remaining columns: inputs)
    train_data=TrainingData_File;
    T=train_data(:,1)';                                 %   targets, 1 x N row vector
    aaa=T;                                              %   keep a copy of the original targets
    P=train_data(:,2:size(train_data,2))';              %   inputs, stored one sample per column
    clear train_data;                                   %   Release raw training data array

    %%%%%%%%%%% Load testing dataset
    test_data=TestingData_File;
    TV.T=test_data(:,1)';
    TV.P=test_data(:,2:size(test_data,2))';
    clear test_data;                                    %   Release raw testing data array

    NumberofTrainingData=size(P,2);
    NumberofTestingData=size(TV.P,2);
    NumberofInputNeurons=size(P,1);
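    % Data layout at this point: P is (inputs x N_train) and T is 1 x N_train;
    % TV.P and TV.T hold the test set the same way. Samples are stored
    % column-wise, so a hidden layer evaluates as g(InputWeight*P + bias).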


    %%%%%%%%%%% Randomly generate input weights InputWeight (w_i) and biases BiasofHiddenNeurons (b_i) of hidden neurons
    %%%%%%%%%%% (this initial draw is overwritten inside the loop below)
    InputWeight=rand(NumberofHiddenNeurons,NumberofInputNeurons)*2-1;
    BiasofHiddenNeurons=rand(NumberofHiddenNeurons,1);
    tempH=InputWeight*P;
    ind=ones(1,NumberofTrainingData);
    BiasMatrix=BiasofHiddenNeurons(:,ind);              %   Replicate the bias vector to match the dimension of H
    tempH=tempH+BiasMatrix;

    D_YYM=[];               %   even-node input weights, stacked per iteration
    D_Input=[];             %   odd-node input weights, stacked per iteration
    D_beta=[];              %   odd-node output weights
    D_beta1=[];             %   even-node output weights
    TY=[];                  %   test-set prediction (assembled in the test phase)
    FY=[];                  %   running network output on the training set
    BiasofHiddenNeurons1=[];%   odd-node biases, stacked for the test phase

    %%%%%%%%%%% Train the network, adding hidden nodes in pairs (L=2n-1 random, L=2n constructed)
    start_time_train=cputime;

    for i=1:kkk
        %%%%%%%%%% B-ELM when number of hidden nodes L=2n-1 %%%%%%%
    InputWeight=rand(NumberofHiddenNeurons,NumberofInputNeurons)*2-1;   %   random input weights in [-1,1]
    BiasofHiddenNeurons=rand(NumberofHiddenNeurons,1);
    BiasofHiddenNeurons1=[BiasofHiddenNeurons1;BiasofHiddenNeurons];    %   stack the biases for the test phase
    tempH=P'*InputWeight';
    YYM=pinv(P')*tempH;                                 %   (recomputed below; this value is unused)
    YJX=P'*YYM;

    tempH=tempH';
    ind=ones(1,NumberofTrainingData);
    BiasMatrix=BiasofHiddenNeurons(:,ind);              %   Replicate the bias vector to match the dimension of H
    tempH=tempH+BiasMatrix;

    %%%%%%%%%%% Calculate hidden neuron output matrix H
    switch lower(ActivationFunction)
        case {'sig','sigmoid'}
            %%%%%%%% Sigmoid 
            H = 1 ./ (1 + exp(-tempH));
        case {'sin','sine'}
            %%%%%%%% Sine
            H = sin(tempH);    
                %%%%%%%% More activation functions can be added here                
    end

    %%%%%%%%%%% Calculate output weights OutputWeight (beta_i)
    OutputWeight=pinv(H') * T';                         %   slower but numerically stable implementation
    % OutputWeight=inv(H * H') * H * T';                %   faster implementation
    Y=(H' * OutputWeight)';                             %   this node's contribution to the network output


    %%%%%%%%%% B-ELM when number of hidden nodes L=2n %%%%%%%
    if i==1
        FY=Y;
    else
        FY=FY+Y;
    end
    E1=T-Y;                                             %   residual error after the odd node
    E1_2n_1(i)=norm(E1,2);
    TrainingAccuracy2=sqrt(mse(E1));
    Y2=E1'*pinv(OutputWeight);
    Y2=Y2';
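    % Key B-ELM step: rather than choosing the even node at random, its desired
    % hidden output is computed from the current residual through the output
    % weight, Y2 = E1'*pinv(OutputWeight) -- the relation between the network
    % output error and the output weights highlighted in the paper.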
    switch lower(ActivationFunction)
        case {'sig','sigmoid'}
            %%%%%%%% Sigmoid: scale the desired hidden output into (0.1,0.9)
            [Y22,PS(i)]=mapminmax(Y2,0.1,0.9);          %   PS(i) keeps the mapping for later reversal
        case {'sin','sine'}
            %%%%%%%% Sine: scale the desired hidden output into (0,1)
            [Y22,PS(i)]=mapminmax(Y2,0,1);
    end

    Y222=Y2;                                            %   keep the unscaled copy
    Y2=Y22';

    T1=(Y2* OutputWeight)';
    switch lower(ActivationFunction)
        case {'sig','sigmoid'}
            %%%%%%%% Sigmoid: invert via the logit, u = -log(1./h - 1)
            Y3=-log(1./Y2-1)';
        case {'sin','sine'}
            %%%%%%%% Sine: invert via the arcsine
            Y3=asin(Y2)';
    end

    T2=(Y3'* OutputWeight)';

    Y4=Y3;                                              %   required pre-activation of the even node

    YYM=pinv(P')*Y4';                                   %   least-squares input weights: P'*YYM ~ Y4'
    YJX=P'*YYM;                                         %   reconstructed pre-activation

    BB1=size(Y4);
    BB(i)=sum(YJX-Y4')/BB1(2);                          %   first bias estimate: mean reconstruction error

    GXZ1=P'*YYM-BB(i);

    cc=pinv(P')*(GXZ1-Y4');
    Y5=P'*cc-(GXZ1-Y4');
    GXZ11=P'*(YYM-cc)-BB(i);
    BBB(i)=mean(GXZ11-Y4');
    GXZ111=P'*(YYM-cc)-BB(i)-BBB(i);
    BBBB(i)=BB(i)+BBB(i);
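    % cc refines the input weights and BBB(i) refines the bias, so the even
    % node ends up with input weights (YYM-cc) and bias BBBB(i)=BB(i)+BBB(i);
    % GXZ111 is the resulting pre-activation, which should approximate Y4.
    % (Y5 above is only an intermediate check of the fit and is not reused.)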
    switch lower(ActivationFunction)
        case {'sig','sigmoid'}
            %%%%%%%% Sigmoid 
    GXZ2=1./(1+exp(-GXZ111'));
        case {'sin','sine'}
            %%%%%%%% Sine
    GXZ2=sin(GXZ111');
    end


    FYY = mapminmax('reverse',GXZ2,PS(i));              %   undo the scaling applied before the inversion

    %FYY=GXZ2;
    OutputWeight1=pinv(FYY') * E1';                     %   output weight of the even node, fitted to the residual
    FT1=FYY'*OutputWeight1;
    FY=FY+FT1';
    TrainingAccuracy=sqrt(mse(FT1'-E1));                %   RMSE of the error left after this pair
    D_Input=[D_Input;InputWeight];
    D_beta=[D_beta;OutputWeight];
    D_beta1=[D_beta1;OutputWeight1];
    D_YYM=[D_YYM;(YYM-cc)'];
    T=FT1'-E1;
    E1_2n(i)=norm(T,2);
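    % D_Input, D_beta, D_beta1 and D_YYM stack each pair's parameters for reuse
    % in the test phase. The fitted error FT1' minus the actual error E1 then
    % becomes the new target T (the remaining error with its sign flipped), and
    % its norm E1_2n(i) records how quickly the training error decays.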

    end
    end_time_train=cputime;
    TrainingTime=end_time_train-start_time_train;

        


    start_time_test=cputime;
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%test%%%%%%%%%%%%%%%%%%%%%

    tempH_test=D_Input*TV.P;                            %   odd-node pre-activations for all pairs at once
    % clear TV.P;  (removed: clear cannot drop a struct field, and TV.P is still needed for the even nodes)
    ind=ones(1,NumberofTestingData);
    BiasMatrix=BiasofHiddenNeurons1(:,ind);             %   replicate the stacked odd-node biases
    tempH_test=tempH_test + BiasMatrix;
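    The excerpt stops before the test prediction is assembled. Below is a minimal sketch of the remaining steps, mirroring the training loop above and assuming NumberofHiddenNeurons = 1 as in the driver call; it is our reconstruction, not the author's original code:

    TY=zeros(1,NumberofTestingData);                    %   accumulated test prediction
    for i=1:kkk
        rows=(i-1)*NumberofHiddenNeurons+1 : i*NumberofHiddenNeurons;
        H_test=1./(1+exp(-tempH_test(rows,:)));         %   odd node ('sig' case)
        TY=TY+(H_test'*D_beta(rows,:))';
        GXZ_test=TV.P'*D_YYM(rows,:)'-BBBB(i);          %   even-node pre-activation
        G2=1./(1+exp(-GXZ_test'));
        FYY_test=mapminmax('reverse',G2,PS(i));         %   undo the training-time scaling
        TY=TY+(FYY_test'*D_beta1(rows,:))';
    end
    TestingAccuracy=sqrt(mse(TY-TV.T));                 %   test RMSE
    end_time_test=cputime;
    TestingTime=end_time_test-start_time_test;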
     

    🎉3 References

    Some of the theory is drawn from online sources; in case of infringement, please get in touch and it will be removed.

    [1] Y. Yang, Y. Wang, and X. Yuan, "Bidirectional extreme learning machine for regression problem and its learning effectiveness," IEEE Transactions on Neural Networks and Learning Systems, vol. 23, pp. 1498–1505, 2012.

    🌈4 Matlab Implementation

  • Original article: https://blog.csdn.net/weixin_46039719/article/details/127834389