Research on an Image Denoising Algorithm Based on Sparse Constraints (MATLAB Code Implementation)


     💥💥💞💞 Welcome to this blog ❤️❤️💥💥

    🏆 Blogger's strengths: 🌞🌞🌞 The content of this blog strives to be carefully reasoned and logically clear, for the convenience of readers.

    ⛳️ Motto: In a journey of a hundred miles, ninety is only half the way.

    Contents

    💥1 Overview

    📚2 Results

    🎉3 References

    🌈4 MATLAB Code Implementation

    💥1 Overview

    Image data is indispensable in everyday communication. During transmission and reception, however, images are often corrupted by noise caused by hardware and other factors, which degrades image quality and hinders subsequent processing and analysis. Removing image noise is therefore essential, and how to suppress noise while preserving texture details remains an open problem. In recent years, the rise of sparse representation theory has brought significant progress in image denoising.
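
    As a minimal illustration of the sparse-constraint idea itself (separate from the autoencoder code shown in the next section), the sketch below denoises an image by moving it into a transform domain where natural images are approximately sparse (the 2-D DCT here) and soft-thresholding the coefficients. The test image, noise level, and threshold value are assumptions chosen only for this example.

    % Minimal sketch of sparsity-constrained denoising by soft-thresholding the
    % 2-D DCT coefficients (illustrative only; image, noise level and threshold are assumptions).
    img   = im2double(imread('cameraman.tif'));    % grayscale test image shipped with MATLAB
    sigma = 0.05;                                  % assumed noise standard deviation
    noisy = img + sigma*randn(size(img));          % add white Gaussian noise

    C      = dct2(noisy);                          % transform to a domain where images are sparse
    lambda = 3*sigma;                              % simple rule-of-thumb threshold (assumption)
    C_hat  = sign(C).*max(abs(C) - lambda, 0);     % soft-thresholding enforces the sparsity constraint
    den    = idct2(C_hat);                         % back to the image domain

    figure;
    subplot(131), imshow(img),   title('Clean');
    subplot(132), imshow(noisy), title('Noisy');
    subplot(133), imshow(den),   title('Denoised (DCT soft-thresholding)');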

    📚2 Results

     Partial code:

    pathname = uigetdir;                          % select the folder that holds the test images
    allfiles = dir(fullfile(pathname,'*.jpg'));
    xts = [];                                     % initialize testing inputs
    for i = 1:size(allfiles,1)
        x = imread(fullfile(pathname, allfiles(i).name));
        x = imresize(x, gamma);                   % gamma: target image size, defined earlier in the full script
        x = rgb2gray(x);
        x = double(x);
        xts = [xts; x];                           % stack images vertically to build the testing set
    end

    %% Initialization of the Algorithm
    NumberofHiddenNeurons=500;  % number of hidden neurons
    D_ratio=0.35;               % ratio of corrupted (noise) entries in each chosen frame
    DB=1;                       % power of the white Gaussian noise in decibels
    ActivationFunction='sig';   % activation function
    frame=20;                   % size of each frame
    %% Train and test
    %%

    %  During training, white Gaussian noise and zeros are added to randomly
    %  chosen frames (see the corruption loop excerpted below).
    %  The autoencoder is trained to undo this type of data corruption.

    [AE_net] = elm_AE(xtr,xts,NumberofHiddenNeurons,ActivationFunction,D_ratio,DB,frame); % xtr: training set, built the same way as xts
    %% Important Note: 
    %%

    %  After training is complete, we no longer need InputWeight to map the
    %  inputs to the hidden layer; instead, the output weights beta are used
    %  for both the coding and decoding phases. The activation function is not
    %  applied either, because beta is computed after the activation.
    %  The same applies to the biases (for more details, see the testing phase
    %  inside the function 'elm_AE').
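
    %  (Illustrative sketch, not part of the original script: assuming elm_AE
    %   stores the learned output weights in a field such as AE_net.OutputWeight
    %   -- the field name and the matrix orientation are assumptions and should
    %   be checked against 'elm_AE' -- a column-stacked sample x could then be
    %   coded and decoded linearly, without the activation function:)
    %      beta  = AE_net.OutputWeight;   % hypothetical field name
    %      code  = beta  * x;             % coding phase: linear projection with beta
    %      x_hat = beta' * code;          % decoding phase: map the code back to the input space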

    %% Illustration
    subplot(121)
    corrupted=AE_net.x(:,1:gamma(2)*2);           % first gamma(2)*2 columns of the corrupted input
    imshow(corrupted')
    title('Corrupted images');
    subplot(122)
    regenerated=AE_net.Ytr_hat(:,1:gamma(2)*2);   % matching columns of the autoencoder reconstruction
    imagesc(regenerated'), colormap('gray');
    title('Regenerated images');

    %% scale training dataset
    T=Tinputs'; T = scaledata(T,0,1);             % keep an original copy of the input and use it as the target
    P=Tinputs';
    %% scale testing dataset
    TV.T=Tsinputs'; TV.T = scaledata(TV.T,0,1);   % keep an original copy of the input and use it as the target
    TV.P=Tsinputs'; TV.P = scaledata(TV.P,0,1);   % temporal input
    TVT=TV.T;                                     % save a copy as an output of the function

    %% in the 1st and 2nd steps we corrupt the temporal input
    PtNoise=zeros(size(P));
    i=1;
    while i < size(P,2)-frame
        gen=randi([0,1],1,1);          % randomly decide whether this frame is corrupted
        PNoise=[];

        %%% 1st step: generate a set of indexes whose values will be set to zero later
        %%% (here they are chosen randomly; they could also be chosen by probability)
        [zeroind] = dividerand(size(P,1),1-D_ratio,0,D_ratio); % generate indexes
        %%% 2nd step: add Gaussian noise
        if gen==1
            Noise=wgn(1,size(P,1),DB)'; % generate white Gaussian noise
        else
            Noise=zeros(1,size(P,1))';
        end

        for j=1:frame                   % replicate the noise column across the frame
            PNoise=[PNoise Noise];
        end
        if gen==1
            for j=1:length(zeroind)     % set the chosen rows to zero
                PNoise(zeroind(j),:)=0;
                P(zeroind(j),i:i+frame-1)=0;
            end
        end

        PtNoise(:,i:i+frame-1)=PNoise;
        i=i+frame;
    end
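
    The excerpt above only displays the corrupted and regenerated blocks visually. As a hedged quantitative check, the sketch below computes the peak signal-to-noise ratio of a reconstruction against a clean reference; clean_block and recon_block are placeholder names for whatever clean/reconstructed pair the script produces (for example, slices of the scaled target and of AE_net.Ytr_hat), and both are assumed to be scaled to [0, 1].

    % Illustrative PSNR check (not part of the original script).
    % clean_block and recon_block are placeholder names; both are assumed to lie in [0,1].
    mse      = mean((clean_block(:) - recon_block(:)).^2);   % mean squared error
    psnr_val = 10*log10(1/mse);                               % PSNR for a unit peak value
    fprintf('PSNR of the regenerated block: %.2f dB\n', psnr_val);

    With the Image Processing Toolbox available, the built-in psnr(recon_block, clean_block) gives the same figure for double images in [0, 1].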
     

    🎉3 References

    Some of the theory is drawn from online sources; in case of infringement, please contact us for removal.

    [1] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, “Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion,” J. Mach. Learn. Res., vol. 11, pp. 3371–3408, 2010.
    [2] L.-L. Cao, W.-B. Huang, and F.-C. Sun, “Building feature space of extreme learning machine with sparse denoising stacked-autoencoder,” Neurocomputing, vol. 174, pp. 60–71, 2016.
    [3] G.-B. Huang, “What are Extreme Learning Machines? Filling the Gap Between Frank Rosenblatt’s Dream and John von Neumann’s Puzzle,” Cognit. Comput., vol. 7, no. 3, pp. 263–278, 2015.

    🌈4 MATLAB Code Implementation

Original article: https://blog.csdn.net/weixin_46039719/article/details/127826314