MATLAB Deep Learning Notes: DeepLearnToolbox Overview


Warning:
This toolbox is deprecated and is no longer maintained.
There are many tools available for deep learning that are much better than this toolbox, such as Theano, Torch, or TensorFlow.
I suggest you use one of the tools mentioned above instead of this toolbox.
Best, Rasmus
DeepLearnToolbox: a MATLAB toolbox for deep learning
Deep learning is a new subfield of machine learning that focuses on learning deep hierarchical models of data. It is inspired by the apparently deep, layered architecture of the human brain. A good overview of deep learning theory is Learning Deep Architectures for AI.
For a more informal introduction, see the following videos by Geoffrey Hinton and Andrew Ng:
The Next Generation of Neural Networks (Hinton, 2007)
Recent Developments in Deep Learning (Hinton, 2010)
Unsupervised Feature Learning and Deep Learning (Ng, 2011)
Directories included in the toolbox
NN/ - a library for feed-forward backpropagation neural networks
CNN/ - a library for convolutional neural networks
DBN/ - a library for deep belief networks
SAE/ - a library for stacked auto-encoders
CAE/ - a library for convolutional auto-encoders
util/ - utility functions used by the libraries
data/ - data used by the examples
tests/ - unit tests to verify the toolbox is working
Setup
1. Download the toolbox.
2. Run addpath(genpath('DeepLearnToolbox'));
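Before running the examples, it can help to confirm the toolbox is actually visible. A minimal sketch, checking for nnsetup (one of the functions shipped in NN/):
% Sketch: verify that DeepLearnToolbox is on the MATLAB path
addpath(genpath('DeepLearnToolbox'));
assert(exist('nnsetup', 'file') == 2, 'DeepLearnToolbox not found on path');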
Example: Deep Belief Network
function test_example_DBN
load mnist_uint8;
train_x = double(train_x) / 255;
test_x  = double(test_x)  / 255;
train_y = double(train_y);
test_y  = double(test_y);
%%  ex1 train a 100 hidden unit RBM and visualize its weights
rand('state',0)
dbn.sizes = [100];
opts.numepochs =  1;
opts.batchsize = 100;
opts.alpha    =  1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
figure; visualize(dbn.rbm{1}.W');  %  Visualize the RBM weights
%%  ex2 train a 100-100 hidden unit DBN and use its weights to initialize a NN
rand('state',0)
%train dbn
dbn.sizes = [100 100];
opts.numepochs =  1;
opts.batchsize = 100;
opts.alpha    =  1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
%unfold dbn to nn
nn = dbnunfoldtonn(dbn, 10);
nn.activation_function = 'sigm';
%train nn
opts.numepochs =  1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.10, 'Too big error');
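After fine-tuning, the network can classify new samples directly. A minimal sketch using nnpredict (the prediction helper in NN/), which returns the index of the largest output unit for each row:
% Sketch: per-sample predictions from the DBN-initialized network
labels = nnpredict(nn, test_x);        % predicted class index (1..10) per sample
[~, expected] = max(test_y, [], 2);    % decode one-hot targets to class indices
fprintf('Test accuracy: %.2f%%\n', 100 * mean(labels == expected));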
Example: Stacked Auto-Encoders
function test_example_SAE
load mnist_uint8;
train_x = double(train_x)/255;
test_x  = double(test_x)/255;
train_y = double(train_y);
test_y  = double(test_y);
%%  ex1 train a 100 hidden unit SDAE and use it to initialize a FFNN
%  Setup and train a stacked denoising autoencoder (SDAE)
rand('state',0)
sae = saesetup([784 100]);
sae.ae{1}.activation_function      = 'sigm';
sae.ae{1}.learningRate              = 1;
sae.ae{1}.inputZeroMaskedFraction  = 0.5;
opts.numepochs =  1;
opts.batchsize = 100;
sae = saetrain(sae, train_x, opts);
visualize(sae.ae{1}.W{1}(:,2:end)')
% Use the SDAE to initialize a FFNN
nn = nnsetup([784 100 10]);
nn.activation_function              = 'sigm';
nn.learningRate                    = 1;
nn.W{1} = sae.ae{1}.W{1};
% Train the FFNN
opts.numepochs =  1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.16, 'Too big error');
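The same pattern extends to deeper stacks: pass more layer sizes to saesetup and copy each pretrained encoder into the matching FFNN layer. A minimal sketch, reusing opts and the MNIST variables from above:
% Sketch: two-layer denoising SAE used to initialize a 784-100-100-10 FFNN
sae2 = saesetup([784 100 100]);
sae2.ae{1}.inputZeroMaskedFraction = 0.5;   % corrupt the inputs of each autoencoder
sae2.ae{2}.inputZeroMaskedFraction = 0.5;
sae2 = saetrain(sae2, train_x, opts);
nn2 = nnsetup([784 100 100 10]);
nn2.W{1} = sae2.ae{1}.W{1};                 % copy the pretrained encoder weights
nn2.W{2} = sae2.ae{2}.W{1};
nn2 = nntrain(nn2, train_x, train_y, opts);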
Example: Convolutional Neural Nets
function test_example_CNN
load mnist_uint8;
train_x = double(reshape(train_x',28,28,60000))/255;
test_x = double(reshape(test_x',28,28,10000))/255;
train_y = double(train_y');
test_y = double(test_y');
%% ex1 Train a 6c-2s-12c-2s Convolutional neural network
% will run 1 epoch in about 200 seconds and get around 11% error.
% With 100 epochs you'll get around 1.2% error
rand('state',0)
cnn.layers = {
struct('type', 'i') %input layer
struct('type', 'c', 'outputmaps', 6, 'kernelsize', 5) %convolution layer
struct('type', 's', 'scale', 2) %sub sampling layer
struct('type', 'c', 'outputmaps', 12, 'kernelsize', 5) %convolution layer
struct('type', 's', 'scale', 2) %subsampling layer
};
cnn = cnnsetup(cnn, train_x, train_y);
opts.alpha = 1;
opts.batchsize = 50;
opts.numepochs = 1;
cnn = cnntrain(cnn, train_x, train_y, opts);
[er, bad] = cnntest(cnn, test_x, test_y);
%plot mean squared error
figure; plot(cnn.rL);
assert(er < 0.12, 'Too big error');
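cnntest only returns the aggregate error; for per-image predictions you can run the forward pass yourself with cnnff (from CNN/, the same routine cnntest uses internally). A minimal sketch:
% Sketch: forward-propagate the test images and decode predictions
net = cnnff(cnn, test_x);       % net.o is 10 x 10000, one score column per image
[~, labels] = max(net.o);       % predicted class index per image
[~, expected] = max(test_y);    % test_y is also 10 x 10000 in this example
fprintf('Test accuracy: %.2f%%\n', 100 * mean(labels == expected));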
Example: Neural Networks
function test_example_NN
load mnist_uint8;
train_x = double(train_x) / 255;
test_x  = double(test_x)  / 255;
train_y = double(train_y);
test_y  = double(test_y);
% normalize
[train_x, mu, sigma] = zscore(train_x);
test_x = normalize(test_x, mu, sigma);
%% ex1 vanilla neural net
rand('state',0)
nn = nnsetup([784 100 10]);
opts.numepochs =  1;  %  Number of full sweeps through data
opts.batchsize = 100;  %  Take a mean gradient step over this many samples
[nn, L] = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.08, 'Too big error');
%% ex2 neural net with L2 weight decay
rand('state',0)
nn = nnsetup([784 100 10]);
nn.weightPenaltyL2 = 1e-4;  %  L2 weight decay
opts.numepochs =  1;        %  Number of full sweeps through data
opts.batchsize = 100;      %  Take a mean gradient step over this many samples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex3 neural net with dropout
rand('state',0)
nn = nnsetup([784 100 10]);
nn.dropoutFraction = 0.5;  %  Dropout fraction
opts.numepochs =  1;        %  Number of full sweeps through data
opts.batchsize = 100;      %  Take a mean gradient step over this many samples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex4 neural net with sigmoid activation function
rand('state',0)
nn = nnsetup([784 100 10]);
nn.activation_function = 'sigm';    %  Sigmoid activation function
nn.learningRate = 1;                %  Sigmoids require a lower learning rate
opts.numepochs =  1;                %  Number of full sweeps through data
opts.batchsize = 100;              %  Take a mean gradient step over this many samples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex5 plotting functionality
rand('state',0)
nn = nnsetup([784 20 10]);
opts.numepochs        = 5;            %  Number of full sweeps through data
nn.output              = 'softmax';    %  use softmax output
opts.batchsize        = 1000;        %  Take a mean gradient step over this many samples
opts.plot              = 1;            %  enable plotting
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex6 neural net with sigmoid activation and plotting of validation and training error
% split training data into training and validation data
vx  = train_x(1:10000,:);
tx = train_x(10001:end,:);
vy  = train_y(1:10000,:);
ty = train_y(10001:end,:);
rand('state',0)
nn                      = nnsetup([784 20 10]);
nn.output              = 'softmax';                  %  use softmax output
opts.numepochs          = 5;                          %  Number of full sweeps through data
opts.batchsize          = 1000;                        %  Take a mean gradient step over this many samples
opts.plot              = 1;                          %  enable plotting
nn = nntrain(nn, tx, ty, opts, vx, vy);                %  nntrain takes a validation set as last two arguments (optionally)
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
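Besides opts.plot, the second output of nntrain (captured as L in ex1 above) records the loss of every minibatch and gives a quick convergence check without the plotting machinery. A minimal sketch:
% Sketch: plot the per-minibatch training loss captured in ex1
figure; plot(L);
xlabel('minibatch'); ylabel('training loss');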
