2023.2.11

Using the convolution and pooling layers implemented earlier, we build a CNN for the MNIST recognition task.

1. The structure of a simple CNN

The code must be run with a network connection, because it downloads the MNIST dataset. After training it writes the learned weights to params.pkl.

Basic parameters of the simple ConvNet:

Simple ConvNet: conv - relu - pool - affine - relu - affine - softmax

Parameters
----------
input_size : input size (784 for MNIST)
hidden_size : number of neurons in the hidden fully connected layer (e.g. 100)
output_size : output size (10 for MNIST)
activation : 'relu' or 'sigmoid'
weight_init_std : standard deviation of the weights (e.g. 0.01);
    specifying 'relu' or 'he' selects the "He initial values",
    specifying 'sigmoid' or 'xavier' selects the "Xavier initial values"

The hyperparameters of the convolution layer are passed in through a dictionary named conv_param, which stores the necessary values like {'filter_num': 30, 'filter_size': 5, 'pad': 0, 'stride': 1}.

Before initializing the weights, `__init__` takes the convolution hyperparameters out of this dictionary, computes the output size of the convolution layer, and then initializes the parameters. The parameters needed for learning are the weights and biases of the first (convolution) layer and of the two remaining fully connected layers; they are kept in the instance-variable dictionary self.params, with the convolution layer's weight stored under 'W1' and its bias under 'b1'.

The layers are then added, in order from the front, to an ordered dictionary (OrderedDict) self.layers. Only the final SoftmaxWithLoss layer is stored separately, in the instance variable self.last_layer.

That is everything the initialization of SimpleConvNet does. With the network initialized like this, predict performs inference and loss computes the loss function; gradient propagates the parameter gradients with the backpropagation method and finally stores each weight parameter's gradient in a grads dictionary. Assembling the forward and backward passes together completes the simple convolutional network.

Code of the simple ConvNet:

```python
class SimpleConvNet:
    """conv - relu - pool - affine - relu - affine - softmax"""

    def __init__(self, input_dim=(1, 28, 28),
                 conv_param={'filter_num': 30, 'filter_size': 5, 'pad': 0, 'stride': 1},
                 hidden_size=100, output_size=10, weight_init_std=0.01):
        filter_num = conv_param['filter_num']
        filter_size = conv_param['filter_size']
        filter_pad = conv_param['pad']
        filter_stride = conv_param['stride']
        input_size = input_dim[1]
        conv_output_size = (input_size - filter_size + 2 * filter_pad) / filter_stride + 1
        pool_output_size = int(filter_num * (conv_output_size / 2) * (conv_output_size / 2))

        # initialize the weights
        self.params = {}
        self.params['W1'] = weight_init_std * \
            np.random.randn(filter_num, input_dim[0], filter_size, filter_size)
        self.params['b1'] = np.zeros(filter_num)
        self.params['W2'] = weight_init_std * \
            np.random.randn(pool_output_size, hidden_size)
        self.params['b2'] = np.zeros(hidden_size)
        self.params['W3'] = weight_init_std * \
            np.random.randn(hidden_size, output_size)
        self.params['b3'] = np.zeros(output_size)

        # build the layers
        self.layers = OrderedDict()
        self.layers['Conv1'] = Convolution(self.params['W1'], self.params['b1'],
                                           conv_param['stride'], conv_param['pad'])
        self.layers['Relu1'] = Relu()
        self.layers['Pool1'] = Pooling(pool_h=2, pool_w=2, stride=2)
        self.layers['Affine1'] = Affine(self.params['W2'], self.params['b2'])
        self.layers['Relu2'] = Relu()
        self.layers['Affine2'] = Affine(self.params['W3'], self.params['b3'])
        self.last_layer = SoftmaxWithLoss()

    def predict(self, x):
        for layer in self.layers.values():
            x = layer.forward(x)
        return x

    def loss(self, x, t):
        """Compute the loss; x is the input data, t the teacher labels."""
        y = self.predict(x)
        return self.last_layer.forward(y, t)

    def accuracy(self, x, t, batch_size=100):
        if t.ndim != 1:
            t = np.argmax(t, axis=1)
        acc = 0.0
        for i in range(int(x.shape[0] / batch_size)):
            tx = x[i * batch_size:(i + 1) * batch_size]
            tt = t[i * batch_size:(i + 1) * batch_size]
            y = self.predict(tx)
            y = np.argmax(y, axis=1)
            acc += np.sum(y == tt)
        return acc / x.shape[0]

    def numerical_gradient(self, x, t):
        """Gradients by numerical differentiation.

        Returns a dict: grads['W1'], grads['W2'], ... are the layer weights,
        grads['b1'], grads['b2'], ... the layer biases.
        """
        loss_w = lambda w: self.loss(x, t)
        grads = {}
        for idx in (1, 2, 3):
            grads['W' + str(idx)] = numerical_gradient(loss_w, self.params['W' + str(idx)])
            grads['b' + str(idx)] = numerical_gradient(loss_w, self.params['b' + str(idx)])
        return grads

    def gradient(self, x, t):
        """Gradients by backpropagation; same return structure as numerical_gradient."""
        # forward
        self.loss(x, t)
        # backward
        dout = 1
        dout = self.last_layer.backward(dout)
        layers = list(self.layers.values())
        layers.reverse()
        for layer in layers:
            dout = layer.backward(dout)
        # collect the gradients
        grads = {}
        grads['W1'], grads['b1'] = self.layers['Conv1'].dW, self.layers['Conv1'].db
        grads['W2'], grads['b2'] = self.layers['Affine1'].dW, self.layers['Affine1'].db
        grads['W3'], grads['b3'] = self.layers['Affine2'].dW, self.layers['Affine2'].db
        return grads

    def save_params(self, file_name='params.pkl'):
        params = {}
        for key, val in self.params.items():
            params[key] = val
        with open(file_name, 'wb') as f:
            pickle.dump(params, f)

    def load_params(self, file_name='params.pkl'):
        with open(file_name, 'rb') as f:
            params = pickle.load(f)
        for key, val in params.items():
            self.params[key] = val
        for i, key in enumerate(['Conv1', 'Affine1', 'Affine2']):
            self.layers[key].W = self.params['W' + str(i + 1)]
            self.layers[key].b = self.params['b' + str(i + 1)]
```

Accuracy comparison:

Training SimpleConvNet on the MNIST dataset can be compared with the earlier posts in this series, which tackled the same MNIST recognition task without convolutions — it has been a step-by-step process:

“深度学习”学习日记。神经网络的推理处理_Anthony陪你度过漫长岁月的博客-CSDN博客
“深度学习”学习日记。神经网络的学习。--学习算法的实现_Anthony陪你度过漫长岁月的博客-CSDN博客
“深度学习”学习日记。误差反向传播法--算法实现_Anthony陪你度过漫长岁月的博客-CSDN博客

A convolutional network reads out certain image features effectively while adjusting its weights; the test-set accuracy here is 0.987, which is quite good for a small network. Next, stacking layers to deepen the network should raise the accuracy further.
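As a quick sanity check of the sizes computed in `__init__` above, here is a minimal sketch with the default hyperparameters (28×28 single-channel input, 30 filters of size 5×5, pad 0, stride 1; the variable names are mine, not part of the class):

```python
# Conv output size: (input - filter + 2*pad) / stride + 1
input_size, filter_size, pad, stride = 28, 5, 0, 1
conv_out = (input_size - filter_size + 2 * pad) // stride + 1   # 24
# 2x2 max pooling with stride 2 halves each spatial dimension
pool_out = conv_out // 2                                        # 12
# flattened size feeding the first Affine layer: W2 has shape (4320, 100)
filter_num = 30
pool_output_size = filter_num * pool_out * pool_out             # 4320
print(conv_out, pool_out, pool_output_size)
```

These are exactly the numbers behind `pool_output_size` in the constructor, which explains the shape of `self.params['W2']`.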
Experiment code 1 (the complete script; the SimpleConvNet class shown above is repeated at the marked position and is omitted here to save space). The test data can be reduced to save time, which will also lower the accuracy somewhat:

```python
import sys, os
sys.path.append(os.pardir)  # so that files in the parent directory can be imported
from collections import OrderedDict
import matplotlib.pyplot as plt
try:
    import urllib.request
except ImportError:
    raise ImportError('You should use Python 3.x')
import os.path
import gzip
import pickle
import numpy as np

# ---------- MNIST loader ----------
url_base = 'http://yann.lecun.com/exdb/mnist/'
key_file = {
    'train_img': 'train-images-idx3-ubyte.gz',
    'train_label': 'train-labels-idx1-ubyte.gz',
    'test_img': 't10k-images-idx3-ubyte.gz',
    'test_label': 't10k-labels-idx1-ubyte.gz'
}

dataset_dir = os.path.dirname(os.path.abspath(__file__))
save_file = dataset_dir + '/mnist.pkl'

train_num = 60000
test_num = 10000
img_dim = (1, 28, 28)
img_size = 784


def _download(file_name):
    file_path = dataset_dir + '/' + file_name
    if os.path.exists(file_path):
        return
    print('Downloading ' + file_name + ' ...')
    urllib.request.urlretrieve(url_base + file_name, file_path)
    print('Done')


def download_mnist():
    for v in key_file.values():
        _download(v)


def _load_label(file_name):
    file_path = dataset_dir + '/' + file_name
    print('Converting ' + file_name + ' to NumPy Array ...')
    with gzip.open(file_path, 'rb') as f:
        labels = np.frombuffer(f.read(), np.uint8, offset=8)
    print('Done')
    return labels


def _load_img(file_name):
    file_path = dataset_dir + '/' + file_name
    print('Converting ' + file_name + ' to NumPy Array ...')
    with gzip.open(file_path, 'rb') as f:
        data = np.frombuffer(f.read(), np.uint8, offset=16)
    data = data.reshape(-1, img_size)
    print('Done')
    return data


def _convert_numpy():
    dataset = {}
    dataset['train_img'] = _load_img(key_file['train_img'])
    dataset['train_label'] = _load_label(key_file['train_label'])
    dataset['test_img'] = _load_img(key_file['test_img'])
    dataset['test_label'] = _load_label(key_file['test_label'])
    return dataset


def init_mnist():
    download_mnist()
    dataset = _convert_numpy()
    print('Creating pickle file ...')
    with open(save_file, 'wb') as f:
        pickle.dump(dataset, f, -1)
    print('Done!')


def _change_one_hot_label(X):
    T = np.zeros((X.size, 10))
    for idx, row in enumerate(T):
        row[X[idx]] = 1
    return T


def load_mnist(normalize=True, flatten=True, one_hot_label=False):
    if not os.path.exists(save_file):
        init_mnist()
    with open(save_file, 'rb') as f:
        dataset = pickle.load(f)
    if normalize:
        for key in ('train_img', 'test_img'):
            dataset[key] = dataset[key].astype(np.float32)
            dataset[key] /= 255.0
    if one_hot_label:
        dataset['train_label'] = _change_one_hot_label(dataset['train_label'])
        dataset['test_label'] = _change_one_hot_label(dataset['test_label'])
    if not flatten:
        for key in ('train_img', 'test_img'):
            dataset[key] = dataset[key].reshape(-1, 1, 28, 28)
    return (dataset['train_img'], dataset['train_label']), (dataset['test_img'], dataset['test_label'])


if __name__ == '__main__':
    init_mnist()  # prepare the dataset pickle on first run


# ---------- optimizers ----------
class SGD:
    def __init__(self, lr=0.01):
        self.lr = lr

    def update(self, params, grads):
        for key in params.keys():
            params[key] -= self.lr * grads[key]


class Momentum:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None

    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
        for key in params.keys():
            self.v[key] = self.momentum * self.v[key] - self.lr * grads[key]
            params[key] += self.v[key]


class Nesterov:
    def __init__(self, lr=0.01, momentum=0.9):
        self.lr = lr
        self.momentum = momentum
        self.v = None

    def update(self, params, grads):
        if self.v is None:
            self.v = {}
            for key, val in params.items():
                self.v[key] = np.zeros_like(val)
        for key in params.keys():
            self.v[key] *= self.momentum
            self.v[key] -= self.lr * grads[key]
            params[key] += self.momentum * self.momentum * self.v[key]
            params[key] -= (1 + self.momentum) * self.lr * grads[key]


class AdaGrad:
    def __init__(self, lr=0.01):
        self.lr = lr
        self.h = None

    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
        for key in params.keys():
            self.h[key] += grads[key] * grads[key]
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)


class RMSprop:
    def __init__(self, lr=0.01, decay_rate=0.99):
        self.lr = lr
        self.decay_rate = decay_rate
        self.h = None

    def update(self, params, grads):
        if self.h is None:
            self.h = {}
            for key, val in params.items():
                self.h[key] = np.zeros_like(val)
        for key in params.keys():
            self.h[key] *= self.decay_rate
            self.h[key] += (1 - self.decay_rate) * grads[key] * grads[key]
            params[key] -= self.lr * grads[key] / (np.sqrt(self.h[key]) + 1e-7)


class Adam:
    def __init__(self, lr=0.001, beta1=0.9, beta2=0.999):
        self.lr = lr
        self.beta1 = beta1
        self.beta2 = beta2
        self.iter = 0
        self.m = None
        self.v = None

    def update(self, params, grads):
        if self.m is None:
            self.m, self.v = {}, {}
            for key, val in params.items():
                self.m[key] = np.zeros_like(val)
                self.v[key] = np.zeros_like(val)
        self.iter += 1
        lr_t = self.lr * np.sqrt(1.0 - self.beta2 ** self.iter) / (1.0 - self.beta1 ** self.iter)
        for key in params.keys():
            self.m[key] += (1 - self.beta1) * (grads[key] - self.m[key])
            self.v[key] += (1 - self.beta2) * (grads[key] ** 2 - self.v[key])
            params[key] -= lr_t * self.m[key] / (np.sqrt(self.v[key]) + 1e-7)


# ---------- loss, activation and layer building blocks ----------
def cross_entropy_error(y, t):
    if y.ndim == 1:
        t = t.reshape(1, t.size)
        y = y.reshape(1, y.size)
    if t.size == y.size:  # one-hot labels -> class indices
        t = t.argmax(axis=1)
    batch_size = y.shape[0]
    return -np.sum(np.log(y[np.arange(batch_size), t] + 1e-7)) / batch_size


def softmax(x):
    if x.ndim == 2:
        x = x.T
        x = x - np.max(x, axis=0)
        y = np.exp(x) / np.sum(np.exp(x), axis=0)
        return y.T
    x = x - np.max(x)  # guard against overflow
    return np.exp(x) / np.sum(np.exp(x))


class Affine:
    def __init__(self, W, b):
        self.W = W
        self.b = b
        self.x = None
        self.original_x_shape = None
        self.dW = None
        self.db = None

    def forward(self, x):
        self.original_x_shape = x.shape
        x = x.reshape(x.shape[0], -1)
        self.x = x
        out = np.dot(self.x, self.W) + self.b
        return out

    def backward(self, dout):
        dx = np.dot(dout, self.W.T)
        self.dW = np.dot(self.x.T, dout)
        self.db = np.sum(dout, axis=0)
        dx = dx.reshape(*self.original_x_shape)  # restore the input shape (for tensors)
        return dx


class SoftmaxWithLoss:
    def __init__(self):
        self.loss = None
        self.y = None
        self.t = None

    def forward(self, x, t):
        self.t = t
        self.y = softmax(x)
        self.loss = cross_entropy_error(self.y, self.t)
        return self.loss

    def backward(self, dout=1):
        batch_size = self.t.shape[0]
        if self.t.size == self.y.size:  # one-hot labels
            dx = (self.y - self.t) / batch_size
        else:
            dx = self.y.copy()
            dx[np.arange(batch_size), self.t] -= 1
            dx = dx / batch_size
        return dx


class Relu:
    def __init__(self):
        self.mask = None

    def forward(self, x):
        self.mask = (x <= 0)
        out = x.copy()
        out[self.mask] = 0
        return out

    def backward(self, dout):
        dout[self.mask] = 0
        dx = dout
        return dx


def numerical_gradient(f, x):
    h = 1e-4
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        idx = it.multi_index
        tmp_val = x[idx]
        x[idx] = float(tmp_val) + h
        fxh1 = f(x)  # f(x+h)
        x[idx] = tmp_val - h
        fxh2 = f(x)  # f(x-h)
        grad[idx] = (fxh1 - fxh2) / (2 * h)
        x[idx] = tmp_val  # restore the value
        it.iternext()
    return grad


def im2col(input_data, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_data.shape
    out_h = (H + 2 * pad - filter_h) // stride + 1
    out_w = (W + 2 * pad - filter_w) // stride + 1

    img = np.pad(input_data, [(0, 0), (0, 0), (pad, pad), (pad, pad)], 'constant')
    col = np.zeros((N, C, filter_h, filter_w, out_h, out_w))
    for y in range(filter_h):
        y_max = y + stride * out_h
        for x in range(filter_w):
            x_max = x + stride * out_w
            col[:, :, y, x, :, :] = img[:, :, y:y_max:stride, x:x_max:stride]
    col = col.transpose(0, 4, 5, 1, 2, 3).reshape(N * out_h * out_w, -1)
    return col


def col2im(col, input_shape, filter_h, filter_w, stride=1, pad=0):
    N, C, H, W = input_shape
    out_h = (H + 2 * pad - filter_h) // stride + 1
    out_w = (W + 2 * pad - filter_w) // stride + 1
    col = col.reshape(N, out_h, out_w, C, filter_h, filter_w).transpose(0, 3, 4, 5, 1, 2)

    img = np.zeros((N, C, H + 2 * pad + stride - 1, W + 2 * pad + stride - 1))
    for y in range(filter_h):
        y_max = y + stride * out_h
        for x in range(filter_w):
            x_max = x + stride * out_w
            img[:, :, y:y_max:stride, x:x_max:stride] += col[:, :, y, x, :, :]
    return img[:, :, pad:H + pad, pad:W + pad]


class Convolution:
    def __init__(self, W, b, stride=1, pad=0):
        self.W = W
        self.b = b
        self.stride = stride
        self.pad = pad
        self.x = None
        self.col = None
        self.col_W = None
        self.dW = None
        self.db = None

    def forward(self, x):
        FN, C, FH, FW = self.W.shape
        N, C, H, W = x.shape
        out_h = 1 + int((H + 2 * self.pad - FH) / self.stride)
        out_w = 1 + int((W + 2 * self.pad - FW) / self.stride)

        col = im2col(x, FH, FW, self.stride, self.pad)
        col_W = self.W.reshape(FN, -1).T
        out = np.dot(col, col_W) + self.b
        out = out.reshape(N, out_h, out_w, -1).transpose(0, 3, 1, 2)

        self.x = x
        self.col = col
        self.col_W = col_W
        return out

    def backward(self, dout):
        FN, C, FH, FW = self.W.shape
        dout = dout.transpose(0, 2, 3, 1).reshape(-1, FN)

        self.db = np.sum(dout, axis=0)
        self.dW = np.dot(self.col.T, dout)
        self.dW = self.dW.transpose(1, 0).reshape(FN, C, FH, FW)

        dcol = np.dot(dout, self.col_W.T)
        dx = col2im(dcol, self.x.shape, FH, FW, self.stride, self.pad)
        return dx


class Pooling:
    def __init__(self, pool_h, pool_w, stride=1, pad=0):
        self.pool_h = pool_h
        self.pool_w = pool_w
        self.stride = stride
        self.pad = pad
        self.x = None
        self.arg_max = None

    def forward(self, x):
        N, C, H, W = x.shape
        out_h = int(1 + (H - self.pool_h) / self.stride)
        out_w = int(1 + (W - self.pool_w) / self.stride)

        col = im2col(x, self.pool_h, self.pool_w, self.stride, self.pad)
        col = col.reshape(-1, self.pool_h * self.pool_w)

        arg_max = np.argmax(col, axis=1)
        out = np.max(col, axis=1)
        out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2)

        self.x = x
        self.arg_max = arg_max
        return out

    def backward(self, dout):
        dout = dout.transpose(0, 2, 3, 1)
        pool_size = self.pool_h * self.pool_w
        dmax = np.zeros((dout.size, pool_size))
        dmax[np.arange(self.arg_max.size), self.arg_max.flatten()] = dout.flatten()
        dmax = dmax.reshape(dout.shape + (pool_size,))

        dcol = dmax.reshape(dmax.shape[0] * dmax.shape[1] * dmax.shape[2], -1)
        dx = col2im(dcol, self.x.shape, self.pool_h, self.pool_w, self.stride, self.pad)
        return dx


# --- SimpleConvNet: insert the class exactly as listed earlier in this post ---


class Trainer:
    def __init__(self, network, x_train, t_train, x_test, t_test,
                 epochs=20, mini_batch_size=100,
                 optimizer='SGD', optimizer_param={'lr': 0.01},
                 evaluate_sample_num_per_epoch=None, verbose=True):
        self.network = network
        self.verbose = verbose
        self.x_train = x_train
        self.t_train = t_train
        self.x_test = x_test
        self.t_test = t_test
        self.epochs = epochs
        self.batch_size = mini_batch_size
        self.evaluate_sample_num_per_epoch = evaluate_sample_num_per_epoch

        optimizer_class_dict = {'sgd': SGD, 'momentum': Momentum, 'nesterov': Nesterov,
                                'adagrad': AdaGrad, 'rmsprop': RMSprop, 'adam': Adam}
        self.optimizer = optimizer_class_dict[optimizer.lower()](**optimizer_param)
        self.train_size = x_train.shape[0]
        self.iter_per_epoch = max(self.train_size / mini_batch_size, 1)
        self.max_iter = int(epochs * self.iter_per_epoch)
        self.current_iter = 0
        self.current_epoch = 0

        self.train_loss_list = []
        self.train_acc_list = []
        self.test_acc_list = []

    def train_step(self):
        batch_mask = np.random.choice(self.train_size, self.batch_size)
        x_batch = self.x_train[batch_mask]
        t_batch = self.t_train[batch_mask]

        grads = self.network.gradient(x_batch, t_batch)
        self.optimizer.update(self.network.params, grads)

        loss = self.network.loss(x_batch, t_batch)
        self.train_loss_list.append(loss)
        if self.verbose: print('train loss:' + str(loss))

        if self.current_iter % self.iter_per_epoch == 0:
            self.current_epoch += 1
            x_train_sample, t_train_sample = self.x_train, self.t_train
            x_test_sample, t_test_sample = self.x_test, self.t_test
            if not self.evaluate_sample_num_per_epoch is None:
                t = self.evaluate_sample_num_per_epoch
                x_train_sample, t_train_sample = self.x_train[:t], self.t_train[:t]
                x_test_sample, t_test_sample = self.x_test[:t], self.t_test[:t]

            train_acc = self.network.accuracy(x_train_sample, t_train_sample)
            test_acc = self.network.accuracy(x_test_sample, t_test_sample)
            self.train_acc_list.append(train_acc)
            self.test_acc_list.append(test_acc)

            if self.verbose:
                print('=== epoch:' + str(self.current_epoch) + ', train acc:' + str(train_acc)
                      + ', test acc:' + str(test_acc) + ' ===')
        self.current_iter += 1

    def train(self):
        for i in range(self.max_iter):
            self.train_step()

        test_acc = self.network.accuracy(self.x_test, self.t_test)
        if self.verbose:
            print('=============== Final Test Accuracy ===============')
            print('test acc:' + str(test_acc))


# ---------- read in the data and train ----------
(x_train, t_train), (x_test, t_test) = load_mnist(flatten=False)

# cut down the data if processing takes too long
x_train, t_train = x_train[:5000], t_train[:5000]
x_test, t_test = x_test[:1000], t_test[:1000]

max_epochs = 20

network = SimpleConvNet(input_dim=(1, 28, 28),
                        conv_param={'filter_num': 30, 'filter_size': 5, 'pad': 0, 'stride': 1},
                        hidden_size=100, output_size=10, weight_init_std=0.01)

trainer = Trainer(network, x_train, t_train, x_test, t_test,
                  epochs=max_epochs, mini_batch_size=100,
                  optimizer='Adam', optimizer_param={'lr': 0.001},
                  evaluate_sample_num_per_epoch=1000)
```
```python
trainer.train()

network.save_params('params.pkl')
print('Saved Network Parameters!')

markers = {'train': 'o', 'test': 's'}
x = np.arange(max_epochs)
plt.plot(x, trainer.train_acc_list, marker='o', label='train', markevery=2)
plt.plot(x, trainer.test_acc_list, marker='s', label='test', markevery=2)
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.ylim(0, 1.0)
plt.legend(loc='lower right')
plt.show()
```

2. Visualizing the CNN

By visualizing the convolution layer we can study what the convolution actually extracts and how it processes the input.

(1) Visualizing the first-layer weights

From the hyperparameters of the simple ConvNet, conv_param={'filter_num': 30, 'filter_size': 5, 'pad': 0, 'stride': 1}, we know the pre-training weights have shape (30, 1, 5, 5): filters of size 5×5 with 1 channel, which means each filter can be visualized as a single-channel grayscale image.

"CNN visualization" experiment code 2 (appended to the end of experiment code 1):

```python
def filter_show(filters, nx=8, margin=3, scale=10):
    FN, C, FH, FW = filters.shape
    ny = int(np.ceil(FN / nx))
    fig = plt.figure()
    fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
    for i in range(FN):
        ax = fig.add_subplot(ny, nx, i + 1, xticks=[], yticks=[])
        ax.imshow(filters[i, 0], cmap=plt.cm.gray_r, interpolation='nearest')
    plt.show()


network = SimpleConvNet()
# randomly initialized weights, before learning
filter_show(network.params['W1'])

# weights after learning
network.load_params('params.pkl')
filter_show(network.params['W1'])
```

Result: before learning the filters are randomly initialized, so their black-and-white patterns show no regularity. After learning, the filters have been updated into regular images containing blob-like regions. Although the weight elements are real numbers both before and after learning, the display uniformly maps the smallest value to black and the largest to white. The trained filters look as if they are watching for something: edges (boundaries where the color changes) and blobs (local patches of a color), and so on. Let's look into this.

The following code also goes at the end of experiment code 1. Note the input path of the image. (The code comes from the textbook; due to my own limitations I could not run the experiment with an arbitrary image, so the textbook's image, saved as lena_gray, is used. 2023.2.11)

```python
from matplotlib.image import imread  # imread was missing from the original listing


def filter_show(filters, nx=4, show_num=16):
    FN, C, FH, FW = filters.shape
    ny = int(np.ceil(show_num / nx))
    fig = plt.figure()
    fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
    for i in range(show_num):
        ax = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[])
        ax.imshow(filters[i, 0], cmap=plt.cm.gray_r, interpolation='nearest')


network = SimpleConvNet(input_dim=(1, 28, 28),
                        conv_param={'filter_num': 30, 'filter_size': 5, 'pad': 0, 'stride': 1},
                        hidden_size=100, output_size=10, weight_init_std=0.01)

# weights after learning
network.load_params('params.pkl')
filter_show(network.params['W1'], 16)

img = imread('../dataset/lena_gray.png')
img = img.reshape(1, 1, *img.shape)

fig = plt.figure()
w_idx = 1
for i in range(16):
    w = network.params['W1'][i]
    b = 0  # network.params['b1'][i]
    w = w.reshape(1, *w.shape)
    # b = b.reshape(1, *b.shape)
    conv_layer = Convolution(w, b)
    out = conv_layer.forward(img)
    out = out.reshape(out.shape[2], out.shape[3])
    ax = fig.add_subplot(4, 4, i + 1, xticks=[], yticks=[])
    ax.imshow(out, cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
```

Looking at the results: a filter that responds to horizontal edges lights up (white pixels) along the horizontal edges of the image, while a filter that responds to vertical edges lights up along the vertical edges. From this we learn that the filters of a convolution layer extract primitive information such as edges and blobs, and the CNN implemented above passes this primitive information on to the later layers.

(2) Information extraction through the layer hierarchy

Information like edges and blobs is called low-level information; it is what a CNN with a single convolution layer extracts. What kind of information do the layers of a CNN with many stacked layers extract?

According to research on visualizing deep learning, as the layers get deeper, the extracted information (the stimuli that neurons respond strongly to) becomes increasingly abstract. The first layers respond to simple edges, the following layers to textures, and still later layers to more complex object parts. In other words, as depth increases, the neurons shift from simple shapes toward "high-level" information.

3. Representative CNNs

Many network architectures have been proposed so far. Two of them are especially representative.

(1) LeNet

LeNet was proposed in 1998 for handwritten digit recognition, such as the MNIST task.

Features:
1. Like a modern CNN, it consists of consecutive convolution layers and subsampling layers that "thin out" elements.
2. LeNet uses the sigmoid activation function, whereas today's CNNs use ReLU.
3. The original LeNet uses subsampling to shrink the spatial size of the data, whereas max pooling is the mainstream operation in today's CNNs.

(LeNet structure diagram)

(2) AlexNet

AlexNet was the spark that ignited the deep-learning boom. Its structure is basically no different from LeNet's: it stacks several convolution and pooling layers and finally outputs the result through fully connected layers.

Features:
1. The activation function is ReLU.
2. It uses Local Response Normalization (LRN), a layer that performs local normalization.
3. It uses Dropout.

On Dropout, see “深度学习”学习日记。与学习有关的技巧--正则化_Anthony陪你度过漫长岁月的博客-CSDN博客 (weight decay: https://blog.csdn.net/m0_72675651/article/details/128786693).

In most cases, deepening a network in deep learning means a huge number of parameters, so learning requires a great deal of computation and a large amount of data to "fit" those parameters. Today most people can obtain large datasets, and high-performance GPUs have become widespread; these have become the driving force behind the development of deep learning.
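The claim in the visualization section that convolution filters pick up edges can be checked without any training at all. The following is a minimal sketch in plain NumPy; the toy image and the hand-made filter are invented for illustration and are not part of the post's network:

```python
import numpy as np

# toy 6x6 "image": dark left half, bright right half -> one vertical edge
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# hand-made 3x3 vertical-edge filter: negative left column, positive right column
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

# "valid" convolution (really cross-correlation, like the Convolution layer above)
out_h = img.shape[0] - 2
out_w = img.shape[1] - 2
out = np.zeros((out_h, out_w))
for y in range(out_h):
    for x in range(out_w):
        out[y, x] = np.sum(img[y:y+3, x:x+3] * kernel)

print(out)
# the response is non-zero only in the output columns whose 3x3 window
# overlaps the intensity jump between columns 2 and 3
```

A learned first-layer filter behaves the same way: wherever its pattern matches the local image structure, the output map lights up.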
