1. Dataset overview
2. Model background
3. split_data.py: split the data into training and validation sets
4. model.py: define the ResNet-34 network
5. train.py: load the data and train, computing the loss on the training set and the accuracy on the validation set, then saving the trained weights
6. predict.py: use the trained weights to classify images of your own

I. Dataset overview

1. Build the data folders

First decide on the categories to classify. Collect images for 5 classes via web crawling, public datasets, or your own photos. Create a folder named data containing 5 subfolders, each named after a class in English. Each subfolder should hold roughly the same number of photos, ideally 500 or more. Here I chose 5 classes: daisy, dandelion, roses, sunflowers, and tulips, with 600–900 images per class.

Flower dataset download link: https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz

2. Split the training and validation sets

The script below performs the split. Run it from the same directory, adjusting the folder paths to your own layout.

```python
import os
import random
from shutil import copy


def mkfile(file):
    if not os.path.exists(file):
        os.makedirs(file)


# Get every folder name under flower_photos except .txt files,
# i.e. the class names
file_path = 'data/flower_photos'
flower_class = [cla for cla in os.listdir(file_path) if '.txt' not in cla]

# Create the train folder with one subfolder per class
mkfile('flower_data/train')
for cla in flower_class:
    mkfile('flower_data/train/' + cla)

# Create the val folder with one subfolder per class
mkfile('flower_data/val')
for cla in flower_class:
    mkfile('flower_data/val/' + cla)

# Split ratio, train : val = 9 : 1
split_rate = 0.1

# Walk through every image of each class and split by the ratio above
for cla in flower_class:
    cla_path = file_path + '/' + cla + '/'  # subdirectory of one class
    images = os.listdir(cla_path)  # names of all images in that directory
    num = len(images)
    # randomly sample k image names for the validation set
    eval_index = random.sample(images, k=int(num * split_rate))
    for index, image in enumerate(images):
        if image in eval_index:
            # eval_index holds the validation image names
            image_path = cla_path + image
            new_path = 'flower_data/val/' + cla
            copy(image_path, new_path)  # copy the selected image to the new path
        else:
            # the remaining images go to the training set
            image_path = cla_path + image
            new_path = 'flower_data/train/' + cla
            copy(image_path, new_path)
        print('\r[{}] processing [{}/{}]'.format(cla, index + 1, num), end='')  # progress bar
    print()

print('processing done!')
```
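The sampling step of the split can be sketched in isolation. This is a minimal, reproducible stand-in for the `random.sample` call in split_data.py; the `seed` parameter and the synthetic file names are additions for this example only.

```python
import random


def split_filenames(filenames, split_rate=0.1, seed=0):
    """Randomly pick ~split_rate of the images for validation, keep the rest
    for training. Sketch of the sampling logic in split_data.py; seeding is
    added here so the example is reproducible."""
    rng = random.Random(seed)
    k = int(len(filenames) * split_rate)
    val = set(rng.sample(filenames, k))              # names that would go to flower_data/val
    train = [f for f in filenames if f not in val]   # everything else goes to train
    return train, sorted(val)


images = [f"img_{i:03d}.jpg" for i in range(600)]
train, val = split_filenames(images, split_rate=0.1)
print(len(train), len(val))  # 540 60
```

With 600 images per class and a 0.1 split rate, each class ends up with 540 training and 60 validation images, and no image appears in both sets.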
II. Model background

The model itself was covered in an earlier article; if anything is unclear, follow the link there first:
深度学习卷积神经网络CNN之ResNet模型网络详解说明超详细理论篇

III. model.py: define the ResNet-34 network

Copy the model as given; no parameters need changing. The file also includes the 50- and 101-layer variants (and the ResNeXt versions).

```python
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, in_channel, out_channel, stride=1, downsample=None, **kwargs):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channel)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        out += identity
        out = self.relu(out)

        return out


class Bottleneck(nn.Module):
    """
    Note: in the original paper, on the main branch of the projection
    (dashed) residual block, the first 1x1 conv has stride 2 and the 3x3
    conv has stride 1. The official PyTorch implementation instead gives
    the first 1x1 conv stride 1 and the 3x3 conv stride 2, which improves
    top-1 accuracy by roughly 0.5%.
    See ResNet v1.5:
    https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch
    """
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1, downsample=None,
                 groups=1, width_per_group=64):
        super(Bottleneck, self).__init__()

        width = int(out_channel * (width_per_group / 64.)) * groups

        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=width,
                               kernel_size=1, stride=1, bias=False)  # squeeze channels
        self.bn1 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv2 = nn.Conv2d(in_channels=width, out_channels=width, groups=groups,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(width)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=width, out_channels=out_channel * self.expansion,
                               kernel_size=1, stride=1, bias=False)  # unsqueeze channels
        self.bn3 = nn.BatchNorm2d(out_channel * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += identity
        out = self.relu(out)

        return out


class ResNet(nn.Module):

    def __init__(self, block, blocks_num, num_classes=1000, include_top=True,
                 groups=1, width_per_group=64):
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64

        self.groups = groups
        self.width_per_group = width_per_group

        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))

        layers = []
        layers.append(block(self.in_channel,
                            channel,
                            downsample=downsample,
                            stride=stride,
                            groups=self.groups,
                            width_per_group=self.width_per_group))
        self.in_channel = channel * block.expansion

        for _ in range(1, block_num):
            layers.append(block(self.in_channel,
                                channel,
                                groups=self.groups,
                                width_per_group=self.width_per_group))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)

        return x


def resnet34(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet34-333f7ec4.pth
    return ResNet(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet50(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet50-19c8e357.pth
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)


def resnet101(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnet101-5d3b4d8f.pth
    return ResNet(Bottleneck, [3, 4, 23, 3], num_classes=num_classes, include_top=include_top)


def resnext50_32x4d(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth
    groups = 32
    width_per_group = 4
    return ResNet(Bottleneck, [3, 4, 6, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)


def resnext101_32x8d(num_classes=1000, include_top=True):
    # https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth
    groups = 32
    width_per_group = 8
    return ResNet(Bottleneck, [3, 4, 23, 3],
                  num_classes=num_classes,
                  include_top=include_top,
                  groups=groups,
                  width_per_group=width_per_group)
```

IV. train.py: train, compute the loss and accuracy, and save the trained weights

Step 1: download the pretrained weights in advance. Copy the link below into a browser and the download starts directly. Put the file in the project folder and rename it; the name is used later.
ResNet-34 weights: https://download.pytorch.org/models/resnet34-333f7ec4.pth

Step 2: adjust the number of classes (line 71), the name of the downloaded weight file (line 63), and the name of the saved weight file (line 83):

```python
net.fc = nn.Linear(in_channel, 5)   # the 5 here is the number of classes
model_weight_path = "./resnet34-pre.pth"
save_path = './resNext34.pth'
```

Other hyperparameters: batch_size = 16 (choose 32, 64, etc. depending on your CPU/GPU), learning rate 0.01, 5 epochs.

```python
import os
import sys
import json

import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import transforms, datasets
from tqdm import tqdm

from model import resnet34


def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print("using {} device.".format(device))

    data_transform = {
        "train": transforms.Compose([transforms.RandomResizedCrop(224),
                                     transforms.RandomHorizontalFlip(),
                                     transforms.ToTensor(),
                                     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
        "val": transforms.Compose([transforms.Resize(256),
                                   transforms.CenterCrop(224),
                                   transforms.ToTensor(),
                                   transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])}

    data_root = os.path.abspath(os.path.join(os.getcwd(), "../.."))  # get data root path
    image_path = os.path.join(data_root, "zjdata", "flower_data")  # flower data set path
    assert os.path.exists(image_path), "{} path does not exist.".format(image_path)
    train_dataset = datasets.ImageFolder(root=os.path.join(image_path, "train"),
                                         transform=data_transform["train"])
    train_num = len(train_dataset)

    # {'daisy': 0, 'dandelion': 1, 'roses': 2, 'sunflowers': 3, 'tulips': 4}
    flower_list = train_dataset.class_to_idx
    cla_dict = dict((val, key) for key, val in flower_list.items())
    # write dict into json file
    json_str = json.dumps(cla_dict, indent=4)
    with open('class_indices.json', 'w') as json_file:
        json_file.write(json_str)

    batch_size = 16
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    print('Using {} dataloader workers every process'.format(nw))

    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=batch_size, shuffle=True,
                                               num_workers=nw)

    validate_dataset = datasets.ImageFolder(root=os.path.join(image_path, "val"),
                                            transform=data_transform["val"])
    val_num = len(validate_dataset)
    validate_loader = torch.utils.data.DataLoader(validate_dataset,
                                                  batch_size=batch_size, shuffle=False,
                                                  num_workers=nw)

    print("using {} images for training, {} images for validation.".format(train_num, val_num))

    net = resnet34()
    # load pretrain weights
    # download url: https://download.pytorch.org/models/resnet34-333f7ec4.pth
    model_weight_path = "./resnet34-pre.pth"
    assert os.path.exists(model_weight_path), "file {} does not exist.".format(model_weight_path)
    net.load_state_dict(torch.load(model_weight_path, map_location='cpu'))
    # freeze the backbone; only the new fc layer below will be trained
    for param in net.parameters():
        param.requires_grad = False

    # change fc layer structure
    in_channel = net.fc.in_features
    net.fc = nn.Linear(in_channel, 5)
    net.to(device)

    # define loss function
    loss_function = nn.CrossEntropyLoss()

    # construct an optimizer over the trainable parameters only
    params = [p for p in net.parameters() if p.requires_grad]
    optimizer = optim.Adam(params, lr=0.01)

    epochs = 5
    best_acc = 0.0
    save_path = './resNext34.pth'
    train_steps = len(train_loader)
    for epoch in range(epochs):
        # train
        net.train()
        running_loss = 0.0
        train_bar = tqdm(train_loader, file=sys.stdout)
        for step, data in enumerate(train_bar):
            images, labels = data
            optimizer.zero_grad()
            logits = net(images.to(device))
            loss = loss_function(logits, labels.to(device))
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()

            train_bar.desc = "train epoch[{}/{}] loss:{:.3f}".format(epoch + 1, epochs, loss)

        # validate
        net.eval()
        acc = 0.0  # accumulate number of correct predictions per epoch
        with torch.no_grad():
            val_bar = tqdm(validate_loader, file=sys.stdout)
            for val_data in val_bar:
                val_images, val_labels = val_data
                outputs = net(val_images.to(device))
                predict_y = torch.max(outputs, dim=1)[1]
                acc += torch.eq(predict_y, val_labels.to(device)).sum().item()

                val_bar.desc = "valid epoch[{}/{}]".format(epoch + 1, epochs)

        val_accurate = acc / val_num
        print('[epoch %d] train_loss: %.3f  val_accuracy: %.3f' %
              (epoch + 1, running_loss / train_steps, val_accurate))

        if val_accurate > best_acc:
            best_acc = val_accurate
            torch.save(net.state_dict(), save_path)

    print('Finished Training')


if __name__ == '__main__':
    main()
```

Screenshot of the start of training (I trained on the CPU):
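The validation loop above accumulates the number of correct predictions per batch and divides by the total image count at the end of the epoch. A plain-Python sketch of that bookkeeping, standing in for `torch.eq(predict_y, val_labels).sum().item()` and the final `acc / val_num`; the batch data here is made up for illustration:

```python
def batch_correct(pred, labels):
    """Count correct predictions in one batch (plain-Python stand-in for
    torch.eq(predict_y, labels).sum().item())."""
    return sum(p == l for p, l in zip(pred, labels))


def epoch_accuracy(batches):
    """Accumulate correct counts over all validation batches, then divide
    by the total number of images, mirroring acc / val_num in train.py."""
    correct = sum(batch_correct(pred, labels) for pred, labels in batches)
    total = sum(len(labels) for _, labels in batches)
    return correct / total


# two hypothetical batches of (predicted class, true class) pairs
batches = [([0, 1, 2, 2], [0, 1, 2, 1]),
           ([3, 4], [3, 4])]
print(epoch_accuracy(batches))  # 5 correct out of 6 -> 0.8333...
```

Note that dividing per batch and averaging would weight the last, smaller batch too heavily; dividing the accumulated count by `val_num` once, as train.py does, avoids that.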
V. predict.py: classify your own images with the trained weights

Mind the image path and the weight file name.

```python
import os
import json

import torch
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt

from model import resnet34


def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    data_transform = transforms.Compose(
        [transforms.Resize(256),
         transforms.CenterCrop(224),
         transforms.ToTensor(),
         transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

    # load image
    img_path = "./1.jpg"
    assert os.path.exists(img_path), "file: '{}' does not exist.".format(img_path)
    img = Image.open(img_path)
    plt.imshow(img)
    # [N, C, H, W]
    img = data_transform(img)
    # expand batch dimension
    img = torch.unsqueeze(img, dim=0)

    # read class_indict
    json_path = './class_indices.json'
    assert os.path.exists(json_path), "file: '{}' does not exist.".format(json_path)

    with open(json_path, "r") as f:
        class_indict = json.load(f)

    # create model
    model = resnet34(num_classes=5).to(device)

    # load model weights
    weights_path = "./resNext34.pth"
    assert os.path.exists(weights_path), "file: '{}' does not exist.".format(weights_path)
    model.load_state_dict(torch.load(weights_path, map_location=device))

    # prediction
    model.eval()
    with torch.no_grad():
        # predict class
        output = torch.squeeze(model(img.to(device))).cpu()
        predict = torch.softmax(output, dim=0)
        predict_cla = torch.argmax(predict).numpy()

    print_res = "class: {}   prob: {:.3}".format(class_indict[str(predict_cla)],
                                                 predict[predict_cla].numpy())
    plt.title(print_res)
    for i in range(len(predict)):
        print("class: {:10}   prob: {:.3}".format(class_indict[str(i)],
                                                  predict[i].numpy()))
    plt.show()


if __name__ == '__main__':
    main()
```

Screenshot of the prediction result:
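The final prediction step is just a softmax over the five raw outputs followed by an argmax. A pure-Python sketch of that normalisation; the logits and the class list here are hypothetical, with the class order matching class_indices.json:

```python
import math


def softmax(logits):
    """Numerically stable softmax, the same normalisation predict.py
    applies with torch.softmax(output, dim=0)."""
    m = max(logits)                             # subtract the max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]


class_names = ["daisy", "dandelion", "roses", "sunflowers", "tulips"]
logits = [0.2, 3.1, -1.0, 0.5, 0.9]             # hypothetical raw model outputs
probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)  # torch.argmax equivalent
print("class: {}   prob: {:.3f}".format(class_names[best], probs[best]))
# class: dandelion   prob: 0.796
```

The probabilities sum to 1, so the printed value can be read directly as the model's confidence in the top class.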