
Over the past couple of years, the requirements and difficulty of graduation projects and thesis defenses have kept rising. Traditional project topics lack novelty and highlights and often fall short of defense requirements; many juniors have told me that the systems they built did not satisfy their advisors.

**A Deep-Learning-Based Plant Recognition Algorithm**

Senior Lei gives this topic an overall rating here (each criterion scored out of 5):

## 3 The MobileNetV2 Network

#### Key improvements

The structure of MobileNetV2's inverted residual block is shown below; when downsampling is needed, the depthwise (DW) convolution uses a stride of 2.
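To see why depthwise separable convolutions (depthwise + pointwise) are the core efficiency win here, a quick back-of-the-envelope parameter count helps; the channel sizes 32 and 64 below are arbitrary example values:

```python
def standard_conv_params(k, c_in, c_out):
    # a k x k kernel spans all input channels, once per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # depthwise: one k x k filter per input channel;
    # pointwise: a 1x1 conv that mixes channels
    return k * k * c_in + c_in * c_out

print(standard_conv_params(3, 32, 64))        # 18432
print(depthwise_separable_params(3, 32, 64))  # 2336
```

For a 3x3 kernel this is roughly an 8-9x reduction in parameters (and multiply-adds), which is what makes the expanded middle layer of the inverted residual affordable.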

#### Inverted residual block

ResNet's bottleneck block goes dimension reduction -> 3x3 convolution -> dimension expansion, i.e. wide at both ends and narrow in the middle. The inverted residual block reverses this order (expand -> depthwise convolution -> project), so it is narrow at both ends and wide in the middle.

A TensorFlow implementation:

```
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Sequential, Model


class ConvBNReLU(layers.Layer):
    def __init__(self, out_channel, kernel_size=3, strides=1, **kwargs):
        super(ConvBNReLU, self).__init__(**kwargs)
        self.conv = layers.Conv2D(filters=out_channel,
                                  kernel_size=kernel_size,
                                  strides=strides,
                                  padding='SAME',
                                  use_bias=False,
                                  name='Conv2d')
        self.bn = layers.BatchNormalization(momentum=0.9, epsilon=1e-5, name='BatchNorm')
        self.activation = layers.ReLU(max_value=6.0)   # ReLU6

    def call(self, inputs, training=False, **kwargs):
        x = self.conv(inputs)
        x = self.bn(x, training=training)
        x = self.activation(x)
        return x


class InvertedResidualBlock(layers.Layer):
    def __init__(self, in_channel, out_channel, strides, expand_ratio, **kwargs):
        super(InvertedResidualBlock, self).__init__(**kwargs)
        self.hidden_channel = in_channel * expand_ratio
        self.use_shortcut = (strides == 1) and (in_channel == out_channel)

        layer_list = []
        # the first bottleneck does not need the 1x1 expansion conv
        if expand_ratio != 1:
            # 1x1 pointwise conv (expansion)
            layer_list.append(ConvBNReLU(out_channel=self.hidden_channel,
                                         kernel_size=1, name='expand'))
        layer_list.extend([
            # 3x3 depthwise conv
            layers.DepthwiseConv2D(kernel_size=3, strides=strides, padding='SAME',
                                   use_bias=False, name='depthwise'),
            layers.BatchNormalization(momentum=0.9, epsilon=1e-5, name='depthwise/BatchNorm'),
            layers.ReLU(max_value=6.0),

            # 1x1 pointwise conv (linear projection)
            # linear activation y = x -> no activation function
            layers.Conv2D(filters=out_channel, kernel_size=1, strides=1, padding='SAME',
                          use_bias=False, name='project'),
            layers.BatchNormalization(momentum=0.9, epsilon=1e-5, name='project/BatchNorm')
        ])

        self.main_branch = Sequential(layer_list, name='expanded_conv')

    def call(self, inputs, **kwargs):
        if self.use_shortcut:
            return inputs + self.main_branch(inputs)
        return self.main_branch(inputs)
```

## 4.1 softmax函数

The softmax function is defined by the formula

$$\mathrm{softmax}(x_i) = \frac{e^{x_i}}{\sum_{j} e^{x_j}}$$

In other words, softmax turns a sequence of real numbers into a probability distribution.

Softmax is used for multi-class classification: it maps the outputs of multiple neurons into the interval (0, 1), and the resulting probabilities sum to 1.

#### Python implementation

```
import numpy as np

def softmax(x):
    shift_x = x - np.max(x)    # subtract the max so large inputs don't overflow to nan
    exp_x = np.exp(shift_x)
    return exp_x / np.sum(exp_x)
```
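The max-shift trick matters in practice: `np.exp(1000)` overflows to `inf`, and `inf / inf` yields `nan`, while the shifted version stays finite. A quick check (the function is repeated here so the snippet runs on its own):

```python
import numpy as np

def softmax(x):
    # subtract the max before exponentiating to avoid overflow for large inputs
    shift_x = x - np.max(x)
    exp_x = np.exp(shift_x)
    return exp_x / np.sum(exp_x)

probs = softmax(np.array([1000.0, 1001.0, 1002.0]))
print(probs)         # finite probabilities, no nan
print(probs.sum())   # 1.0
```

Without the shift, all three exponentials would overflow and the result would be all `nan`.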

#### PyTorch's built-in Softmax() function

The `dim` parameter selects the dimension along which the values are normalized:

For a 1-D tensor, `dim=0` (equivalently `dim=-1`) applies softmax over all elements.
For a 2-D tensor, `dim=0` normalizes down each column, while `dim=1` (equivalently `dim=-1`) normalizes along each row.

```
import torch

x = torch.tensor([2.0, 1.0, 0.1])
# x = x.cuda()   # note: .cuda() returns a new tensor, so reassign if you want it on the GPU
outputs = torch.softmax(x, dim=0)
print("input:", x)
print("output:", outputs)
print("sum of outputs:", outputs.sum())
```
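For a 2-D tensor, the effect of `dim` is easiest to see by checking which axis sums to 1 (the matrix values below are illustrative):

```python
import torch

m = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

col = torch.softmax(m, dim=0)   # normalize down each column
row = torch.softmax(m, dim=1)   # normalize along each row (same as dim=-1 here)

print(col.sum(dim=0))   # every column sums to 1
print(row.sum(dim=1))   # every row sums to 1
```

For classification, the logits usually have shape `(batch, num_classes)`, so `dim=1` (or `dim=-1`) is the one you want.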

## 4.2 Cross-entropy loss function
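Stated explicitly (the standard definition, given here since no formula appears above): for a true distribution $y$ and a predicted distribution $\hat{y}$ over $C$ classes,

$$H(y, \hat{y}) = -\sum_{i=1}^{C} y_i \log \hat{y}_i$$

For binary classification with a single predicted probability $a$ this reduces to $-[\,y \log a + (1-y)\log(1-a)\,]$.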

Python implementation

```
import numpy as np
import tensorflow as tf

def cross_entropy(a, y):
    # binary cross-entropy, summed over all outputs;
    # nan_to_num guards against log(0)
    return np.sum(np.nan_to_num(-y * np.log(a) - (1 - y) * np.log(1 - a)))

# TensorFlow version (tf.log / reduction_indices are TF 1.x names;
# in TF 2.x use tf.math.log and axis)
loss = tf.reduce_mean(-tf.reduce_sum(y_ * tf.math.log(y), axis=1))

# NumPy version
loss = np.mean(-np.sum(y_ * np.log(y), axis=1))
```

PyTorch implementation

```
# binary-classification loss; expects probabilities in [0, 1]
loss = torch.nn.BCELoss()
l = loss(pred, real)
```

```
# multi-class loss; expects raw logits and applies log-softmax internally
loss = torch.nn.CrossEntropyLoss()
```
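A minimal usage sketch for `CrossEntropyLoss` (the logits and targets below are made-up values): unlike `BCELoss`, it takes raw unnormalized scores plus integer class indices, and combines log-softmax with negative log-likelihood internally.

```python
import torch

loss_fn = torch.nn.CrossEntropyLoss()

logits = torch.tensor([[2.0, 0.5, 0.1],    # sample 0: class 0 scores highest
                       [0.2, 2.5, 0.3]])   # sample 1: class 1 scores highest
targets = torch.tensor([0, 1])             # ground-truth class indices, not one-hot

l = loss_fn(logits, targets)
print(l.item())   # a small positive value, since the predictions match the targets
```

Passing already-softmaxed probabilities into `CrossEntropyLoss` is a common bug; it silently produces a valid-looking but wrong loss.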

## 5 The SGD Optimizer

How to call it in PyTorch:

`torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False)`
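The usual pattern is `zero_grad` / `backward` / `step` inside the training loop. A toy sketch fitting $y = 2x$ with a single linear layer (the data, learning rate, and step count are arbitrary example choices):

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = torch.nn.MSELoss()

x = torch.tensor([[1.0], [2.0], [3.0]])
y = 2 * x

for _ in range(200):
    optimizer.zero_grad()            # clear gradients accumulated in the previous step
    loss = loss_fn(model(x), y)
    loss.backward()                  # backpropagate to fill p.grad
    optimizer.step()                 # apply the SGD (+ momentum) update

print(model.weight.item())   # converges close to 2
```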

#### Relevant source code:

```
def step(self, closure=None):
    """Performs a single optimization step.

    Arguments:
        closure (callable, optional): A closure that reevaluates the model
            and returns the loss.
    """
    loss = None
    if closure is not None:
        loss = closure()

    for group in self.param_groups:
        weight_decay = group['weight_decay']  # weight-decay (L2) coefficient
        momentum = group['momentum']          # momentum factor, e.g. 0.9 or 0.8
        dampening = group['dampening']        # dampening factor applied to the gradient
        nesterov = group['nesterov']          # whether to use Nesterov momentum

        for p in group['params']:
            if p.grad is None:
                continue
            d_p = p.grad.data
            if weight_decay != 0:
                # apply weight decay; add_ modifies d_p in place:
                # d_p = d_p + weight_decay * p.data
                d_p.add_(weight_decay, p.data)
            if momentum != 0:
                param_state = self.state[p]  # previously accumulated state, v(t-1)
                # accumulate momentum
                if 'momentum_buffer' not in param_state:
                    buf = param_state['momentum_buffer'] = torch.clone(d_p).detach()
                else:
                    # previous momentum buffer
                    buf = param_state['momentum_buffer']
                    # buf = buf * momentum + (1 - dampening) * d_p
                    buf.mul_(momentum).add_(1 - dampening, d_p)
                if nesterov:
                    d_p = d_p.add(momentum, buf)
                else:
                    d_p = buf

            # parameter update: p = p - lr * d_p
            p.data.add_(-group['lr'], d_p)

    return loss
```