The six basic evaluation metrics are summarized in the mind map below:

## [2] Introduction

Throughout this section we use the following multi-label ground truth and predictions (5 samples, 4 labels each):

```python
import numpy as np

y_true = np.array([[0, 1, 0, 1],
                   [0, 1, 1, 0],
                   [0, 0, 1, 0],
                   [1, 1, 1, 0],
                   [1, 0, 1, 1]])
y_pred = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 0],
                   [0, 1, 1, 0],
                   [0, 1, 0, 1]])
```

### [2.1] Subset Accuracy

Subset accuracy counts a sample as correct only when its entire label vector is predicted exactly.

```python
from sklearn.metrics import accuracy_score

print(accuracy_score(y_true, y_pred))                   # 0.4
print(accuracy_score(y_true, y_pred, normalize=False))  # 2
```

Note: `accuracy_score` takes a `normalize` parameter:
- `normalize=False`: returns the number of exactly-correct samples;
- `normalize=True` (the default): returns the fraction of exactly-correct samples.
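For intuition, subset accuracy can also be reproduced directly with NumPy: a sample counts only if every one of its labels matches. A minimal sketch (variable names are illustrative):

```python
import numpy as np

y_true = np.array([[0, 1, 0, 1], [0, 1, 1, 0], [0, 0, 1, 0],
                   [1, 1, 1, 0], [1, 0, 1, 1]])
y_pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0],
                   [0, 1, 1, 0], [0, 1, 0, 1]])

# A sample is "exactly correct" only when every label position matches
exact = np.all(y_true == y_pred, axis=1)
n_exact = exact.sum()       # count of exactly-correct samples
subset_acc = exact.mean()   # fraction of exactly-correct samples
print(n_exact, subset_acc)  # 2 0.4
```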

### [2.2] Accuracy

Here accuracy is defined per sample as the intersection over the union of the true and predicted label sets, averaged over all samples:

$\frac{1}{5} \times \left(\frac{1}{3} + \frac{2}{2} + \frac{1}{1} + \frac{2}{3} + \frac{1}{4}\right) = 0.65$

```python
def Accuracy(y_true, y_pred):
    count = 0
    for i in range(y_true.shape[0]):
        p = sum(np.logical_and(y_true[i], y_pred[i]))  # |Y ∩ Ŷ|
        q = sum(np.logical_or(y_true[i], y_pred[i]))   # |Y ∪ Ŷ|
        count += p / q
    return count / y_true.shape[0]

print(Accuracy(y_true, y_pred))  # 0.65
```

### [2.3] Precision

Per-sample precision is the number of correctly predicted labels divided by the number of predicted labels, averaged over all samples:

$\frac{1}{5} \times \left(\frac{1}{2} + \frac{2}{2} + \frac{1}{1} + \frac{2}{2} + \frac{1}{2}\right) = 0.8$

```python
from sklearn.metrics import precision_score

print(precision_score(y_true=y_true, y_pred=y_pred, average='samples'))  # 0.8
```
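With `average='samples'`, scikit-learn computes the metric per sample and then averages. As a sanity check, the same number can be reproduced directly with NumPy (a sketch; variable names are illustrative):

```python
import numpy as np

y_true = np.array([[0, 1, 0, 1], [0, 1, 1, 0], [0, 0, 1, 0],
                   [1, 1, 1, 0], [1, 0, 1, 1]])
y_pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0],
                   [0, 1, 1, 0], [0, 1, 0, 1]])

# Per-sample precision: correctly predicted labels / predicted labels
tp = np.logical_and(y_true, y_pred).sum(axis=1)  # |Y ∩ Ŷ| per sample
precision = (tp / y_pred.sum(axis=1)).mean()
print(precision)  # 0.8
```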

### [2.4] Recall

Per-sample recall divides the number of correctly predicted labels by the number of true labels, averaged over all samples:

$\frac{1}{5} \times \left(\frac{1}{2} + \frac{2}{2} + \frac{1}{1} + \frac{2}{3} + \frac{1}{3}\right) = 0.7$

```python
from sklearn.metrics import recall_score

print(recall_score(y_true=y_true, y_pred=y_pred, average='samples'))  # 0.7
```
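Recall only changes the denominator relative to precision: we divide by the true labels instead of the predicted ones. A NumPy sketch of the same computation (names are illustrative):

```python
import numpy as np

y_true = np.array([[0, 1, 0, 1], [0, 1, 1, 0], [0, 0, 1, 0],
                   [1, 1, 1, 0], [1, 0, 1, 1]])
y_pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0],
                   [0, 1, 1, 0], [0, 1, 0, 1]])

# Per-sample recall: correctly predicted labels / true labels
tp = np.logical_and(y_true, y_pred).sum(axis=1)  # |Y ∩ Ŷ| per sample
recall = (tp / y_true.sum(axis=1)).mean()
print(recall)  # 0.7
```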

### [2.5] F1

Per-sample F1 is $\frac{2\,|Y \cap \hat{Y}|}{|Y| + |\hat{Y}|}$, averaged over all samples:

$2 \times \frac{1}{5} \times \left(\frac{1}{4} + \frac{2}{4} + \frac{1}{2} + \frac{2}{5} + \frac{1}{5}\right) = 0.74$

```python
from sklearn.metrics import f1_score

print(f1_score(y_true, y_pred, average='samples'))  # 0.74
```
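The per-sample F1 formula above can likewise be checked by hand in NumPy (a sketch; names are illustrative):

```python
import numpy as np

y_true = np.array([[0, 1, 0, 1], [0, 1, 1, 0], [0, 0, 1, 0],
                   [1, 1, 1, 0], [1, 0, 1, 1]])
y_pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0],
                   [0, 1, 1, 0], [0, 1, 0, 1]])

# Per-sample F1: 2|Y ∩ Ŷ| / (|Y| + |Ŷ|)
tp = np.logical_and(y_true, y_pred).sum(axis=1)
f1 = (2 * tp / (y_true.sum(axis=1) + y_pred.sum(axis=1))).mean()
print(f1)  # 0.74
```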

### [2.6] Hamming Loss

Hamming Loss measures the fraction of mispredicted labels among all labels across all samples, so a smaller value indicates a better model.

$\frac{1}{5 \times 4} \times (2 + 0 + 0 + 1 + 3) = 0.3$

```python
from sklearn.metrics import hamming_loss

print(hamming_loss(y_true, y_pred))  # 0.3
```
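Since Hamming loss is just the fraction of disagreeing label positions, it reduces to a single elementwise comparison in NumPy (a sketch; names are illustrative):

```python
import numpy as np

y_true = np.array([[0, 1, 0, 1], [0, 1, 1, 0], [0, 0, 1, 0],
                   [1, 1, 1, 0], [1, 0, 1, 1]])
y_pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0],
                   [0, 1, 1, 0], [0, 1, 0, 1]])

# Fraction of label positions where truth and prediction disagree
hl = (y_true != y_pred).mean()
print(hl)  # 0.3
```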