facenet center loss explained (code analysis), including tf.gather()
Let's walk through center loss and then look at the code.
Our focus is on analyzing the code, so for the definition itself please refer to the blog mentioned above.
Code:
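For reference (my own summary of the Wen et al. ECCV 2016 paper, not part of the original post): center loss penalizes the squared distance between each deep feature x_i and its class center c_{y_i}. The facenet implementation below folds the paper's averaged center update into a simpler per-sample moving average, with α corresponding to the `alfa` argument:

```latex
L_C = \tfrac{1}{2} \sum_{i=1}^{m} \lVert x_i - c_{y_i} \rVert_2^2,
\qquad
c_{y_i} \leftarrow c_{y_i} - (1 - \alpha)\,(c_{y_i} - x_i)
```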
#coding=gbk
'''
Created on April 20, 2020
@author: DELL
'''
import tensorflow as tf
import numpy as np
data = [[1,1,1,1,1],
[1,1,2,1,1],
[1,1,3,1,1],
[1,1,4,1,1],
[2,2,2,1,2],
[2,2,2,2,2],
[2,2,2,3,2],
[3,3,3,3,1],
[3,3,3,3,2]]
label = [0,0,0,0,1,1,1,2,2]
data = np.array(data,dtype = 'float32')
label = np.array(label)
data = tf.convert_to_tensor(data)
label = tf.convert_to_tensor(label)
def center_loss(features, label, alfa, nrof_class):
    """Center loss based on the paper "A Discriminative Feature Learning Approach for Deep Face Recognition"
    (ydwen.github.io/papers/WenECCV16.pdf)
    """
    nrof_features = features.get_shape()[1]
    centers = tf.get_variable('centers', [nrof_class, nrof_features], dtype=tf.float32,
                              initializer=tf.constant_initializer(0), trainable=False)
    # define an all-zero centers variable, [nrof_class, nrof_features] -> (number of classes, feature dimension)
    #print(ss.run(centers))
    label = tf.reshape(label, [-1])  # flatten to a 1-D vector
    centers_batch = tf.gather(centers, label)  # [batch_size, nrof_features]: pick each sample's center row by label, giving a new [label_size, nrof_features] matrix
    diff = (1 - alfa) * (centers_batch - features)  # scaled by our factor alfa, [label_size, nrof_features]
    centers = tf.scatter_sub(centers, label, diff)  # centers[label] -= diff, producing this step's centers
    with tf.control_dependencies([centers]):  # note what this does: it constrains execution order, i.e. compute centers first, then use the updated centers (via centers_batch) to compute the loss
        loss = tf.reduce_mean(tf.square(features - centers_batch))
    return loss, centers, features, centers_batch, features - centers_batch
loss, cen, fea, cen_bat,a = center_loss(data,label,0.5,3)
ss = tf.Session()
init = tf.global_variables_initializer()
ss.run(init)
print(ss.run(cen))
#print(ss.run(loss))
print(ss.run(fea))
#print(ss.run(cen_bat))
print(ss.run(a))
print(ss.run(fea - cen_bat))
print(ss.run(tf.square(fea - cen_bat)))
print(ss.run(loss))
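As a sanity check, the same computation can be reproduced in plain NumPy (a sketch of what the graph executes; the variable names here are mine, not from the TF code):

```python
import numpy as np

features = np.array([[1,1,1,1,1],[1,1,2,1,1],[1,1,3,1,1],[1,1,4,1,1],
                     [2,2,2,1,2],[2,2,2,2,2],[2,2,2,3,2],
                     [3,3,3,3,1],[3,3,3,3,2]], dtype=np.float32)
label = np.array([0,0,0,0,1,1,1,2,2])
alfa = 0.5
centers = np.zeros((3, 5), dtype=np.float32)

# centers_batch = tf.gather(centers, label): pick each sample's center row
centers_batch = centers[label]
# diff and scatter_sub: centers[label[i]] -= (1 - alfa) * (centers[label[i]] - features[i])
diff = (1 - alfa) * (centers_batch - features)
np.subtract.at(centers, label, diff)   # accumulates over repeated labels, like tf.scatter_sub
# the control dependency means loss is computed against the *updated* centers
loss = np.mean(np.square(features - centers[label]))

print(centers)  # [[2. 2. 5. 2. 2.], [3. 3. 3. 3. 3.], [3. 3. 3. 3. 1.5]]
print(loss)     # ~1.4111111
```

This matches the printed results below exactly, which confirms the reading of `tf.gather` and `tf.scatter_sub` given in the comments.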
'''Verify the tf.scatter_sub function
ref = tf.Variable([1, 2, 3],dtype = tf.int32)
indices = tf.constant([0, 0, 1, 1],dtype = tf.int32)
updates = tf.constant([9, 10, 11, 12],dtype = tf.int32)
sub = tf.scatter_sub(ref, indices, updates)
with tf.Session() as ss:
    ss.run(tf.global_variables_initializer())
    print (ss.run(sub))
'''
Results:
1.centers:
[[2. 2. 5. 2. 2. ]
[3. 3. 3. 3. 3. ]
[3. 3. 3. 3. 1.5]]
2.features:
[[1. 1. 1. 1. 1.]
[1. 1. 2. 1. 1.]
[1. 1. 3. 1. 1.]
[1. 1. 4. 1. 1.]
[2. 2. 2. 1. 2.]
[2. 2. 2. 2. 2.]
[2. 2. 2. 3. 2.]
[3. 3. 3. 3. 1.]
[3. 3. 3. 3. 2.]]
3.centers_batch:
[[2. 2. 5. 2. 2. ]
[2. 2. 5. 2. 2. ]
[2. 2. 5. 2. 2. ]
[2. 2. 5. 2. 2. ]
[3. 3. 3. 3. 3. ]
[3. 3. 3. 3. 3. ]
[3. 3. 3. 3. 3. ]
[3. 3. 3. 3. 1.5]
[3. 3. 3. 3. 1.5]]
4.features - centers_batch
[[-1. -1. -4. -1. -1. ]
[-1. -1. -3. -1. -1. ]
[-1. -1. -2. -1. -1. ]
[-1. -1. -1. -1. -1. ]
[-1. -1. -1. -2. -1. ]
[-1. -1. -1. -1. -1. ]
[-1. -1. -1. 0. -1. ]
[ 0. 0. 0. 0. -0.5]
[ 0. 0. 0. 0. 0.5]]
5.loss
1.4111111
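The loss value can be checked by hand: squaring the matrix in section 4 and summing each row gives 20, 13, 8, 5, 8, 5, 4, 0.25, 0.25, for a total of 63.5, and the mean is taken over all 9 × 5 = 45 entries:

```python
# per-row sums of squared entries of (features - centers_batch), from section 4 above
row_sums = [20, 13, 8, 5, 8, 5, 4, 0.25, 0.25]
total = sum(row_sums)     # 63.5
loss = total / (9 * 5)    # tf.reduce_mean averages over all 45 entries
print(loss)               # 1.4111111...
```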
The main functions used:
1. tf.gather(data, labels): expand data by stacking the rows selected by labels
2. tf.scatter_sub(data, label, data_1): compute data - data_1 at the rows given by label
3. tf.control_dependencies(): constrain execution order
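The semantics of the first two can be mimicked with NumPy indexing (my own toy example, mirroring the commented-out verification above):

```python
import numpy as np

data = np.array([[1., 2.], [3., 4.], [5., 6.]])
labels = np.array([0, 0, 2])

# tf.gather(data, labels): stack the rows of data selected by labels
gathered = data[labels]
print(gathered)  # [[1. 2.] [1. 2.] [5. 6.]]

# tf.scatter_sub(ref, indices, updates): ref[indices[i]] -= updates[i],
# accumulating when an index repeats
ref = np.array([1, 2, 3])
indices = np.array([0, 0, 1, 1])
updates = np.array([9, 10, 11, 12])
np.subtract.at(ref, indices, updates)
print(ref)       # [1-9-10, 2-11-12, 3] = [-18 -21 3]
```

Note the accumulation over repeated indices: both updates with index 0 are subtracted from `ref[0]`, just as all samples of one class contribute to that class's center in `center_loss`.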
One point to be careful about when experimenting: do not ss.run() a tensor that involves tensors with dependencies more than once. For example, with loss here, each evaluation actively updates the centers once more, so the results come out wrong. The likely reason is that loss depends on the stateful tf.scatter_sub op, so every ss.run(loss) re-executes the update and shifts centers again; I haven't fully verified the mechanism yet and will fill it in later.