[PyTorch] Implementing the Dice Coefficient and Dice Loss

The Dice coefficient is a commonly used metric in image segmentation, but PyTorch has no official implementation. Below I work through a detailed implementation based on my own ideas combined with some references found online. Let's first look at a version I found on the web.
import torch
import torch.nn as nn


def diceCoeff(pred, gt, smooth=1, activation='sigmoid'):
    r""" computational formula:
        dice = (2 * (pred ∩ gt)) / (pred ∪ gt)
    """
    if activation is None or activation == "none":
        activation_fn = lambda x: x
    elif activation == "sigmoid":
        activation_fn = nn.Sigmoid()
    elif activation == "softmax2d":
        activation_fn = nn.Softmax2d()
    else:
        raise NotImplementedError("Activation is only implemented for sigmoid and softmax2d")

    pred = activation_fn(pred)

    N = gt.size(0)
    pred_flat = pred.view(N, -1)  # flatten each sample to a vector
    gt_flat = gt.view(N, -1)

    intersection = (pred_flat * gt_flat).sum(1)
    union = pred_flat.sum(1) + gt_flat.sum(1)  # sum of both masks, not a strict set union
    dice = 2 * (intersection + smooth) / (union + smooth)
    return dice.sum() / N
The overall idea is simply to apply the dice formula (2 * A∩B) / (A∪B). Now let's analyze a potential problem: the smooth parameter is there to prevent division by zero, but with smooth=1 the computed dice comes out slightly inflated, as the test code below shows.
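To see the inflation with plain numbers, here is a quick back-of-the-envelope check; the counts intersection = 8 and union = 16 are taken from the bladder example that follows:

# Perfect match on the bladder channel below: intersection = 8, union = 16
I, U = 8.0, 16.0
print(2 * (I + 1) / (U + 1))        # 1.0588... -> inflated above 1 with smooth=1
print(2 * (I + 1e-5) / (U + 1e-5))  # ~1.0 with smooth=1e-5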
Case 1: the prediction exactly matches the label
# shape = torch.Size([1, 3, 4, 4])
'''
1 0 0 = bladder
0 1 0 = tumor
0 0 1 = background
'''
pred = torch.Tensor([[
    [[0, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 0]],
    [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]],
    [[1, 0, 0, 1],
     [0, 1, 1, 0],
     [0, 1, 1, 0],
     [1, 0, 0, 1]]]])

gt = torch.Tensor([[
    [[0, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 0]],
    [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]],
    [[1, 0, 0, 1],
     [0, 1, 1, 0],
     [0, 1, 1, 0],
     [1, 0, 0, 1]]]])
dice_bladder1 = diceCoeff(pred[:, 0:1, :], gt[:, 0:1, :], smooth=1, activation=None)
dice_bladder2 = diceCoeff(pred[:, 0:1, :], gt[:, 0:1, :], smooth=1e-5, activation=None)
print('smooth=1 : dice={:.4}'.format(dice_bladder1.item()))
print('smooth=1e-5 : dice={:.4}'.format(dice_bladder2.item()))
# Output:
# smooth=1 : dice=1.059
# smooth=1e-5 : dice=1.0
The final prediction is a 3-class segmentation map: class 1 is bladder, class 2 is tumor, and class 3 is background. Assuming the bladder prediction pred is identical to gt, we compute the bladder dice and find that smooth=1 inflates it above 1 (18/17 ≈ 1.059), while smooth=1e-5 gives the sensible value of 1.0.
Fix: I think the implementation should be changed to use the formula below, because the smooth term was added in the wrong place. With a perfect match, 2 * intersection exactly equals the union, so (2 * intersection + smooth) / (union + smooth) is exactly 1 for any smooth value.

# dice = 2 * (intersection + smooth) / (union + smooth)  # before
dice = (2 * intersection + smooth) / (union + smooth)     # after

The revised diceCoeff is:
def diceCoeff(pred, gt, smooth=1e-5, activation='sigmoid'):
    r""" computational formula:
        dice = (2 * (pred ∩ gt)) / (pred ∪ gt)
    """
    if activation is None or activation == "none":
        activation_fn = lambda x: x
    elif activation == "sigmoid":
        activation_fn = nn.Sigmoid()
    elif activation == "softmax2d":
        activation_fn = nn.Softmax2d()
    else:
        raise NotImplementedError("Activation is only implemented for sigmoid and softmax2d")

    pred = activation_fn(pred)

    N = gt.size(0)
    pred_flat = pred.view(N, -1)
    gt_flat = gt.view(N, -1)

    intersection = (pred_flat * gt_flat).sum(1)
    union = pred_flat.sum(1) + gt_flat.sum(1)
    dice = (2 * intersection + smooth) / (union + smooth)
    return dice.sum() / N
Verifying with the test data above, the dice is now computed correctly:

# smooth=1 : dice=1.0
# smooth=1e-5 : dice=1.0
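As a side note, looping over the channels gives a per-class dice; a small sketch reusing the pred/gt tensors above (note that the empty tumor channel yields dice = 1.0, since both masks are all zeros and the smooth term dominates):

# Per-class dice for the fully-correct prediction above
for c, name in enumerate(['bladder', 'tumor', 'background']):
    d = diceCoeff(pred[:, c:c+1], gt[:, c:c+1], smooth=1e-5, activation=None)
    print('{}: dice={:.4}'.format(name, d.item()))
# bladder: dice=1.0
# tumor: dice=1.0
# background: dice=1.0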
Case 2: the prediction contains a class that is absent from the label
As in the code below, suppose the predicted pred contains some bladder pixels but gt contains no bladder at all, and see what dice comes out.
'''
1 0 0 = bladder
0 1 0 = tumor
0 0 1 = background
'''
pred = torch.Tensor([[
    [[0, 1, 1, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]],
    [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]],
    [[1, 0, 0, 1],
     [1, 1, 1, 1],
     [1, 1, 1, 1],
     [1, 1, 1, 1]]]])

gt = torch.Tensor([[
    [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]],
    [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]],
    [[1, 1, 1, 1],
     [1, 1, 1, 1],
     [1, 1, 1, 1],
     [1, 1, 1, 1]]]])
dice_bladder1 = diceCoeff(pred[:, 0:1, :], gt[:, 0:1, :], smooth=1, activation=None)
dice_bladder2 = diceCoeff(pred[:, 0:1, :], gt[:, 0:1, :], smooth=1e-5, activation=None)
print('smooth=1 : dice={:.4}'.format(dice_bladder1.item()))
print('smooth=1e-5 : dice={:.4}'.format(dice_bladder2.item()))
# Output:
# smooth=1 : dice=0.3333
# smooth=1e-5 : dice=5e-06
From the results, smooth=1 gives a dice of 0.3333 (here (2 * 0 + 1) / (2 + 1) = 1/3), while smooth=1e-5 gives a dice close to 0, which is far more reasonable, since the prediction shares no bladder pixels with the label.

Another way to compute dice: the version below follows a dice formulation from a reference, written in terms of true positives, false positives, and false negatives.
def diceCoeffv2(pred, gt, eps=1e-5, activation='sigmoid'):
    r""" computational formula:
        dice = (2 * tp) / (2 * tp + fp + fn)
    """
    if activation is None or activation == "none":
        activation_fn = lambda x: x
    elif activation == "sigmoid":
        activation_fn = nn.Sigmoid()
    elif activation == "softmax2d":
        activation_fn = nn.Softmax2d()
    else:
        raise NotImplementedError("Activation is only implemented for sigmoid and softmax2d")

    pred = activation_fn(pred)

    N = gt.size(0)
    pred_flat = pred.view(N, -1)
    gt_flat = gt.view(N, -1)

    tp = torch.sum(gt_flat * pred_flat, dim=1)
    fp = torch.sum(pred_flat, dim=1) - tp
    fn = torch.sum(gt_flat, dim=1) - tp
    dice = (2 * tp + eps) / (2 * tp + fp + fn + eps)
    return dice.sum() / N
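For binary masks the two versions are algebraically identical, since tp + fp = pred.sum() and tp + fn = gt.sum(), so 2 * tp + fp + fn = pred.sum() + gt.sum(). A quick sanity check, reusing the case-2 tensors above on the background channel:

# Both formulations agree: tp=14, fp=0, fn=2 -> dice = 28/30
d1 = diceCoeff(pred[:, 2:3], gt[:, 2:3], smooth=1e-5, activation=None)
d2 = diceCoeffv2(pred[:, 2:3], gt[:, 2:3], eps=1e-5, activation=None)
print(d1.item(), d2.item())  # both ~0.9333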
Cleaned-up code
def diceCoeff(pred, gt, smooth=1e-5, activation='sigmoid'):
    r""" computational formula:
        dice = (2 * (pred ∩ gt)) / (pred ∪ gt)
    """
    if activation is None or activation == "none":
        activation_fn = lambda x: x
    elif activation == "sigmoid":
        activation_fn = nn.Sigmoid()
    elif activation == "softmax2d":
        activation_fn = nn.Softmax2d()
    else:
        raise NotImplementedError("Activation is only implemented for sigmoid and softmax2d")

    pred = activation_fn(pred)
    N = gt.size(0)
    pred_flat = pred.view(N, -1)
    gt_flat = gt.view(N, -1)
    intersection = (pred_flat * gt_flat).sum(1)
    union = pred_flat.sum(1) + gt_flat.sum(1)
    dice = (2 * intersection + smooth) / (union + smooth)
    return dice.sum() / N
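The title also promises a DiceLoss; the post stops at the coefficient, but a minimal sketch following the usual convention loss = 1 - dice, wrapped around the diceCoeff above, could look like this (the class name SoftDiceLoss is my own choice, not from the original post):

class SoftDiceLoss(nn.Module):
    """Dice loss as 1 - dice coefficient (a common convention)."""
    def __init__(self, activation='sigmoid'):
        super(SoftDiceLoss, self).__init__()
        self.activation = activation

    def forward(self, pred, gt):
        # diceCoeff is differentiable w.r.t. pred, so this can be backpropagated
        return 1 - diceCoeff(pred, gt, activation=self.activation)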
