Algorithm Ethics
For five years the British government used a racist algorithm to help determine the outcome of visa applications. Last week they announced that it “needs to be rebuilt from the ground up”.
“The Home Office’s own independent review of the Windrush scandal found that it was oblivious to the racist assumptions and systems it operates. This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor for such bias and to root it out.”
The government has begun to identify parts of the racist machine and is now dismantling them. This is justice. But how did the government get their hands on a racist algorithm? Who designed it? And should they be punished?
‘Hand-Written’ Algorithms
Historically, algorithms were ‘hand-written’, whereby the developers would manually set the parameters that dictated the outcome of the algorithm. A famous example is Facebook’s EdgeRank, which would consider a measly three factors (user affinity, content weighting and time-based decay).
The score between users is calculated based on how ‘strong’ the developers of the algorithm perceive their interactions to be, e.g. they dictate that if you shared your mate’s post last week, you like them 20% more than another mate who made a similar post that you only liked instead.
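To make that concrete, here is a minimal sketch of what such a hand-written score could look like. The weights, the affinity values and the decay curve are illustrative assumptions, not Facebook’s real numbers.

```python
# A minimal sketch of an EdgeRank-style, hand-written score.
# The weights and the decay curve are illustrative assumptions,
# not Facebook's actual values.

# Hand-picked weights: a share is deemed 'stronger' than a like.
INTERACTION_WEIGHTS = {"like": 1.0, "comment": 4.0, "share": 6.0}

def edge_score(interaction: str, affinity: float, age_hours: float) -> float:
    """Score one interaction between a viewer and a post.

    affinity:  hand-tuned closeness between viewer and author (0..1)
    age_hours: how long ago the interaction happened (older counts less)
    """
    time_decay = 1.0 / (1.0 + age_hours / 24.0)  # assumed decay curve
    return affinity * INTERACTION_WEIGHTS[interaction] * time_decay

# A week-old share still outranks yesterday's like at equal affinity:
print(edge_score("share", affinity=0.8, age_hours=7 * 24))  # 0.6
print(edge_score("like", affinity=0.8, age_hours=24))       # 0.4
```

The point is that every number here was typed in by a developer, so every outcome can be traced back to a human decision.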
At that point, it would’ve been quite easy to hold Facebook accountable for the results of its algorithms. They told it exactly what to do, so they could be held liable for the outcomes of their algorithm. Unfortunately, this is no longer the case: popular commercial algorithms have undergone a radical change in design. The parameters that were once ‘hand-written’ are now decided by a ghost in the machine.
The Ghost in the Machine
From the beginning of 2011 until around 2015, Facebook used its new machine-learning algorithm to dictate what users saw on their newsfeeds. Instead of three parameters, this new beast considers at least 100,000 different factors that are weighted by the machine learning (ML) algorithm(s) (Facebook still uses an ML algorithm today). Not a single one of the parameters is known to the developers of the algorithms; the AI is a black box that spits out an answer based on whatever information it has been fed previously.
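For contrast, here is a minimal sketch of the learned-weights approach, using scikit-learn and invented data; the features and labels are stand-ins, not anything Facebook actually uses.

```python
# A minimal sketch (scikit-learn, invented data) of the shift:
# the weights below are chosen by the optimiser, not by any developer.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1,000 past interactions described by 20 numeric features, stand-ins
# for the ~100,000 signals a production newsfeed model might consume.
X = rng.normal(size=(1000, 20))
y = (X @ rng.normal(size=20) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Nobody typed these numbers in, and nobody can point to the person
# who decided them; they fell out of the training data.
print(model.coef_[0][:5])
```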
In one paper on the ethics of algorithms (an excellent read), the authors identify six ethical concerns that are raised by algorithms. The epistemic concerns (the degree to which knowledge is validated) arise from having poor datasets; without sound data, your AI isn’t about to make sound decisions. The results and behaviors that the algorithms invoke create the normative concerns: they help create new undesirable norms from the low-quality datasets. Then, when it’s all said and done, none of the decisions can be traced back to their origins and no one is held accountable for the outcomes.
Windrush Data
The ‘streaming tool’ (ML algorithm) used by the British government ranked each visa applicant red, yellow or green; this would heavily influence the government’s decision of whether or not to grant a visa. The (racist) datasets that were fed into this algorithm were definitely misguided, inconclusive and inscrutable. The outcomes were determined by datasets created in the past, by people who judged someone a threat solely based on the color of their skin or their country of origin. This racist data was fed into the algorithm, and the algorithm made racist decisions.
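The mechanics are easy to reproduce. Below is a toy sketch (entirely invented data and hypothetical features) of how a model trained on prejudiced historical decisions learns to repeat them, even though the training procedure itself is neutral.

```python
# A toy demonstration (invented data) of biased data producing biased
# decisions: past reviewers flagged the 'targeted' nationality far more
# often, and the model learns to do the same.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 5000

nationality = rng.integers(0, 2, size=n)   # 0 = favoured, 1 = targeted
merit = rng.normal(size=n)                 # the signal that *should* matter

# Historical labels: low merit is flagged, but 'targeted' applicants
# were also flagged 60% of the time regardless of merit.
flagged = ((merit < -0.5)
           | ((nationality == 1) & (rng.random(n) < 0.6))).astype(int)

X = np.column_stack([nationality, merit])
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, flagged)

# Two applicants with identical merit, different passports:
print(model.predict([[0, 0.0], [1, 0.0]]))  # typically [0 1]
```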
So it’s not the algorithm’s fault that the creators of the dataset were racist; the algorithm was completely neutral, and it only showed us a reflection of the shortcomings of the data it received. What can be done to improve the ethics of our algorithms and of our datasets in order to reduce unfair outcomes?
Transparent Algorithms would help the developers and consumers understand the unethical bias within the machine; once identified, the model or dataset could be refined in order to eliminate the bias (one way of probing a model for such bias is sketched after these suggestions).
Improved Data Regulation is required to prevent data from being mistreated to create unfair outcomes for minorities and society as a whole.
Education will help everyone see without rose-tinted spectacles: to understand that it’s YOUR data and that you should have adequate rights to protect it. Failing to do so only feeds the unethical algorithms.
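As mentioned above, here is a hedged sketch of one transparency technique: permutation importance from scikit-learn, applied to the toy visa model from the earlier sketch (it reuses model, X and flagged from that block).

```python
# A sketch of one transparency technique: permutation importance.
# Reuses model, X and flagged from the toy visa sketch above and asks
# how much each feature drives the model's decisions.

from sklearn.inspection import permutation_importance

result = permutation_importance(model, X, flagged,
                                n_repeats=10, random_state=0)

for name, importance in zip(["nationality", "merit"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")

# A large importance for a protected attribute like nationality is
# exactly the kind of red flag an audit should surface and root out.
```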
Final Thoughts
Machine learning algorithms are popular for two very good reasons: they’re profitable, and AI is sexy. Naturally, there is a commercial interest in the former, and most computer science students have the latter on their mind.
I encourage everyone to talk to their friends and family about this topic, because these algorithms are far more powerful than most people realise.
Popular algorithms are dictating how we think by choosing what we see on a daily basis: they pick our news, they pick our films, they pick our wardrobe, they pick our romantic partners, they pick our friends… And no one knows how they do it, so no one is accountable.