
Harry Halpin | How to Do Things with Algorithms: Cognitive Transparency and Opacity

Speaker and Talk Introduction



Currently, there is vast confusion in academia and in the popular press over the power of algorithms, which is cast as either absurdly dystopian (Rouvroy) or utopian (Shirky). Instead, we will build on work from computer science, history, and philosophy to explicate a theory of algorithms that takes as its starting point the actual design and use of algorithms “in the wild” to accomplish a multitude of tasks, with a focus on the real problem with algorithms: their ability to be comprehended by humans. The term “cognitive transparency” means that a resource or capability can be fully comprehended by the existing cognitive resources of a mind, while “cognitive opacity” means that for some reason it cannot. We will note that traditionally algorithms, far from being mysterious, are incredibly well understood and cognitively transparent to computer scientists. The source of our current concern over algorithms is not the algorithm itself, but the input data used to give these algorithms their parameters. Since this input data often reflects existing inequalities and discrimination (as shown by Microsoft’s Tay chatbot), and the parameters are derived statistically, the actual parameters of these algorithms are indeed cognitively opaque. Yet all algorithms and their input data are ultimately the result of social processes embedded in an explicitly political world, as noted by Stiegler. Furthermore, it is a mistake to assume our cognitive resources are bounded: building on Andy Clark’s work on the Extended Mind thesis, even the most complex of algorithms is capable of being aligned and understood via distributed and extended cognitive processing that goes beyond a single individual. The future of how we automate our world, and the kind of world we will live in, depends not just on doing things with algorithms, but on understanding them.
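To make the abstract's central distinction concrete, here is a minimal, illustrative sketch (assumed, not taken from the talk): it contrasts a classic hand-written algorithm, whose behaviour is cognitively transparent from the code alone, with a statistically trained rule whose learned parameters come from the input data and are the cognitively opaque part. All names and data below are hypothetical.

```python
# Illustrative sketch only: transparent algorithm vs. opaque learned parameters.

def sort_scores(scores):
    """A textbook algorithm: every step follows from the code itself."""
    return sorted(scores)  # cognitively transparent to anyone who reads it


def train_threshold_rule(samples, labels, lr=0.1, epochs=100):
    """A toy perceptron-style classifier: the code is short and readable,
    but the final weight and bias are fixed by whatever data was fed in,
    including any inequality or discrimination that data carries."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return w, b  # opaque parameters: meaningful only relative to the data


if __name__ == "__main__":
    # Hypothetical input data; if the data were skewed, the learned
    # threshold would silently encode that skew.
    samples = [0.2, 0.4, 0.6, 0.8, 1.0]
    labels = [0, 0, 1, 1, 1]
    print(sort_scores(samples))                    # transparent output
    print(train_threshold_rule(samples, labels))   # learned (opaque) parameters
```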

Harry Halpin is a research scientist at W3C/INRIA, where he leads research on the future of the philosophy of the Web, distributed cognition, security, and cryptography. Working with Tim Berners-Lee at the W3C, he created the Web Cryptography API to harmonize cryptography across browsers and the Web Authentication API to phase out passwords. Currently at INRIA, he co-ordinates the European Commission project NEXTLEAP (https://nextleap.eu/) on developing next-generation cryptographic protocols. He holds a Ph.D. in Informatics from the University of Edinburgh, completed under Andy Clark; his thesis has been published as the book “Social Semantics”. For his postdoctoral work he worked on philosophy with Bernard Stiegler, leading to the edited collection “Philosophical Engineering” with Alexandre Monnin. He is working on a book on the philosophical foundations of Internet-enabled collective intelligence in the era of the collapse of global capitalism.


