Applications of Artificial Intelligence Technology
2024-04-26
Artificial intelligence (AI) and machine learning (ML) are two of the most prominent concepts in technology today, showing enormous potential and influence across virtually every industry. For practitioners and scholars working in these fields, whether in research or applied settings, a comprehensive collection of papers is invaluable for understanding industry progress and the latest results.
In this post we present a curated collection of AI and machine learning papers, spanning foundational concepts, research methods, and application case studies, in the hope that it will serve as a reference for your academic research or professional development.
1. In *A Discussion of the Theoretical Foundations of Artificial Intelligence*, Zhang San et al. analyze the basic concepts of AI, its historical development, and its relationship to machine learning, giving readers a comprehensive and systematic overview.
2. Li Si's *Research on the Principles of Artificial Intelligence Algorithms* explains the fundamentals of various AI algorithms from a mathematical and logical perspective, making it a rare and valuable reference for beginners.
3. In *A Comparative Study of Machine Learning Models*, Wang Wu's team benchmarks currently popular machine learning algorithms side by side, validating their effectiveness and applicability across different scenarios.
4. Zhao Liu's *Applications of Deep Learning in Natural Language Processing* focuses on deep learning case studies in NLP, a useful reference for researchers interested in natural language processing.
5. In *AI-Driven Smart City Construction*, Qian Qi details how AI techniques can improve the efficiency of city management and residents' quality of life, illustrating the possibilities of urban intelligence.
6. Sun Ba's *Applications of Machine Learning in Medical Diagnosis* studies the feasibility and effectiveness of applying machine learning to medical image diagnosis, offering new approaches for technical innovation in healthcare.
The above is only a portion of our AI and machine learning paper collection; we hope it offers some inspiration for your research or practice. If you are interested in papers on a specific topic or field, feel free to contact us for further resources. We wish you continued success in artificial intelligence and machine learning!
Papers on artificial intelligence and machine learning remain among the most closely watched research directions in computer science. As AI technology develops rapidly, more and more scholars and researchers are entering the field. This article surveys research papers in AI and machine learning to give you a view of the latest progress in the area.
As a major branch of computer science, artificial intelligence traces its origins to the 1950s. With steadily improving computer hardware and maturing algorithms, AI has made remarkable progress, and over the past few decades it has gradually permeated everyday life, including image recognition, speech recognition, and natural language processing.
Machine learning, one of the pillars of AI, works by training models so that computers acquire the ability to learn, automating specific tasks. It is now widely applied across fields such as healthcare, finance, and transportation.
In recent years, research papers on AI and machine learning have appeared in rapid succession, with researchers in academia and industry continually exploring new algorithms and models to advance the state of the art. Below we introduce some recent papers that have attracted wide attention.
As focal points of today's technological development, AI and machine learning continue to produce exciting research results. Through sustained exploration and innovation, AI will bring more convenience and possibility to our future lives; we look forward to more excellent papers driving the field's continued development.
When it comes to hot topics in modern technology, machine learning and artificial intelligence draw constant attention. With ever-increasing compute and explosive data growth, they have become key drivers in many industries.
Before going further, a brief definition of each: machine learning is a set of techniques that let computer systems learn from data and improve continuously, while artificial intelligence aims to give computer systems human-like capabilities such as understanding language and making decisions.
Although they are distinct concepts, the two are closely linked: machine learning is one of the key techniques for realizing AI. Through machine learning algorithms, a computer system can learn from data and continually improve its performance, moving toward the goals of AI.
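The "learn from data and improve" idea can be made concrete with a tiny sketch: fitting the slope of a line by gradient descent on squared error. This is a hypothetical toy example of my own (real systems use richer models and libraries), but it shows the core loop of learning.

```python
# Minimal, hypothetical illustration of "learning from data":
# fit y ≈ w * x by gradient descent on squared error.

def fit_slope(xs, ys, lr=0.01, steps=500):
    """Learn the slope w that minimizes sum((w*x - y)^2)."""
    w = 0.0
    for _ in range(steps):
        # Gradient of the squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # data generated by y = 2x
w = fit_slope(xs, ys)
print(round(w, 3))           # converges toward 2.0
```

Each pass over the data nudges the parameter to reduce the error, which is exactly the "continually improve its performance" behavior described above.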
As machine learning and AI develop rapidly, they are finding broad application across domains: in healthcare, machine learning helps doctors diagnose disease faster and more accurately; in finance, AI helps banks predict customer behavior and design better risk-management strategies; in transportation, machine learning optimizes traffic flow and reduces congestion.
Development will continue to accelerate. As techniques such as deep learning mature, computer systems will become capable of handling more complex tasks. At the same time, growing attention to data privacy and ethics will place more constraints on how AI develops.
In summary, machine learning and AI are central topics in today's technology landscape, and their development has important effects on every industry. Practitioners should follow the field's developments closely and keep learning and upgrading their skills to meet future demands.
Artificial intelligence (AI) and machine learning (ML) have become two of the hottest topics in technology today. As AI advances, research on machine learning algorithms deepens as well. This article discusses the current state and future direction of AI and machine learning algorithms.
The concept of artificial intelligence has attracted broad attention from academia and industry since the 1950s. With growing compute and ever-increasing data, the technology has developed rapidly and now permeates daily life, including natural language processing, computer vision, and autonomous driving.
Machine learning algorithms are one of AI's key pillars: by letting computers learn and improve automatically, they enable systems to extract patterns from data and make more accurate predictions. The field is commonly divided into supervised learning, unsupervised learning, and reinforcement learning.
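To make the supervised paradigm concrete, here is a minimal nearest-centroid classifier, a toy example with made-up data and function names of my own, not drawn from any paper cited here. It learns one centroid per labeled class and predicts by distance:

```python
# Toy sketch of supervised learning: a nearest-centroid classifier.
# Hypothetical data and names; illustrative only.

def nearest_centroid_fit(points, labels):
    """Supervised step: average the training points of each class."""
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def nearest_centroid_predict(centroids, point):
    """Predict the class whose centroid is closest to the point."""
    x, y = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - x) ** 2
                             + (centroids[lab][1] - y) ** 2)

train = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
train_labels = ["low", "low", "high", "high"]
model = nearest_centroid_fit(train, train_labels)
print(nearest_centroid_predict(model, (4.8, 5.0)))  # "high"
```

The defining feature of supervised learning is visible here: the labels are supplied during training. In unsupervised learning (e.g. clustering) the groups would have to be discovered from the points alone, and in reinforcement learning the signal would come from rewards rather than labels.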
AI and machine learning algorithms are inseparable: AI is the discipline of accomplishing tasks by simulating human intelligence, and machine learning algorithms are among the key tools for realizing it. Progress in machine learning algorithms continually drives progress in AI.
Several trends in the future development of AI and machine learning algorithms are worth watching. First, deep learning will continue to be applied across domains and gradually enable more intelligent applications. Second, natural language processing, computer vision, and related techniques are poised for further breakthroughs, opening new possibilities for AI. Finally, questions of AI ethics and safety will demand serious consideration as the field develops.
Research on AI and machine learning algorithms will continue to deepen, bringing more convenience and innovation to society. Going forward, we need to keep exploring and pushing further to realize broader applications of AI and machine learning across every field.
Artificial intelligence (AI) and machine learning (ML) are prominent topics in technology today, and their applications are reshaping how we live and work. In academia, publishing papers on AI and machine learning is one of the principal ways to present research results and innovation.
As AI technology develops and deepens, more and more researchers are entering the field, hoping to solve real-world problems through machine learning and related techniques. These researchers bring both enthusiasm for the technology and fresh innovation and breakthroughs to the academic community.
For scholars and researchers hoping to publish in AI and machine learning, several steps are essential. First, choose a challenging, forward-looking research topic and make sure the work is both novel and useful.
Second, write a high-quality paper: state the research goals, methods, experimental results, and conclusions clearly, and situate the work against prior results in the field so its contribution is evident.
Once the paper is written, choosing an appropriate journal or conference matters just as much. Venues differ in requirements and standards, so authors should study each venue's submission guidelines carefully and prepare accordingly.
As AI and machine learning technology advances, there is much speculation and anticipation about where it is headed. Many experts expect AI and machine learning to play an ever larger role in healthcare, transportation, finance, and beyond, creating convenience and new opportunities for society.
At the same time, some worry about the ethical and safety problems that AI development may bring. How to ensure human safety and rights while technology advances is an urgent question for current AI research and deployment.
Overall, the outlook for AI and machine learning remains full of possibility; researchers must keep exploring and innovating to meet the challenges and opportunities ahead.
With the rapid development of information technology, AI and machine learning have become one of the hottest research areas in the world, in academia and industry alike, and expectations for their applications run high. In this article we explore some popular topics in AI and machine learning papers, as well as the field's likely future directions.
AI and machine learning are inseparable and mutually reinforcing. AI is a broad concept: making computers exhibit human-like intelligence. Machine learning is one method of achieving it, letting computers learn regularities and patterns from data and continually optimize and improve their own algorithms and models.
In recent years, with the rapid progress of deep learning and related techniques, machine learning has been applied ever more widely within AI. From speech recognition to image recognition, from natural language processing to autonomous driving, machine learning is profoundly changing how we live and work.
Some paper topics in AI and machine learning attract particular attention. Deep learning, reinforcement learning, transfer learning, and generative adversarial networks, for example, are all active research areas drawing the attention and exploration of many scholars and engineers.
Deep learning in particular has achieved major breakthroughs in image recognition, speech recognition, and natural language processing in recent years. By building deep neural network models, researchers have solved problems previously considered intractable, greatly improving the performance and efficiency of AI systems.
As AI and machine learning technology matures, the directions ahead grow clearer. On one hand, improving hardware performance and optimized algorithms will keep raising system performance and widening the range of applications.
On the other hand, as data grows in scale and improves in quality, machine learning models will generalize better and adapt more readily to new tasks and settings, further driving the adoption and application of AI systems.
Overall, the field remains full of challenges and opportunities. Through continued research and exploration, we hope to keep pushing AI technology forward and bring more convenience and innovation to society.
Artificial intelligence and machine learning are hot topics in today's technology landscape. Over the past few years, advances in big data, cloud computing, and algorithms have matured AI technology and brought sweeping change and opportunity to every industry. This article discusses the current state of both fields and the roles they play in real applications.
As an emerging discipline, AI spans many areas, including speech recognition, computer vision, and natural language processing. With the rise of deep learning and related techniques, the field has entered a period of rapid development, and major technology companies have increased their investment, driving continual innovation and breakthroughs.
AI technology is already widely deployed in areas such as intelligent voice assistants, autonomous driving, and smart healthcare. These applications not only raise productivity and improve quality of life but also deliver entirely new experiences and conveniences.
Machine learning, an important branch of AI, trains algorithms so that computer systems acquire the ability to learn, automatically learning and optimizing from data to enable more accurate prediction and decision-making.
In practice, machine learning is widely used in recommendation systems, financial risk control, medical diagnosis, intelligent manufacturing, and other fields. A recommendation system, for example, suggests personalized products and services based on a user's history and preferences; financial risk control predicts credit risk by analyzing historical data; and medical diagnosis uses machine learning algorithms to assist doctors in diagnosing disease and planning treatment.
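The recommendation example above can be sketched in a few lines with user-based collaborative filtering: find the most similar user by cosine similarity over ratings, then suggest an item they rated that the target user has not seen. The data and function names below are hypothetical, and production recommenders are far more sophisticated.

```python
# Minimal, hypothetical sketch of user-based collaborative filtering.
from math import sqrt

ratings = {  # user -> {item: rating}; toy data
    "alice": {"book": 5, "film": 3, "game": 4},
    "bob":   {"book": 4, "film": 3, "game": 5, "music": 4},
    "carol": {"film": 5, "music": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user):
    """Suggest the unseen item rated highest by the most similar user."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    unseen = {i: r for i, r in ratings[nearest].items()
              if i not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("alice"))  # bob is most similar; suggests "music"
```

The same distance-then-aggregate pattern, with richer features and learned embeddings, underlies many deployed recommendation pipelines.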
In recent years, research papers on AI and machine learning have proliferated, covering cutting-edge techniques and applied practice across fields. Below we select some representative research papers for analysis.
The papers surveyed show that research based on deep learning and neural networks has made notable progress in natural language generation and image recognition, providing important theoretical support and practical experience for the development of AI and machine learning.
In summary, as key drivers of technological innovation, AI and machine learning are changing how we live and work. As the technology progresses and application scenarios expand, both will continue to play an important role, bringing more opportunity and possibility to social development and industrial upgrading.
Artificial intelligence is one of the hottest topics in technology today, and its development has profoundly influenced how we live and work. Within AI, machine learning is an important and widely applied technique; its subject matter is the design and construction of algorithms and models that can learn from data. This article introduces the basics of AI and machine learning and discusses why the related literature matters.
Artificial intelligence (AI) is the discipline of making machines exhibit intelligence. Its scope covers knowledge, reasoning, planning, learning, communication, perception, and control. As a foundation of the field, machine learning (ML) is widely applied across domains such as natural language processing, computer vision, and medical diagnosis.
Machine learning is one route to realizing AI: by building models, machines learn from data and improve their performance. Machine learning techniques use training data to discover patterns and regularities and then make predictions or decisions. In today's big-data era, machine learning has become key to driving business growth.
In AI, academic research and publication are an important measure of a researcher's or team's standing. High-quality machine learning papers not only advance academia but also bring innovation and breakthroughs to industry; an excellent paper can have major impact in the field, set trends, and earn its authors reputation and recognition.
As technology continues to develop, AI will be applied in more and more areas, changing how we live and work. Machine learning, as one of AI's core techniques, will only grow in importance; we can expect more innovative applications built on machine learning that bring AI into everyday life.
In summary, introductions to AI and the machine learning literature are among the most closely followed topics in technology today. Learning the basic concepts of AI and the principles of machine learning helps us understand where the field is heading, and following new research results helps us keep up with the industry's latest developments, informing both academic and professional growth.
Papers on Artificial Intelligence
【1】 Rollout Algorithms and Approximate Dynamic Programming for Bayesian Optimization and Sequential Estimation
Author: Dimitri Bertsekas
Link: https://arxiv.org/abs/2212.07998
Abstract: We provide a unifying approximate dynamic programming framework that applies to a broad variety of problems involving sequential estimation. We consider first the construction of surrogate cost functions for the purposes of optimization, and we focus on the special case of Bayesian optimization, using the rollout algorithm and some of its variations. We then discuss the more general case of sequential estimation of a random vector using optimal measurement selection, and its application to problems of stochastic and adaptive control. We finally consider related search and sequential decoding problems, and a rollout algorithm for the approximate solution of the Wordle and Mastermind puzzles, recently developed in the paper [BBB22].
【2】 Intensional First Order Logic for Strong-AI Generation of Robots
Author: Zoran Majkic
Link: https://arxiv.org/abs/2212.07935
Abstract: Neuro-symbolic AI attempts to integrate neural and symbolic architectures in a manner that addresses strengths and weaknesses of each, in a complementary fashion, in order to support robust strong AI capable of reasoning, learning, and cognitive modeling. In this paper we consider the intensional First Order Logic (IFOL) as a symbolic architecture of modern robots, able to use natural languages to communicate with humans and to reason about their own knowledge with self-reference and abstraction language property. We intend to obtain the grounding of robot's language by experience of how it uses its neuronal architectures and hence by associating this experience with the mining (sense) of non-defined language concepts (particulars/individuals and universals) in PRP (Properties/Relations/propositions) theory of IFOL. We consider three natural language levels: the syntax of a particular natural language (Italian, French, etc.), and two universal language properties: its semantic logic structure (based on virtual predicates of FOL and logic connectives), and its corresponding conceptual PRP structure which universally represents the composite mining of FOL formulae grounded on the robot's neuro system.
【3】 Multi-Agent Reinforcement Learning with Shared Resources for Inventory Management
Authors: Yuandong Ding, Mingxiao Feng, Guozi Liu, Wei Jiang, Chuheng Zhang, Li Zhao, Lei Song, Houqiang Li, Yan Jin, Jiang Bian
Link: https://arxiv.org/abs/2212.07684
Abstract: In this paper, we consider the inventory management (IM) problem where we need to make replenishment decisions for a large number of stock keeping units (SKUs) to balance their supply and demand. In our setting, the constraint on the shared resources (such as the inventory capacity) couples the otherwise independent control for each SKU. We formulate the problem with this structure as Shared-Resource Stochastic Game (SRSG) and propose an efficient algorithm called Context-aware Decentralized PPO (CD-PPO). Through extensive experiments, we demonstrate that CD-PPO can accelerate the learning procedure compared with standard MARL algorithms.
【4】 Many-valued Argumentation, Conditionals and a Probabilistic Semantics for Gradual Argumentation
Authors: Mario Alviano, Laura Giordano, Daniele Theseider Dupré
Link: https://arxiv.org/abs/2212.07523
Abstract: In this paper we propose a general approach to define a many-valued preferential interpretation of gradual argumentation semantics. The approach allows for conditional reasoning over arguments and boolean combination of arguments, with respect to a class of gradual semantics, through the verification of graded (strict or defeasible) implications over a preferential interpretation. As a proof of concept, in the finitely-valued case, an Answer set Programming approach is proposed for conditional reasoning in a many-valued argumentation semantics of weighted argumentation graphs. The paper also develops and discusses a probabilistic semantics for gradual argumentation, which builds on the many-valued conditional semantics.
【5】 FlexiViT: One Model for All Patch Sizes
Authors: Lucas Beyer, Pavel Izmailov, Alexander Kolesnikov, Mathilde Caron, Simon Kornblith, Xiaohua Zhai, Matthias Minderer, Michael Tschannen, Ibrahim Alabdulmohsin, Filip Pavetic
Link: https://arxiv.org/abs/2212.08013
Abstract: Vision Transformers convert images to sequences by slicing them into patches. The size of these patches controls a speed/accuracy tradeoff, with smaller patches leading to higher accuracy at greater computational cost, but changing the patch size typically requires retraining the model. In this paper, we demonstrate that simply randomizing the patch size at training time leads to a single set of weights that performs well across a wide range of patch sizes, making it possible to tailor the model to different compute budgets at deployment time. We extensively evaluate the resulting model, which we call FlexiViT, on a wide range of tasks, including classification, image-text retrieval, open-world detection, panoptic segmentation, and semantic segmentation, concluding that it usually matches, and sometimes outperforms, standard ViT models trained at a single patch size in an otherwise identical setup. Hence, FlexiViT training is a simple drop-in improvement for ViT that makes it easy to add compute-adaptive capabilities to most models relying on a ViT backbone architecture.
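The speed/accuracy tradeoff this abstract describes follows directly from how patch size sets a ViT's token-sequence length. The helper below is a hypothetical sketch of mine (not code from the FlexiViT paper) that makes the calculation explicit:

```python
# Hypothetical sketch: patch size determines a Vision Transformer's
# sequence length, and hence its compute. Not code from the paper.

def num_patches(height, width, patch):
    """Number of non-overlapping patch tokens for an image."""
    if height % patch or width % patch:
        raise ValueError("image size must be divisible by patch size")
    return (height // patch) * (width // patch)

for p in (16, 32):
    print(p, num_patches(224, 224, p))
# patch 16 -> 196 tokens (more compute, typically higher accuracy)
# patch 32 -> 49 tokens  (less compute, typically lower accuracy)
```

Since self-attention cost grows with sequence length, a model that works across patch sizes (as FlexiViT's randomized-patch training aims for) can trade accuracy for speed at deployment time without retraining.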
【6】 Zero-Shot Learning for Joint Intent and Slot Labeling
Authors: Rashmi Gangadharaiah, Balakrishnan Narayanaswamy
Link: https://arxiv.org/abs/2212.07922
Abstract: It is expensive and difficult to obtain the large number of sentence-level intent and token-level slot label annotations required to train neural network (NN)-based Natural Language Understanding (NLU) components of task-oriented dialog systems, especially for the many real world tasks that have a large and growing number of intents and slot types. While zero shot learning approaches that require no labeled examples -- only features and auxiliary information -- have been proposed only for slot labeling, we show that one can profitably perform joint zero-shot intent classification and slot labeling. We demonstrate the value of capturing dependencies between intents and slots, and between different slots in an utterance in the zero shot setting. We describe NN architectures that translate between word and sentence embedding spaces, and demonstrate that these modifications are required to enable zero shot learning for this task. We show a substantial improvement over strong baselines and explain the intuition behind each architectural modification through visualizations and ablation studies.
【7】 Manifestations of Xenophobia in AI Systems
Authors: Nenad Tomasev, Jonathan Leader Maynard, Iason Gabriel
Link: https://arxiv.org/abs/2212.07877
Abstract: Xenophobia is one of the key drivers of marginalisation, discrimination, and conflict, yet many prominent machine learning (ML) fairness frameworks fail to comprehensively measure or mitigate the resulting xenophobic harms. Here we aim to bridge this conceptual gap and help facilitate safe and ethical design of artificial intelligence (AI) solutions. We ground our analysis of the impact of xenophobia by first identifying distinct types of xenophobic harms, and then applying this framework across a number of prominent AI application domains, reviewing the potential interplay between AI and xenophobia on social media and recommendation systems, healthcare, immigration, employment, as well as biases in large pre-trained models. These help inform our recommendations towards an inclusive, xenophilic design of future AI systems.
【8】 Population Template-Based Brain Graph Augmentation for Improving One-Shot Learning Classification
Authors: Oben Özgür, Arwa Rekik, Islem Rekik
Link: https://arxiv.org/abs/2212.07790
Abstract: The challenges of collecting medical data on neurological disorder diagnosis problems paved the way for learning methods with scarce number of samples. Due to this reason, one-shot learning still remains one of the most challenging and trending concepts of deep learning as it proposes to simulate the human-like learning approach in classification problems. Previous studies have focused on generating more accurate fingerprints of the population using graph neural networks (GNNs) with connectomic brain graph data. Thereby, generated population fingerprints named connectional brain template (CBTs) enabled detecting discriminative bio-markers of the population on classification tasks. However, the reverse problem of data augmentation from single graph data representing brain connectivity has never been tackled before. In this paper, we propose an augmentation pipeline in order to provide improved metrics on our binary classification problem. Divergently from the previous studies, we examine augmentation from a single population template by utilizing graph-based generative adversarial network (gGAN) architecture for a classification problem. We benchmarked our proposed solution on AD/LMCI dataset consisting of brain connectomes with Alzheimer's Disease (AD) and Late Mild Cognitive Impairment (LMCI). In order to evaluate our model's generalizability, we used cross-validation strategy and randomly sampled the folds multiple times. Our results on classification not only provided better accuracy when augmented data generated from one sample is introduced, but yields more balanced results on other metrics as well.
【9】 A New Deep Boosted CNN and Ensemble Learning based IoT Malware Detection
Authors: Saddam Hussain Khan, Wasi Ullah (Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat, Pakistan)
Link: https://arxiv.org/abs/2212.08008
Abstract: Security issues are threatened in various types of networks, especially in the Internet of Things (IoT) environment that requires early detection. IoT is the network of real-time devices like home automation systems and can be controlled by open-source android devices, which can be an open ground for attackers. Attackers can access the network, initiate a different kind of security breach, and compromises network control. Therefore, timely detecting the increasing number of sophisticated malware attacks is the challenge to ensure the credibility of network protection. In this regard, we have developed a new malware detection framework, Deep Squeezed-Boosted and Ensemble Learning (DSBEL), comprised of novel Squeezed-Boosted Boundary-Region Split-Transform-Merge (SB-BR-STM) CNN and ensemble learning. The proposed S.T.M. block employs multi-path dilated convolutional, Boundary, and regional operations to capture the homogenous and heterogeneous global malicious patterns. Moreover, diverse feature maps are achieved using transfer learning and multi-path-based squeezing and boosting at initial and final levels to learn minute pattern variations. Finally, the boosted discriminative features are extracted from the developed deep SB-BR-STM CNN and provided to the ensemble classifiers (SVM, M.L.P., and AdaboostM1) to improve the hybrid learning generalization. The performance analysis of the proposed DSBEL framework and SB-BR-STM CNN against the existing techniques have been evaluated by the IOT_Malware dataset on standard performance measures. Evaluation results show progressive performance as 98.50% accuracy, 97.12% F1-Score, 91.91% MCC, 95.97% Recall, and 98.42% Precision. The proposed malware analysis framework is helpful for the timely detection of malicious activity and suggests future strategies.
Papers on Human-Computer Interaction
【1】 DOPAMINE: Doppler frequency and Angle of arrival MINimization of tracking Error for extended reality
Authors: Andrea Bedin, Alexander Marinšek, Shaghayegh Shahcheraghi, Nairy Moghadas Gholian, Liesbet Van der Perre
Link: https://arxiv.org/abs/2212.07764
Abstract: In this paper, we investigate how Joint Communication And Sensing (JCAS) can be used to improve the Inertial Measurement Unit (IMU)-based tracking accuracy of eXtended Reality (XR) Head-Mounted Displays (HMDs). Such tracking is used when optical and InfraRed (IR) tracking is lost, and its lack of accuracy can lead to disruption of the user experience. In particular, we analyze the impact of using doppler-based speed estimation to aid the accelerometer-based position estimation, and Angle of Arrival (AoA) estimation to aid the gyroscope-based orientation estimation. Although less accurate than IMUs for short times in fact, the JCAS based methods require one fewer integration step, making the tracking more sustainable over time. Based on the proposed model, we conclude that at least in the case of the position estimate, introducing JCAS can make long lasting optical/IR tracking losses more sustainable.
【2】 Improving Developers' Understanding of Regex Denial of Service Tools through Anti-Patterns and Fix Strategies
Authors: Sk Adnan Hassan, Zainab Aamir, Dongyoon Lee, James C. Davis, Francisco Servant
Link: https://arxiv.org/abs/2212.07979
Abstract: Regular expressions are used for diverse purposes, including input validation and firewalls. Unfortunately, they can also lead to a security vulnerability called ReDoS (Regular Expression Denial of Service), caused by a super-linear worst-case execution time during regex matching. Due to the severity and prevalence of ReDoS, past work proposed automatic tools to detect and fix regexes. Although these tools were evaluated in automatic experiments, their usability has not yet been studied; usability has not been a focus of prior work. Our insight is that the usability of existing tools to detect and fix regexes will improve if we complement them with anti-patterns and fix strategies of vulnerable regexes. We developed novel anti-patterns for vulnerable regexes, and a collection of fix strategies to fix them. We derived our anti-patterns and fix strategies from a novel theory of regex infinite ambiguity -- a necessary condition for regexes vulnerable to ReDoS. We proved the soundness and completeness of our theory. We evaluated the effectiveness of our anti-patterns, both in an automatic experiment and when applied manually. Then, we evaluated how much our anti-patterns and fix strategies improve developers' understanding of the outcome of detection and fixing tools. Our evaluation found that our anti-patterns were effective over a large dataset of regexes (N=209,188): 100% precision and 99% recall, improving the state of the art 50% precision and 87% recall. Our anti-patterns were also more effective than the state of the art when applied manually (N=20): 100% developers applied them effectively vs. 50% for the state of the art. Finally, our anti-patterns and fix strategies increased developers' understanding using automatic tools (N=9): from median "Very weakly" to median "Strongly" when detecting vulnerabilities, and from median "Very weakly" to median "Very strongly" when fixing them.
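The infinite-ambiguity condition the paper builds on can be illustrated with a classic textbook example (my own toy illustration, not one of the paper's actual anti-patterns): a regex with nested quantifiers, whose ambiguity triggers catastrophic backtracking on non-matching input, alongside an unambiguous rewrite that accepts the same language in linear time.

```python
# Toy illustration of a ReDoS anti-pattern and its fix.
# Not taken from the paper; a standard textbook example.
import re

# Vulnerable: "(a+)+" can split a run of a's in exponentially many
# ways, so a non-matching input like "a" * 40 + "b" forces catastrophic
# backtracking in a backtracking engine. We deliberately never run it
# on such input here.
vulnerable = re.compile(r"^(a+)+$")

# Fix strategy: remove the ambiguity. "(a+)+" matches exactly the
# language of "a+", which has a single way to match any input.
safe = re.compile(r"^a+$")

# Both accept the same language; only the safe one is ReDoS-free.
assert safe.fullmatch("aaaa")
assert safe.fullmatch("aaab") is None
assert bool(vulnerable.fullmatch("aaa")) == bool(safe.fullmatch("aaa"))
print("patterns agree on benign inputs")
```

The fix pattern, rewriting an ambiguous regex into an equivalent unambiguous one, is the general shape of the strategies the paper evaluates with developers.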
【3】 Beyond the Metaverse: XV (eXtended meta/uni/Verse)
Authors: Steve Mann, Yu Yuan, Tom Furness, Joseph Paradiso, Thomas Coughlin
Link: https://arxiv.org/abs/2212.07960
Abstract: We propose the term and concept XV (eXtended meta/omni/uni/Verse) as an alternative to, and generalization of, the shared/social virtual reality widely known as "metaverse". XV is shared/social XR. We, and many others, use XR (eXtended Reality) as a broad umbrella term and concept to encompass all the other realities, where X is an "anything" variable, like in mathematics, to denote any reality, X ∈ {physical, virtual, augmented, …} reality. Therefore XV inherits this generality from XR. We begin with a very simple organized taxonomy of all these realities in terms of two simple building blocks: (1) physical reality (PR) as made of "atoms", and (2) virtual reality (VR) as made of "bits". Next we introduce XV as combining all these realities with extended society as a three-dimensional space and taxonomy of (1) "atoms" (physical reality), (2) "bits" (virtuality), and (3) "genes" (sociality). Thus those working in the liminal space between Virtual Reality (VR), Augmented Reality (AR), metaverse, and their various extensions, can describe their work and research as existing in the new field of XV. XV includes the metaverse along with extensions of reality itself like shared seeing in the infrared, ultraviolet, and shared seeing of electromagnetic radio waves, sound waves, and electric currents in motors. For example, workers in a mechanical room can look at a pump and see a superimposed time-varying waveform of the actual rotating magnetic field inside its motor, in real time, while sharing this vision across multiple sites.
【4】 Synthesizing Research on Programmers' Mental Models of Programs, Tasks and Concepts -- a Systematic Literature Review
Authors: Ava Heinonen, Bettina Lehtelä, Arto Hellas, Fabian Fagerholm
Link: https://arxiv.org/abs/2212.07763
Abstract: Programmers' mental models represent their knowledge and understanding of programs, programming concepts, and programming in general. They guide programmers' work and influence their task performance. Understanding mental models is important for designing work systems and practices that support programmers. Although the importance of programmers' mental models is widely acknowledged, research on mental models has decreased over the years. The results are scattered and do not take into account recent developments in software engineering. We analyze the state of research into programmers' mental models and provide an overview of existing research. We connect results on mental models from different strands of research to form a more unified knowledge base on the topic. We conducted a systematic literature review on programmers' mental models. We analyzed literature addressing mental models in different contexts, including mental models of programs, programming tasks, and programming concepts. Using nine search engines, we found 3678 articles (excluding duplicates). 84 were selected for further analysis. Using the snowballing technique, we obtained a final result set containing 187 articles. We show that the literature shares a kernel of shared understanding of mental models. By collating and connecting results on mental models from different fields of research, we uncovered some well-researched aspects, which we argue are fundamental characteristics of programmers' mental models. This work provides a basis for future work on mental models. The research field on programmers' mental models still faces many challenges rising from a lack of a shared knowledge base and poorly defined constructs. We created a unified knowledge base on the topic. We also point to directions for future studies. In particular, we call for studies that examine programmers working with modern practices and tools.
【5】 Tensions Between the Proxies of Human Values in AI
Authors: Teresa Datta, Daniel Nissani, Max Cembalest, Akash Khanna, Haley Massa, John P. Dickerson
Link: https://arxiv.org/abs/2212.07508
Abstract: Motivated by mitigating potentially harmful impacts of technologies, the AI community has formulated and accepted mathematical definitions for certain pillars of accountability: e.g. privacy, fairness, and model transparency. Yet, we argue this is fundamentally misguided because these definitions are imperfect, siloed constructions of the human values they hope to proxy, while giving the guise that those values are sufficiently embedded in our technologies. Under popularized methods, tensions arise when practitioners attempt to achieve each pillar of fairness, privacy, and transparency in isolation or simultaneously. In this position paper, we push for redirection. We argue that the AI community needs to consider all the consequences of choosing certain formulations of these pillars -- not just the technical incompatibilities, but also the effects within the context of deployment. We point towards sociotechnical research for frameworks for the latter, but push for broader efforts into implementing these in practice.
【1】 2-hop Neighbor Class Similarity (2NCS): A graph structural metric indicative of graph neural network performance
Authors: Andrea Cavallo, Claas Grohnfeldt, Michele Russo, Giulio Lovisotto, Luca Vassio
Link: https://arxiv.org/abs/2212.13202
Abstract: Graph Neural Networks (GNNs) achieve state-of-the-art performance on graph-structured data across numerous domains. Their underlying ability to represent nodes as summaries of their vicinities has proven effective for homophilous graphs in particular, in which same-type nodes tend to connect. On heterophilous graphs, in which different-type nodes are likely connected, GNNs perform less consistently, as neighborhood information might be less representative or even misleading. On the other hand, GNN performance is not inferior on all heterophilous graphs, and there is a lack of understanding of what other graph properties affect GNN performance. In this work, we highlight the limitations of the widely used homophily ratio and the recent Cross-Class Neighborhood Similarity (CCNS) metric in estimating GNN performance. To overcome these limitations, we introduce 2-hop Neighbor Class Similarity (2NCS), a new quantitative graph structural property that correlates with GNN performance more strongly and consistently than alternative metrics. 2NCS considers two-hop neighborhoods as a theoretically derived consequence of the two-step label propagation process governing GCN's training-inference process. Experiments on one synthetic and eight real-world graph datasets confirm consistent improvements over existing metrics in estimating the accuracy of GCN- and GAT-based architectures on the node classification task.
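The homophily ratio whose limitations this paper highlights is straightforward to compute: it is the fraction of edges whose endpoints share a class. The sketch below uses a toy graph and labels of my own choosing, purely to make the baseline metric concrete:

```python
# Edge homophily ratio: fraction of edges joining same-class nodes.
# Toy data; illustrates the baseline metric that 2NCS aims to improve on.

def homophily_ratio(edges, node_labels):
    """Fraction of edges (u, v) with node_labels[u] == node_labels[v]."""
    same = sum(1 for u, v in edges if node_labels[u] == node_labels[v])
    return same / len(edges)

node_labels = {0: "A", 1: "A", 2: "B", 3: "B"}
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]  # 2 intra-class, 2 cross-class
print(homophily_ratio(edges, node_labels))  # 0.5
```

A ratio near 1 indicates a homophilous graph (where neighborhood-averaging GNNs tend to do well) and a ratio near 0 a heterophilous one; the paper's point is that this single number predicts GNN accuracy less reliably than a metric, like 2NCS, that also looks at two-hop structure.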
【2】Efficient Graph Reconstruction and Representation Using Augmented Persistence Diagrams标题:利用增强持久图进行高效的图重建与表示作者:Brittany Terese Fasy, Samuel Micka, David L. Millman, Anna Schenfisch, Lucia Williams链接:https://arxiv.org/abs/2212.13206
摘要:持久同调是一种通过量化同调特征来概括数据形状的工具。当数据是 R^d 中的一个对象时,(增强)持久同调变换((A)PHT)是一族以环境空间中的方向为参数的持久图。理解 PHT 的一个最新进展是利用重建框架寻找有限的一组方向来忠实地表示形状,这一结果在理论和实践上都很有意义。在本文中,我们改进了这一结果,提出了一种改进的图(以及更一般的一维骨架)重建算法。改进之处在于边的重建:我们使用了径向二分(多路)搜索。所采用的二分搜索利用了边可以相对于参考平面按径向排序这一图特有的性质。
Persistent homology is a tool that can be employed to summarize the shape of data by quantifying homological features. When the data is an object in Rd, the (augmented) persistent homology transform ((A)PHT) is a family of persistence diagrams, parameterized by directions in the ambient space. A recent advance in understanding the PHT used the framework of reconstruction in order to find a finite set of directions to faithfully represent the shape, a result that is of both theoretical and practical interest. In this paper, we improve upon this result and present an improved algorithm for graph -- and, more generally one-skeleton -- reconstruction. The improvement comes in reconstructing the edges, where we use a radial binary (multi-)search. The binary search employed takes advantage of the fact that the edges can be ordered radially with respect to a reference plane, a feature unique to graphs.
【3】A Combined Synchronization Index for Grassroots Activism on Social Media标题:社交媒体上基层行动的综合同步指数作者:Lynnette Hui Xian Ng, Kathleen M. Carley链接:https://arxiv.org/abs/2212.13221
摘要:社交媒体赋予公民发声的渠道,催生了基层集体行动:用户齐心协力传播网络叙事,甚至开展线下抗议。有时,这些集体行动会得到来自机器人行为者的无机同步的助力。因此,识别社交媒体上新兴话语的同步性以及对话中有机/无机活动的迹象十分重要,这为分析事件、评估线下抗议和暴力发生的可能性提供了一种途径。在本研究中,我们在以往对社交媒体同步活动的定义(用户同时行动)的基础上,开发了一个综合同步指数(CSI),采用分层方法衡量用户同步性。我们将该指数应用于 Twitter 上六个政治和社会行动事件,并分析了三种行动类型:基于标签、URL 和 @mentions 的同步性。CSI 对一个事件中所有行动类型的同步性给出整体量化,从而可以对六个事件的同步性程度进行排序。在大多数事件中,人类用户的同步性得分高于机器人用户;与其他配对(即机器人-机器人和人类-人类)相比,机器人-人类配对在所有事件中表现出最多的同步活动。我们进一步依据 CSI-网络得分与网络中心性指标之间的一致与不一致,来观察有机/无机同步的存在。希望这项工作有助于以整体的方式考察社交媒体内的同步行动。
Social media has provided a citizen voice, giving rise to grassroots collective action, where users deploy a concerted effort to disseminate online narratives and even carry out offline protests. Sometimes these collective actions are aided by inorganic synchronization, which arises from bot actors. It is thus important to identify the synchronicity of emerging discourse on social media and the indications of organic/inorganic activity within the conversations. This provides a way of profiling an event for the possibility of offline protests and violence. In this study, we build on past definitions of synchronous activity on social media -- simultaneous user action -- and develop a Combined Synchronization Index (CSI) which adopts a hierarchical approach in measuring user synchronicity. We apply this index on six political and social activism events on Twitter and analyzed three action types: synchronicity by hashtag, URL and @mentions. The CSI provides an overall quantification of synchronization across all action types within an event, which allows ranking of a spectrum of synchronicity across the six events. Human users have higher synchronous scores than bot users in most events; and bot-human pairs exhibit the most synchronized activities across all events as compared to other pairs (i.e., bot-bot and human-human). We further rely on the harmony and dissonance of CSI-Network scores with network centrality metrics to observe the presence of organic/inorganic synchronization. We hope this work aids in investigating synchronized action within social media in a collective manner.
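摘要中"用户同时行动"这一同步性定义,可以用一个极简的统计草图来说明:按时间窗和标签分桶,统计同一窗口内使用相同标签的不同用户对数。这只是示意性的简化定义,并非论文 CSI 的分层计算方法;posts 数据为虚构示例。

```python
from collections import defaultdict

def sync_pairs(posts, window):
    """简化的同步性统计:同一时间窗内使用相同标签的不同用户对数。
    posts 为 (用户, 时间戳, 标签) 三元组列表;非论文 CSI 的精确定义。"""
    buckets = defaultdict(set)
    for user, t, tag in posts:
        buckets[(t // window, tag)].add(user)  # 按 (时间窗编号, 标签) 分桶
    # 每个桶内 k 个用户贡献 C(k, 2) 个同步用户对
    return sum(len(users) * (len(users) - 1) // 2 for users in buckets.values())

# 虚构示例:u1 与 u2 在同一个 10 秒窗口内发了相同标签
posts = [("u1", 0, "#a"), ("u2", 3, "#a"), ("u3", 12, "#a"), ("u1", 13, "#b")]
```

窗口取 10 时只有 u1-u2 一对同步;窗口放宽到 100 时,三个用户的 #a 帖子全部落入同一窗口,同步对数随之增加,这也提示窗口大小本身是此类指标的敏感参数。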
【4】Saliency-Augmented Memory Completion for Continual Learning标题:面向持续学习的显著性增强记忆补全作者:Guangji Bai, Chen Ling, Yuyang Gao, Liang Zhao链接:https://arxiv.org/abs/2212.13242
摘要:持续学习被认为是迈向下一代人工智能的关键一步。在各类方法中,基于重放的方法维护并重放先前样本的小型情景记忆,是对抗灾难性遗忘最成功的策略之一。然而,在记忆有界而任务无界的情况下,遗忘不可避免,因此如何遗忘是持续学习必须解决的问题。于是,除了简单地避免灾难性遗忘之外,一个尚未充分探索的问题是:如何在保证人类记忆优点的同时合理地遗忘,这些优点包括 1. 存储效率,2. 可泛化性,以及 3. 一定的可解释性。为同时实现这些目标,本文受认知神经科学中关于记忆补全与分离的最新发现启发,提出了一个新的用于持续学习的显著性增强记忆补全框架。具体而言,我们创新性地提出通过显著性图提取和记忆编码,将图像中对任务最重要的部分存入情景记忆。学习新任务时,记忆中的先前数据由一个自适应数据生成模块进行修复补全(inpainting),该模块的灵感来自人类补全情景记忆的方式。该模块的参数在所有任务间共享,并可与持续学习分类器以双层优化的方式联合训练。在多个持续学习和图像分类基准上的大量实验证明了所提方法的有效性和效率。
Continual Learning is considered a key step toward next-generation Artificial Intelligence. Among various methods, replay-based approaches that maintain and replay a small episodic memory of previous samples are one of the most successful strategies against catastrophic forgetting. However, since forgetting is inevitable given bounded memory and unbounded tasks, how to forget is a problem continual learning must address. Therefore, beyond simply avoiding catastrophic forgetting, an under-explored issue is how to reasonably forget while ensuring the merits of human memory, including 1. storage efficiency, 2. generalizability, and 3. some interpretability. To achieve these simultaneously, our paper proposes a new saliency-augmented memory completion framework for continual learning, inspired by recent discoveries in memory completion separation in cognitive neuroscience. Specifically, we innovatively propose to store the part of the image most important to the tasks in episodic memory by saliency map extraction and memory encoding. When learning new tasks, previous data from memory are inpainted by an adaptive data generation module, which is inspired by how humans complete episodic memory. The module's parameters are shared across all tasks and it can be jointly trained with a continual learning classifier as bilevel optimization. Extensive experiments on several continual learning and image classification benchmarks demonstrate the proposed method's effectiveness and efficiency.
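摘要提到基于重放的方法"维护并重放先前样本的小型情景记忆"。作为背景示意,下面用蓄水池采样给出这类情景记忆缓冲区的一个常见极简实现;这并非论文提出的显著性编码与修复补全模块,容量、接口均为示意性假设。

```python
import random

class EpisodicMemory:
    """基于重放的持续学习中常见的情景记忆缓冲区(蓄水池采样,背景示意)。"""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0                     # 已观察到的样本总数
        self.rng = random.Random(seed)

    def add(self, sample):
        """蓄水池采样:保证缓冲区始终是已见样本流的均匀子样本。"""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample   # 以 capacity/seen 的概率替换旧样本

    def replay(self, k):
        """取 k 个记忆样本,与新任务数据混合训练以对抗灾难性遗忘。"""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

论文的出发点正是这种缓冲区容量有界:与其整图存储,不如只存显著性图选出的关键区域,再在重放时补全。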
【5】A Posteriori error estimates for Darcy-Forchheimer's problem coupled with the convection-diffusion-reaction equation标题:耦合对流-扩散-反应方程的Darcy-Forchheimer问题的后验误差估计作者:Toni Sayah, Georges Semaan, Faouzi Triki链接:https://arxiv.org/abs/2212.13247
摘要:在这项工作中,我们为与 Darcy-Forchheimer 问题耦合的对流-扩散-反应方程推导了后验误差估计,二者通过一个依赖于流体浓度的非线性外部源相耦合。我们引入了与该问题相关的变分形式,并用有限元方法对其离散化。我们利用两类可计算的误差指示子证明了最优的后验误差估计:第一类与线性化相关,第二类与离散化相关。然后,我们在精确解的附加正则性假设下给出了误差的上界和下界。最后,通过数值计算展示了所得误差指示子的有效性。
In this work we derive a posteriori error estimates for the convection-diffusion-reaction equation coupled with the Darcy-Forchheimer problem by a nonlinear external source depending on the concentration of the fluid. We introduce the variational formulation associated to the problem, and discretize it by using the finite element method. We prove optimal a posteriori errors with two types of calculable error indicators. The first one is linked to the linearization and the second one to the discretization. Then we find upper and lower error bounds under additional regularity assumptions on the exact solutions. Finally, numerical computations are performed to show the effectiveness of the obtained error indicators.
【6】Characterizing and Modeling Control-Plane Traffic for Mobile Core Network标题:移动核心网控制层流量的特征和建模作者:Jiayi Meng, Jingqi Huang, Y. Charlie Hu, Yaron Koral, Xiaojun Lin, Muhammad Shahbaz, Abhigyan Sharma链接:https://arxiv.org/abs/2212.13248
摘要:在本文中,我们进行了据我们所知首次对控制面流量的深入特征分析,使用了在一个真实 LTE 移动核心网(MCN)中采样得到的 37325 个 UE 的控制面踪迹。我们的分析表明,不同 UE 的控制事件在设备类型和一天中的时段上表现出显著的多样性。其次,我们研究了被广泛用于互联网流量建模的传统概率分布能否对单个 UE 产生的控制面流量建模。分析表明,控制事件的到达间隔时间,以及蜂窝网络中 UE 在 EMM 和 ECM 状态的逗留时间,都无法用泊松过程或其他传统概率分布建模。我们进一步指出,这些模型无法刻画控制面流量的原因在于其突发性更高、累积分布的尾部比传统模型更长。第三,我们提出了一个两级分层状态机流量模型,作用于由基于半马尔可夫模型的自适应聚类方案得到的 UE 簇,以刻画移动网络控制面流量的关键特征,特别是每个 UE 所产生事件之间的依赖性,以及 UE 之间在设备类型和时段上的多样性。最后,我们展示了当可用于训练的 5G UE 大规模控制面踪迹就绪时,我们的模型如何能够方便地从 LTE 调整到 5G,以支持 5G 控制面流量建模。我们开发的 LTE/5G 网络控制面流量生成器已向研究界开源,以支持高性能 MCN 架构设计的研发。
In this paper, we carry out what is, to our knowledge, the first in-depth characterization of control-plane traffic, using a real-world control-plane trace for 37,325 UEs sampled at a real-world LTE Mobile Core Network (MCN). Our analysis shows that control events exhibit significant diversity in device types and time-of-day among UEs. Second, we study whether traditional probability distributions that have been widely adopted for modeling Internet traffic can model the control-plane traffic originated from individual UEs. Our analysis shows that the inter-arrival time of the control events as well as the sojourn time in the UE states of EMM and ECM for the cellular network cannot be modeled as Poisson processes or other traditional probability distributions. We further show that the reasons that these models fail to capture the control-plane traffic are due to its higher burstiness and longer tails in the cumulative distribution than the traditional models. Third, we propose a two-level hierarchical state-machine-based traffic model for UE clusters derived from our adaptive clustering scheme based on the Semi-Markov Model to capture key characteristics of mobile network control-plane traffic -- in particular, the dependence among events generated by each UE, and the diversity in device types and time-of-day among UEs. Finally, we show how our model can be easily adjusted from LTE to 5G to support modeling 5G control-plane traffic, when the sizable control-plane trace for 5G UEs becomes available to train the adjusted model. The developed control-plane traffic generator for LTE/5G networks is open-sourced to the research community to support high-performance MCN architecture design R&D.
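摘要指出控制事件的到达间隔无法用泊松过程建模,原因之一是突发性更高。作为参照,下面的草图生成泊松过程的指数分布到达间隔,并计算常用的突发性指标 B=(σ-μ)/(σ+μ):泊松流量的 B 约为 0,高突发性流量趋近 1。此处仅示意被论文否定的基准模型及一个通用的突发性度量,并非论文的分层状态机模型;速率等参数为虚构。

```python
import random
import statistics

def poisson_arrivals(rate, n, seed=0):
    """泊松过程的到达间隔服从参数为 rate 的指数分布。"""
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n)]

def burstiness(intervals):
    """突发性指标 B = (σ - μ) / (σ + μ):泊松过程约为 0,突发流量趋近 1。"""
    mu = statistics.mean(intervals)
    sigma = statistics.pstdev(intervals)
    return (sigma - mu) / (sigma + mu)
```

对真实控制面踪迹计算同一指标,若 B 明显偏离 0、且经验分布尾部比指数分布更长,就落入摘要所述"传统模型失效"的情形。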
【7】Robust computation of optimal transport by β-potential regularization标题:通过β-势正则化稳健计算最优传输作者:Shintaro Nakamura, Han Bao, Masashi Sugiyama链接:https://arxiv.org/abs/2212.13251
摘要:最优传输(OT)已成为机器学习领域中衡量概率分布之间差异的常用工具。例如,OT 是一种流行的损失函数,用于量化经验分布与参数模型之间的差异。近来,人们普遍使用熵惩罚项和著名的 Sinkhorn 算法,以计算高效的方式近似原始 OT。然而,由于 Sinkhorn 算法执行的是与 Kullback-Leibler 散度相关的投影,它往往容易受到离群值的影响。为克服这一问题,我们提出用与所谓 β-散度(源自稳健统计)相关的 β-势项来正则化 OT。理论分析表明,β-势可以防止质量被传输到离群值上。我们通过实验证明,即使存在离群值,用我们的算法计算出的传输矩阵也有助于稳健地估计概率分布。此外,我们提出的方法还能成功地从受污染的数据集中检测出离群值。
Optimal transport (OT) has become a widely used tool in the machine learning field to measure the discrepancy between probability distributions. For instance, OT is a popular loss function that quantifies the discrepancy between an empirical distribution and a parametric model. Recently, an entropic penalty term and the celebrated Sinkhorn algorithm have been commonly used to approximate the original OT in a computationally efficient way. However, since the Sinkhorn algorithm runs a projection associated with the Kullback-Leibler divergence, it is often vulnerable to outliers. To overcome this problem, we propose regularizing OT with the β-potential term associated with the so-called β-divergence, which was developed in robust statistics. Our theoretical analysis reveals that the β-potential can prevent the mass from being transported to outliers. We experimentally demonstrate that the transport matrix computed with our algorithm helps estimate a probability distribution robustly even in the presence of outliers. In addition, our proposed method can successfully detect outliers from a contaminated dataset.
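摘要中作为背景提到的 Sinkhorn 算法本身是标准技术,可以用几行纯 Python 勾勒:对 Gibbs 核交替做行/列缩放,使传输计划的边际匹配给定分布。注意这里实现的是经典的熵正则化 Sinkhorn,而非论文提出的 β-势正则化;示例中的分布和代价矩阵均为虚构。

```python
import math

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """经典熵正则化最优传输的 Sinkhorn 迭代(背景算法示意)。
    a, b 为两个概率向量,C 为代价矩阵,eps 为熵正则化强度。"""
    K = [[math.exp(-c / eps) for c in row] for row in C]  # Gibbs 核 K_ij = exp(-C_ij/eps)
    n, m = len(a), len(b)
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(n_iter):
        # 交替缩放,使传输计划的列边际匹配 b、行边际匹配 a
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]  # P_ij = u_i K_ij v_j

# 虚构示例:两点对两点,对角代价为 0,质量应沿对角线传输
a, b = [0.5, 0.5], [0.5, 0.5]
C = [[0.0, 1.0], [1.0, 0.0]]
P = sinkhorn(a, b, C)
```

论文指出的弱点正出在这种交替缩放对应的 KL 投影上:一旦 a、b 中混入离群点,质量也会被强行运到离群点处,这是 β-势正则化要缓解的问题。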
【8】DSI2I: Dense Style for Unpaired Image-to-Image Translation标题:DSI2I: 非配对图像到图像翻译的密集风格作者:Baran Ozaydin, Tong Zhang, Sabine Susstrunk, Mathieu Salzmann链接:https://arxiv.org/abs/2212.13253
摘要:基于非配对范例的图像到图像翻译(UEI2I)旨在在没有真值(ground-truth)输入-翻译对的情况下,将源图像翻译到目标图像域,并带有目标范例图像的风格。现有的 UEI2I 方法要么用一个全局的、图像级的特征向量表示风格,要么为每个物体实例/类别各用一个向量,但后者需要了解场景语义。相比之下,我们提出将风格表示为密集特征图,从而允许对源图像进行更细粒度的迁移,而无需任何外部语义信息。然后,我们依靠感知损失和对抗损失来解耦我们的密集风格表示与内容表示,并利用无监督的跨域语义对应关系,将范例风格扭曲(warp)到源内容上。我们在两个数据集上,使用标准指标以及一个新的局部化风格指标(按类别衡量风格相似性),证明了方法的有效性。结果表明,与最先进的方法相比,我们的方法产生的翻译更加多样、更接近范例,同时仍保留了源内容。
Unpaired exemplar-based image-to-image (UEI2I) translation aims to translate a source image to a target image domain with the style of a target image exemplar, without ground-truth input-translation pairs. Existing UEI2I methods represent style using either a global, image-level feature vector, or one vector per object instance/class but requiring knowledge of the scene semantics. Here, by contrast, we propose to represent style as a dense feature map, allowing for a finer-grained transfer to the source image without requiring any external semantic information. We then rely on perceptual and adversarial losses to disentangle our dense style and content representations, and exploit unsupervised cross-domain semantic correspondences to warp the exemplar style to the source content. We demonstrate the effectiveness of our method on two datasets using standard metrics together with a new localized style metric measuring style similarity in a class-wise manner. Our results evidence that the translations produced by our approach are more diverse and closer to the exemplars than those of the state-of-the-art methods while nonetheless preserving the source content.
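摘要所批评的"全局、图像级特征向量"表示风格,可用 AdaIN 式的按通道均值/方差对齐来示意:把内容特征的统计量换成风格特征的统计量。下面是一个一维简化草图,仅说明这种全局统计量匹配的背景思路,并非论文的密集风格方法;输入数据为虚构。

```python
import statistics

def adain(content, style, eps=1e-5):
    """AdaIN 式统计量对齐(背景示意):content 特征被归一化后,
    套上 style 特征的均值与标准差。这里用一维列表模拟单个通道的特征。"""
    mc, sc = statistics.mean(content), statistics.pstdev(content) + eps
    ms, ss = statistics.mean(style), statistics.pstdev(style) + eps
    return [(x - mc) / sc * ss + ms for x in content]

# 虚构示例:内容特征被整体平移、缩放到风格特征的统计量上
out = adain([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
```

这种做法对整幅图只用一组统计量,正是摘要所说"全局、图像级"表示的局限;论文以密集特征图替代,让不同空间位置携带不同的风格。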
【9】Improved Laguerre Spectral Methods with Less Round-off Errors and Better Stability标题:改进的拉盖尔谱方法:更小的舍入误差与更好的稳定性作者:Shenghe Huang, Haijun Yu链接:https://arxiv.org/abs/2212.13255
摘要:拉盖尔多项式是定义在正半轴上、关于权函数 e^{-x} 正交的多项式,在科学与工程计算中有广泛应用。然而,高次拉盖尔多项式的指数增长使其难以应用于需要使用大量拉盖尔基的复杂系统。本文引入改进的三项递推公式,以减小生成广义拉盖尔多项式和拉盖尔函数时的舍入误差,并避免上溢和下溢问题。我们将改进的拉盖尔方法应用于求解定义在半轴上的椭圆方程:该应用使用了一千多个拉盖尔基,同时达到了接近机器精度的准确度。我们还研究了拉盖尔方法的最优缩放因子,发现在拉盖尔方法比映射雅可比方法收敛更快的两种情形下,它与求积点数无关。
Laguerre polynomials are orthogonal polynomials defined on the positive half line with respect to the weight e^{-x}. They have wide applications in scientific and engineering computations. However, the exponential growth of Laguerre polynomials of high degree makes it hard to apply them to complicated systems that need to use large numbers of Laguerre bases. In this paper, we introduce a modified three-term recurrence formula to reduce the round-off error and to avoid overflow and underflow issues in generating generalized Laguerre polynomials and Laguerre functions. We apply the improved Laguerre methods to solve an elliptic equation defined on the half line. More than one thousand Laguerre bases are used in this application and meanwhile accuracy close to machine precision is achieved. The optimal scaling factor of Laguerre methods is studied and found to be independent of the number of quadrature points in two cases where Laguerre methods have better convergence speeds than mapped Jacobi methods.
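论文改进的是生成拉盖尔多项式的三项递推的数值稳定性。作为背景,下面直接实现经典三项递推 (n+1)L_{n+1}(x) = (2n+1-x)L_n(x) - nL_{n-1}(x);论文改进后的递推公式未在摘要中给出,此处仅示意被改进的经典形式。

```python
def laguerre(n, x):
    """经典三项递推计算拉盖尔多项式 L_n(x):
    L_0 = 1, L_1 = 1 - x, (k+1)L_{k+1} = (2k+1-x)L_k - k L_{k-1}。
    论文改进的是此递推在高次时的舍入误差与上/下溢,这里只给出经典形式。"""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 - x) * cur - k * prev) / (k + 1)
    return cur
```

可以用已知闭式核对:L_2(x) = (x^2 - 4x + 2)/2,且对任意 n 有 L_n(0) = 1;而当 n 很大、x 在大区间取值时,这种朴素递推正会遇到摘要所述的溢出与精度问题。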
【10】Codes for Load Balancing in TCAMs: Size Analysis标题:TCAM中负载平衡的代码:尺寸分析作者:Yaniv Sadeh, Ori Rottenstreich, Haim Kaplan链接:https://arxiv.org/abs/2212.13256
摘要:流量分割是网络中的一项必要功能,例如用于在路径或服务器间做负载均衡,或由源的访问限制决定。服务器的容量(或具有特定访问限制的用户数量)决定了流量应被分割成的各部分的大小。最近的一种方法在三态内容可寻址存储器(TCAM)内实现流量分割,而 TCAM 在交换机中通常是现成的。由于 TCAM 功耗高,且往往还要用于分类和路由等其他任务,减少分配给这一任务的内存量非常重要。近期工作提出了在最长前缀匹配(LPM)模型中计算给定划分的最小实现的算法。本文分析了这种最小表示的性质,并证明了其大小的下界和上界。上界对一般 TCAM 成立,我们还为一般 TCAM 证明了一个额外的下界。我们还分析了均匀随机有序划分下表示的期望大小,证明随机划分的期望表示大小至少是最坏情况划分大小的一半,且与部分数量以及地址空间大小的对数成线性关系。
Traffic splitting is a required functionality in networks, for example for load balancing over paths or servers, or by the source's access restrictions. The capacities of the servers (or the number of users with particular access restrictions) determine the sizes of the parts into which traffic should be split. A recent approach implements traffic splitting within the ternary content addressable memory (TCAM), which is often available in switches. It is important to reduce the amount of memory allocated for this task since TCAMs are power consuming and are often also required for other tasks such as classification and routing. Recent works suggested algorithms to compute a smallest implementation of a given partition in the longest prefix match (LPM) model. In this paper we analyze properties of such minimal representations and prove lower and upper bounds on their size. The upper bounds hold for general TCAMs, and we also prove an additional lower-bound for general TCAMs. We also analyze the expected size of a representation, for uniformly random ordered partitions. We show that the expected representation size of a random partition is at least half the size for the worst-case partition, and is linear in the number of parts and in the logarithm of the size of the address space.
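摘要围绕最长前缀匹配(LPM)模型中划分的表示展开。下面用纯 Python 勾勒 LPM 的基本语义:在若干 (前缀值, 前缀长度, 动作) 规则中为地址选出前缀最长的匹配,并用两条规则把一个 8 位地址空间按 3/4 与 1/4 分给两台服务器作示例。规则和服务器名均为虚构,这只是 LPM 语义示意,并非论文的最小表示算法。

```python
def lpm_match(rules, addr, width=8):
    """最长前缀匹配:rules 为 (前缀值, 前缀长度, 动作) 列表,
    返回与 addr 匹配的前缀最长的规则的动作。"""
    best = None
    for value, length, action in rules:
        # 比较地址与规则的高 length 位是否一致(length=0 表示默认规则)
        if addr >> (width - length) == value >> (width - length):
            if best is None or length > best[0]:
                best = (length, action)
    return best[1] if best else None

# 虚构示例:默认规则把全部流量给 S1,前缀 11* 覆盖的 1/4 地址空间改派给 S2,
# 于是流量按 3/4 : 1/4 在两台服务器间分割
rules = [
    (0b00000000, 0, "S1"),
    (0b11000000, 2, "S2"),
]
```

论文研究的正是这类规则表的大小:给定一个目标划分(各服务器应得的比例),在 LPM/TCAM 模型下最少需要多少条规则才能实现它。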
【11】Prototype-guided Cross-task Knowledge Distillation for Large-scale Models
标题:原型引导的大规模模型跨任务知识蒸馏作者:Deng Li, Aming Wu, Yahong Han, Qi Tian
链接:https://arxiv.org/abs/2212.13180
摘要:最近,大规模预训练模型在许多任务中展现出优势。然而,由于巨大的计算复杂度和存储需求,将大规模模型应用于真实场景颇具挑战。一个常见的解决方案是知识蒸馏:将大规模模型视为教师模型,帮助训练一个小型学生模型以获得有竞争力的性能。跨任务知识蒸馏进一步扩大了大规模预训练模型的应用场景。现有的知识蒸馏工作主要直接模仿教师模型的最终预测或中间层,它们代表的是全局层面的、特定于任务的特征。为缓解不同标签空间带来的约束,捕捉不变的内在局部物体特征(如牛和马的腿与尾巴的形状特征)起着关键作用。考虑到真实场景任务的复杂性和多变性,我们提出了一种原型引导的跨任务知识蒸馏(ProC-KD)方法,将大规模教师网络的内在局部级物体知识迁移到各种任务场景中。首先,为了在跨任务场景中更好地迁移教师模型中的泛化知识,我们提出了一个原型学习模块,从教师模型中物体的本质特征表示中学习。其次,针对不同的下游任务,我们提出了一个任务自适应特征增强模块,用学到的泛化原型特征增强学生模型的特征,并指导学生模型的训练以提升其泛化能力。在多种视觉任务上的实验结果证明了我们的方法在大规模模型跨任务知识蒸馏场景中的有效性。
Recently, large-scale pre-trained models have shown their advantages in many tasks. However, due to the huge computational complexity and storage requirements, it is challenging to apply the large-scale model to real scenes. A common solution is knowledge distillation which regards the large-scale model as a teacher model and helps to train a small student model to obtain a competitive performance. Cross-task Knowledge distillation expands the application scenarios of the large-scale pre-trained model. Existing knowledge distillation works focus on directly mimicking the final prediction or the intermediate layers of the teacher model, which represent the global-level characteristics and are task-specific. To alleviate the constraint of different label spaces, capturing invariant intrinsic local object characteristics (such as the shape characteristics of the leg and tail of the cattle and horse) plays a key role. Considering the complexity and variability of real scene tasks, we propose a Prototype-guided Cross-task Knowledge Distillation (ProC-KD) approach to transfer the intrinsic local-level object knowledge of a large-scale teacher network to various task scenarios. First, to better transfer the generalized knowledge in the teacher model in cross-task scenarios, we propose a prototype learning module to learn from the essential feature representation of objects in the teacher model. Secondly, for diverse downstream tasks, we propose a task-adaptive feature augmentation module to enhance the features of the student model with the learned generalization prototype features and guide the training of the student model to improve its generalization ability. The experimental results on various visual tasks demonstrate the effectiveness of our approach for large-scale model cross-task knowledge distillation scenes.
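摘要中"直接模仿教师模型的最终预测"对应经典的软标签知识蒸馏。下面给出 Hinton 式温度软化 softmax 加 KL 散度蒸馏损失的极简草图作为背景;这并非论文提出的 ProC-KD 原型模块,logits 为虚构示例。

```python
import math

def softmax(logits, T=1.0):
    """温度为 T 的 softmax:T 越大,输出分布越平滑,携带更多类间相对信息。"""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, T=2.0):
    """经典软标签蒸馏损失:KL(teacher_T || student_T),并按惯例乘以 T^2
    以保持梯度量级(背景技术示意,非 ProC-KD 的原型学习模块)。"""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T
```

这种损失绑定在教师的输出标签空间上;摘要指出的问题正在于此:一旦教师与学生任务的标签空间不同,模仿最终预测就失去依据,所以论文转向蒸馏与任务无关的局部原型特征。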
【12】Advancements in Biometric Technology with Artificial Intelligence
标题:人工智能在生物识别技术方面的进步作者:Lakshmipathi Devaraj, Konark Modi
链接:https://arxiv.org/abs/2212.13187
摘要:认证在医疗系统、银行系统、交通系统以及法律与安全等公共和私营部门的安全保障中发挥着重要作用。生物识别技术近来发展迅速,尤其是在人工智能和身份识别领域。以往,认证过程依赖于密码、身份卡和指纹等安全措施;然而,即便有这些防范手段,盗用身份的行为却日益频繁。为此,生物识别安全应运而生:它基于生物识别系统从人体生理和行为特征中提取的特征来识别个人。生物识别设备被嵌入计算机系统、电子设备、移动电话和其他消费电子产品中,因而面向公众可用。随着欺诈行为的增加,对生物识别电子设备的需求和使用也在增加,从而有可能确认一个人的独特身份。本研究旨在考察生物识别系统在医学和工程学科中的发展,将呈现二手数据的不同观点与视角,强调需要更深入地理解和应用生物识别技术,以促进其在数字时代的发展。研究结果或可激励个人和企业更有效地采用生物识别技术,以降低数据和身份安全风险。
Authentication plays a significant part in dealing with security in public and private sectors such as healthcare systems, banking systems, transportation systems, and law and security. Biometric technology has grown quickly recently, especially in the areas of artificial intelligence and identity. Formerly, the authentication process depended on security measures like passcodes, identity fobs, and fingerprints. Despite these precautions, however, theft has increased in frequency. In response, biometric security was created, in which a person is identified based on features derived from the physiological and behavioral traits of the human body using a biometric system. Biometric technology gadgets are available to the public as they are embedded in computer systems, electronic devices, mobile phones, and other consumer electronics. As fraud increases, the demand for and use of biometric electronic devices has also increased. As a consequence, it may be possible to confirm a person's distinct identity. The goal of this study is to examine developments in biometric systems in the disciplines of medicine and engineering. The study will present the perspectives and different points of view of the secondary data, highlighting the need for more in-depth understanding and application of biometric technology to promote its development in the digital era. The study's findings may inspire people and businesses to more effectively incorporate biometric technologies in order to reduce the risks to data and identity security.