2020-12-06

17 Papers: Baidu Brain's AI Wins International Acclaim

Paper 2:
Hyperbolic Graph Neural Networks. Paper link: ----
Open source:
Papers 1 and 2 share the same underlying idea: both want to combine the benefits of hyperbolic space with the expressive power of graph neural networks; they differ only in the concrete model design. The former studies node classification and link prediction, greatly reducing error rates compared with Euclidean-space methods, and performs especially well on datasets with low hyperbolicity scores (a measure of how similar a graph is to a tree). The latter focuses on graph classification.
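If you want to feel why hyperbolic space suits tree-like graphs, the Poincaré-ball distance these models build on is one line of math. A minimal sketch (my own illustration, not code from either paper):

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball.

    d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    Points near the boundary (norm -> 1) become exponentially far apart,
    which is what makes the ball a good host for tree-like graphs.
    """
    sq = lambda x: sum(xi * xi for xi in x)
    diff = sq([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq(u)) * (1.0 - sq(v))
    return math.acosh(1.0 + 2.0 * diff / denom)

# The same Euclidean gap of 0.2 grows as the points approach the boundary:
print(poincare_distance([0.0, 0.0], [0.2, 0.0]))  # ~0.405
print(poincare_distance([0.7, 0.0], [0.9, 0.0]))  # ~1.210
```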
Paper 3: Multi-relational Poincaré Graph Embeddings
Paper link: https://papers.nips.cc/paper/8696-multi-relational-poincare-graph-embeddings.pdf
Paper 3 applies hyperbolic geometry to knowledge graph embedding in its Multi-relational Poincaré model (MuRP). Intuitively, the object of a correct triple should fall within some hypersphere around the subject, and the relevant decision boundaries are described by learned parameters. The authors optimize the model with Riemannian SGD (heavy-math warning). On the two standard benchmarks WN18RR and FB15k-237, MuRP beats the baselines because it is "more hyperbolic" and better suited to tree-like structures (it would have been nice to compute Gromov hyperbolicity scores as the papers above did). Even more interesting, MuRP needs only 40 dimensions to reach roughly the accuracy that Euclidean models get with 100- or even 200-dimensional vectors! Clearly, hyperbolic models can save embedding dimensions and storage without sacrificing any accuracy.
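"Riemannian SGD" sounds scarier than it is: on the Poincaré ball, the Riemannian gradient is just the Euclidean gradient rescaled by the metric, followed by a projection back inside the ball. A hedged sketch of one update step (the actual MuRP optimizer also involves exponential maps and Möbius operations; this is only the simple retraction-style variant):

```python
def rsgd_step(x, euclidean_grad, lr=0.01, eps=1e-5):
    """One Riemannian-SGD-style step on the Poincare ball (retraction variant).

    The ball's conformal metric factor is 2 / (1 - ||x||^2), so the
    Riemannian gradient is the Euclidean one scaled by (1 - ||x||^2)^2 / 4.
    After the step we pull any escaped point back strictly inside the ball.
    """
    sq_norm = sum(xi * xi for xi in x)
    scale = (1.0 - sq_norm) ** 2 / 4.0
    x_new = [xi - lr * scale * gi for xi, gi in zip(x, euclidean_grad)]
    norm = sum(xi * xi for xi in x_new) ** 0.5
    if norm >= 1.0:  # retract points that left the unit ball
        x_new = [xi / norm * (1.0 - eps) for xi in x_new]
    return x_new

# The metric damps the step: lr=0.1 moves x from 0.5 to ~0.4859, not 0.4.
print(rsgd_step([0.5, 0.0], [1.0, 0.0], lr=0.1))
```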
There was also a hyperbolic KG embedding competition entry that won an award: the method is called RotationH (paper: https://grlearning.github.io/papers/101.pdf), by the same author as the hyperbolic graph convolutional network paper above. The model uses rotations in hyperbolic space (similar in spirit to RotatE, https://arxiv.org/abs/1902.10197, except that RotatE works in complex space) together with a learnable curvature. RotationH sets a new state of the art on WN18RR and also performs well in low-dimensional settings: a 32-dimensional RotationH matches a 500-dimensional RotatE.
If you happened to learn advanced geometry in college — sinh (the hyperbolic sine), the Poincaré ball, the Lorentz hyperboloid — and never knew where you would ever use it, here is your chance: work on hyperbolic geometry + graph neural networks.
2. Logic and knowledge graph embeddings

If you keep an eye on arXiv or the AI conferences, you have surely noticed that every year brings ever more intricate knowledge graph embedding models, each nudging the best-performance record up a little. So is there a theoretical ceiling on the expressiveness of knowledge graph embeddings? Has anyone studied what a given model can and cannot represent? Lucky you: the answers are right below.
Commutative groups: small fry; abelian groups: big shots (they are, of course, the same thing)
Paper 4: Group Representation Theory for Knowledge Graph Embedding
Link: https://grlearning.github.io/papers/15.pdf
Paper 4 studies KG embeddings from the standpoint of group theory. The results show that abelian groups can be modeled in complex space, and it is proven that RotatE (which rotates in complex space) can represent any finite abelian group.
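The abelian-group connection is easy to see in code: a RotatE-style relation is an element-wise rotation by unit-modulus complex numbers, and such rotations commute. A toy sketch (not the authors' code):

```python
import cmath

def rotate(entity, relation_phases):
    """RotatE-style action: multiply each complex coordinate by e^{i*theta}."""
    return [e * cmath.exp(1j * t) for e, t in zip(entity, relation_phases)]

h = [1 + 0j, 0 + 1j]
r1, r2 = [0.3, 1.0], [1.2, -0.5]

# Composing relations adds their phases, and addition is commutative:
a = rotate(rotate(h, r1), r2)
b = rotate(rotate(h, r2), r1)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))

# RotatE's score for a triple (h, r, t) is then the negative distance -||h*r - t||.
```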
Scared by terms like "group theory" and "abelian group"? No worries: the paper includes a brief introduction to the relevant group theory. There remains a big gap, however, in how to extend this work to 1-N or N-N relations. The authors hypothesize that the quaternion algebra H might be used in place of the complex space C...
Paper 5: Quaternion Knowledge Graph Embeddings
Link: https://papers.nips.cc/paper/8541-quaternion-knowledge-graph-embeddings.pdf
... At NeurIPS 2019 this problem was tackled by Zhang et al., who propose QuatE, a quaternion KG embedding model. What is a quaternion? That deserves a proper explanation. In short, a complex number has one real part and one imaginary part, e.g. a+ib; a quaternion has three imaginary parts, e.g. a+ib+jc+kd. Compared with complex numbers this adds two degrees of freedom and is numerically more stable. QuatE models relations as rotations in this 4D (hypercomplex) space, thereby unifying ComplEx and RotatE: in RotatE you have one rotation plane, while in QuatE you get two. The ability to model symmetry, antisymmetry, and inversion is preserved as well. Compared with RotatE, QuatE needs 80% fewer free parameters when training on FB15k-237.
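The extra structure quaternions bring shows up in their product, which is no longer commutative; this is what gives QuatE its richer rotations. A minimal sketch of the Hamilton product (purely illustrative, not QuatE's implementation):

```python
def hamilton(p, q):
    """Hamilton product of quaternions p = (a, b, c, d) meaning a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert hamilton(i, j) == k               # i * j = k
assert hamilton(j, i) == (0, 0, 0, -1)   # j * i = -k: not commutative
```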
I have not analyzed this paper from the group-theory angle discussed above; if you are interested, give the original paper a read:
Rotations in the quaternion algebra

Paper 6: Quantum Embedding of Knowledge for Reasoning
Link: https://papers.nips.cc/paper/8797-quantum-embedding-of-knowledge-for-reasoning.pdf
Paper 6 proposes Embed2Reason (E2R), a quantum KG embedding method inspired by quantum logic. The method can embed classes (concepts), relations, and instances.
Don't get too excited — there is no quantum computing here. Quantum logic (QL) was originally proposed by Birkhoff and von Neumann in 1936 to describe subatomic processes; the E2R authors borrow it to preserve the logical structure of a KG. In QL (and hence in E2R), all unary, binary, and compound predicates are actually subspaces of some complex vector space, so entities and their compositions under a given relation fall within particular subspaces. Normally the distributive law a AND (b OR c) = (a AND b) OR (a AND c) does not hold in QL, but the authors use a clever trick to get around this.
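The failure of the distributive law is easy to witness with actual subspaces, using the dimension formula dim(U ∩ V) = dim U + dim V − dim(U + V). A small numpy check (my own example, not from the paper):

```python
import numpy as np

def dim_sum(*spans):
    """Dimension of the subspace jointly spanned by the given generators."""
    return np.linalg.matrix_rank(np.vstack(spans))

def dim_meet(u, v):
    """dim(U intersect V) via dim U + dim V - dim(U + V)."""
    return (np.linalg.matrix_rank(u) + np.linalg.matrix_rank(v)
            - dim_sum(u, v))

a = np.array([[1.0, 0.0]])   # span of e1
b = np.array([[0.0, 1.0]])   # span of e2
c = np.array([[1.0, 1.0]])   # span of (1, 1)

# a AND (b OR c): b OR c spans all of R^2, so the meet is a itself (dim 1)
lhs = dim_meet(a, np.vstack([b, c]))
# (a AND b) OR (a AND c): both meets are {0}, so their join also has dim 0
rhs = dim_meet(a, b) + dim_meet(a, c)
print(lhs, rhs)  # 1 0 -> the distributive law fails for subspaces
```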
The paper also shows how to use QL to model constructs from description logic (DL), such as subsumption, negation, and quantifiers! The experimental results are quite curious: on FB15K, E2R reaches a Hits@1 as high as 96.4% (and therefore the same Hits@10); on WN18, however, it does not do well. It turns out that E2R ranks a correct fact either first or below the top 10, which is why Hits@1 equals Hits@10 in all the experiments.
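For readers unfamiliar with the metrics: Hits@k is simply the fraction of test queries whose correct answer lands in the top k. A quick sketch, which also shows why ranks of "either 1 or beyond 10" force Hits@1 = Hits@10:

```python
def hits_at_k(ranks, k):
    """Fraction of queries whose correct entity got rank <= k (ranks start at 1)."""
    return sum(r <= k for r in ranks) / len(ranks)

# E2R-style behavior: correct facts land either at rank 1 or far below the top 10.
ranks = [1, 1, 57, 1, 203, 1]
print(hits_at_k(ranks, 1))   # 0.666...
print(hits_at_k(ranks, 10))  # identical, since no rank falls in 2..10
assert hits_at_k(ranks, 1) == hits_at_k(ranks, 10)
```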
One more note: the authors use LUBM as a deductive-reasoning benchmark; it includes an ontology with classes and their hierarchy. This is actually one of my own focal points, because the standard benchmarks FB15K(-237) and WN18(RR) contain only instances and relations, without any class attribution. Large knowledge graphs obviously have thousands of types, and exploiting that information could improve link prediction and reasoning performance. I am glad to see more and more methods (such as E2R) advocating the inclusion of symbolic information in embeddings.
Paper 7: Logical Expressiveness of Graph Neural Networks
Link: https://grlearning.github.io/papers/92.pdf
Let us continue examining the logical expressiveness of graph neural networks. Paper 7 investigates in depth which GNN architectures can capture which level of logic. So far the study is limited to FOC_2, the two-variable fragment of first-order logic with counting, because FOC_2 is connected to the Weisfeiler-Lehman (WL) test used for checking graph isomorphism.
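The WL test mentioned here is the classic color-refinement procedure: repeatedly rehash each node's color together with the multiset of its neighbors' colors. A compact sketch:

```python
def wl_colors(adj, rounds=3):
    """1-dimensional Weisfeiler-Lehman color refinement.

    adj: adjacency list, e.g. {0: [1], 1: [0, 2], 2: [1]}.
    Returns the final color of each node; two graphs with different final
    color histograms are certainly non-isomorphic (the converse can fail).
    """
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        # New signature = own color + sorted multiset of neighbor colors.
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# On a 3-node path, the middle node gets its own color after one round.
print(wl_colors({0: [1], 1: [0, 2], 2: [1]}))
```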
The authors prove that the expressiveness of aggregate-combine GNNs (AC-GNNs) corresponds to the description logic ALCQ, which is a subset of FOC_2. They further prove that if we add a readout component, turning the GNN into an aggregate-combine-readout GNN (ACR-GNN), then every formula in FOC_2 can be captured by an ACR-GNN classifier. What can I say about this work? It simply could not be better!
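Schematically, the gap between the two architectures is a single global term per layer: aggregate over neighbors, combine with the node's own state, and (for ACR-GNNs) also read out over all nodes. A toy layer with scalar node features (illustrative only, not the paper's exact parametrization):

```python
def acr_layer(adj, h, use_readout=True):
    """One aggregate-combine(-readout) step with scalar node features.

    h[v] is updated from its own value, the sum over its neighbors, and
    (if use_readout) the sum over *all* nodes -- the global readout that
    lifts AC-GNNs to full FOC_2 expressiveness in the paper.
    """
    total = sum(h.values())
    new_h = {}
    for v in adj:
        agg = sum(h[u] for u in adj[v])   # aggregate over neighbors
        combined = h[v] + agg             # combine with own state
        if use_readout:
            combined += total             # global readout term
        new_h[v] = max(0.0, combined)     # ReLU-like nonlinearity
    return new_h

h = {0: 1.0, 1: 0.0, 2: 0.0}
print(acr_layer({0: [1], 1: [0, 2], 2: [1]}, h))  # {0: 2.0, 1: 2.0, 2: 1.0}
```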
Paper 8: Embedding Symbolic Knowledge into Deep Networks
Link: https://papers.nips.cc/paper/8676-embedding-symbolic-knowledge-into-deep-networks.pdf
Paper 8 proposes LENSR, a Logic Embedding Network with Semantic Regularization, which embeds logical rules, compiled into d-DNNF (deterministic decomposable negation normal form), via a graph convolutional network (GCN). Here the authors focus on propositional logic (in contrast to the more expressive description logics of the papers above) and show that it suffices to add two regularization components, one for AND and one for OR, to the loss function, rather than embedding such rules explicitly. The framework is applied to visual relation prediction: given an image, predict the correct relation between two objects. It lifts the Top-5 accuracy from the previous SOTA of 84.3% to 92.77%.
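For context, d-DNNF is a circuit form of a propositional formula (AND/OR gates over literals, with determinism and decomposability constraints) that makes queries such as model counting tractable; it is this circuit, viewed as a graph, that the GCN consumes. A minimal evaluator for such a negation-normal-form circuit (my own illustration):

```python
def eval_nnf(node, assignment):
    """Evaluate a negation-normal-form circuit under a truth assignment.

    A node is either ('lit', name, polarity) or ('and'/'or', [children]).
    Negation appears only at the leaves -- the defining property of NNF.
    """
    kind = node[0]
    if kind == 'lit':
        _, name, polarity = node
        return assignment[name] == polarity
    values = [eval_nnf(child, assignment) for child in node[1]]
    return all(values) if kind == 'and' else any(values)

# (a OR b) AND (NOT a OR c)
circuit = ('and', [('or', [('lit', 'a', True), ('lit', 'b', True)]),
                   ('or', [('lit', 'a', False), ('lit', 'c', True)])])
print(eval_nnf(circuit, {'a': True, 'b': False, 'c': True}))   # True
print(eval_nnf(circuit, {'a': True, 'b': False, 'c': False}))  # False
```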
3. Markov logic networks make a comeback

Markov Logic Networks aim to combine first-order logic rules with probabilistic graphical models. However, using them directly not only raises scalability problems; the computational complexity of inference is also too high. In recent years, approaches that improve Markov logic networks with neural networks have multiplied, and this year brought several promising architectures that bring symbolic rules and probabilistic models together.
Paper 9: Probabilistic Logic Neural Networks for Reasoning
Link: https://papers.nips.cc/paper/8987-probabilistic-logic-neural-networks-for-reasoning.pdf
Paper 9 proposes pLogicNet, a model for knowledge graph reasoning that combines knowledge graph embeddings with logic rules. The model is trained with a variational EM algorithm (as an aside, papers that use EM for model optimization have also been on the rise in recent years; that deserves a separate article). The key idea: use a Markov logic network to define a joint distribution over the triples of a knowledge graph (with some restrictions on unobserved triples, of course, since enumerating all triples over all entities and relations is infeasible) and assign each logic rule a weight; you then pick any pretrained KG embedding you like (TransE or ComplEx will do; really, almost anything works). In the inference step, the model finds missing triples based on the rules and the KG embedding; then, in the learning step, the rule weights are updated based on the observed and inferred triples. pLogicNet shows strong performance on the standard link prediction benchmarks. I am curious what would happen if you plugged a really powerful KG embedding, such as a GNN, into the model.
Paper 10: Neural Markov Logic Networks
Link: https://kr2ml.github.io/2019/papers/KR2ML_2019_paper_18.pdf
Paper 10 introduces Neural Markov Logic Networks, a superclass of MLNs that does not require explicit first-order logic rules; instead it carries a neural potential function that encodes the inherent rules in vector space. The authors optimize the model with a max-min entropy approach, which is clever (and rarely seen). The downside is scalability: the experiments use only very small datasets, and the authors state that scaling up is the major challenge for follow-up work.
Paper 11: Can Graph Neural Networks Help Logic Reasoning?
Link: https://kr2ml.github.io/2019/papers/KR2ML_2019_paper_22.pdf
Finally, Paper 11 studies whether GNNs or Markov logic networks are stronger at logical and probabilistic reasoning. The authors' analysis shows that plain GNN embeddings can already encode the latent information in a knowledge graph, but cannot model dependencies between predicates, i.e. they cannot handle the posterior parametrization of a Markov logic network. To address this, the authors design the ExpressGNN architecture, which adds several extra layers of tunable embeddings that encode the entities of the knowledge graph hierarchically.
4. Conversational AI and graphs

That is about it for the hardcore machine learning; now for something lighter, such as NLP applications. The workshops co-located with the NeurIPS main conference had plenty of interesting papers on conversational AI + graphs.
Paper 12: Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering
Link: http://alborz-geramifard.com/workshops/neurips19-Conversational-AI/Papers/51.pdf
This paper proposes Dialogue State Tracking via Question Answering (DSTQA), a model for task-oriented dialogue systems in the MultiWOZ environment; more concretely, it helps a user complete a task through dialogue, where the tasks span 5 domains, 30 templates, and more than 4,500 values.
It builds on a question answering framework: each question the system asks comes with a predefined template and a set of predefined values, and the user confirms or changes those values by answering questions. A related hypothesis is that the various templates and value sets within a single dialogue are not completely independent. For example, if you have just booked a room at a five-star hotel and then ask about nearby restaurants, chances are the restaurant you want is also on the upscale side. The overall architecture designed in the paper is rather involved, so let us only go through the core novelty: