
CLC number: TP391

On-line Access: 2018-03-10

Received: 2017-11-25

Revision Accepted: 2018-01-24

Crosschecked: 2018-01-25


Frontiers of Information Technology & Electronic Engineering  2018 Vol.19 No.1 P.104-115

http://doi.org/10.1631/FITEE.1700788


Temporality-enhanced knowledge memory network for factoid question answering


Author(s):  Xin-yu Duan, Si-liang Tang, Sheng-yu Zhang, Yin Zhang, Zhou Zhao, Jian-ru Xue, Yue-ting Zhuang, Fei Wu

Affiliation(s):  College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):   duanxinyu@zju.edu.cn, siliang@zju.edu.cn, light.e.gal@gmail.com, zhangyin98@zju.edu.cn, zhaozhou@zju.edu.cn, jrxue@mail.xjtu.edu.cn, yzhuang@zju.edu.cn, wufei@zju.edu.cn

Key Words:  Question answering, Knowledge memory, Temporality interaction


Xin-yu Duan, Si-liang Tang, Sheng-yu Zhang, Yin Zhang, Zhou Zhao, Jian-ru Xue, Yue-ting Zhuang, Fei Wu. Temporality-enhanced knowledge memory network for factoid question answering[J]. Frontiers of Information Technology & Electronic Engineering, 2018, 19(1): 104-115.

@article{Duan2018,
title="Temporality-enhanced knowledge memory network for factoid question answering",
author="Xin-yu Duan, Si-liang Tang, Sheng-yu Zhang, Yin Zhang, Zhou Zhao, Jian-ru Xue, Yue-ting Zhuang, Fei Wu",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="19",
number="1",
pages="104-115",
year="2018",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1700788"
}



Abstract: 
Question answering is an important problem that aims to deliver specific answers to questions posed by humans in natural language. How to efficiently identify the exact answer to a given question has become an active line of research. Previous approaches to factoid question answering typically focus on modeling the semantic relevance or syntactic relationship between a given question and its corresponding answer. Most of these models suffer when a question contains very little content indicative of the answer. In this paper, we devise an architecture named the temporality-enhanced knowledge memory network (TE-KMN) and apply the model to a factoid question answering dataset from a trivia competition called quiz bowl. Unlike most existing approaches, our model encodes not only the content of questions and answers, but also the temporal cues in a sequence of ordered sentences that gradually reveal the answer. Moreover, our model collaboratively uses external knowledge for a better understanding of a given question. The experimental results demonstrate that our method achieves better performance than several state-of-the-art methods.
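The TE-KMN architecture itself is not detailed on this page, but the memory-reading idea the abstract describes can be illustrated. The following is a minimal, hypothetical sketch in plain Python (not the authors' implementation): each sentence of a quiz-bowl question is stored as an encoding in a memory, scored against a question vector, and combined by softmax attention weighting, so that later, more revealing sentences can dominate the summary.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of floats.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def attend(question_vec, sentence_vecs):
    """One attention 'read' over a memory of sentence encodings:
    score each stored sentence against the question vector, then
    return the attention-weighted sum as a summary vector."""
    weights = softmax([dot(question_vec, s) for s in sentence_vecs])
    dim = len(question_vec)
    return [sum(w * s[i] for w, s in zip(weights, sentence_vecs))
            for i in range(dim)]

# Toy 2-D example: the second "sentence" aligns with the question,
# so its encoding dominates the attended summary.
sentences = [[1.0, 0.0], [0.0, 1.0]]
question = [0.0, 3.0]
summary = attend(question, sentences)  # close to [0.05, 0.95]
```

In an end-to-end memory network the encodings would be learned and the read repeated over multiple hops; this sketch shows only a single soft-attention read over ordered sentence memories.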



