LTDDA: Large Language Model-Enhanced Text Truth Discovery with Dual Attention

Xiu FANG, Zhihong CUI, Guohao SUN, Jinhu LU

Journal of Donghua University (English Edition), 2025, 42(6): 699-710. DOI: 10.19884/j.1672-5220.202407003
Information Technology and Artificial Intelligence
Research Article


Abstract

Existing text truth discovery methods fail to address two challenges: the inherent long-distance dependencies and thematic diversity of long texts, and the subjective sentiment that obscures objective evaluation of source reliability. To address these challenges, a novel truth discovery method named large language model (LLM)-enhanced text truth discovery with dual attention (LTDDA) is proposed. First, LLMs generate embedded representations of text claims and enrich the feature space to tackle long-distance dependencies and thematic diversity. Then, the complex relationship between source reliability and claim credibility is captured by integrating semantic and sentiment features. Finally, dual-layer attention is applied to extract key semantic information and assign consistent weights to similar sources, yielding accurate truth outputs. Extensive experiments on three real-world datasets demonstrate that LTDDA outperforms state-of-the-art methods, providing new insights for building more reliable and accurate text truth discovery systems.
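
The dual-layer attention idea described above can be illustrated with a short sketch. The following Python/PyTorch code is a minimal illustration only, not the authors' implementation: the class and parameter names are hypothetical, and LTDDA's actual architecture, sentiment features, and training objective are not reproduced here. A first attention layer pools LLM token embeddings into a claim vector per source; a second layer scores those claim vectors, so sources whose claims are semantically similar receive similar weights.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionSketch(nn.Module):
    # Hypothetical sketch of dual-layer attention for text truth discovery.
    def __init__(self, dim: int):
        super().__init__()
        self.token_scorer = nn.Linear(dim, 1)   # layer 1: token-level (semantic) attention
        self.source_scorer = nn.Linear(dim, 1)  # layer 2: source-level attention

    def forward(self, token_emb: torch.Tensor):
        # token_emb: (num_sources, seq_len, dim), one claim per source,
        # e.g. token embeddings produced by an LLM encoder.
        a_tok = F.softmax(self.token_scorer(token_emb), dim=1)   # (S, L, 1)
        claim_vec = (a_tok * token_emb).sum(dim=1)               # (S, dim): key semantics per claim
        a_src = F.softmax(self.source_scorer(claim_vec), dim=0)  # (S, 1): similar claims get similar weights
        truth_vec = (a_src * claim_vec).sum(dim=0)               # (dim,): aggregated "truth" embedding
        return truth_vec, a_src.squeeze(-1)

# Example: 5 sources, 12 tokens per claim, 768-dimensional embeddings.
emb = torch.randn(5, 12, 768)
truth, source_weights = DualAttentionSketch(768)(emb)
print(truth.shape, source_weights.shape)  # torch.Size([768]) torch.Size([5])

Because both scorers are shared across tokens and sources, near-identical claim vectors receive near-identical attention scores, which is one simple way to realize the "consistent weights to similar sources" behavior the abstract describes.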

Keywords

large language model (LLM) / truth discovery / attention mechanism

Cite this article

Xiu FANG, Zhihong CUI, Guohao SUN, Jinhu LU. LTDDA: Large Language Model-Enhanced Text Truth Discovery with Dual Attention. Journal of Donghua University (English Edition), 2025, 42(6): 699-710. DOI: 10.19884/j.1672-5220.202407003


