Evaluating Text Summaries using Divergences of the Probability Distribution
Abstract
This paper shows that generating and evaluating summaries are two linked but distinct tasks, even when the same Divergence of the Probability Distribution (DPD) is used for both. This result allows DPD functions to be used for evaluating summaries automatically without reference summaries, and also for generating summaries without falling into inconsistencies.
Keywords
Kullback-Leibler/Jensen-Shannon divergences, probability distribution, natural language processing, automatic text summarization
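To make the keywords concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of the two divergences named above, computed between unigram word distributions of a source document and a candidate summary. Tokenization by whitespace and the smoothing constant `eps` are assumptions for the example.

```python
import math
from collections import Counter

def distribution(text):
    """Unigram probability distribution over lowercase whitespace tokens."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(P || Q), summed over the support of P.

    eps avoids log(0) when a word of P is absent from Q; asymmetric.
    """
    return sum(pw * math.log2(pw / max(q.get(w, 0.0), eps))
               for w, pw in p.items())

def js(p, q):
    """Jensen-Shannon divergence: symmetric, bounded in [0, 1] with log base 2."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For example, `js(distribution(source_text), distribution(summary_text))` yields a score in [0, 1], where lower values indicate that the summary's word distribution is closer to the source's.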