Korenčić, Damir
(2019)
Računalni postupci za modeliranje i analizu medijske agende temeljeni na strojnome učenju [Computational methods for media agenda modeling and analysis based on machine learning].
Doctoral thesis, Fakultet Elektrotehnike i Računarstva, Sveučilište u Zagrebu.
Abstract
This thesis focuses on computational methods for media agenda analysis based on topic models and on methods of topic model evaluation. The goal of media agenda analysis is to gain insight into the structure and frequency of media topics. Such analyses are of interest to social scientists studying news media, journalists, media analysts, and other commercial and political actors. Computational methods for media agenda analysis enable the automatic discovery of topics in large corpora of news text and the measurement of topic frequency. The data obtained by such analyses provide insight into the type and structure of topics occurring in the media and enable the analysis of topic co-occurrence and of correlations between topics and other variables such as text metadata and human perception of topic significance.
The goal of the research presented in the thesis is the development of efficient computational methods for discovering the topics that constitute the media agenda and for measuring the frequencies of these topics. The proposed methods are based on topic models, a class of unsupervised machine learning models widely used for exploratory analysis of the topical structure of text. The research encompasses the development of topic model applications for discovering media topics and measuring their frequency, as well as the development of methods that improve and facilitate these applications. The latter encompass methods of topic model evaluation and software tools for working with topic models. Methods of topic model evaluation can be used to select high-quality models and to accelerate the process of topic discovery. Namely, topic models are a useful tool, but due to the stochasticity of the model learning algorithms the quality of the learned topics varies. For this reason, methods of topic model evaluation have the potential to increase the efficiency of topic-model-based methods.
The media agenda consists of a set of topics discussed in the media, and the problem of media agenda analysis comprises two sub-tasks: discovering the topics on the agenda and measuring the frequencies of these topics. The first contribution of the thesis is a method for media agenda analysis based on topic models that builds upon previous approaches to the problem and addresses their deficiencies. Three notable deficiencies are: the use of a single topic model for topic discovery, the lack of a possibility to define new topics that match the analyst's interests, and the lack of precise evaluation of methods for measuring topic frequency. In addition to addressing the identified deficiencies, the method also systematizes the previous approaches to the problem and is evaluated in two case studies of media agenda analysis. The proposed experimental method for media agenda analysis consists of three steps: topic discovery, topic definition, and topic measuring.
In order to achieve better topic coverage, the discovery step is based not on a single model but on a set of topic models. The type and number of topic models used depend on the available model implementations and the time available for topic annotation, while the hyperparameter defining the number of model topics depends on the desired generality of the learned topics. Reasonable default settings for model construction are proposed based on existing agenda analysis studies, and an iterative procedure for tuning the number of topics is described. After the topic models are constructed, topic discovery is performed by human inspection and interpretation of the topics. Topic interpretation produces semantic topics (concepts) that are recorded in a reference table of semantic topics, which serves both as a record of topics and as a tool for synchronizing human annotators. After all the model topics have been inspected, annotators can optionally perform an error-correcting step of revising the semantic topics, as well as a step of building a taxonomy of semantic topics. Topic discovery is supported by a graphical user interface developed for topic inspection and annotation.
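As an illustration of the discovery step, the following is a minimal sketch of constructing a small set of topic models over the same corpus, assuming gensim and scikit-learn as stand-in implementations; the thesis does not prescribe particular libraries, and the function and parameter values here are illustrative:

    # Minimal sketch: build several topic models over one corpus so that the
    # union of their topics can be inspected and interpreted by annotators.
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel
    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import TfidfVectorizer

    def build_model_set(texts, num_topics=50):
        tokenized = [t.lower().split() for t in texts]
        dictionary = Dictionary(tokenized)
        bow = [dictionary.doc2bow(doc) for doc in tokenized]
        # Model 1: LDA over bag-of-words counts.
        lda = LdaModel(bow, id2word=dictionary, num_topics=num_topics, passes=5)
        # Model 2: NMF over tf-idf vectors.
        tfidf = TfidfVectorizer(max_features=20000)
        nmf = NMF(n_components=num_topics, init="nndsvd").fit(tfidf.fit_transform(texts))
        return {"lda": lda, "nmf": nmf, "nmf_vocab": tfidf.get_feature_names_out()}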
The step of topic definition is based on the semantic topics obtained during topic discovery. The purpose of topic definition is to define new semantic topics that closely match the analyst's exact interests. The possibility of defining new semantic topics is an important difference between the proposed and the existing media agenda analysis approaches. Namely, the existing approaches base the analysis only on model-produced topics, although there is no guarantee that these topics will match the concepts of interest to the analyst. During topic definition, the analyst infers definitions of new semantic topics based on the previously discovered topics and describes these topics with word lists. Discovered semantic topics that already closely match the concepts of interest are used without modification.
During the topic measuring step, the frequencies of the semantic topics obtained during the discovery and definition steps are measured. Topic frequency is defined as the number of news articles in which a topic occurs, and the measuring problem is cast as a multi-label classification problem in which each news article is tagged with one or more semantic topics. This formulation allows for precise quantitative evaluation of methods for measuring topic frequency. Two measuring methods are considered. The baseline is a supervised method using binary relevance in combination with a linear-kernel SVM model. The second method is a newly proposed weakly supervised approach in which the measured semantic topics are first described by sets of highly discriminative words, after which a new LDA model is constructed in such a way that its topics correspond to the measured topics; this correspondence is achieved via the prior probabilities of the model topics. The method for selecting words highly discriminative for a semantic topic is the main difference between the proposed and the previous weakly supervised approaches. This method consists of inspecting, for each measured semantic topic, the closely related model topics and selecting highly discriminative words by inspecting word-related documents and assessing their correspondence with the topic.
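The following is a minimal sketch of the two measuring methods under assumed library choices: the binary-relevance baseline via scikit-learn, and the weakly supervised idea of seeding LDA topic-word priors with discriminative words via gensim's eta hyperparameter; names and parameter values are illustrative, not the thesis implementation:

    import numpy as np
    from gensim.models import LdaModel
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import LinearSVC

    # Baseline: binary relevance, i.e., one linear SVM per semantic topic.
    baseline = OneVsRestClassifier(LinearSVC())
    # baseline.fit(X_train, Y_train)  # X: document vectors, Y: 0/1 topic matrix

    def seeded_eta(seed_words, dictionary, boost=100.0, base=0.01):
        """Topic-word prior matrix: topic k receives boosted prior mass on its
        seed words, so that model topic k tracks semantic topic k."""
        eta = np.full((len(seed_words), len(dictionary)), base)
        for k, words in enumerate(seed_words):       # one word list per topic
            for w in words:
                if w in dictionary.token2id:
                    eta[k, dictionary.token2id[w]] = boost
        return eta

    # lda = LdaModel(bow, id2word=dictionary, num_topics=len(seed_words),
    #                eta=seeded_eta(seed_words, dictionary))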
The proposed three-step method for media agenda analysis is applied in two media agenda analyses: an analysis of mainstream US political news and an analysis of mainstream Croatian political news in an election period. The applications of the proposed method show that the topic discovery step gives a good overview of the media agenda and leads to the discovery of useful topics, and that the use of more than one topic model leads to a more comprehensive set of topics. The two analyses also demonstrate the necessity of the proposed topic definition step: in the case of US news, new sensible topics corresponding to issues are pinpointed during this step, while in the case of Croatian election-related news the analysis is based entirely on newly defined semantic topics that describe the pre- and post-election processes. Quantitative evaluation of topic frequency measuring shows that the proposed weakly supervised approach works better than the supervised SVM-based method, since it achieves better or comparable performance with less labeling effort. In contrast to the supervised method, the weakly supervised models have higher recall and work well for smaller topics. Qualitative evaluation of the measuring models confirms the quality of the proposed approach: measured topic frequencies correlate well with real-world events, and the election-related conclusions based on the measuring models are in line with conclusions drawn from social-scientific studies.
Observations from the two media agenda analysis studies and the analysis of the collected topic data underlined two problems related to methods of topic model evaluation. The first is the problem of measuring topic quality: the studies both confirmed variations in topic quality and indicated the inadequacy of existing word-based measures of topic coherence. The second is the problem of topic coverage: while the data confirm the limited ability of a single topic model to cover all the semantic topics, no methods for measuring topic coverage exist, so it is not possible to identify high-coverage models. These observations motivated the development of new methods of topic model evaluation: document-based coherence measures and methods for topic coverage analysis.
As described, the analysis of the topics produced during the applications of topic discovery confirmed variations in topic quality and underlined the need for better measures of topic quality. The analysis also indicated that the existing word-based measures of topic coherence are inadequate for evaluating the quality of media topics, which are often characterized by semantically unrelated word sets. Based on the observation that media topics can be successfully interpreted using topic-related documents, a new class of document-based topic coherence measures is proposed.
The proposed measures calculate topic coherence in three steps: selection of topic-related documents, document vectorization, and computation of the coherence score from the document vectors. Topic-related documents are selected using a simple model-independent strategy: a fixed number of documents with the top document-topic weights is selected. Two families of document vectorization methods are considered. The first family consists of two standard methods based on the calculation of word and document frequencies: probabilistic bag-of-words vectorization and tf-idf vectorization. The methods in the second family vectorize documents by aggregating either CBOW or GloVe word embeddings. Three types of methods are considered for coherence score computation: distance-based methods that model coherence via mutual document distance, probability-based methods that model coherence as the probabilistic compactness of the document vectors, and graph-based methods that model coherence via the connectivity of the document graph. The space of all coherence measures is parametrized and sensible parameter values are defined to obtain a smaller set of several thousand measures. The selection and evaluation of the coherence measures is then performed using model topics manually labeled with document-based coherence scores and using the area under the ROC curve (AUC) as the performance criterion. The measures are partitioned into structural categories and the best measure from each category is selected using AUC on the development set as the criterion. These best measures are then evaluated on two test sets containing English and Croatian news topics.
The evaluation of the document-based coherence measures shows that the graph-based measures achieve the best results. Namely, the best approximators of human coherence scores are the graph-based measures that use frequency-based document vectorization, build sparse graphs of locally connected documents, and calculate coherence by aggregating a local connectivity score such as closeness centrality. Quantitative evaluation of word-based measures confirms the observation that word-based measures fail to approximate document-based coherence scores well, and qualitative evaluation of the coherence measures indicates that document- and word-based coherence measures complement each other and should be used in combination to obtain a more complete model of topic coherence.
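A minimal sketch of the best-performing measure family described above, under assumed implementation choices: tf-idf vectors of the top-weighted topic documents, a sparse k-nearest-neighbour graph over cosine similarities, and mean closeness centrality as the coherence score (networkx assumed; parameters are illustrative):

    import networkx as nx
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def doc_coherence(topic_doc_weights, texts, top_n=25, k=5):
        # Step 1: select the top_n documents with the highest topic weights.
        top = np.argsort(topic_doc_weights)[::-1][:top_n]
        # Step 2: vectorize the selected documents (frequency-based).
        vecs = TfidfVectorizer().fit_transform([texts[i] for i in top])
        sim = cosine_similarity(vecs)
        # Step 3: build a sparse graph of locally connected documents ...
        g = nx.Graph()
        g.add_nodes_from(range(len(top)))
        for i in range(len(top)):
            for j in np.argsort(sim[i])[::-1][1:k + 1]:  # k nearest neighbours
                g.add_edge(i, int(j))
        # ... and aggregate a local connectivity score over the graph.
        return float(np.mean(list(nx.closeness_centrality(g).values())))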
Motivated by the data from the topic discovery steps performed in the two media agenda analyses and by the obvious need to increase the number of topics discovered by a single topic model, the problem of topic coverage is defined and solutions are proposed. This problem occurs in the application of topic models to any text domain, i.e., it is domain-independent and extends beyond applications to media text. The problem of topic coverage consists of measuring how well automatically learned model topics cover a set of reference topics, i.e., topical concepts defined by humans. Two basic aspects of the problem are the reference topics, which represent the concepts topic models are expected to cover, and the measures of topic coverage, which calculate a score measuring the overlap between the model topics and the reference topics. The third aspect encompasses the evaluation of a set of topic models using a reference set and coverage measures.
The coverage experiments are conducted using two datasets that correspond to two separate text domains: news media texts and biological texts. Each dataset contains a text corpus, a set of reference topics, and a set of topic models. The reference topics consist of topics that standard topic models can be expected to cover. These topics are constructed by human inspection, selection, and modification of model-learned topics. Both sets of reference topics are representative of useful topics discovered during the process of exploratory text analysis.
Two approaches to measuring topic coverage are developed: an approach based on supervised approximation of topic matching, and an unsupervised approach based on integrating coverage across a range of topic-matching criteria. The supervised approach is based on building a classification model that approximates the human intuition of topic matching. A binary classifier is learned from a set of topic pairs annotated with matching scores. Four standard classification models are considered: logistic regression, support vector machine, random forest, and multilayer perceptron. Topic pairs are represented as distances of topic-related word and document vectors using four distinct distance measures: cosine, Hellinger, L1, and L2. Model selection and evaluation shows that the proposed method approximates human scores very well, and that logistic regression is the best-performing model. The second proposed method for measuring coverage uses a measure of topic distance and a distance threshold to approximate the equality of a reference topic and a model topic. The threshold value is varied, and for each threshold the coverage is calculated as the proportion of reference topics that are matched by at least one model topic at a distance below the threshold. Varying the threshold results in a curve with threshold values on the x-axis and coverage scores on the y-axis. The final coverage score is calculated as the area under this curve. This unsupervised measure of coverage, dubbed the area under the coverage-distance curve, correlates very well with the supervised measures of coverage, while the curve itself is a useful tool for visual analysis of topic coverage. The measure enables users to quickly perform coverage measurements in new domains, without the need to annotate topic pairs in order to construct a supervised coverage measure.
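A minimal sketch of the unsupervised measure, using cosine distance between topic-word vectors as one assumed example of a topic distance; ref and model are topic-by-word matrices, and the threshold grid is illustrative:

    import numpy as np
    from sklearn.metrics.pairwise import cosine_distances

    def aucdc(ref, model, thresholds=np.linspace(0.0, 1.0, 101)):
        """Area under the coverage-distance curve."""
        dist = cosine_distances(ref, model)  # reference-topic x model-topic distances
        nearest = dist.min(axis=1)           # closest model topic per reference topic
        # Coverage at each threshold: share of reference topics matched by at
        # least one model topic at a distance below the threshold.
        coverage = [(nearest < t).mean() for t in thresholds]
        return float(np.trapz(coverage, thresholds))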
Using the proposed coverage measures and the two sets of reference topics, coverage experiments in two distinct text domains are performed. The experiments consist of measuring the coverage obtained by a set of topic models of distinct types constructed using different hyperparameters. In addition to demonstrating the application of the coverage methods, the experiments show that the NMF model has high coverage scores, is robust to domain change, and is able to discover topics at a high level of precision. The nonparametric model based on Pitman-Yor priors achieves the best coverage for news topics.
The two proposed methods of topic model evaluation, document-based coherence measures and methods devised for solving the coverage problem, are applied in order to improve the previously proposed topic-model-based method of media agenda analysis. The improvements refer to the step of topic discovery and lead to quicker discovery of a larger number of concepts. This is achieved by using more interpretable models with higher coverage, and by ordering model topics in descending order of coherence before human inspection. These improvements conclude the contribution of the thesis related to the methods of computational media agenda analysis. The first improvement is based on the analysis of the coverage and document-based coherence scores measured for a large number of different topic models. The main result is the recommendation to use the NMF model as the default model for topic discovery, since NMF proved to be a robust, interpretable, and high-coverage model with the additional advantage of being fast to train. In addition, the nonparametric topic model based on Pitman-Yor priors also proved to be a good choice for exploratory analysis of news texts, since it achieves a very high coverage. The second improvement is model-agnostic and consists of ordering the model topics inspected during the topic discovery step by descending topic coherence. This results in low-quality topics being pushed towards the end of the topic inspection queue. The experiments show that applying the best graph-based coherence measure in the described way significantly improves the discovery rate of semantic topics. Several other improvement recommendations are given based on the experience gained in the course of applying the media agenda analysis methods. These include: improving the topic inspection and interpretation process by discarding the shared reference table of semantic topics, improving the step of measuring topic frequency, and using tools that lead to quick guided discovery of topics of interest.
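The model-agnostic improvement amounts to a simple reordering of the inspection queue; a minimal sketch, assuming per-topic coherence scores such as those produced by a document-based measure:

    def inspection_order(num_topics, coherence_scores):
        """Indices of model topics sorted from most to least coherent, so that
        low-quality topics end up at the back of the inspection queue."""
        return sorted(range(num_topics), key=lambda k: coherence_scores[k], reverse=True)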
Research on the media agenda analysis methods and the methods of topic model evaluation revealed a number of technical problems related to the usage, construction, storage, and retrieval of topic models. Namely, in topic modeling experiments it is often necessary to construct a large number of models by varying model types and various parameters of the construction process, such as hyperparameters, low-level resources, and preprocessing components. A systematic solution to these problems is proposed: a framework for resource building and management in topic modeling. The framework's architecture is based on four principles which, in combination, define a general and flexible method for designing and building code for the evaluation and application of topic models. In addition, an application for building corpora of media text by collecting texts from a set of web news feeds was developed, as well as a graphical user interface that supports topic discovery and topic frequency measurement.
The topic modeling framework, dubbed pytopia, is an object-oriented Python framework that can be viewed as a middleware framework located between application-level code and algorithm-level frameworks such as TensorFlow. The framework's architecture is based on four design principles: the principle of standard interfaces and the adaptation of various components to these interfaces, the principle of component identifiability, the principle of using an abstraction dubbed Context to organize and retrieve components, and the principle of hierarchical compositionality, which reflects the structure of text-mining components and facilitates their design and implementation. The framework contains core functionality that supports the four design principles, as well as functionality for component building, saving, and loading. The framework also contains a set of components related to topic modeling, ranging from basic resources such as dictionaries and corpora to complex components such as sets of vectorized texts and topic models. Finally, the framework includes several tools for topic model evaluation, as well as logging and testing functionality.
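A hypothetical illustration of two of these principles, component identifiability and the Context abstraction; this is not the actual pytopia API, only a sketch of the design idea with invented names:

    class Context:
        """Registry that organizes components and retrieves them by identity."""
        def __init__(self):
            self._components = {}
        def add(self, component):
            self._components[component.id] = component
        def __getitem__(self, component_id):
            return self._components[component_id]

    class Corpus:
        def __init__(self, id_, texts):
            self.id = id_   # every component carries a unique identifier
            self.texts = texts

    ctx = Context()
    ctx.add(Corpus("us_news_corpus", ["text one", "text two"]))
    corpus = ctx["us_news_corpus"]  # components are retrieved through the Context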
A corpus of media texts is the basis for topic-model-based media agenda analysis. Motivated by the need for a tool that enables maximum flexibility in defining and building such a corpus, an application for collecting texts from a set of news feeds was developed. This application, dubbed feedsucker, enables the user to build a corpus of news texts containing texts from a set of sources corresponding to the user's exact interests. The user specifies a set of news feeds in a text format and runs the application, which continuously collects new texts and stores them in a database. The application is Java-based, object-oriented, and extensible.
This thesis describes research on computational methods for media agenda analysis, which enable the discovery and measuring of topics in large news corpora and find applications in a range of scientific and commercial analyses of media text. The researched methods are based on topic models, standard machine learning models for the analysis of the topical structure of text. In the first phase of the research, an analysis of the existing media agenda methods was performed and a new method that improves and systematizes the existing ones is proposed. The application of the proposed method in two use cases underlined the need for new methods of topic model evaluation that would improve the efficiency of topic-model-based tools. Consequently, two new methods of topic model evaluation are proposed: document-based measures of topic coherence and methods for the analysis of topic coverage. These evaluation methods are then applied to improve the initially proposed method for media agenda analysis. In addition, the research on topic model applications and methods of topic model evaluation led to a framework for resource building and management in topic modeling. The four main contributions of the thesis are: a method for computational analysis of the media agenda based on topic models, document-based measures of topic coherence, methods for the analysis of topic coverage, and the framework for resource building and management in topic modeling.
The research described in the thesis led to an improved method for media agenda analysis and to new methods of topic model evaluation. The evaluation methods find applications more general than media agenda analysis: the document-based coherence measures are applicable to any topic-model-based analysis of news text, while the methods related to the problem of topic coverage are domain-independent. These evaluation methods represent new approaches with the potential to provide new insights about topic models, a class of widely used machine learning models of text. The topic modeling framework could serve the same purpose, since it facilitates complex experiments.
Item Type: Thesis (Doctoral thesis)
Uncontrolled Keywords: Media agenda; Topic models; Topic model evaluation; Topic coherence; Topic coverage; Topic distance measures; Topic model construction; Unsupervised learning; Supervised learning
Subjects: TECHNICAL SCIENCES > Computing > Artificial Intelligence
Divisions: Division of Electronics
Projects:
Project title | Project leader | Project code | Project type
Napredne metode i tehnologije u znanosti o podatcima i kooperativnim sustavima-DATACROSS | Sven Lončarić; Ivan Petrović; Tomislav Šmuc; Andrej Jokić | KK.01.1.1.01.009 | EK
Mjerenje i karakterizacija podataka iz stvarnog svijeta | Branka Medved-Rogina | 098-0982560-2566 | MZOS
Postupci strojnog učenja za dubinsku analizu složenih struktura podataka-DescriptiveInduction | Dragan Gamberger | IP-2013-11-9623 | HRZZ
Depositing User: Damir Korenčić
Date Deposited: 05 Jul 2022 13:42
URI: http://fulir.irb.hr/id/eprint/7405