Fact-Checking at Scale with DimensionRank

Greg Coppola
Think different, again.
Founder
[email protected]

October 20, 2020
Abstract

The most important problem that has emerged after twenty years of popular internet usage is that of fact-checking at scale. This problem is experienced acutely in both of the major internet application platform types, web search and social media.

We offer a working definition of what a “platform” is. We critically deconstruct what we call the “PolitiFact” model of fact-checking, and show it to be inherently inferior to a platform-based solution for fact-checking at scale.

Our central contribution is to show how to effectively platformize the problem of fact-checking at scale. We show how a two-dimensional rating system, with the dimensions agreement and hotness, allows us to create information-seeking queries not possible with the one-dimensional rating systems predominating on existing platforms. And, we show that, underlying our user-friendly interface, lies a system that allows the creation of formal proofs in the propositional calculus.

Our algorithm is implemented in our open-source DimensionRank software package available at thinkdifferentagain.art.

1 Introduction

Web Search and Social Media are the two most important internet applications. Over the past 20 years, the primary problems of collecting, storing, and distributing messages and other information at scale have been solved. A new problem, however, has emerged as a result of web search and social media. This is the problem of fact-checking at scale.

Communications platforms have lowered the barriers to entry for independent news publications, so there are now many more outlets than before. There has been a major loss of trust in the traditional, print- and television-based publications. And publications have taken to being openly partisan, leading to a notion of competing sources of truth.

This means it is now difficult for ordinary people, and even specialists, to decide what is “true.” In current industry parlance, the problem of determining whether a statement is “true” is called fact-checking. Fact-checking at scale is the problem of checking all facts (or, propositions) of interest. We might say that fact-checking at scale is to fact-checking what “big data analysis” is to data analysis. This is the problem we are going to solve in this paper.

2 Formulating the Fact-Checking Problem

There are two primary ways to formulate the fact-checking problem. These are the boolean and relational formulations.

2.1 Boolean Fact-Checking

Boolean fact-checking is the assignment of a boolean label, true or false, to a given proposition. For example, relative to the statement the American economy is strong in 2020, a boolean system would assign a binary label of either true or false. A related variant of this problem is to report a single real-valued probability (between 0 and 1) for the given proposition. PolitiFact, for example, indicates low probability with the figurative term “pants-on-fire,” referring to an idiomatic expression for telling a falsehood.

2.2 Relational Fact-Checking

Relational fact-checking involves the retrieval of related documents, facts, or propositions, for a given proposition $p$, with some notion of agreement polarity. For example, relative to the statement the American economy is strong in 2020, a relational system might find related financial data, and indicate whether they support or contradict the original assertion.

2.3 Conclusion

We believe that both formulations are interesting. However, we believe that relational fact-checking is primary. As we will see, it is possible to solve relational fact-checking without first solving boolean fact-checking. We believe boolean fact-checking can be built on the basis of a relational fact-checking system, using a form of belief propagation.

3 Deconstructing the Article-Form Fact-Check

In many popular formulations, including PolitiFact, the “checking of a fact” is done through the creation of an argument presented in article form. An article is a sequence of sentences, perhaps along with images. An article-form argument is a sequence of sentences meant to derive or support the target proposition $p$ being checked.

3.1 The Recursive Nature of Fact-Checking

An article-form fact-check for the purported “fact” (i.e. proposition) $p$ takes the form of an article, which for us is a sequence of sentences. This article, in order to make its case, will use some of its sentences to introduce assumptions, and use the rest of the sentences to justify inferences, based on the assumptions introduced.

In symbolic terms, we can characterize the situation as follows. We want to argue for the proposition $p$, and we must introduce some assumptions in order to do this, which we will denote $q_1, \ldots, q_N$. Thus, in order to check one initial proposition $p$, we end up introducing $N$ additional assumptions.

3.2 Recursive Explosion

Because we are interested in fact-checking in the first place, we presumably believe that any fact we depend on should itself be checked. In that case, an article-form fact-check raises more questions than it answers. That is, where we once had one proposition $p$ to check, we now have $N$ new propositions to check after the first round. But these $N$ article-form fact-checks will each introduce $N$ new assumptions, meaning we will have $N^2$ to check after the second round, $N^3$ after the third round, and so on. Thus, not only does an article-form fact-check induce more new facts to check than we started with, but it actually induces exponentially more. Assuming a relatively constant staff, an employee-driven fact-checking organization will generate facts to check exponentially faster than it can check them.
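
To make the arithmetic concrete, here is a minimal sketch in Python; the constant branching factor $N$ is a simplifying assumption for exposition, since real fact-checks introduce varying numbers of assumptions.

# Illustrative sketch only: growth of unchecked propositions when every
# article-form fact-check introduces N new assumptions (a constant branching
# factor is assumed here purely for exposition).

def new_propositions(branching_factor: int, rounds: int) -> int:
    """Number of new propositions introduced at the given round."""
    return branching_factor ** rounds

if __name__ == "__main__":
    N = 5  # hypothetical number of assumptions per fact-check
    for k in range(1, 5):
        print(f"round {k}: {new_propositions(N, k)} new propositions to check")
    # Prints 5, 25, 125, 625: growth is exponential in the number of rounds.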

3.3 The Base Case Problem

The second problem we have with recursion is that there is no inherent “base case” to a recursive fact-checking operation. The checking of one fact requires the checking of $N$ others. Does the number of relevant facts grow forever? What happens when a fact-check for a proposition $p$ ends up depending on itself? These philosophical questions admit, we believe, no known answer other than the scientific method itself. This will require us to recast the problem from one of checking “facts” to one of checking theories.

4 The Scientific Method

The scientific method presents a coherent and contradiction-free approach to the pursuit of determining “truth” and evaluating “facts.” It does this by checking entire theories, rather than individual facts. A theory is a set of propositions. In order to leverage the coherent and well-developed philosophical basis of the scientific method, we will base our platform on the concepts behind the scientific method.

4.1 Logical Calculus and Logical Theories

Underlying the application of science, we must first have a language in which to write our propositions, and a set of inference rules to draw conclusions based on initial assumptions. The pair of language and inference rules is called a logical calculus. We can use for a logical calculus any descendant of the propositional calculus (Andrews, 1986). For example, we can use the propositional calculus itself, or the first-order calculus, or an intensional calculus. The calculus specifies a language $L$ and a set of inference rules $R$. $L$ can be seen as a denumerably infinite set of propositions $L = \{p_1, p_2, \ldots\}$. A theory $T$ (in $L$) is a set of propositions, each taken from $L$.
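
As a deliberately minimal illustration of these definitions (the representation below is our own sketch, not the DimensionRank internals), a theory can be modelled as a finite set of propositions drawn from the language:

# Minimal sketch: propositions as opaque labels, a theory T as a set of them.
Proposition = str    # e.g. "Socrates is a man"
Theory = frozenset   # a theory T is a set of propositions taken from L

T: Theory = frozenset({
    "all men are mortal",
    "Socrates is a man",
})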

4.2 Data and Explanations

In science, propositions can be categorized as either data or explanatory principles. Data record measurements of the “outside world.” Aside from data, we have explanatory principles, which attempt to “explain” the data by predicting it as a consequence of a theory. We generally prefer the theory that “does the most with the least,” in the sense that it predicts as much data as possible, while also being as small as possible (cf. Occam’s Razor). Now, the data are just measurements, and as such can be incorrect. Sometimes, scientists will try to argue against the accuracy of data, rather than give up on a theory. Nevertheless, incorrect data is still data, and must be distinguished from explanatory principles.

4.3 Inference

Underlying the scientific method is a calculus, which, we underline, is a descendant of the propositional calculus. Aside from introducing a language, the calculus specifies a set of inference rules. The inference rules allow us to extend a theory $T$. An inference rule $R(t_1, \ldots, t_m) \rightarrow \alpha$ says that, if $\{t_1, \ldots, t_m\} \subseteq T$, then we can create a new set $T' = T \cup \{\alpha\}$, where $\alpha$ is inferred from $T$. For example, from assuming all men are mortal and Socrates is a man, we can infer Socrates is mortal, in first-order logic.
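
A minimal sketch of an inference rule extending a theory, using a toy tuple encoding of implication (the encoding is our own, for illustration only):

# Minimal sketch: one inference rule (modus ponens) extending a theory T.
# Implications are encoded as ("implies", antecedent, consequent) tuples;
# this encoding is an assumption made purely for illustration.

def modus_ponens(theory: frozenset) -> frozenset:
    """Return T' = T extended with every beta such that alpha and
    alpha -> beta are both members of T."""
    derived = set()
    for prop in theory:
        if isinstance(prop, tuple) and prop[0] == "implies":
            _, antecedent, consequent = prop
            if antecedent in theory:
                derived.add(consequent)
    return theory | derived

T = frozenset({"Socrates is a man",
               ("implies", "Socrates is a man", "Socrates is mortal")})
assert "Socrates is mortal" in modus_ponens(T)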

4.4 Contradiction

Because we have assumed we are working with a propositional calculus, we have the concept of negation. The negation of the statement $\alpha$ is the statement $\alpha$ is not true, written $\neg\alpha$. If a theory contains both $\alpha$ and $\neg\alpha$, it is said to be inconsistent. A theory that is inconsistent is also said to have a contradiction. If there are no contradictions, the theory is called consistent. From this perspective, the goal of science is to produce theories that are internally consistent and consistent with the data. One can argue against a theory either by directly contradicting its core assumptions, or else by contradicting one of its predictions.
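
A correspondingly minimal consistency check, using a toy encoding of negation (again our own, for illustration):

# Minimal sketch: a theory is inconsistent if it contains both a proposition
# and its negation; negation is encoded here as ("not", proposition).

def is_consistent(theory: frozenset) -> bool:
    return not any(
        isinstance(prop, tuple) and prop[0] == "not" and prop[1] in theory
        for prop in theory
    )

assert is_consistent(frozenset({"p", "q"}))
assert not is_consistent(frozenset({"p", ("not", "p")}))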

4.5 The Universe of all Theories

An individual theory must be consistent to be correct. However, the set of all possible theories, if unified together, would not be consistent. That is, at any given time, there are groups of active theories that cannot all be true. We can regard the set of all actively considered theories as the universe of active theories. The union of all propositions in all active theories can be called the universe of all active propositions. Our goal, then, is to be able to “check” any proposition in the universe of active propositions, and to do so without degrading in the (normal) case that the active theories cannot consistently be merged.

5 The Power of Platforms

5.1 The Notion of a “Platform”

There has been much discussion recently in various kinds of press, including business and political, about the notion of a “platform.” We define a platform as a channel that connects many producers to many consumers. That is, each producer can use the channel to reach some set of consumers. The producer and consumer can be in different organizations, in which case they constitute provider and client. Or the producer and consumer can be in the same organization, in which case they constitute two steps in a supply chain.

Under this definition, an old-fashioned physical marketplace would be a kind of platform, because it connects many producers to many consumers at the same time. A highway network is a platform, because it transports goods from producer to consumer, as is a railroad network.

In the information-technology field, one platform is built upon another. The personal-computer hardware platform connects software producers to consumers. The operating system sits atop the hardware, and is a platform that connects application producers to consumers. The Internet browser sits atop the operating system, and is a platform connecting web site producers to consumers. Popular web sites like Amazon, Google and Facebook sit atop the Internet browser and the Internet, and connect users to other users.

5.2 Information-Sharing Platforms

An information-sharing platform is a platform whose object is the communication, storage and organization of information. (By “information” we simply mean arbitrary sequences of bits; Claude Shannon famously studied the quantification of the amount of information in a sequence of bits, but we do not need to worry here about how to quantify information.) Information platforms are interesting because they allow us as humans to transcend the normal bounds on how many individuals can work effectively together.

Dunbar’s number suggests that the largest number of meaningful relationships a person can have is 150 (Dunbar, 1982). The reason hypothesized for the limit is that each person has a limited information-processing capacity. In order to interact with someone meaningfully, we not only need to spend time communicating with them, but also considering what they have said once communication is over. Using information-sharing platforms, like Google Search, Facebook, Twitter, etc., we are able to effectively work with, communicate with, and otherwise share information with far more people than had ever been possible before. Information-sharing platforms allow us to create super-organizations that can achieve more than organizations without the same technology.

The ability for people to share information more effectively than ever before has precipitated the fact-checking crisis, creating the need for fact-checking at scale. However, we can also turn this crisis into an opportunity, by using the power of the platform itself to solve the problem of fact-checking.

6 The “PolitiFact” Model

PolitiFact is an employee-based organization that produces article-form fact-checks. There are a variety of different groups that do such article-form “fact-checking”, such as PolitiFact, Snopes, CNN, and The Washington Post. For illustrative effect, we use PolitiFact as an exemplar to stand for the whole group. We will show how the PolitiFact model is inferior to a platform model of fact-checking.

6.1 Definitions

Let us break down the component concepts involved in our claim.

6.1.1 Employee-Based Organization

An employee-based organization is a fixed legal organization whose work is done by legal employees. Each employee has a fixed work day. There are two central problems that plague a fixed organization.

Limited Staff: First, it is difficult to hire and maintain a highly qualified staff. These difficulties grow at least quadratically as the organization grows. We believe, empirically, that staff can scale at most as $O(\sqrt{N})$ in the number of users $N$.

Limited Decision-Making: Decision-making is even more limited in an organization than is its staff as a whole. For the most important decisions, top-level management will have to be involved. At the least, senior management will usually have to be involved. We might say that decision-making capacity is ultimately $O(1)$, in the sense that there is a constant bound on the number of individuals that can be considered “senior management.”

6.1.2 Article-Form

The second part of the notion of an employee-based, article-form fact-checking organization is that the organization produces article-form fact-checks. We reviewed in §3 why an article-form fact-check introduces more questions than it answers. Every proposition that is checked introduces $N$ new assumptions that need checking. This creates exponential growth in the number of facts of interest, even starting from a single initial fact of interest. The problem is worse by a constant factor with more initial facts of interest.

6.2 Problems

Here, then, are our criticisms of the PolitiFact model.

6.2.1 Scale

We have seen in §3 that each article-form fact-check introduces more questions than it answers, leading to exponential growth in the number of facts of interest, even having begun with a single fact of interest. Relative to the growth in the number of facts of interest, the organization can only grow at $O(1)$, because no new employees are automatically hired and trained simply by the realization of a need to check more facts. And the decision-making bandwidth of an organization is limited by senior management, which can only ever be of size $O(1)$. Thus, the research bandwidth and decision-making bandwidth of a fact-checking organization can never scale with the number of facts of interest.

6.2.2 Narrow Bias and Lack of Inclusion

Historically, we have observed that individual fact-checking organizations are accused of having an overly narrow bias. That is, the opinions of the organization do not reflect certain groups’ views. In other words, the narrowness of the bias makes users feel that their opinions are not included. We see here that the perception of a narrow bias is a necessary consequence of the use of an employee-based fact-checking organization. This is because the organization can take $O(1)$ opinions out of $2^N$ possible configurations of views. Obviously, $O(\frac{1}{2^N})$ is an exponentially small fraction. Thus, the fraction of views expressed is a narrow fraction, and necessarily so.
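
To make the order of magnitude concrete (the specific numbers here are illustrative assumptions, not measurements): with, say, $N = 50$ contested propositions there are $2^{50}$ possible configurations of views, so an organization voicing even $c = 100$ editorial positions covers only

$\frac{c}{2^{N}} = \frac{100}{2^{50}} \approx 8.9 \times 10^{-14}$

of the possible configurations.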

7 A Platform-Based Formulation of Fact-Checking

The employee-based model of fact-checking falls short, as we saw, on the problems of scale, bias, and inclusion. In contrast, a platform-based solution would have none of these shortcomings, as it easily incorporates many people, of diverse biases, and allows literally all users to be included. The only question, then, is how we can formulate fact-checking to work with platforms.

7.1 Arguments For-and-Against

The answer is that we will operationalize the task of fact-checking as the task of giving the best arguments for-and-against a given proposition $p$. This is, in other words, a primarily relational fact-checking formulation.

7.2 Messages and their Metadata

7.2.1 Broadcast Messages

The user-facing interface will allow each user to post a broadcast message, in which that user communicates a public message to potentially all other users. Theoretically, we think of messages as falling into two groups: propositional nodes and proof nodes. Propositional nodes introduce individual propositions (assumptions), and proof nodes derive new propositions from previously proven ones, by use of argumentation.

7.2.2 Comments and Agreement Polarity

A comment $c^m$ for message $m$ is also a message, but with certain metadata. The notation $c^m$ is used to write that message $c$ is a reference to message $m$. The notation $c^m_a$ is used to write that message $c$ has agreement polarity $a$ with respect to message $m$. There are three possible agreement polarities: 1) agree, 2) disagree, and 3) no opinion. These are represented internally in our software by the values 1, 0 and null, respectively.
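
A minimal sketch of this metadata (the field names are our own illustration, not the production schema):

# Minimal sketch of a comment c^m_a: a message that references a target
# message m and carries an agreement polarity a.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    comment_id: str
    text: str
    references: str            # id of the target message m
    agreement: Optional[bool]  # True (1) = agree, False (0) = disagree, None (null) = no opinion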

7.2.3 Hotness Score

Each user $u$ can indicate a hotness score for a given message $m$, denoted $\sigma(u, m)$. Hotness is a synonym for high energy. To say that a post is hot is to say, independently of whether one agrees with it, that it is an important post that should rank highly in people’s searches. This score is constrained to be within a finite range, so let us assume for simplicity that $0 \leq \sigma(u, m) \leq 1$ for all $u$, $m$. The users input these scores through the user interface.

7.2.4 Two-Dimensional Ratings

Twitter, Reddit, YouTube and Instagram allow the user to rate a post in effectively one dimension: a post is either liked or disliked. We have now introduced two different dimensions that, we claim, are not being distinguished on these existing platforms. In the first dimension, the user indicates whether or not they agree with a message. In the second dimension, the user indicates whether or not it is hot. This corresponds to the direction and magnitude of a velocity vector. To our knowledge, this is the first work to propose using two different information dimensions in user ratings. (Facebook allows the user to select from a variety of emoticons, but we believe these emoticons do not produce an actionable multi-dimensional semantic meaning. If they do, they do not capture our agreement-hotness dichotomy.)
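
A minimal sketch of the resulting two-dimensional rating record (names are illustrative):

# Minimal sketch of a two-dimensional rating: agreement (direction) and
# hotness (magnitude) are kept as separate signals, rather than a single
# like/dislike axis.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rating:
    user_id: str
    message_id: str
    agreement: Optional[bool]  # agree / disagree / no opinion
    hotness: float             # sigma(u, m), constrained to [0, 1]

    def __post_init__(self) -> None:
        if not 0.0 <= self.hotness <= 1.0:
            raise ValueError("hotness must lie in [0, 1]")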

7.3 Relational Fact-Checking Query

The crucial new query, which has not been possible with one-dimensional quality ratings, is then: list the best arguments for (or against) the proposition $p$.

Algorithm: To run this query, we first select all comments $c^p_a$ that refer to $p$ with agreement polarity $a$. That is, we filter these comments according to their agreement polarity. Finally, we sort the remaining comments by hotness. In this way, we obtain the best arguments, either for or against, the target proposition $p$.
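
A minimal sketch of this query, assuming comments carry the agreement and hotness metadata introduced above; the aggregation of per-user hotness scores into a single score per comment is a simplifying assumption here:

# Minimal sketch of the relational fact-checking query: select the comments
# on proposition p with the requested agreement polarity, sort by hotness.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Argument:
    text: str
    references: str            # id of the target proposition p
    agreement: Optional[bool]  # True = for, False = against, None = no opinion
    hotness: float             # aggregated hotness score in [0, 1]

def best_arguments(comments: List[Argument], target_id: str,
                   polarity: bool, limit: int = 10) -> List[Argument]:
    """Return the hottest comments on target_id with the given polarity."""
    matching = [c for c in comments
                if c.references == target_id and c.agreement == polarity]
    return sorted(matching, key=lambda c: c.hotness, reverse=True)[:limit]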

This formulation allows us to present a version of relational fact-checking, which is the task of, for a proposition $p$, bringing up relevant other information, and indicating its relation to the proposition $p$.

7.4 Theorem-Proving Completeness

In this section we will show how to construct a logical calculus within our network formulation, which is equivalent in power to the theorem-proving system of the propositional calculus. That is, whatever can be proven in the propositional calculus can be proven in our platform calculus. In fact, we can show that each node has $O(1)$ size relative to the overall length of the proof. This means that ours is not a trivial formulation in which, e.g., an entire proof is simply written in a single message. And, by construction, whatever cannot be proven in the propositional calculus cannot be proven in our calculus either. The propositional calculus is consistent and complete, meaning that a set of hypotheses $H$ derives a proposition $\alpha$ if and only if the inference is really warranted. Thus, our new calculus is also consistent and complete for the propositional calculus.

7.4.1 Proof

We base our platform calculus on the simple proof system from (Andrews, 1986). We choose this formulation of propositional logic for simplicity, because it relies on a single inference rule, called modus ponens. Modus ponens is the rule that, based on assumptions $\alpha$ and $\alpha \rightarrow \beta$, allows us to conclude $\beta$.

Suppose we have a set of assumptions $H$, and we want to use this to prove some proposition. Andrews’ system allows us to create a proof in which each line is either: 1) an assumption, $h \in H$, from the hypotheses of interest, 2) an axiom from a special set of logical truisms, $\gamma \in A$, or 3) a statement derived from two earlier statements by an application of modus ponens. This is analogous to creating a graph fragment that grows by one node on each iteration.

We can create a proof system in our platform as follows. We divide messages into two types: propositional nodes and proof nodes. First, we identify each hypothesis $h$ in $H$ with a propositional node. Then, to build proofs on top of these hypothesis nodes, we introduce the proof nodes. In logical terms, an application of modus ponens must have two inputs (i.e., some $\alpha$ and $\alpha \rightarrow \beta$). In terms of our node graph, a proof node can refer to two earlier nodes (hypothesis nodes or other proof nodes), to one earlier node and one logical truism, or else to two logical truisms. The truism assumptions are written right on the node that uses them. There are at most two truisms per node; thus, there are $O(1)$ propositions that need to be written on a proof node, relative to the size of the proof.

Using this construction, we can translate any proof in Andrews’ consistent-and-complete system, into our construction. So, anything that can be proven in Andrews’ system can also be proven in our system. QED.
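
To make the construction concrete, here is a minimal sketch of the two node types; the encoding is our own illustration, not the DimensionRank message format, and it only enforces the constant bound on inputs per proof node.

# Minimal sketch of the platform calculus: propositional nodes introduce
# assumptions, and proof nodes derive a conclusion by modus ponens from two
# inputs, carrying at most two logical truisms written on the node itself.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class PropositionNode:
    statement: str                                        # an assumption h in H

@dataclass
class ProofNode:
    conclusion: str                                       # the statement derived by modus ponens
    premises: List[Union[PropositionNode, "ProofNode"]]   # earlier nodes used as inputs
    truisms: List[str] = field(default_factory=list)      # axioms written on the node itself

    def __post_init__(self) -> None:
        # Each application of modus ponens has exactly two inputs, and a node
        # never carries more than two truisms, so node size is O(1).
        if len(self.premises) + len(self.truisms) != 2:
            raise ValueError("modus ponens takes exactly two inputs")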

7.5 Personalization

The ordinary DimensionRank algorithm describes how to give a personalized search result based on a non-personalized result (Coppola, 2020). That is, to personalize a search result for user $u$, we simply take the non-personalized search result, $R$, and tailor this set to $u$ using their unique user embeddings, to obtain $R^u$, personalized to $u$. We can of course apply the same principle here, to give a personalized ranking of arguments for-and-against. In other words, one user might be interested in certain arguments more than another, and each user would be able to get a personalized experience using this algorithm.
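
A minimal sketch of the re-ranking step; the embeddings and the dot-product scorer below are placeholder assumptions standing in for the DimensionRank machinery of (Coppola, 2020):

# Minimal sketch of personalization: take the non-personalized result R and
# re-rank it for user u with a user-specific relevance score. The dot-product
# scorer is a placeholder, not the DimensionRank model itself.
from typing import Dict, List

def personalize(results: List[str],
                item_embeddings: Dict[str, List[float]],
                user_embedding: List[float]) -> List[str]:
    """Return R^u: the items of R re-ordered by affinity to user u."""
    def score(item: str) -> float:
        vector = item_embeddings.get(item, [0.0] * len(user_embedding))
        return sum(u * v for u, v in zip(user_embedding, vector))
    return sorted(results, key=score, reverse=True)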

7.6 Authoritative Voices

Bill Gates has recently suggested that one of the most important questions is how to balance authoritative voices with the activities of the citizen-users of the platform (Gates, 2020).

We can offer a solution that balances the public’s desire to express themselves with the additional desire to maintain an ability to allow “authoritative voices,” wherever they may come from, to weigh in. Suppose we have authoritative voices whose contributions we want to promote for a given fact. We can simply promote their comments on any target message $m$ so that they occur as a prefix to the arguments from the rest of the platform. To maintain a balance towards free speech and civic involvement, the rest of the users still have the ability to offer alternative arguments, both for and against, any target message. Finally, both platform users and designated authoritative voices can each recursively fact-check one another. This is something that was not possible before our platformization of the fact-check.
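
A minimal sketch of the promotion step; which accounts count as authoritative is a configuration decision, assumed as given here:

# Minimal sketch: comments from designated authoritative accounts are placed
# as a prefix ahead of the hotness-ranked arguments from the rest of the platform.
from typing import Dict, List, Set

def with_authoritative_prefix(ranked_comments: List[Dict[str, str]],
                              authoritative_users: Set[str]) -> List[Dict[str, str]]:
    promoted = [c for c in ranked_comments if c["author"] in authoritative_users]
    rest = [c for c in ranked_comments if c["author"] not in authoritative_users]
    return promoted + rest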

7.7 Implementation

We have implemented this algorithm in our open-source DimensionRank search and social media software package, available at thinkdifferentagain.art. We have our own network running this algorithm at deeprevelations.com.

8 Conclusion

We have presented what we believe is the first workable proposal for fact-checking at scale, through the platformization of fact-checking. We gave a detailed analysis of the shortcomings of an employee-based fact-checking organization, which we called the “PolitiFact model.” We have released this software and run our own deployment, as described in §7.7.

References