There are numerous applications in which one receives pieces of information, typically conflicting, from different agents and has to form a point of view from them.
When an agent receives conflicting pieces of information from different sources, a standard way to resolve the conflict is to believe the most trustworthy sources.
Many works study these notions of trust and reputation and how to take them into account in decision and reasoning processes, but there is much less work on how to evaluate this trust. We propose such an evidence-based definition of reliability (truthfulness) from past interactions.
More precisely, we consider a set of agents (sources) that provide us with information (facts) about different questions (objects). Our goal is to evaluate both the reliability of the sources and the reliability of the facts, which then allows us to find the correct answers to the different questions (objects).
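As a toy illustration of this structure, the input can be represented as follows (the object and source names are hypothetical, chosen only for the example):

```python
# Toy instance of the sources/facts/objects structure (hypothetical data).
# Each object (question) maps every source that answered it to the fact
# (answer) that this source reported.
reports = {
    "obj1": {"s1": "a", "s2": "a", "s3": "b"},  # s1 and s2 agree, s3 disagrees
    "obj2": {"s1": "c", "s3": "c"},             # s2 did not answer this question
}
```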
Previous works start from the same structure (sources/facts/objects), but their aim is only to find the correct answers. As far as we know, we propose the first approach that not only identifies the correct answers but also evaluates the reliability (truthfulness) of the sources.
In order to find this true information, we rely on the idea of Condorcet's Jury Theorem, which states that, as long as each individual is more likely to be right than wrong, the majority of the individuals is more likely to choose the correct solution.
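For reference, a standard formulation of the theorem (using notation introduced here, not taken from this paper) is the following: if $n = 2m+1$ voters answer independently and each is correct with probability $p > 1/2$, then the probability that the majority is correct is
\[
  P_n \;=\; \sum_{k=m+1}^{n} \binom{n}{k}\, p^{k} (1-p)^{\,n-k},
\]
which increases with $n$ and tends to $1$ as $n \to \infty$.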
In this work we suppose that initially we have no information about the reliability of the sources, and we define an iterative procedure to establish it. At the beginning, all the sources have the same reliability; we then compare their answers to the different questions and use this Condorcet's Jury Theorem argument to reward the sources that provide pieces of information (facts) confirmed by others, since such facts are more likely to be true. We then iterate the process with the adjusted source reliabilities until convergence, as sketched below.
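To make the scheme concrete, here is a minimal sketch of such a fixed-point computation. It is an illustrative, simplified stand-in rather than the exact update rule of our methods; the data layout, function name, and normalized-voting update are assumptions made for the example:

```python
# Illustrative fixed-point scheme for jointly estimating source and fact
# reliability (a simplified stand-in, not the paper's exact update rule).

def estimate_reliability(reports, n_iter=100, eps=1e-6):
    """reports: {object: {source: fact}}, as in the toy instance above."""
    sources = {s for answers in reports.values() for s in answers}
    rel = {s: 0.5 for s in sources}  # all sources start with the same reliability

    for _ in range(n_iter):
        # 1) Score each candidate fact by the reliability mass of its supporters,
        #    normalized per object so that competing answers sum to 1.
        fact_score = {}
        for obj, answers in reports.items():
            totals = {}
            for s, f in answers.items():
                totals[f] = totals.get(f, 0.0) + rel[s]
            norm = sum(totals.values()) or 1.0
            fact_score[obj] = {f: v / norm for f, v in totals.items()}

        # 2) Reward each source according to how well its reported facts are
        #    confirmed by the other sources (average score of its facts).
        new_rel = {}
        for s in sources:
            scores = [fact_score[obj][f]
                      for obj, answers in reports.items()
                      for src, f in answers.items() if src == s]
            new_rel[s] = sum(scores) / len(scores) if scores else 0.5

        # 3) Iterate until the reliabilities stabilize.
        converged = max(abs(new_rel[s] - rel[s]) for s in sources) < eps
        rel = new_rel
        if converged:
            break

    # Most plausible answer for each object under the final reliabilities.
    truth = {obj: max(fact_score[obj], key=fact_score[obj].get) for obj in reports}
    return rel, truth
```

On the toy instance above, this sketch ends up ranking s1 above s3 and selecting "a" and "c" as the answers, while also returning a reliability score for every source.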
After presenting our methods, we discuss logical properties for characterizing interesting methods that aim to evaluate the reliability of sources and facts. We review properties that have been proposed by Booth and Singleton and discuss why some of them are not appropriate for this setting. We also propose new properties that should be required of all methods, as well as properties that characterize interesting subclasses. We then check which properties are satisfied by our methods.
Besides this formal evaluation of our methods, we also provide an experimental one. The idea is to test whether we achieve the aim of evaluating the reliability of sources and facts in practice. There are few real benchmarks for this task, but we test our methods on two of them. We also test our methods on generated benchmarks, which allows us to vary many more parameters.
The results are positive. We show that, for the task of finding the true facts, we are competitive with respect to existing methods. But, unlike existing methods, we can also provide an evaluation of the reliability of the sources, and the experiments show that our methods perform very well on this task too.