List of common criticisms
This page lists common criticisms of Tournesol, and provides responses from the Tournesol team.
By accepting trusted email domains only, Tournesol is too elitist.
The goal of our policy of accepting only trusted email domains is to protect our database from Sybil attacks by fake accounts, which are unfortunately extremely common on the Internet SmarterEveryDay-19. In 2019 alone, Facebook reported the removal of 6 billion fake accounts CNN-19 Statista-21, which far exceeds the number of actual human Facebook users. Without such filtering, we are greatly concerned that Tournesol could be hijacked by a 51% attack from a malicious entity engaged in disinformation campaigns, especially since we do not have Facebook's means to deploy defense mechanisms against fake accounts.
Naturally, we are looking for efficient and robust alternative solutions to certify accounts without relying on trusted email domains. We greatly welcome any ideas to achieve this.
Is there a point in contributing without a certified account?
We are currently working on presenting, for each rated video, both a certified Tournesol score and an uncertified Tournesol score, with uncertified accounts contributing to the latter. Moreover, we plan to give more visibility to uncertified accounts by displaying on the home page the total number of Tournesol accounts and the total number of their contributions. Finally, public contributions from uncertified accounts will be available in the public Tournesol database, which we hope will be analyzed by all sorts of data scientists and academic researchers.
If contributors provide certifying information on their user profiles, we hope that future algorithms developed by researchers or by the Tournesol team will successfully leverage their contributions.
Tournesol will be biased towards academics
We acknowledge that the certification will create an undesirable bias within the Tournesol database. In fact, this will likely not even be the main source of bias. Since our promotion campaign will mostly rely on science communicators, and since we expect contributors to be regular video consumers, we expect our contributors to be a very biased sample of the global population. In particular, we unfortunately expect more male contributors than female contributors. Sadly, this seems to be an unavoidable bias for any participatory project.
Besides promoting Tournesol beyond these spheres, which is critical, we also hope to combat such biases through careful algorithmic design. Intuitively, if women are underrepresented in the Tournesol database, then we would give each contributing woman a larger voting right than each contributing man, as she represents a larger fraction of the non-participating population. We are currently researching more principled ways to solve this bias issue. Of course, the results of this research will be shared as soon as possible.
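The intuition above can be sketched as a simple reweighting rule: each contributor's voting right is scaled so that their group's total weight matches that group's share of the population. This is only an illustrative sketch; the function name, the group labels, and the exact rule are our assumptions here, not Tournesol's implemented algorithm.

```python
from collections import Counter

def representation_weights(contributor_groups, population_shares):
    """Assign each contributor a voting weight inversely proportional to
    their group's overrepresentation among contributors, so that each
    group's total weight matches its share of the population.
    (Illustrative sketch, not Tournesol's actual algorithm.)"""
    counts = Counter(contributor_groups)
    n = len(contributor_groups)
    group_weight = {
        group: population_shares[group] / (count / n)
        for group, count in counts.items()
    }
    return [group_weight[g] for g in contributor_groups]

# Hypothetical example: 8 male and 2 female contributors,
# with a 50/50 split in the reference population.
w = representation_weights(["m"] * 8 + ["f"] * 2, {"m": 0.5, "f": 0.5})
# Each woman's vote (2.5) counts four times as much as each man's (0.625),
# and the total voting weight still sums to the number of contributors.
```

Under this rule, underrepresented contributors gain weight exactly in proportion to how much of the silent population they stand for, which matches the intuition described above.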
Shouldn't experts be given larger voting rights when judging videos within their expertise?
Currently, Tournesol does not give larger voting rights to experts of a field. This is definitely a problem we aim to eventually tackle, but we insist on its difficulty. First, we need an automated manner of assessing the expertise of a Tournesol account. Then, we need to identify the extent to which expertise should be accounted for. Indeed, for some quality criteria such as "pedagogical and clear", "layman-friendly" or "entertaining and relaxing", experts should arguably be given less voting right rather than more. We hope to eventually leverage a meta-Tournesol to collaboratively determine how expertise should be valued. Finally, we are researching principled manners of converting expertise into voting rights, taking inspiration from Condorcet's jury problem NitzinParoush-80.
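To make the Condorcet-jury inspiration concrete: for independent binary judges with known individual competences p_i, the classical Nitzan-Paroush result gives optimal voting weights proportional to the log-odds of each judge's competence, w_i = log(p_i / (1 - p_i)). The sketch below illustrates this known result; the competence values are hypothetical, and nothing here is Tournesol's deployed mechanism.

```python
import math

def log_odds_weights(accuracies):
    """Optimal voting weights for independent binary judges with known
    competence p_i (Nitzan-Paroush): w_i = log(p_i / (1 - p_i)).
    A judge at chance level (p = 0.5) gets zero weight."""
    return [math.log(p / (1 - p)) for p in accuracies]

# Hypothetical competences: one expert (0.9) and two laypeople (0.6 each).
weights = log_odds_weights([0.9, 0.6, 0.6])
# The expert's weight (~2.20) exceeds the two laypeople combined (~0.81),
# so under weighted majority the expert can outvote both of them.
```

Note how quickly this departs from "one person, one vote": even modest competence gaps translate into large weight gaps, which is one reason we consider the conversion of expertise into voting rights a delicate design question.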
Simulating democratic decisions with AI is dystopian
We acknowledge that the ubiquity of algorithms is overwhelming. They play an increasingly important role in numerous domains of society, including democracy. Political parties now leverage information technologies to better understand the preferences of their electorate and to target their audience through online campaigns BBC-19, and the misuse of these technologies is deeply alarming Polonski-17. Unfortunately, given the enormous social, economic and political incentives to use them, algorithms are unlikely to disappear.
But we argue that not all algorithms are equally preferable. In particular, we believe that algorithms that spread misinformation and hate are far less desirable than algorithms that promote quality information and healthier habits. Our goal is to stress this distinction between different algorithms, and to collaboratively design algorithms of the latter kind. As the example of Taiwan shows Tang-19, we believe that this can in fact reinforce democracies.
Tournesol is paternalistic
We acknowledge that a lot more thought is needed to best identify how Tournesol should be used to promote quality information at scale. As of today, Tournesol is arguably not paternalistic at all. After all, we mostly collect human judgments, and we then propose recommendations only to those who use our platform or our browser extension.
We do hope, however, that Tournesol will be used to audit recommendation algorithms, to identify those that fail to promote quality information and to celebrate those that succeed. Moreover, we hope that the companies that design recommendation algorithms will one day leverage the Tournesol database to design algorithms that are more robustly beneficial to society. The details of how best to do so still need to be discussed. One possibility, for instance, would be to show a Tournesol-based recommendation once every five videos, which would not be very invasive. Another possibility is to add a small bias in favor of contents that Tournesol contributors have judged more desirable to recommend.
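The "once every five videos" idea can be sketched in a few lines: keep the platform's own feed, but replace every fifth slot with the next Tournesol-recommended video. The function name, the feed representation, and the fallback behavior are all assumptions for illustration; no platform currently exposes such an interface.

```python
def mix_recommendations(platform_feed, tournesol_feed, period=5):
    """Replace every `period`-th slot of a platform's feed with the next
    Tournesol-recommended video. Illustrative sketch only: the names and
    the mixing rule are assumptions, not an existing integration."""
    tournesol = iter(tournesol_feed)
    mixed = []
    for i, video in enumerate(platform_feed, start=1):
        if i % period == 0:
            # Fall back to the platform's video if Tournesol runs out.
            mixed.append(next(tournesol, video))
        else:
            mixed.append(video)
    return mixed
```

With `period=5`, four out of five recommendations remain untouched, which illustrates why we describe this option as not very invasive; the second option (a small score bias) would instead blend the two signals in every slot.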
It is noteworthy that such recommendations are made millions of billions of times Science4All-21b. They are unavoidably paternalistic in some regards. Unfortunately, in the absence of efforts to make them robustly beneficial, and given the scale of disinformation campaigns Science4All-21a, recommendations are likely to be hacked by those who invest the most in search engine optimization, who are more likely to be large companies or powerful governments, and whose goals may be misaligned with the interests of the general public.
Tournesol is technocratic
We acknowledge that Tournesol is leveraging sophisticated technologies to empower the Tournesol contributors. We argue, however, that the use of technologies is necessary given the scale of the content moderation and recommendation problem, which involves 30,000 hours of new videos per hour on YouTube Statista-20. In fact, content moderation has been found to be traumatic for human moderators TheVerge-19.
Importantly, however, Tournesol does not regard technology as an end in itself. We see it as a tool to scale human judgments. Inspired by the WeBuildAI framework LKKKY+19 Science4All-21d, we use algorithms to learn our contributors' preferences and to aggregate different contributors' preferences in an efficient, reliable, transparent, auditable and robust manner.
In fact, while our current algorithms do not do this, we hope to eventually design algorithms that reliably extrapolate contributors' judgments, to estimate how our contributors would have rated videos they have not yet rated. This is a research challenge which, as of today, is far from solved.