Staging version of the Tournesol Wiki; the live version is at https://wiki.tournesol.app
Welcome to the Tournesol Wiki!
Tournesol aims to identify top videos of public utility by eliciting contributors' judgments on content quality. To achieve this, Tournesol is committed to transparency, pedagogy and kindness. In this spirit, Tournesol maintains this wiki, which is driven by the following five main goals:
- Present the motivations of Tournesol.
- Clarify the scientific and societal context of the project.
- Explain the technical features of the Tournesol platform.
- Host proposals and foster debates on the future of Tournesol.
- Help contributors determine how to best contribute to Tournesol.
What is Tournesol?
Tournesol aims to identify top videos of public utility by eliciting contributors' judgments on content quality. We hope to contribute to making today's and tomorrow's large-scale algorithms robustly beneficial for all of humanity.
Our main solution is an open source platform, available at Tournesol.app. The platform allows contributors to set up their account and to provide judgments by comparing content items on different quality criteria, such as "reliable and not misleading", "important and actionable" and "engaging and thought-provoking". We also develop and maintain a mobile application and a browser extension to facilitate the contributors' workflow.
To protect Tournesol against a 51% attack, contributors are asked to validate an email address from a trusted email domain. Certified contributors' ratings are then used by our strategyproof robust learning algorithms to infer certified Tournesol scores, which are used by our recommendation engine. Non-certified contributors still affect the non-certified Tournesol scores. Contributors are also asked to provide personal information on their contributor profile, such as degrees and expertise, which we hope to leverage in future versions of our algorithms.
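The certification gate described above can be sketched as follows. This is a minimal illustration, not Tournesol's actual code: the domain list, function names and rating format are hypothetical.

```python
# Hypothetical sketch of the certification gate: an account counts as
# "certified" only if its validated email belongs to a trusted domain.
# TRUSTED_DOMAINS holds example entries, not the real list.
TRUSTED_DOMAINS = {"epfl.ch", "ens.fr"}

def is_certified(validated_email: str) -> bool:
    """Return True if the validated email's domain is trusted."""
    domain = validated_email.rsplit("@", 1)[-1].lower()
    return domain in TRUSTED_DOMAINS

def split_ratings(ratings):
    """Separate ratings into certified and non-certified pools,
    mirroring the two kinds of Tournesol scores described above."""
    certified = [r for r in ratings if is_certified(r["email"])]
    non_certified = [r for r in ratings if not is_certified(r["email"])]
    return certified, non_certified
```

Both pools are kept: certified ratings feed the certified scores used by the recommendation engine, while non-certified ratings still shape the non-certified scores.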
Contributors can provide judgments publicly or privately. All public contributions are recorded in our public database, which is freely downloadable. We highly encourage data scientists and academic researchers to analyze this database to understand our contributors' judgments, determine what they consider preferable to recommend at scale, and audit or improve today's recommendation algorithms.
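As a taste of the kind of analysis the public database enables, here is a minimal sketch that averages comparison scores per quality criterion. The file layout and column names below are assumptions for illustration, not the actual export schema.

```python
import csv
import io
from collections import defaultdict

# Hypothetical excerpt of a public comparisons export; the column
# names ("criteria", "score", ...) are illustrative assumptions.
SAMPLE = """public_username,video_a,video_b,criteria,score
alice,vidA,vidB,reliability,4
bob,vidA,vidB,reliability,-2
alice,vidB,vidC,importance,7
"""

def mean_score_per_criterion(csv_text):
    """Average comparison score per quality criterion."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["criteria"]] += float(row["score"])
        counts[row["criteria"]] += 1
    return {c: totals[c] / counts[c] for c in totals}
```

A real analysis would load the downloaded export instead of the inline string, but the aggregation pattern is the same.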
Tournesol is also highly engaged in raising awareness and increasing the understanding of the risks posed by today's and tomorrow's unaligned algorithms. We document and maintain the Tournesol wiki, which provides pedagogical explanations and lists numerous resources. We also collaborate with science communicators to produce quality information about the challenges of making algorithms robustly beneficial, and we engage with academic research on AI ethics and algorithmic safety.
Tournesol is a project of the Tournesol Association.
More concise, as well as more complete, descriptions of Tournesol are available here. Tournesol🌻 is also the topic of a series of videos (Science4All-21, Science4All-21FR). Talks presenting Tournesol are available as well (ProtocolLabs-21, AgoraENSAE-12-20FR), along with two published academic papers that stress the need for Tournesol (Hoang-20, HoangFE-21).
The scientific and societal context
Today's algorithms often rely on machine learning, whose recent developments yield impressive advances in AI. But their large-scale deployment on social media to process massive amounts of information poses serious AI risks and challenges for AI safety and AI ethics, mostly because of undesirable side effects of attention maximization by customized recommendation algorithms.
Leading concerns include disinformation, misinformation, radicalization, manipulation, cyberbullying, hate, addiction, attention span, loneliness, depression, suicides, biases, privacy, transparency, deplatforming and increased existential risks. On the other hand, aligned recommendation algorithms could contribute greatly to science communication, science literacy, science curiosity, probabilistic thinking, counterfactual reasoning, public health, mental well-being, climate change, animal suffering, social justice, philanthropy and public conversation.
Relevant science to make robustly beneficial algorithms includes studies on social choice, volition, Bayesianism, complexity theory, language models, reinforcement learning, reward hacking, Goodhart's law, adversarial attacks, robust statistics, overfitting, algorithmic bias, interpretability, instrumental convergence, distributional shift, decision theory, preference learning and alignment, but also cognitive bias, reactance, pedagogy, incentives, moderation, AI governance, crowdsourcing, social pressure, regulation and superintelligence.
The Tournesol platform
The core motivation of Tournesol🌻, especially in the short to medium term, is to create a large-scale, reliable and secure database of human preferences. This is critical for the safety of algorithms since, through machine learning, it is ultimately data that shapes algorithms' behavior.
This wiki aims to describe the code architecture of Tournesol, and in particular Tournesol's back-end API, to facilitate future external contributions to the code. Our code is open source and available on GitHub.
Tournesol revolves around the human judgment of video quality along different quality criteria. These criteria are divided into two categories. The default quality criteria are reliable and not misleading, important and actionable, and engaging and thought-provoking. The optional quality criteria are encourages better habits, clear and pedagogical, layman-friendly, diversity and inclusion, and resilience to backfiring risks. The precise wording and definitions of these criteria are still open to debate. Please consider sharing your views on the quality criteria page.
Like any collaborative platform, Tournesol is vulnerable to 51% attacks by fake accounts. To guard against them, Tournesol's contributors are filtered by their ability to validate email addresses from trusted email domains. Tournesol then tries to facilitate the contributors' experience by relying on a rate-later list, and on active learning for rated video selection on the rating page. Tournesol🌻 also enables contributors to select their score privacy settings.
Tournesol thereby hopes to collect a large database. The main Tournesol database records the contributors' successive inputs to the platform. Other tables store raw data about contributor information, comments, reports, video metadata and user settings. Tournesol also maintains derived tables of contributors' updated ratings, contributor scores and Tournesol scores.
Tournesol's model for computing scores relies on the Bradley-Terry model, Byzantine learning, strategyproof learning, score uncertainty and coordinate-wise binary convex optimization. It also gives great importance to the interpretability and corrigibility of our data, algorithms and results, through the representative disagreement page and the inconsistency page. Tournesol also proposes rating monitoring to help the study of volition.
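To make the Bradley-Terry component concrete, here is a minimal sketch that fits scores to pairwise comparison outcomes by gradient ascent on the log-likelihood. This illustrates the model only; Tournesol's actual solver additionally handles robustness, score uncertainty and strategyproofness, and its interface is different.

```python
import math

def fit_bradley_terry(wins, items, lr=0.1, steps=2000):
    """Fit Bradley-Terry scores from a list of (winner, loser) pairs.
    Under the model, P(i beats j) = sigmoid(score[i] - score[j])."""
    score = {i: 0.0 for i in items}
    for _ in range(steps):
        grad = {i: 0.0 for i in items}
        for w, l in wins:
            # Gradient of log sigmoid(s_w - s_l): 1 - P(w beats l).
            p = 1.0 / (1.0 + math.exp(score[w] - score[l]))
            grad[w] += p
            grad[l] -= p
        for i in items:
            # Small L2 penalty pins the scale (scores are otherwise
            # only defined up to an additive constant).
            score[i] += lr * (grad[i] - 0.01 * score[i])
    return score
```

For instance, if A consistently beats B and B consistently beats C, the fitted scores satisfy score[A] > score[B] > score[C].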
The path forward
The long-term goal of Tournesol🌻 is to contribute to solving AI ethics by providing a large-scale, reliable and secure database for algorithmic alignment. By making the public part of this database accessible to everyone, Tournesol hopes to accelerate AI ethics research. Historically, such databases have indeed accelerated research, especially in machine learning.
Tournesol also hopes to provide better metrics to audit recommendation algorithms, and to provide solutions for content moderation by social media platforms. In particular, Tournesol🌻 hopes to convince YouTube (but also Google, Facebook, Twitter, Apple, Amazon, Microsoft...) to use Tournesol scores to make more robustly beneficial customized recommendations at scale. Typically, such scores may be used for one in every five recommendations, or they could be systematically added to the scores used by current recommendation algorithms.
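The two integration strategies just mentioned can be sketched as follows. All names and weights are hypothetical illustrations, not an actual platform API.

```python
def interleave(platform_recs, tournesol_recs, every=5):
    """Replace every `every`-th slot in the platform's feed with a
    Tournesol-recommended item (the "one in five" strategy)."""
    out, t = list(platform_recs), iter(tournesol_recs)
    for k in range(every - 1, len(out), every):
        out[k] = next(t, out[k])  # keep the original item if we run out
    return out

def blended_score(engagement_score, tournesol_score, weight=0.3):
    """Systematically mix the Tournesol score into the platform's own
    ranking score (the "added to the scores" strategy)."""
    return (1 - weight) * engagement_score + weight * tournesol_score
```

The first strategy leaves the existing recommender untouched; the second changes the ranking of every item, which is a deeper but potentially more impactful integration.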
Though the focus is on YouTube videos so far, Tournesol hopes to eventually also collect human judgments on other media contents, such as books, news articles, podcasts and scientific papers.
Important research must address the challenge of estimating appropriate voting rights, both to valorize expertise and to correct for sampling bias and distributional shift. Many future developments raise challenging ethical dilemmas, which we plan to address by building a meta-Tournesol. In the long term, decentralizing Tournesol is also under consideration, to avoid potential future coercion of Tournesol's direction, though this raises numerous important research challenges.
Tournesol is an ongoing project which currently suffers from a lack of human resources and funding. It is also greatly limited by the team's own understanding of the ethics of content recommendation. Criticisms and suggestions for improvement are welcome. The most common criticisms are highlighted and discussed in this wiki page.
How to contribute to Tournesol
Tournesol🌻 is a large multidisciplinary project, which requires a wide variety of expertise and contributions. There are many different ways to help.
The most straightforward way to contribute is to provide ratings on Tournesol.app, especially if you can certify an email address from a trusted email domain and if you are a regular YouTube consumer. Building this database is the core mission of Tournesol. It is a necessary and sufficient condition to then impact the discussion on AI ethics, raise funds and build more robustly beneficial recommendation algorithms. Please consider contributing this way.
Code development and platform design
Tournesol also requires a large amount of code maintenance and development, especially right now, as there are still many bugs on the platform, many functionalities that have yet to be implemented and many interface designs to be improved. We also need a lot of help in designing the platform to increase contributor-friendliness and contributor retention. Please join the Tournesol Discord for informal brainstorming. To raise issues in our GitHub (platform, mobile, browser extension, meta), please consider reading this page to determine which location best fits your issue.
To develop our platform, Tournesol would like to hire its main developers. This requires funding.
We are currently setting up the donation system. Information on how to donate will be given soon.
Promotion and pedagogy
Tournesol will need a lot of promotion to obtain a large contributor base and to motivate them to regularly provide Tournesol ratings. We also hope to make our platform as transparent as possible by pedagogically explaining its motivations and its components. The Tournesol Wiki you are reading plays a key role in this regard.
Here is a list of appearances of Tournesol in the media.
Note that the platform is still unstable and in beta. Large-scale promotion should only start once the platform is significantly more stable. In the meantime, pedagogical explanations of the importance of collaborative AI ethics are more than welcome.
Community building is important to coordinate the Tournesol project, to keep all contributors motivated and to share concerns that have not yet been addressed.
Please consider joining the Tournesol Discord to get to know our community and to keep up to date with our latest developments and issues.