Tournesol

Images: Tournesol logo (miniature, 100 px by 100 px) and Tournesol banner.

This article provides descriptions of Tournesol at different lengths. Readers are more than welcome to reuse them to promote our platform, along with our branding materials.

Tournesol's official colors are #fed736 and #234600.

Description in 3 words

Collaborative content recommendations.

Description in 1 sentence

Tournesol aims to identify top videos of public utility by eliciting contributors' judgments on content quality.

Description in 300 words

Tournesol aims to identify top videos of public utility by eliciting contributors' judgments on content quality. We hope to contribute to making today's and tomorrow's large-scale algorithms robustly beneficial for all of humanity.

Our main solution is an open source platform, available at Tournesol.app. The platform allows contributors to set up an account and to provide judgments by comparing pieces of content on different quality criteria, such as "reliable and not misleading", "important and actionable" and "engaging and thought-provoking". We also develop and maintain a mobile application and a browser extension to streamline the contributors' workflow.

To protect Tournesol against a 51% attack, contributors are asked to validate an email address from a trusted email domain. Certified contributors' ratings are then used by our strategyproof robust learning algorithms to infer certified Tournesol scores, which are used by our recommendation engine. Non-certified contributors' ratings still affect the non-certified Tournesol scores. Contributors are also asked to provide personal information, such as degrees and expertise, which we hope to leverage in future versions of our algorithms.

Contributors can provide judgments publicly or privately. All public contributions are recorded in our public database, which is freely downloadable. We highly encourage data scientists and academic researchers to analyze this database to understand our contributors' judgments, determine what they consider preferable to recommend at scale, and audit or improve today's recommendation algorithms.

Tournesol is also highly engaged in raising awareness and increasing the understanding of the risks posed by today's and tomorrow's unaligned algorithms. We document and maintain the Tournesol wiki, which provides pedagogical explanations and lists numerous resources. We also collaborate with science communicators to produce quality information about the challenges of making algorithms robustly beneficial. Tournesol is equally engaged with academic research on AI ethics and algorithmic safety.

Tournesol is a project of the Tournesol Association.

Description in 1,000 words

Tournesol aims to identify top videos of public utility by eliciting contributors' judgments on content quality. We hope to contribute to making today's and tomorrow's large-scale algorithms robustly beneficial for all of humanity.

Our starting observation is that algorithmic recommendation has become immensely influential. YouTube's 2 billion users spend an average of half an hour per day watching videos, which adds up to more views on YouTube than searches on Google. Importantly, 2 views out of 3 result from algorithmic recommendation. Unfortunately, algorithmic recommendations currently fail to be robustly beneficial. Numerous concerns include cyber-bullying, algorithmic bias, misinformation, addiction, polarization and mental health, as depicted in the documentary The Social Dilemma. This should not be surprising, given the scale of today's online disinformation campaigns. In 2019, Facebook reportedly removed around 6 billion fake accounts. Such campaigns may promote false information. They have also been observed to cause mute news: they drown important, quality information in an ocean of content of secondary importance.

Arguably, given the complexity of today's social challenges, we desperately need better recommendations of quality content on a very wide range of topics, such as governmental elections, pandemic control, vaccination, climate change, public health, mental well-being, social justice and science literacy. As sociologist Zeynep Tufekci puts it, “this is the epistemological crisis of the moment: there’s a lot of expertise around, but fewer tools than ever to distinguish it from everything else”.

Interestingly, this issue has been acknowledged by YouTube’s CEO Susan Wojcicki. For instance, in 2020, she took a stand and asserted that “anything that would go against World Health Organization recommendations would be a violation of our policy”. In addition to removing content, YouTube has added banners to COVID-related videos to encourage users to check health agencies’ websites. However, while this definitely seems to be a step in the right direction, YouTube's efforts so far have been far from sufficient. Even at the height of the crisis, few quality COVID-related videos were recommended on a logged-out home page. More concerningly, in late 2020 and early 2021, Google dismantled its AI ethics team after the team's leaders submitted a paper criticizing the ethics of large language models, which are a core component of nearly all of Google's products. This seriously calls into question the integrity of Google's commitment to ethics.

Moreover, while some criticize the platforms for inaction, other critics argue that private companies should not be deciding what ought and what ought not to be recommended. Any unilateral and opaque decision by such companies will justifiably be met with significant backlash as well. In fact, we should all be concerned by the possibility that any supposedly ethical solution proposed by Google will be tainted by hidden legal, economic or political motives. Overall, even if YouTube wants to do good, it cannot decide what is good. Any ethical solution must be designed in a much more transparent and collaborative manner.

Tournesol's goal is to be this solution. Our main proposal for making large-scale algorithms robustly beneficial is an open source platform, available at Tournesol.app. The platform allows contributors to set up an account and to provide judgments by comparing pieces of content on nine quality criteria. The default criteria are "reliable and not misleading", "important and actionable" and "engaging and thought-provoking". The optional criteria are "encourages better habits", "clear and pedagogical", "layman-friendly", "diversity and inclusion", "resilience to backfiring risks" and "entertaining and relaxing". Contributors may also report how confident they are in their ratings.

To streamline the contributors' workflow, Tournesol also develops and maintains a mobile application and a browser extension. The mobile application, currently only available on Android, offers the key functionalities of the platform, especially search and rating. The browser extension lets contributors access Tournesol recommendations directly on the YouTube home page. It also lets them rate the video they are watching with one click.

To protect Tournesol against a 51% attack, contributors are asked to validate an email address from a trusted email domain. Certified contributors' ratings are then used by our strategyproof robust learning algorithms to infer certified Tournesol scores, which are used by our recommendation engine. Non-certified contributors' ratings still affect the non-certified Tournesol scores. Contributors are also asked to provide personal information, such as degrees and expertise, which we hope to leverage in future versions of our algorithms.

Contributors can provide judgments publicly or privately. All public contributions are recorded in our public database, which is freely downloadable. We highly encourage data scientists and academic researchers to analyze this database to understand our contributors' judgments, determine what they consider preferable to recommend at scale, and audit or improve today's recommendation algorithms.
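Since all public contributions are freely downloadable, a first analysis can be as simple as aggregating comparisons per quality criterion. The sketch below uses pandas on a small synthetic table; the column names, criterion identifiers and score convention are assumptions for illustration, not the actual export schema of our public database.

```python
import pandas as pd

# Hypothetical sample mimicking an export of pairwise comparisons;
# the real file layout and column names may differ.
comparisons = pd.DataFrame(
    {
        "contributor": ["alice", "alice", "bob", "bob", "carol"],
        "video_a": ["v1", "v2", "v1", "v3", "v2"],
        "video_b": ["v2", "v3", "v3", "v2", "v1"],
        "criterion": [
            "reliable_and_not_misleading",
            "important_and_actionable",
            "reliable_and_not_misleading",
            "reliable_and_not_misleading",
            "important_and_actionable",
        ],
        # Assumed signed score: negative favors video_a, positive favors video_b.
        "score": [-5, 3, -2, 4, 1],
    }
)

# How many comparisons were submitted for each criterion, and what is
# their average score?
per_criterion = comparisons.groupby("criterion")["score"].agg(["count", "mean"])
print(per_criterion)
```

With a real export, the synthetic DataFrame would be replaced by a `pd.read_csv(...)` call on the downloaded file; the aggregation step stays the same.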

Tournesol also aims to provide extensive statistics and data visualizations to maximize the transparency and interpretability of our platform and algorithms, while protecting the privacy of private ratings. Beyond the platform itself, Tournesol is also highly engaged in raising awareness and increasing the understanding of the risks posed by today's and tomorrow's unaligned algorithms. We document and maintain the Tournesol wiki, which provides pedagogical explanations and lists numerous resources. We also collaborate with science communicators to produce quality information about the challenges of making algorithms robustly beneficial, and to encourage a larger audience to become Tournesol contributors.

Finally, Tournesol is also highly engaged with academic research on AI ethics and algorithmic safety. Our work has already inspired numerous research directions, such as Byzantine machine learning, resilience to Goodhart's law, personalized collaborative learning, high-dimensional voting, strategyproof collaborative learning, Bayesian voting, Bayesian Byzantine resilience and volition learning, to name a few. Much of this research is currently ongoing; Tournesol will share and pedagogically explain the results once preprints are available.

Tournesol is a project of the non-profit Tournesol Association, which was created in the canton of Vaud, Switzerland.