Please describe your proposed solution
While Project Catalyst has grown immensely, its reviewing infrastructure has not kept pace with this expansion, leading to bottlenecks in:
- Maintaining trust in review outcomes.
- Ensuring consistent quality and relevance of proposal reviews.
Findings of a Fund 8–10 analysis revealed that there is currently no correlation between the feasibility ratings of proposals and project outcomes: <https://twitter.com/DominikTilman/status/1780904135379849643>
Our proposed solution to the current limitations within the Project Catalyst review system involves a comprehensive redesign of the reviewing framework.
<u>1. Objectives:</u>
- The main objective is to create a review system that recognises and measures the value and potential of proposals based on objective criteria and reliable subjective expert opinions, so that voters in Project Catalyst have credible decision-making support for the selection of proposals.
- Secondary objectives:
  - Improve the scalability of the evaluation system so it can handle a growing number of proposals.
  - Build a flexible, modular framework that allows customisation of inputs and outputs.
<u>2. Key components include:</u>
- **New Peer-Review Process:** We will develop a protocol that combines various elements into a new and improved peer-review process, including the following components:
  - Domain-specific experts: The protocol will incorporate domain-specific expertise, ensuring that each proposal is evaluated by individuals who not only understand the broader context but are also experts in the relevant field. This approach recognises the importance of expertise in evaluating innovative proposals.
  - Two-track review mechanism: The protocol will combine community input with expert assessments. This dual approach balances broad, democratic community participation with the nuanced, in-depth insights of experts, drawing on the unique strengths of both groups for a more comprehensive assessment of proposals.
  - Panel reviews: We will make it possible to integrate panel reviews, a format already used successfully in institutional grant programmes and currently being tested in other grant programmes with promising results.
- **AI Support for Reviewers and Voters:** To keep track of the growing number of active and past projects and the increasing volume of project submissions, we will experiment with various AI technologies to support the review process, including the following:
  - Incorporating graph databases: We will utilize graph databases to analyze connections and track records within the Catalyst ecosystem. This will allow reviewers to visualize project interdependencies and historical performance, offering a clear, data-driven foundation for evaluating the strength and impact of new proposals.
  - Experimentation with LLMs: We will experiment with large language models (LLMs) to assess whether proposals meet basic requirements and to train these models specifically for the Catalyst context. This initiative will explore the potential of AI to assist in preliminary proposal screenings, aiming to increase efficiency and consistency in the review process.
- **Development of dynamic algorithms:** To build a fair and reliable rating system from the components above, we will develop and experiment with different methods and approaches in the following areas:
  - Proposal ratings: The inputs from the AI-supported peer-review process should combine into reliable ratings for proposals.
  - Reputation scores for reviewers: The protocol will track the accuracy and quality of each reviewer's evaluations, allowing the system to give more weight to those who have consistently provided thoughtful, high-quality evaluations.
  - Reward system: An incentive system will reward those who provide consistently valuable insights and evaluations, designed to motivate sustained, high-quality contributions.
- **Framework for Impact Assessments:** An integrated framework for impact assessments will systematically track and measure the real-world efficacy of funded projects by combining objective and subjective evaluations. Using objective data for baseline metrics and subjective insights to contextualize and explain them, this approach provides a more holistic view of a project's effectiveness and addresses the limitations inherent in each method on its own:
  - Objective evaluation: Involves quantifiable metrics such as user-engagement data, financial reports, or statistical analyses. This type of evaluation provides a solid foundation of hard facts for measuring the direct, tangible impacts of a project.
    - Benefits: Provides clear, measurable, and verifiable data that can help in making rational and unbiased decisions.
    - Application: Useful in early stages of assessment to establish baseline impacts.
  - Subjective evaluation: Encompasses qualitative assessments such as expert opinions, interviews, and surveys. This approach can capture nuanced impacts that are not easily quantifiable but are equally important for a comprehensive evaluation.
    - Benefits: Captures the depth of the project's impact, including community perception, satisfaction, and other intangible benefits.
    - Application: Critical for understanding long-term effects and in contexts where impacts are more about changes in conditions or perceptions.
<u>3. System Architecture Overview:</u>
1. Presentation Layer:
- User Interface (UI): Web and mobile interfaces for community members, experts, and administrators.
- Dashboards: Visual tools to display proposal data, review progress, and impact assessments.
2. Application Layer:
- Review Management System: Manages proposal submissions, reviewer assignments, and review tracking.
- AI Processing Module: Provides AI-driven preliminary screenings and large language model (LLM) assessments.
- Graph Database Interface: Visualizes project relationships and historical performance.
- Reputation and Reward System: Tracks reviewer performance and manages incentives.
- Impact Assessment Engine: Integrates objective and subjective evaluation data for comprehensive impact analysis.
3. Data Layer:
- Graph Database: Stores relationships between projects, proposers, reviewers, and outcomes.
- Relational Database: Stores structured data like user profiles, proposal details, and review records.
- Data Lake: Stores unstructured data, including historical proposal texts and reviews.
4. Integration Layer:
- APIs: Facilitate data exchange between system modules and external systems.
- Message Queue: Communication between components.
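As a concrete illustration of the data layer, the sketch below models the kind of traversal the Graph Database Interface would run to surface a proposer's track record. A real deployment would use an actual graph database rather than an in-memory dictionary, and all node ids, relationship labels, and outcome scores here are hypothetical:

```python
from collections import defaultdict

# Minimal in-memory stand-in for the graph layer: nodes are string ids,
# edges carry a relationship label plus optional attributes.
edges = defaultdict(list)

def add_edge(src, rel, dst, **attrs):
    edges[src].append((rel, dst, attrs))

# Hypothetical sample data: one proposer with two completed funding rounds.
add_edge("proposer:alice", "SUBMITTED", "project:wallet-v1")
add_edge("proposer:alice", "SUBMITTED", "project:wallet-v2")
add_edge("project:wallet-v1", "HAS_OUTCOME", "outcome:completed", score=4.5)
add_edge("project:wallet-v2", "HAS_OUTCOME", "outcome:abandoned", score=1.0)

def track_record(proposer):
    """Follow SUBMITTED -> HAS_OUTCOME edges to collect past outcome scores."""
    scores = []
    for rel, project, _ in edges[proposer]:
        if rel != "SUBMITTED":
            continue
        for rel2, outcome, attrs in edges[project]:
            if rel2 == "HAS_OUTCOME":
                scores.append((project, outcome, attrs.get("score")))
    return scores

print(track_record("proposer:alice"))
```

This two-hop traversal (proposer → projects → outcomes) is the sort of query that is awkward in a relational schema but natural in a graph store, which is the rationale for the dedicated graph database in the data layer.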
<u>4. System Flow Overview:</u>
- Proposal Submission: Proposers submit proposals via a portal; we access these submissions via an API (e.g. Catalyst Voices).
- Reviewer Assignment: Proposals are matched with reviewers based on expertise. Reviewers access assigned proposals through a dashboard.
- Review Process: Community members and experts submit evaluations. AI tools assist with preliminary screenings and summarizations.
- Data Analysis: Graph database visualizes project relationships; impact assessment engine evaluates project impacts.
- Reputation and Rewards: Reviewer performance is tracked, scores are updated, and high-quality reviewers are rewarded.
- Reporting and Visualization: Dashboards display proposal summaries, review progress, and impact metrics for decision-making.
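The reviewer-assignment step above could, for instance, rank reviewers by the overlap between a proposal's topic tags and each reviewer's expertise tags. The tag vocabulary and the Jaccard-overlap heuristic below are illustrative assumptions, not the final matching algorithm:

```python
def match_score(proposal_tags, reviewer_tags):
    """Jaccard overlap between a proposal's topics and a reviewer's expertise."""
    p, r = set(proposal_tags), set(reviewer_tags)
    return len(p & r) / len(p | r) if p | r else 0.0

def assign_reviewers(proposal_tags, reviewers, k=2):
    """Pick the k reviewers whose expertise best matches the proposal."""
    ranked = sorted(reviewers.items(),
                    key=lambda kv: match_score(proposal_tags, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical reviewer pool with expertise tags.
reviewers = {
    "r1": {"defi", "smart-contracts"},
    "r2": {"education", "community"},
    "r3": {"defi", "governance"},
}
print(assign_reviewers({"defi", "governance"}, reviewers))  # → ['r3', 'r1']
```

In the full system this ranking would also factor in reviewer reputation and workload, but even a simple overlap score ensures each proposal reaches the domain-specific experts the peer-review protocol calls for.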