Please describe your proposed solution.
This proposal seeks to research and develop several competing assessment models in parallel and rigorously test the quality of output that each of them produces.
Many alternative models have been proposed in community discussions, but their relative merits remain unclear until they have been tested empirically in the field.
Some of the alternative mechanisms that would be researched and developed are discussed in the following Tally blog post written by the lead proposer Simon Sällström and Jan Ole Ernst (PhD in quantum physics, Oxford). These include holographic voting and conviction voting.
Problem
- There are too many proposals for voters to engage with meaningfully in a given Catalyst round
- The current system does not allow proposal assessors to fully exploit their relative strengths, reducing meaningful participation in the system.
- The current system allocates a roughly equal assessment budget to every proposal, even though some proposals warrant a higher degree of scrutiny.
- Current proposal assessment ratings are too noisy to provide meaningful guidance to voters.
- Assessments do not adequately incentivise thorough analysis and deep critical investigation.
Solution
We propose to divide the proposal assessment into two stages. In the first stage, proposal assessors check proposals against a list of very well-defined proposal requirements and indicate the domain expertise required to assess the proposal in more depth. Proposal assessors must also give one point of constructive criticism that provides specific examples and actionable suggestions for positive change at this stage. To proceed to the second stage, a proposal needs to satisfy 80% of the requirements.
The second stage is the qualitative assessment stage. It takes inspiration from the referee review system used for scientific articles. The qualitative assessments have several objectives. First, to provide quality assurance on the first-stage assessments. Second, to provide a concise and easily understandable summary of the proposal. Third, to thoroughly investigate and critically engage with all aspects of the proposal. Fourth, to provide constructive feedback to the proposer. Fifth, to make a recommendation to “fund”, “revise and resubmit”, or “not fund”. A revision of the Proposal Assessor (PA) reward model does not necessarily have to be implemented as part of the two-stage proposal assessment model.
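The stage-1 gate described above can be sketched in code. This is an illustrative sketch only: the requirement names, the checklist structure, and the five-item list are hypothetical placeholders; the proposal specifies only that a proposal must satisfy 80% of a well-defined requirements checklist to advance.

```python
# Hypothetical sketch of the stage-1 gate: a proposal advances to the
# qualitative stage only if it meets at least 80% of a well-defined
# requirements checklist. Requirement names are illustrative, not from
# the actual Catalyst rubric.

REQUIREMENTS = [
    "has_measurable_kpis",
    "analyses_existing_solutions",
    "has_budget_breakdown",
    "has_timeline",
    "states_team_experience",
]

PASS_THRESHOLD = 0.80  # share of requirements that must be satisfied


def passes_stage_one(checklist: dict) -> bool:
    """Return True if the proposal meets at least 80% of the requirements."""
    met = sum(bool(checklist.get(req, False)) for req in REQUIREMENTS)
    return met / len(REQUIREMENTS) >= PASS_THRESHOLD


# Example: 4 of 5 requirements met (80%) -> proceeds to stage two.
example = {r: True for r in REQUIREMENTS}
example["has_timeline"] = False
print(passes_stage_one(example))  # True
```

With five checklist items, the 80% threshold means a proposal may miss at most one item and still reach the qualitative stage.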
How does your proposed solution address the challenge and what benefits will this bring to the Cardano ecosystem?
Summary
It will improve the efficiency of fund allocation and help voters make better-informed decisions about which proposals should win grants.
Problems with the current process
A thorough review of all issues with the current Catalyst process is beyond the scope of this proposal, so we focus on those that the present proposal addresses. The problems are:
- Lack of system enabling efficient division of labour according to proposal assessor strength
- Lack of feedback during a funding round
- Lack of incentives and capacity to conduct a thorough analysis
- Waste of resources through expensive PA assessments of incomplete proposals
First, the current system is not well suited for specialisation and division of labour. Some proposals are subpar in the sense that they lack certain basic elements, such as measurable KPIs or an analysis of existing solutions to the problem to be solved. Checking for the existence of these necessary components is easy, but critically engaging with most of them and providing constructive feedback is difficult. Such critical analysis often requires domain expertise. It will most likely also require the person to be a native (or near-native) English speaker with excellent writing skills. Few people possess all of the above, and those who do typically have very good outside options in the form of well-paid jobs. Their time is valuable, and it is a waste of resources to have them read through incomplete proposals. Given their high skill and outside options, a very high remuneration rate will be needed to incentivise them properly.
Second, lack of feedback. In the current system, getting feedback from proposal assessors is not easy. Generally, proposers comment on other proposals hoping to get comments on their proposal in return. However, accessing the group of proposal assessors is much harder. Proposal assessors are generally busy and although some proposers post in the Proposal Advisor Telegram chat and receive, if they are lucky, some feedback, this is the exception rather than the rule. The system does not have a mechanism to incentivise feedback that the proposers can incorporate into their proposal in the same funding round.
Third, lack of incentives to conduct a thorough analysis. Each proposal in a funding round is allocated a budget sufficient to reward three ‘good’ assessments and two ‘excellent’ ones. An excellent assessment receives 3x the reward of a ‘good’ assessment. In past rounds, around 3% of assessments were rated ‘excellent’. The problem is that many Proposal Assessors (PAs) who make cost-benefit calculations conclude that it is more profitable to aim for ‘good’ assessments. If attempting to write an excellent assessment takes three times as long as writing a ‘good’ one, a PA should in principle be indifferent between the two tasks. However, due to a combination of (a) very high standards for ‘excellent’ assessments and (b) unclear criteria for what constitutes an excellent assessment, many PAs no longer attempt to write excellent assessments. Instead, they focus on writing ‘good’ assessments, where the time invested leads to a more predictable reward. This reduces the overall value that proposal assessors provide to the voting community.
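The cost-benefit logic above can be made concrete with a small expected-value calculation. The reward amount and the assumption that an attempted-but-unsuccessful excellent assessment is still paid as ‘good’ are illustrative assumptions, not figures from the Catalyst documentation; the 3x multiplier and the ~3% historical excellence rate come from the text.

```python
# Illustrative expected-value comparison for a Proposal Assessor deciding
# between writing 'good' assessments and attempting 'excellent' ones.
# Assumptions (not official figures): a 'good' assessment takes 1 hour and
# pays R; an 'excellent' attempt takes 3 hours, pays 3R only if actually
# rated excellent (historically ~3% of assessments), and is otherwise
# paid as a 'good' assessment.

R = 20.0            # illustrative reward for a 'good' assessment (USD)
p_excellent = 0.03  # historical share of assessments rated 'excellent'

# Hourly rate when aiming for 'good': R per 1 hour.
good_rate = R / 1.0

# Expected reward of an 'excellent' attempt, spread over 3 hours.
expected_excellent_reward = p_excellent * 3 * R + (1 - p_excellent) * R
excellent_rate = expected_excellent_reward / 3.0

print(f"good: ${good_rate:.2f}/h, excellent attempt: ${excellent_rate:.2f}/h")
```

Under these assumptions the attempted-excellent strategy pays roughly a third of the ‘good’ strategy per hour, which is consistent with the observed drift of assessors towards ‘good’ assessments.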
Fourth, there’s a lack of capacity. The sheer volume of proposals means that proposal assessors cannot properly invest time into investigating and engaging with existing proposals to increase their quality. Similarly, voters are overwhelmed by the sheer number of proposals and will only be able to express their preferences on a small subset of the proposals submitted. By adding more quality filtering mechanisms we address both of these problems, thereby improving the quality of the guidance provided and voting decisions being made.
Fifth, each proposal is currently allocated the same budget in terms of Proposal Assessor rewards. This is a waste of resources, since many proposals are incomplete by objective standards. At the moment, the budget available for each of the ~1,000 proposals submitted is around $440. Our hypothesis is that a proposal that does not contain sufficient information to be properly assessed can be discarded after only a 15-minute skim. Even if every proposal is skimmed in this manner by seven independent Proposal Assessors remunerated at $30/hour (a western European white-collar hourly rate), this discovery process would cost only about $52.50 per proposal. Assuming that 100-200 proposals (10-20%) would not pass this basic objective checklist, then under the assumptions stated above we would save roughly $38,750-$77,500; funds which could instead be used to incentivise thorough analysis, interviews and investigation of the remaining proposals. This is indeed what we propose here.
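The savings estimate follows directly from the stated assumptions ($440 assessment budget per proposal, seven 15-minute skims at $30/hour, 100-200 proposals screened out) and can be checked with a few lines of arithmetic:

```python
# Back-of-the-envelope check of the stage-1 screening savings, using only
# the assumptions stated in the text.

budget_per_proposal = 440.0        # current PA reward budget per proposal (USD)
skim_cost = 7 * (15 / 60) * 30.0   # seven assessors, 15 minutes each, $30/hour

# Each proposal screened out at stage 1 frees the rest of its budget.
saving_per_rejected = budget_per_proposal - skim_cost

low = 100 * saving_per_rejected    # 10% of ~1,000 proposals rejected
high = 200 * saving_per_rejected   # 20% of ~1,000 proposals rejected

print(skim_cost, low, high)  # 52.5 38750.0 77500.0
```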
How do you intend to measure the success of your project?
The main idea would be to produce a ranking of proposals based on the alternative assessment process, and then conduct qualitative surveys with voters and proposers.
Proposers
- Does the qualitative assessment give a fair and balanced review of your proposal?
- Do the numeric scores by the current community reviewers provide a fair rating of your proposal?
Voters
- Are the assessments helpful for your decision on which proposals to support?
- Did you learn something from the assessment?
We may include more questions; finalising the survey instrument is part of the project’s preparation and research agenda.
Please describe your plans to share the outputs and results of your project?
Public channels for ongoing updates
- Discord updates
- Notion page monthly updates
- Catalyst Town Hall discussions
Closeout
- Pre-analysis plan
- Research report summary
- Video recording and slide deck with findings