Below you can find the draft of the new version of the ICF’s Delegations Policy!
About this second iteration of the ICF Delegations Policy
During our first consultation with the interchain community in the summer of 2022, held in the interest of utilizing our delegations more effectively to promote decentralization and reward the contributions of validator teams, several suggestions were made that did not become part of the first version of the policy.
They were not included in the first iteration due to:
● Complexities around 1) measuring the metrics objectively in a way that could be done rapidly, consistently, and at scale, and 2) forming fair guidelines for certain subjective assessments
● Our desire to first deploy a policy, ensure its smooth and fair operation, and then iterate on it
Following up on this and much other feedback, we are happy to highlight some of the points that heavily influenced this new iteration:
● We created a better-defined point system: each item is now worth a range of points based on the complexity, time, and resources spent on it, rather than a flat 1 point per item
● Additionally, we implemented a CAP system to ensure fairness and a broader distribution of resources and support across the whole active set
● We created new categories for Valuable Upcoming Contributions & Consumer Chains
● We are now opening the Delegation Program to two additional chains: Osmosis and IRISnet
● We removed the distinction between ecosystem and single-chain contributions
Your feedback is extremely important
This version of the policy is to be considered a living draft, and we look forward to collaborating with our interchain community to ensure full alignment in both values and vision. I’m really looking forward to implementing this draft with your ideas and recommendations.
Deadline for feedback
This draft is going to be open for feedback in the Cosmos Hub, IRISnet, and Osmosis forums until Friday, September 1st.
The idea of having a cap on how many points a validator can get depending on their ranking position is very nice; it allows further decentralizing the chain.
I have doubts about having a 1-year delegation period; IMO it makes things less flexible.
Having different points per activity based on how much effort it took also sounds way fairer than just awarding 1 point for each project a team is working on.
A lot of appreciation for not requiring most of a validator’s votes to be non-abstain; there are cases where abstaining is useful.
“Must maintain uptime of more than 95%” - can we have a clarification on which window this uptime is calculated over?
It’s really nice that we now have a better overview of what exactly happens once one of the criteria is not fulfilled.
Additionally, regarding the last cohort: as far as I know, there was no report published on how the points were distributed and which contributions were and weren’t considered. Can we have one for the next cohort (and ideally the current one)?
Also, I’m pretty sure there were quite a lot of interesting things that others made open source, and it would be beneficial for everyone if we had such a list.
And one more thing: I remember that during the first cohort there was a statement that validators who already received a delegation would go through a simpler evaluation process than new applicants. Can you clarify whether this is true, and whether the process for evaluating validators who already receive an ICF delegation will differ from the one for new applicants?
It clearly looks a lot better than the previous version. It may even require a second reading to get the whole picture of the changes, but overall it brings more clarity and more granularity, which can arguably be called a positive addition.
The current single-point-based system could easily be taken advantage of; this new complexity-based system seems a lot better, even though it also reduces transparency. It is a matter of balancing between the two.
On the other hand, I want to applaud the introduction of the cap system. It was a really necessary update, and it seems that this one will ensure a good distribution. The only criticism I would raise about this draft relates to the spread of the cap:
Position in Active Set: 1-10 ~ excluded by the Policy’s rules
Position in Active Set: 11-20 ~ up to 5 pts
Position in Active Set: 21-30 ~ up to 10 pts
Position in Active Set: 31-40 ~ up to 15 pts
Position in Active Set: 41-50 ~ up to 20 pts
Position in Active Set: 51+ ~ no CAP
I would have extended the distribution to at least 50% of the active set (so down to #90) with something like this:
1-10 = excl.
11-20 = 5
21-40 = 10
41-60 = 15
61-80 = 20
81-100 = 25
101+ = no cap
but of course the difference in the final results may be insignificant…
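To make the comparison concrete, here’s a rough Python sketch of the cap lookup (the bucket boundaries come from the two schedules above; everything else is illustrative):

```python
# Cap lookup for the two schedules above. `None` means the validator is
# excluded by the Policy's rules; float("inf") means no cap applies.
DRAFT_CAPS = [(10, None), (20, 5), (30, 10), (40, 15), (50, 20)]
PROPOSED_CAPS = [(10, None), (20, 5), (40, 10), (60, 15), (80, 20), (100, 25)]

def points_cap(rank, schedule):
    """Return the maximum points a validator at `rank` can receive."""
    for upper_bound, cap in schedule:
        if rank <= upper_bound:
            return cap
    return float("inf")  # ranks below the last bucket are uncapped

# e.g. a validator ranked 45 would be capped at 20 pts under the draft,
# but at 15 pts under the proposed extension:
assert points_cap(45, DRAFT_CAPS) == 20
assert points_cap(45, PROPOSED_CAPS) == 15
```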
Interested to see a more detailed breakdown and how this evolves.
Some of us aren’t coders; we come from traditional finance, marketing, and other backgrounds instead. No doubt contributing open-source code and tooling is important. However, we’re glad to see the new point structure, where bringing people into the Cosmos is also seen as valuable.
Curious to see how the delegation policy is or isn’t skewed towards validators running consumer chains. We want to run them, but must focus on staying profitable. We’ve been losing money running infrastructure for Kava, Polygon, and HydraDX for months. But we are playing the long game, and that’s just part of the deal. Not complaining, just being real.
We agree with the comments about skewing delegations away from the largest validators and toward smaller ones. It’s so important for the health of the ecosystem. Please keep that in mind. Smaller validators who don’t take on VC money are important for decentralization. ICF and Stride delegations can have a large impact.
Appreciate all that you’re doing. The ICF plays a powerful role. Glad to see you’re doing it responsibly.
Thanks for sharing @catdotfish
If the evaluation framework is a rubric that allocates points for key activities performed with key resources, it must take into account the cost and service burden of running relayers (particularly for smaller operators).
“Delegations Team, composed of Catdotfish, Zoltan, and the specialists nominated to assist with technical applications. While evaluations will be largely objective, we acknowledge that some criteria require subjective assessment, which will be conducted by the Delegations Team as a whole; such assessments will be final.
Saying so, before the public announcement and as the final step of the evaluation period, the list of selected validators and motivations will be shared with the ICF
Board of Management (Maria Gomez) and the Foundation Council to ensure total fairness and alignment of the evaluation with the values and mission of the ICF itself.” (Page 7/15, ICF Delegations Policy V2 Draft)
How many specialists will be added to the Delegations Team?
Is there a process for recusal in the event a Delegations Team member has worked for a validator team being evaluated?
Assessments from the Delegations Team are “final”, but “final selections” are to be “shared” with the BoM and FC to “ensure alignment” with the values/mission of the ICF. The language is confusing. Can you be clearer about the purpose of sharing? Is it the case that selections are not final until approved by the BoM and FC?
Or does the Delegations Team merely seek a review of its final selections from the BoM and FC?
First of all, thank you for sharing the draft delegation policy for review. It definitely helps to reinforce transparency and incorporate community feedback to make things better. We appreciate the foundation’s willingness to listen to the community.
Now let’s review the different aspects of the policy:
Validator points cap
It’s a welcome addition to the policy to better distribute staked assets across the validator set and further support decentralization. This will not make things perfect from a decentralization perspective, but a cap definitely helps with a better distribution of stake.
#1 Engineering Contributions
Like the idea of point allocation based on effort and complexity of contribution.
Explorers and Dashboards - These are essential to supporting the ecosystem. However, what are we doing to discourage mere forks of ping.pub from being recognized as new explorers? We probably need some guidance from the foundation to avoid such practices. On the dashboard front, probably every validator has some sort of dashboard measuring different statistics. So, as with explorers, we should have reasonable criteria that incentivize new functionality in dashboards rather than 20 dashboards showing redundant information. We need to draw the line somewhere and build standards for what kinds of explorer and dashboard contributions will be incentivized.
Code/Stack contributions - We should encourage devs and incentivize them with delegations. We agree with this completely.
#2 Public Good Contributions
Public infrastructure and relaying are critical to the ecosystem. We wonder why relaying was left out as an item under this category, and we’re really disappointed by that. Relaying requires quite a bit of effort, monitoring, money, and skill on an ongoing basis. We’ll wait for a reply from @catdotfish on whether it’s an oversight or an intentional decision.
Documentation/education materials are another topic where we need to define standards. As in any project, documentation is critical, but eventually you reach a point where good documentation exists and gets reused; any major/minor updates to it can be handled via RFPs or grants rather than a recurring delegation. So on this front we need to provide guidance on what is really meant by documentation or education materials that will be considered for delegation.
#3 Community Contributions
Will let experts in this area weigh in on this.
#4 Valuable Upcoming Contributions
We can see the need for incentivizing upcoming or in-flight contributions. Not much to comment on here until we see which projects are incentivized under this criterion.
#5 Cosmos Hub Consumer Chains Kickstart
This is a good initiative, and we like the points system based on active set ranking. We will need clarification on how many consumer chains you need to run to qualify for points (one, all, or somewhere in between?).
Now, a short list of contributions that we didn’t see make this list:
Testnet support - It is important to encourage more teams to run nodes on testnets for various Cosmos chains. Something to consider.
Relayers - totally missing from the doc. We encourage more objective support of relayers, with some minimum qualification criteria and an uncapped point allocation. Running a relayer on 1 chain vs. 20, or relaying 1,000 txs vs. 10, significantly impacts infrastructure and relayer tx costs.
We saw a comment about secure and stable infrastructure. What does that mean (self-hosted/self-owned, remote signer, HSM-backed keys, single-tenancy bare metal)? We think the foundation should incentivize the hard work done by self-hosted bare-metal operators employing secure practices like TMKMS/YubiHSM. We should put security first, and either make some of these requirements for delegation or incentivize validators going above and beyond to self-host infrastructure or use a remote signer/HSM-backed keys. This should definitely be considered in this policy, imho.
We request the foundation to provide constructive feedback to validators who don’t make the cut. We spend countless hours supporting the Cosmos ecosystem and doing public-good work (RPCs/relayers, etc.), and when we don’t make the cut, it’s really painful. So please, please, please share feedback so we can do better next time.
Similar to the comment by @freak12techno, please share the significant technical contributions that receive a delegation. If we are rewarding good work, we should highlight it in the community and let others make the most of it.
This is all I can think of at this point. Once again, we appreciate the opportunity to provide feedback and look forward to any questions.
Happy to see the next iteration of the delegation policy, and to see it expand to additional chains. Thank you to the team for their hard work on this!
I, like many others, believe relaying is a crucial piece of infrastructure that is subsidized mainly by the relayers themselves. The majority of the teams supporting the hub, including Cosmos Spaces, “pay to relay”, but we all believe it’s something that must be done and that the benefit will be there in the long run. Historical data can be pulled to make sure everyone sees where the support is coming from and how worthy it is of consideration.
It would be nice to have some sort of breakdown, or even just a scoring rubric showing where all the teams scored, so that the community can see what is valued most and where more support may be needed. This would also bring a level of transparency and trust.
Other points to discuss:
Valuable upcoming contributions - How will this not be gamed, and how will you help ensure accountability for follow-through? Especially with a 1-year delegation period, this could be taken advantage of with little intention to deliver. I do believe there is value in a category aimed at things in development, and even in subjective awards for the value they bring.
Past Contributions - Could you better define and show examples of what would and wouldn’t be accepted, both in this cycle and the following one?
Consumer Chain Kickstart - +1 to @ArchitectNodes for asking to clarify how the scoring will work and what the minimum requirements are.
@freak12techno also brought up a good point about the mention that past applicants would have an easier process for reapplying. Will that still be implemented?
Looking forward to seeing this new iteration implemented!
Thank you for sharing the first draft. Some thoughts:
It’s really nice to have running consumer chains count as a contribution. Appreciate it.
Running relayers should be counted as a contribution too, since we haven’t had an incentive scheme for running one. But I think there should be a maximum number of points for relayers, to balance contractor relayers against validator contributions (for which we bear all the costs).
For valuable upcoming contributions: Do we need to submit a full proposal, or just a description of the project? And are points only given to projects deemed very impactful? How would the Delegations Team make sure that a project is still being developed after the delegation cycle?
Relayer software doesn’t support it yet in prod, unfortunately.
I agree with @Golden-Ratio-Staking - Relaying should absolutely be a consideration. The health of IBC/relayers is the health of the entire cosmos ecosystem.
The cap system implemented in the validator scoring is well-designed to enhance the distribution of staking throughout the lower end of the validator set. This could potentially address the issue of voting power concentration among the top validators. However, I would suggest some modifications to the cap as proposed earlier. This is because there is a noticeable distinction between validators positioned around 50 and those at the very bottom. The suggested CAP allocation appears more effective in achieving a fairer distribution of delegation.
Validators ranked 1-10: Excluded
Validators ranked 11-20: 5 CAP
Validators ranked 21-40: 10 CAP
Validators ranked 41-60: 15 CAP
Validators ranked 61-80: 20 CAP
Validators ranked 81-100: 25 CAP
Validators ranked 101+: No cap
Regarding RPC allocation within the public good category, I noticed the category of “Public Infrastructure Providers: Validators who run public RPC nodes.” However, providing more details about how the scores are computed would be beneficial. For instance, a scoring system like 1 point per 5 chains supported with RPCs could better reward validators genuinely dedicated to infrastructure provision rather than just validation.
It’s unclear why relayers are not incentivized. Setting up and maintaining a competitive relayer infrastructure involves considerably more effort and costs than running RPC nodes. Allocating 10 points or considering an uncapped allocation for relayers seems justified, given that this category represents a highly challenging and expensive public good. If concerns regarding evaluation exist, remember that relayers submit transactions with verifiable ownership via a memo field. Additionally, well-crafted dashboards rank and display relayers’ wallets for each chain.
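To sketch how both of these suggestions could be computed, here is a rough Python example (the point formulas and the shape of the transaction records are illustrative assumptions, not taken from the draft policy):

```python
# Hypothetical public-good scoring sketch for the two suggestions above.
def rpc_points(chains_with_public_rpc):
    # Suggested rule: 1 point per 5 chains served with public RPC nodes.
    return chains_with_public_rpc // 5

def relayer_points(relayed_txs, operator_memo, txs_per_point=1000):
    # Relayers tag the transactions they submit with an identifying memo,
    # so relayed volume is verifiable on-chain. Assumed rule here:
    # 1 point per `txs_per_point` relayed txs, with no cap.
    mine = [tx for tx in relayed_txs if tx.get("memo") == operator_memo]
    return len(mine) // txs_per_point

# Example with made-up numbers:
txs = [{"memo": "relayed-by-example-validator"}] * 2500
print(rpc_points(12))                                       # -> 2 points
print(relayer_points(txs, "relayed-by-example-validator"))  # -> 2 points
```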
The dashboard and explorer categories could be merged since many explorers also serve as dashboards. If these categories remain separate, providing clearer specifications for each would enhance understanding of the criteria sought.
Reconsidering the category of translating significant content into non-English languages might be advisable. The current allocation of 5 points seems overly generous for a task that can be relatively straightforward with translation tools. This category could be consolidated with the other educational category to prevent potential exploitation.
Similarly, the hosted events and meetups categories could be consolidated to avoid duplicating points for essentially the same activities. Clarifying the distinction or merging these categories would be better.
I think testnet support is an excellent addition to the things that should be considered, @ArchitectNodes; perhaps it could even be evaluated in a similar manner to consumer-chain kickstarting contributions.
In addition to testnet support, I think support during chain upgrades should also be considered.
Feedback on how validators can improve their chances of more delegation would be greatly appreciated; it would push validators to improve in the direction the ICF sees fit for the ecosystem, which gives Cosmos more long-term alignment.
There have been many mentions of relayers, and we also believe relayers are crucial to the ecosystem, so I will +1 that point as well.
The cap is superb for a fairer distribution of stake.
Overall, I’m happy with the direction this is going and curious to see other ways to empower smaller validators as time goes on.
Huge improvement compared to the previous version in many ways. The cap on points is especially good, and the Cosmos Hub Consumer Chains Kickstart is a good idea too.
Just give some points for relayers as well, and it looks good to go.
First of all, I really want to thank you so much for the time you dedicated to reviewing the new ICF Delegation Policy draft.
Your interpretation, vision, and experience provided critical feedback on the document itself, and @ZoltanAtom and I worked a lot on studying the impact of each of your considerations on the Policy, as well as on the validator set and the overall points balance.
That said, I want to highlight some of the major improvements that followed your recommendations:
● Adjusted validators’ cap points
● Added details about the uptime window necessary to maintain the delegations
● Added a specific section in the submission forms about validators re-submitting items from the first round of the ICF Delegations Program
● Added proper steps and expectations to category #4, Valuable Upcoming Contributions
● Added details to category #5, Consumer Chains Kickstart, such as the uptime window and more
● Added notes about fork limitations in category #1, Engineering Contributions
● Added a mention of relayers in category #1, Engineering Contributions. A small note: we always considered relayers eligible as a fundamental pillar of the interchain; we have now made sure it is clear that this is a major object of application, like all the other items listed.
There are a few points that weren’t included in this version of the Delegation Policy, but, as happened with some of the recommendations from last year, we are going to keep track of all your feedback and take it into account for the upcoming iteration of the document.
A few additional points:
● Zoltan and I are working on a format to ensure that, this time, the items that received points from the Delegations Team are shared with the wider community in a way that can really become an additional source of value for all. We would welcome your feedback on the draft of our plan as soon as it is finalized.
● About clarifying and making an evident distinction between dashboards and explorers: I definitely see your point, and I’m taking note of how to enhance the clarity of the text so that validators find it easier to report. At the same time, we always wanted to keep a balance between providing examples to validators for their applications and going too in-depth with the details, since we indirectly risk making people feel that maybe they’re not eligible for that category after all; so, in a sense, inclusivity is a choice. As these items are going to be evaluated in the same category, per item, and with the exact same point range (1~5), we felt that not making this distinction in this iteration does not prevent validators from being fairly rewarded; consequently, the impact of this point was secondary compared to others.
● The same goes for public-good-related content and efforts
● We will do our best to provide feedback to all applicants, but the timing and format will depend on the actual number of applications we receive and their scope/complexity.
Sharing some thoughts here from the Strangelove Team:
Thank you for this overview of the v2.0 Delegations Policy!
For the Valuable Upcoming Contributions evaluation category:
We’re glad to see that there is a relatively higher point-earning opportunity here. If anything, we think the point potential here should be even higher.
We’d like to see more detailed, clearer language on what counts as a valuable contribution.
For all evaluation categories, we’d like to see clearer definitions and examples of what would be worth, e.g., 5 points versus 1 point.
We’d like to better understand how this ties in with the IBC Roadmap work.