Scaling up with conditional basic income

You're welcome, all. Actually, I support this proposal, and it has been approved and voted for by our community membership.

Thanks again for the fantastic work! There is a lot to talk about :+1:

  • I can’t wrap my head around the soft opt-out. I’ve asked for its definition a couple of times already in other threads, and it doesn’t seem to match this paper.

    From my understanding, it was ONLY about protecting the bottom x validators (by cumulative VP) from slashing and jailing when they don’t run consumer chains, to avoid additional costs.

    If it is in fact about allowing said validators not to run the chain while still getting the same rewards they would have earned, then I am all for it UNTIL Stride and Neutron are running and we have real feedback on the costs borne by bottom validators. In that second version, I do not think it is a good idea to keep it long term, because of the risks described in the essay (Sybil attacks).

  • Breaking ground section:
    The analysis of the problem is IMO right: it is the delegation spread/VP. We simply can’t have such a difference between the top validators and the bottom ones.
    Again, my bias is that everything will eventually balance out by itself, and we should all think about sustainability.
    I think that what is described, and the direction it is heading, is extremely dangerous and harmful. In no way should we even think about such action. What is described reminds me very much of local politics dictating what’s good and what’s not.
    I think the direction we should head towards is incentivising, without taking actions that have a direct impact on validators, since the main issue is the way delegators behave.

Breaking Ground Alternative route:

One idea that I already shared (I am surely not the first) would be to enforce a minimum commission fee on our top validators, higher than the chain’s default parameter.

The purpose of this is to incentivise delegators to move their delegations outside a to-be-defined top x% of validators/VP.

The reasoning behind it:

Since what I believe to be the root cause is delegator behavior, this translates into:

  • Lots of new cosmonauts by default go through the list from top to bottom and from lowest to highest fees.

  • The same goes for huge-capital addresses that only delegate to top validators, perhaps for ‘security’ or ‘laziness’ reasons. If they are only here to farm and don’t care about the health of the ecosystem, this could be a great way to incentivize re-delegation.
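To make the minimum-commission idea concrete, here is a minimal sketch of how a tiered floor could be computed. Every name, threshold and rate below is my own illustrative assumption, not a proposed chain parameter.

```python
# Hypothetical sketch of a tiered minimum-commission rule: validators in a
# to-be-defined "top" slice of voting power get a stricter commission floor
# than the chain-wide default. All numbers here are illustrative only.

DEFAULT_MIN_COMMISSION = 0.05  # assumed chain-wide default floor
TOP_SET_MIN_COMMISSION = 0.10  # assumed stricter floor for the top slice
TOP_SHARE_CUTOFF = 0.20        # "top" = validators covering the first 20% of VP

def min_commission_for(validators):
    """validators: {name: voting_power}. Returns {name: minimum commission}.
    Validators are ranked by voting power; those whose cumulative share
    starts inside the top-VP cutoff get the stricter floor."""
    total_vp = sum(validators.values())
    floors = {}
    cumulative = 0.0
    for name, vp in sorted(validators.items(), key=lambda kv: -kv[1]):
        in_top = cumulative / total_vp < TOP_SHARE_CUTOFF
        floors[name] = TOP_SET_MIN_COMMISSION if in_top else DEFAULT_MIN_COMMISSION
        cumulative += vp
    return floors
```

With `{"a": 50, "b": 30, "c": 20}`, only `a` starts inside the 20% cutoff and gets the 10% floor, while `b` and `c` keep the 5% default.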

Afterthought risks:

One potential risk could be mass unbonding/re-delegation; this is pretty hard to monitor, and the incentive’s parameters would be hard to adjust accordingly.

Diff. w/ the essay:

On the bright side, this action would never point the finger directly at validators, who are core to our ecosystem, and would never directly redirect part of their hard-earned revenue or contest their position in the ecosystem, which to me would be detrimental.


This mechanism could also be only activated once a certain value of a to-be-defined “concentration metric” is reached (ex: top 50 vals have more than 40%VP).
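The activation condition can be sketched in a few lines. The function names and the exact metric (share of total VP held by the top N validators) are my own illustration of the idea, not a spec.

```python
# Illustrative sketch: only activate the mechanism once a concentration
# metric crosses a threshold, e.g. the top 50 validators holding more
# than 40% of total voting power. Names and defaults are assumptions.

def concentration(voting_powers, top_n=50):
    """Share of total VP held by the top_n validators, in [0, 1]."""
    ranked = sorted(voting_powers, reverse=True)
    return sum(ranked[:top_n]) / sum(ranked)

def mechanism_active(voting_powers, top_n=50, threshold=0.40):
    """True once the set is concentrated enough to justify the mechanism."""
    return concentration(voting_powers, top_n) > threshold
```

For example, a set of two large validators and eight tiny ones trips the trigger, while a perfectly uniform set does not.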

Final thoughts:

I believe everything balances out by nature. Right now, what we need is only a little stimulus, which again should depend on the real-time health of the Hub and should not be permanent by nature.
I am strongly against what’s described in the essay, which sounds like taking money from the ‘rich’ and using the big ol’ common pot to keep unsustainable behaviors/practices above water.
Let’s take the bull by the horns and promote/incentivise good delegator practices without impacting other players.

PS: I’d like to add, since I am learning myself, that it is quite hard to find exact/accurate definitions of concepts in the ecosystem. We should start a wiki page with a definition of each of them, as a single source of knowledge, instead of having to weigh the different understandings of the people replying in a post.

We’ve been testing the soft opt-out on multiple recent Neutron testnets so it is extremely likely that mainnet will ship with it :slightly_smiling_face:


The current soft opt-out implementation merely protects the bottom 5% validators from getting slashed/jailed for not running the consumer node. They’re still considered « part of the set » but the chain does not send slashing/jailing packets against them. It doesn’t affect reward distribution either, so validators who opt not to run an additional node thanks to the soft opt-out still get rewards proportionally to their voting power.

This implementation is not meant to be permanent: a better implementation would be on the provider side and probably would affect the distribution of rewards, but it would most likely require a Hub upgrade, which is why the current implementation was preferred for now.
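As a rough sketch of the consumer-side check described above (the bottom 5% of cumulative voting power exempt from slash/jail packets, with reward distribution untouched): the function and its name are my own illustration, not the actual implementation.

```python
# Rough sketch of the soft opt-out threshold: walking the set from the
# smallest validator up, everyone who fits within the bottom 5% of
# cumulative voting power is exempt from slash/jail packets. Reward
# distribution is unaffected. Names and shapes here are illustrative.

def soft_opt_out_set(voting_powers, opt_out_fraction=0.05):
    """voting_powers: {validator: vp}. Returns the validators the consumer
    chain will not send slash/jail packets for."""
    total = sum(voting_powers.values())
    exempt, cumulative = set(), 0.0
    for name, vp in sorted(voting_powers.items(), key=lambda kv: kv[1]):
        if (cumulative + vp) / total <= opt_out_fraction:
            exempt.add(name)
            cumulative += vp
        else:
            break
    return exempt
```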


So with ICS v1 comes a new way to get slashed, which is not running the consumer node as a validator.
Not getting slashed = the choice not to run the consumer chain.

Got it :+1:


Well no, at least, not right now. To get slashed, you would have to double sign on a consumer chain, then there would need to be a specific proposal type on the hub called an « equivocation proposal » that votes based on on-chain evidence to confirm that you should be slashed for double signing, and only then would you actually get slashed.

But yeah otherwise you’re correct, the soft opt-out removes penalties which returns choice to the validators.


Many thanks @lexa for your initiative and leadership in bringing this very interesting and important essay to community discussion so that we can find the best solution to move forward together. Here is my feedback and ideas:

1. Problem introduced by consumer chains

  • Onboarded consumer chains increase the overall costs for a large number of smaller validators, at least for a reasonable initial period, until the revenue the consumer chains bring exceeds the costs. Without any action, a large number of smaller validators would quickly be led to bankruptcy by these large additional costs, which would lead to even further centralization. Also, as mentioned, the soft opt-out is a solution from the consumer chain side, not the Cosmos Hub itself; if most consumer chains don’t add the soft opt-out like Neutron did, then this solution wouldn’t work

  • Delegators care about maximizing rewards; they don’t care about reducing centralization, as is obvious from the current stake distribution in the Cosmos Hub. So delegators would be happy to onboard as many consumer chains as possible, since they don’t cover any costs and just get the rewards. We therefore need to design a solution in which delegators also contribute to the costs of consumer chains. One way is your subsidy idea, meaning delegators are taxed to the community pool and part of this is used for the subsidy. Another idea is to modify the distribution module to incentivize them to care about decentralization

2. Sybil attacks

In your essay, the highest risk mentioned is a large ATOM whale launching several smaller validators with their own stake to create Sybil validators. Since this is the highest risk, if we can find a solution for it, then Sybil attacks overall would be very low risk. Here is a possible solution: the 175 validators in the Cosmos Hub are well known, and it is very easy to see when a new validator enters the active set because their governance participation would be very low, so identifying new validators joining the set is easy. Also, there aren’t many large ATOM whales; quoting from the Neutron tokenomics/airdrop blog post: ‘Maximum stake: 1,000,000 ATOM. This maximum threshold only concerns the ~25 wealthiest accounts’. It wouldn’t be hard to link movements in these few whale accounts to new validators joining the active set.
Large validators without many ATOMs of their own cannot really launch Sybils and join the active set, since the threshold to join the active set is quite high now.

Let’s imagine several scenarios:

  1. An ATOM whale that runs its own validator: why would they run a Sybil attack on the soft opt-out with several smaller validators? The soft opt-out just avoids the costs of running consumer chain nodes, and with one big validator they would have lower costs than with several Sybils. Also, this incentive is very small for such a whale relative to their revenues
  2. If a modification of the distribution module for consumer chain rewards is introduced, with a multiplier for smaller validators or your subsidy idea, would this whale run a Sybil attack? Again, the higher costs of running several Sybil validators may already exceed the potential gains from the subsidy or the multiplier
  3. Let’s assume that, regardless, this whale decides to run a Sybil attack. This would be immediately detected by observing validators that recently joined the active set, and actions could be taken to fix it
  4. The whale could think about running the Sybil attack via other existing smaller validators by delegating to them, but that wouldn’t really be a Sybil attack; it would just increase decentralization. Also, this probably wouldn’t happen either, because the whale would lose the portion of rewards paid as commission to the other validators, which is the whole reason some ATOM whales launch their own validators in the first place: to avoid commissions

I think Informal and other teams with the resources to run computer simulations of different scenarios should do so ASAP and present the results here, so that discussions are more data-driven. A range of multipliers in the distribution module for consumer chain rewards, with different numbers of consumer chains, could be simulated to see how the decentralization of the Cosmos Hub would improve. Thresholds could also be added to the simulation for when it becomes profitable for whales to run Sybil attacks, including the roughly 25 largest accounts identified by Neutron. This simulation should be done ASAP so that we can make more informed decisions about next steps.

3. Criteria for the subsidy

  • I think governance voting and participation in forum discussions should be among the top criteria. Of the 175 active-set validators in the Cosmos Hub, many have never voted or read this forum, and many have voted on just a few proposals; this includes many of the larger validators too

  • Having to run consumer chain nodes to receive the subsidy seems inconsistent with the soft opt-out. If the subsidy is similar to the additional costs, validators would just soft opt out; that seems simpler than running the consumer chain nodes, incurring the additional costs, and then waiting for the subsidy to cover them

  • I don’t like the idea of a group of people controlling the subsidy. While this may seem like a simpler/faster solution initially, people can introduce delays, biases and more. I think modifying the distribution module with a multiplier for consumer chain rewards could be best. Alternatively, the tax to the community pool could be reduced and part of it used for the subsidy, with criteria that are on-chain and objective, such as governance participation, uptime, upgrade efficiency, voting power and so on. This could lead not only to more decentralization but also to more governance participation, better uptime and so on, since all of this would be directly incentivized

A UBI model doesn’t seem appropriate in this context. At its core, the limited set of 175 active validator slots is actually an open public competition.

What if we turn it around, and instead of subsidising, we asymmetrically incentivize the bottom 5%?

The consumer chain that applies for replicated security should be responsible for this incentive, and it should be part of their proposal.
They would then incentivize the bottom 5% much more than the rest of the active set, in order to get those validators to participate. The asymmetric incentive could be both monetary and time-sensitive.

For example:
The bottom 5% get 2x more rewards than the rest.
But for the first 6 months after the launch of the consumer chain, they get 3x more.
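The example above can be expressed as a tiny multiplier function. The numbers come from the example itself; the function shape and names are my own illustration.

```python
# Sketch of the proposed asymmetric, time-decaying incentive:
# bottom-5% validators earn 3x for the first six months after the
# consumer chain launches, then 2x; everyone else earns 1x.

SIX_MONTHS_SECS = 6 * 30 * 24 * 3600  # approximate six months in seconds

def reward_multiplier(is_bottom_5pct, seconds_since_launch):
    if not is_bottom_5pct:
        return 1.0
    return 3.0 if seconds_since_launch < SIX_MONTHS_SECS else 2.0
```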

The soft opt-out feature is currently being tested - I would not be surprised if it is already going to be an option for consumer chains launching :slight_smile:

One thing that I want to point out to all respondents so far is that I am totally on-board with code-based solutions in the long-term! My heart lives in that ‘surveying the site’ section where we talk about code-based ideas for solving the problem over time. But no argument will convince me that the development work of designing, creating, and testing a new feature (new param, new way for a module to work, etc) is fast enough to address the issue I see with scaling up Replicated Security.

I want something short-lived and renewable if need be: something like a 4-month tranche controlled by a trusted multisig, until we actually have information from seeing these consumer chains in action. We can’t possibly know what things will look like until it’s active, and I do not believe we can develop a code-based solution fast enough to protect our validator set.

@gh0st - remember that the penalty for downtime on a consumer chain is jailing, not slashing! Not that this is necessarily a consolation if you are opposed to a project itself but I want to be very clear about which penalties apply to what!

@ephemeral_25 -

Again, my bias is that everything will eventually balance out by itself, and we should all think about sustainability. […] I believe everything balances out by nature.

I don’t understand this perspective - if the set will eventually balance itself out, why hasn’t it? It seems like it is growing even more centralized, imo, and we see that in more traditional economies as well.

I like this alternate route you’re proposing, but I would put it more in the long-term strategy group than in the patch solution that ‘breaking ground’ is intended to be. Anything involving engineering or new features/parameters inevitably involves a spec, dev work, testing. I think this potential solution is up there with changing the UI, imposing a delegation cap, etc. All good ideas, all things that take time in which the smaller validators might no longer be around to benefit from it.

So what do we do in the meantime to tide our validator set over until we have a longer and more sustainable solution?

sounds like taking money from the ‘rich’,

It’s our community pool - it belongs to ATOM holders and we just raised the tax to provide even more resources for that pool. We can disagree about whether the raised tax was a good idea or ‘taking from the rich’ but it happened, and now we have money we are all able to suggest a use for.

I don’t think the CBI idea is ‘taking’ money from anyone at all. I’d like to use it to help the validators who keep the Hub alive while we get the Hub’s main product offering (Replicated Security) off the ground.

@Cosmic_Validator - I don’t agree with these criteria necessarily. My hope is that CBI is specifically to benefit validators who are invested in Replicated Security and working to contribute to this feature.

A team that is not incurring the additional costs of Replicated Security (by not running consumer nodes) doesn’t need a subsidy. They are already getting consumer chain rewards without running a node. To me, this seems like a sensible either/or: You get the consumer chain rewards either way but if you run a node, the Hub helps you out with a subsidy because you’re contributing to the feature.

And I will refer to what I wrote to Ephemeral - there are tons of good longer term solutions. But good engineering takes time and we have two consumer chains looking to launch in the next few months. Engineering introduces delays as well, as we have to write specs, design a way to check all these criteria, build the feature, and thoroughly test it out. Developers are people - the dev process is just as full of delays as a multisig distribution.

But the criteria you’ve raised are just as easily checkable by a person as by a piece of code. If I were on the multisig (which I don’t really want to be, because multisig transactions are such a pain lol), I could put up a spreadsheet with a timestamp and say, “This is when I checked all the criteria and this is what I found.” A person can check exactly the same things that code would check; the difference is that a human can do it today, whereas the code doesn’t exist yet, hasn’t been designed, hasn’t been tested, and would need to be developed and then deployed onto the Hub (which takes a long time). For something like a 4-month tranche of CBI while we wait to see how our first consumer chains do, that seems very reasonable and low-risk to me.

I don’t disagree that something like modifying the distribution module is a good idea - it’s one of the other things we proposed in the essay. But it happens on a different timeline than the community spend idea, and I think it’s important to have one fast thing while we figure out slower solutions.

@adintium - Consumer chains do have a fix on their end, it’s the soft opt-out that Neutron is already testing out. As an ATOM holder, I’m not interested in solely relying on consumer chains to solve a Hub-centric issue. We’re the ones offering Replicated Security as a product feature and I think it’s our job to make sure it’s sustainable for us. As I said - it benefits the Hub to have a healthy and diverse validator set.

Abra and I talked a lot about whether an open public competition is really the best method for strengthening the Hub’s product offering. Hopefully he’ll chime in - he had some really good thoughts on the matter.

Hi Lexa!

In my ideal Cosmos Hub setup, there would be a re-delegating mechanism. Essentially, there is a limit to how many delegations a validator can hold; for the surplus, the validator can re-delegate the delegation… except the validator can only re-delegate to other validators in the active set (plus the top 5 of the inactive set).

I would find this to be more of a long-term solution for the Hub-centric issue. :slight_smile:

On the consumer chain… The soft opt-out is a good solution… I agree it is fair that they are not penalised, but I don’t see why validators who opt out should still get rewarded.
As there is no real incentive to opt in in the future, and the validator is also rewarded for inaction, I cannot yet see how this is beneficial for the Hub or ATOM holders in the long term.

If we have a situation where validators #100-#175 aren’t willing to validate for consumer chains (because of the understandable cost issues), but inactive validators #176-#180 are willing to take the cash-flow risk and provide this service… then I would like to encourage validators #176-#180 to be productive and contribute to network security. But the dynamics of such a situation would be hampered if soft-opt-out validators continue to be rewarded despite not validating on consumer chains.



Yes, this is my ideal vision as well! Strong re-delegation mechanism, voting power hard cap. Feels really clean to me, but long-term for sure.

As there is no real incentive to opt in in the future, and the validator is also rewarded for inaction, I cannot yet see how this is beneficial for the Hub or ATOM holders in the long term.

I think it’s a strong business move to participate once it’s profitable. I would expect keen delegators to fully expect and pressure their validators to actually participate. To me, it seems long-term beneficial to the Hub because it improves our product offering and attracts more high quality consumer chains.


It is extremely frustrating to debate such important topics on a forum :sweat_smile: I do miss local meetups.

  • everything balances out nature part.

    I agree with what you are saying; my point is that we should not take such a disruptive approach. Just observe nature’s ecosystems: every time man touches one thinking he’s going to fix it, it leads to more damage, either short-term or long-term; the impact is always negative.
    Another parallel could be Adam Smith’s invisible hand theory, in which, at the end of the day, if all the actors act out of personal interest, it benefits wealth and the common good.

    We should make use of that invisible hand, which by nature keeps a natural balance, by creating incentives such as the one I described, without strong, direct, hard-to-measure/balance/define impacts such as the one you described.

    Surely yours has the benefit of being applicable short-term, but it takes additional risks and is not sustainable. The kind of proposal you are drafting would ‘yet’ again put power in the hands of a few with a multisig, etc., which again gives a terrible look following past proposals, some of which were total robbery. I know this one is not, but if decentralization is what we want, that’s not it.

    The problem only exists in a ST timeframe if we think about expanding our validator set, which puts the choice in our hands. The problems aren’t going to magically appear tomorrow; everything is monitored. I think we have enough time to act carefully, and in a sustainable and healthy manner.

  • Taking money from the rich part.

    I don’t like the idea of redirecting the consumer chains’ revenue to small validators, but I think this has more to do with personal beliefs, perhaps political opinion.
    First of all, it’s their job to be financially stable; they are companies, and it’s their responsibility to make money.

    The problem is that for too long we opened the gates of the active validator set without thinking ahead; now we do indeed foresee a risk for the ‘bottom’ validators.

    Let’s say we use the CP to help small validators. It leaves a huge gap between the validators right outside the criteria (which very surely would not necessarily be the top x or xx) and the ones fitting them. All validators SHOULD also have different infrastructure; by providing a fixed subsidy, not only is there the Sybil risk that you perfectly described, but also the risk of all validators fitting the criteria shifting towards the same infrastructure that fits the subsidy bundle, to make more money.
    I also think the CP should ONLY be used for innovative and ‘step-forward’ techs, ideas, teams, etc., but this is a rule (perhaps only mine) that we broke a long time ago.

Haha, I definitely do not think I could be this coherent in real time! Anyone debating this with me over a table would have to wait 30 min while I think quietly to myself lmao

I don’t understand the criticism of ‘not sustainable’ when applied to a ST solution. It’s not supposed to be sustainable - it’s supposed to bridge the gap between an untenable circumstance and a LT solution. I am thinking about it like a patch while we go beneath the hood and work on something that fixes the core problem. Without knowing anything about the exact financial situation, I would rather not risk a LT solution arriving too late to make a difference when a patch is readily available.

The problem only exists in a ST timeframe if we think about expanding our validator set, which puts the choice in our hands.

The problems aren’t gonna magically appear tomorrow

I disagree with these quite heartily. I’m anti-expansion, but this is a projected problem for validators currently in the set, not just ones who would be joining after an expansion. The problem of incurring an additional cost with few additional rewards is one that happens quite rapidly as well. Things may be monitored, but what do we do when we see it happening? I don’t see any quick options available.

First of all, it’s their job to be financially stable; they are companies, and it’s their responsibility to make money.

Yeah, I think we have differing political opinions here and that’s fine :slight_smile: This ‘invisible hand’ theory is elegant, but I don’t think that exposing a new feature directly to the forces of ‘nature’ is the best way to foster a feature we want (?) to become the cornerstone of our economic zone.

From my perspective, the Hub (and its tokenholders) are responsible for creating an environment in which it’s possible for validator ‘companies’ to make money and be financially stable, especially given that we need them for Replicated Security to succeed. If it comes down to it, I’d rather subsidize validators and have RS scale and succeed than let RS struggle on the principle of not helping validators bridge the gap. We are doing a disservice to our long-term economy if we create an environment where small businesses collapse in the interim period before they’re able to profit.

However - this part is not my expertise. I know Abra’s offline for a bit but I’ll just pass the baton to him for when he gets back. I’m tapping out!

I’m on board with past criticism - expanding the set too aggressively, for example. But I also don’t want to be the person who puts up a proposal to reduce the validator set size and I can’t imagine that prop would pass anyway lol! We have 175 validators, and we can only move forward imo.

I think some of the latter thoughts are focused on assumptions that may not actually be there. A fixed subsidy might incentivize uniform infrastructure, sure. What if we make it a proportionate subsidy?

And for long-term exploits like changing a whole infra setup or dividing into Sybils, is it really worth it for something like a 4 month cost-covering subsidy? There are levels of game theory here where the short-term obvious choice is to make whatever move is necessary to get the most money tomorrow, but the human effort of actually going through the work to make a few extra hundred bucks a month? I think it’s just a difference of opinion that I don’t think people will do that.

There are creative ways to structure distribution in this idea and I would love to see energy focused on spinning out proposed criteria, pros/cons, suggestions for how to make it work. At the early stage of an idea, it’s very easy to see faults everywhere but hard to actually build it into something where we have a hope of fixing any of the relevant faults.

And of course, I’d love to see people take their favourite long-term solution and go start making it happen :slight_smile:

Hahaha, I’d discuss it a few pieces at a time. Surely having a couple of drinks while debating around the table would not help with the growing headache after tens of minutes in :joy:

On the whole, it seems we share the same analysis of the problem, which is kinda comforting to me, since I am just a small delegator without ties to any party.

I’ll keep an eye out for Abra’s reaction and how this ST solution unfolds over time.

Edit: I would never think about reducing the validator set. First, it would never pass. Second, it goes against everything I said earlier about not having a direct impact and counting on the invisible hand :wink:

New debate strat - tell my conversation partner that I need to think and give them an hour to get drunk before I respond lmao.


I don’t think it’s about the number of delegations but more about the amount delegated (VP).
A simpler approach to what you are describing could be to just cap the max bonded tokens. Delegators would simply have to delegate elsewhere once validators are ‘full’.

Allowing a validator to re-delegate on behalf of its delegators is pretty risky. Let’s say the cap is 5% VP but the validator is managing an extra 20% to re-delegate because of the cap; one single hack could have huge repercussions.

But again, I’d consider this a die-trying, survival-mode type of measure, and it goes against what I was describing earlier.

As far as I can remember, the opt-out is not a permanent feature.
It needs to be implemented so that current bottom validators can still offer delegators a diversity of tokens despite not being able to validate the consumer chain (because it would not be economically viable for them).
If validators outside ICS were only offering $ATOM as staking rewards, I doubt any new delegators would stake there, and perhaps the ones already positioned would re-delegate. That would create a major void in our current validator set and a pending bankruptcy for ALL the bottom validators, diminishing the spread of the VP even more and further accentuating the concentration at the top.

But I agree this should not be LT; perhaps it’s up to delegators to take action accordingly.

Once again I think that’s not ours to take care of in any way, shape or form.

If validators are inactive they should remain so, and not be forcibly put into the active set in place of previously active validators.
A validator should only be in the active set in the first place because he is financially stable, therefore providing continuous and sustainable security to the Hub.
Swapping them around solves nothing; you are just replacing one weak link with another faulty one.

And let’s say an inactive validator is still WILLING to do so; then he should simply already be in the active set, providing the needed tokens himself and proving himself financially stable to the rest of the Hub.

I appreciate the summary provided on the issue of validator funding and its potential negative impact on smaller validators. While I agree with the findings, I am not convinced that subsidizing costs and micromanaging the non-permission space is the best solution.

If we were to subsidize the bottom 75 validators (5% of VP) at a monthly cost of $400 each for one consumer chain, the expense would come to roughly $360k per year from the community pool (75 × $400 × 12), and well over a million USD once several chains join ICS and multiply the subsidy burden. Given that, the tradeoff between the subsidy and the total benefit from the consumer chains may be a net negative.

The OP rightly points out the urgency of the matter and the limited solutions available to address this problem. Therefore, it is advisable to work simultaneously on both short-term and long-term solutions. While the soft opt-out may be a short-term solution, it is not a reliable one and should not be solely relied upon.

One potential solution would be to set a maximum cap on delegations, which could redistribute voting power within 4-6 quarters (previous attempts, such as increasing the validator set, changing the staking dashboard UI, and creating awareness, have proven inefficient). This proposal could be challenging to pass. For instance, if we set the maximum cap at 1% of VP, the 27 top validators currently holding 67% of the voting power would no longer receive delegations. If we set the cap at 2%, the 11 validators holding 44% of the voting power would be unable to receive more delegations. Either could prove challenging to pass, as the affected validators may vote no.
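The arithmetic behind these cap scenarios can be sketched as follows. The helper and its name are my own illustration of the idea, not an implementation of the staking module.

```python
# Illustrative helper: given a per-validator voting-power cap, find which
# validators could no longer receive delegations and how much VP would
# have to move elsewhere for everyone to fit under the cap.

def over_cap(voting_powers, cap_fraction):
    """voting_powers: {validator: vp}. Returns (capped_validators, excess_vp):
    the validators above the cap and the total VP above it."""
    total = sum(voting_powers.values())
    cap = cap_fraction * total
    capped = {v for v, vp in voting_powers.items() if vp > cap}
    excess = sum(vp - cap for vp in voting_powers.values() if vp > cap)
    return capped, excess
```

For example, with a 25% cap over `{"a": 50, "b": 30, "c": 20}`, both `a` and `b` are above the cap and 30 units of VP would need to redistribute.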

But if this proposal passes, we are on the right track to redistributing the VP.

Thanks for opening this important conversation. Will support this.


Yes, of course it’s about the amount of delegation, not number of delegators. I don’t think there is any implementation in Cosmos that calculates active set inclusion based on the number of delegators, but happy to be proven wrong :slight_smile:

A simple cap on the max tokens is a good idea, but I guess popular validators could just spawn new nodes and gobble up more of the 175 active slots. The re-delegating idea definitely needs to be worked out more, but I think it may be interesting to model.

As for inactive validators, I don’t mean inactive validators without any delegations; if you look at the 175-validator set, you see there are sometimes shifts between no. 175 and no. 176, especially when there is some downtime and the active validator gets jailed. If a validator does NOT want to be in the active set, I think they can just turn off their machine; then they won’t be added to the active set, and no one will force them into it.

About “financially stable”: there are fixed costs to running the node setup, depending on each validator. Cosmos is a popular network, and I sincerely doubt there will be a lack of validators willing to take the place of those that drop out. Also… personally, I don’t like to identify potential bankruptcies without clear data. If a validator is going to face bankruptcy now because they can’t meet cash flow, that suggests to me they already carry significant liabilities (possible if they bought $400k of hardware for their DC); is this the case? Or is it that they just couldn’t cover their costs, in which case they could stop validating rather than eating into their assets (unless they have fixed DC or cloud contracts that they would still have to pay regardless)?

@waqarmmirza -

the expense for a year would amount to a couple of million USD from the community pool. Given that more chains may join ICS and increase the subsidy burden, the tradeoff between subsidy and total benefit from the consumer chain may be a net negative.

My vision here is for a patch solution. I don’t think CBI should last for a year, let alone several. You’ve hit the nail on the head - it’s not a solely reliable solution and I don’t think it’s worth the effort to make a short-term solution into something perfect when we could implement it and then buy ourselves time to work out a sustainable one.

A year is plenty of time to develop a longer-term solution, such as the others mentioned in the essay (including the max cap you’ve suggested, which we called a ‘voting power hard cap’). I think this solution holds more water than increasing the set size, and I’m glad it sparks your interest as well :slight_smile:

If I had to pick a number out of thin air, I think I would be more generous than 1% or 2% to start. I would want to reduce the hard cap gradually and move slowly so we can see how it impacts validator businesses. I could see us starting at 4.5%, maybe, and then reducing to 4%, 3.5%, 3%, etc. if things were going well and businesses were staying afloat.

@adintium -

Simple cap on the max tokens is a good idea, but I guess popular validators can just spawn new nodes, and gobble more of the 175 active slots.

This is the Sybil concern that we are thinking about here. Given how top-of-mind it is for commentators, developers, and content creators, I think that a big validator spawning new nodes would destroy their business reputation in the space. Unless they had enough self-delegation to make the new nodes profitable, it would be a bad long-term decision for them.
