Scaling up with conditional basic income

Introduction

With the launch of the replicated security functionality in the Lambda upgrade on March 15th, the Cosmos Hub has the technical capacity to establish itself as a key social and economic Hub of the Interchain. However, replicated security has a critical issue: validators are going to incur costs for running consumer chains, and consumer chains are going to need time to generate meaningful revenue. This essay explores how ATOM holders might choose to leverage existing social, fiscal, or technological resources to mitigate these concerns. We believe that feedback from the community is critical to discovering a solution that addresses the concerns raised in this essay, and we intend to develop this work into a signaling proposal within the coming weeks in accordance with that feedback.

This work is presented by Lexa Michaelides (Hypha Co-op) and Abra Tusz (ICF), with appreciation to Udit Vira (Hypha Co-op) and Thyborg (Informal Systems) for the conversations and shared research leading to this analysis.

The characteristics of Replicated Security

Replicated security has the potential to drive immense value to ATOM stakeholders. Ultimately, long-lasting value will come from consumer chains that find product-market fit and generate meaningful activity on their chain. This value might flow to ATOM holders in the form of MEV, fees, and/or inflationary token rewards. The benefits to the Hub will increase as consumer chains attract more users, generate more activity, or provide important services in the Interchain.

Based on the design of replicated security, the history of economic growth and innovation, and the characteristics of complex systems, we can make the following assumptions about the ATOM economic zone that will form around Replicated Security:

  • It will take time for consumer chains to find product-market fit and generate meaningful activity
  • The relationships between chains in the ATOM economic zone will be emergent (we cannot predict what relationships will develop)
  • As more synergistic relationships develop in the ATOM economic zone, more value will be created
  • On-boarding many chains (that meet a certain quality standard) and off-boarding those that don’t succeed is likely to be a more effective strategy than trying to pick winners.

However, there are material limitations to the Cosmos Hub’s ability to on-board consumer chains. These limitations are that the validators that support Replicated Security bear all the costs for each additional consumer chain they run, and that these consumer chains need time to generate enough value to offset that cost.

Two perspectives: delegators and validators

From a short-term perspective, delegators have everything to gain and nothing to lose from onboarding new chains and reaping the immediate tangible benefits.

Tokens from consumer chains flow through the Hub in the following order (a toy calculation follows the list):

  1. The Hub gets some tokens from consumer chains (e.g., 25% of fee revenue).
  2. Validators take commission on these (roughly 0-20%, with the majority below 10%), leaving most of the tokens as staking rewards for delegators.
  3. Delegators receive the remaining tokens (roughly 90%) as staking rewards, proportionate to their own stake.
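
To make that flow concrete, here is a toy calculation. The 25% fee split, 10% commission, and 0.1% delegator stake share are illustrative assumptions, not actual Hub or consumer-chain parameters.

```python
# Toy walk-through of the revenue flow above. All parameters are illustrative
# assumptions, not actual Hub or consumer-chain settings.

def delegator_reward(consumer_fee_revenue: float,
                     hub_share: float = 0.25,        # assumed consumer-chain fee split sent to the Hub
                     commission: float = 0.10,       # assumed validator commission
                     delegator_stake_share: float = 0.001) -> float:
    """Tokens reaching a single delegator, proportionate to their share of stake."""
    hub_tokens = consumer_fee_revenue * hub_share        # step 1: the Hub's cut
    to_delegators = hub_tokens * (1 - commission)        # step 2: after validator commission
    return to_delegators * delegator_stake_share         # step 3: pro-rata by stake

# A delegator holding 0.1% of total stake, on a chain sending 100,000 tokens of fees:
print(delegator_reward(100_000))  # -> 22.5
```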

While validators stand to benefit in proportion to their own staked ATOM (as well as commission on the stake delegated to them), they also incur an additional cost from running a new node for each consumer chain. Cost estimates in this area have been very hard to nail down because each validator runs their infrastructure differently, but estimates point to a range of about $300-800 per month per consumer chain.

These costs are incurred regardless of a validator’s position in the Hub’s set, and while $800/month might not be a huge expense for the top 20 validators, it could make a large impact on a smaller validator’s operating expenses.

In short, consumer chain rewards are expected to follow the same distribution model as the Hub’s block rewards: proportionate to stake. Combined with the per-chain costs that validators incur, this paints a telling picture of how smaller validators might struggle to support additional consumer chains without additional aid.
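
The shape of the problem can be sketched with made-up numbers: rewards scale with a validator’s stake share, while node costs are roughly flat per chain. Every figure below is a hypothetical assumption; only the $300-800/month range echoes the estimates discussed above.

```python
# Sketch of the cost-coverage gap: rewards scale with stake, per-chain node costs do not.
# Every figure is a hypothetical assumption; 500 is used as a mid-range value of the
# $300-800/month estimate discussed above.

MONTHLY_REWARD_POOL_USD = 400_000   # assumed USD value of consumer-chain rewards per month
COMMISSION = 0.05                   # assumed validator commission
NODE_COST_USD = 500                 # assumed per-chain infrastructure cost per validator

def monthly_margin(voting_power_share: float) -> float:
    """Commission income a validator keeps from consumer-chain rewards, minus node cost."""
    commission_income = MONTHLY_REWARD_POOL_USD * voting_power_share * COMMISSION
    return commission_income - NODE_COST_USD

for label, share in [("top-of-set validator", 0.05), ("bottom-of-set validator", 0.0005)]:
    print(f"{label} ({share:.2%} of stake): {monthly_margin(share):+.0f} USD/month")
# top-of-set validator (5.00% of stake): +500 USD/month
# bottom-of-set validator (0.05% of stake): -490 USD/month
```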


The self-reinforcing cycle of stake

To optimize for success, we need to encourage a Provider <> Consumer chain culture that allows strong chains and useful services to emerge. Projects which don’t thrive might be offboarded (decreasing costs) and profitable chains may stick around and provide increasing revenue to the validator set, but it will take time to get there. This means that many chains may lease security from the Hub well before they are profitable for small validators, which are limited by the stake-weighted rewards they receive.

The discrepancy between large and small validators could increase as stake centralizes with the upper-half validators, who have access to a higher proportion of rewards. In the long-term, replicated security might address this by increasing the overall rewards coming into the Hub - but this only works if small validators manage to stay afloat as Replicated Security scales.

Hub culture depends on a healthy validator set

We need a strong, healthy culture in the Hub to foster the pioneering work being done by prospective consumer chains, and the validator set is a critical aspect of our culture. The Hub prides itself on having a high-quality validator set – it’s what makes us so attractive to potential consumer chain projects.

Voting power distribution is an additional point of appeal to consumer chains, because a more even distribution means less revenue is needed for the smallest validators to cover their costs. When stake is evenly distributed, all validators can expect to receive approximately the same share of rewards, but if the top of the set receives 100x more rewards than the bottom, the rewards accruing at the top are consequently 100x higher than what those validators need to be cost-covering.

We need a set that is decentralized in terms of voting power and infrastructure, is reachable and responsive to consumer chain concerns (such as upgrades), and is familiar with the software as it develops.

We can’t create this culture without mitigating systemic issues that threaten the health of our validator set. As things stand, we run the risk of big validators getting even bigger while smaller operators are pushed out of the business, and there is a limit to how much any individual validator can address the problem alone.

Surveying the site

In the long-term, balancing the stake distribution would allow validators to access rewards more uniformly and curb the self-perpetuating cycle of stake-weighted income. While the root of this problem is the distribution of stake on the Hub, this is not a problem that will be solved overnight.

Many ideas touching on this issue have been raised and explored already, and detailing out the next steps for any of them is beyond the scope of this work. Continuing to research and work on solutions like these is part of contributing to a rich solution space, and we present them here as a starting point for anyone interested in taking them up:

  • Expanding the validator set, which allows more validators to enter the set but has not shown evidence of shifting the overall distribution of stake in the long run.
  • Establishing a minimum commission, which would force a minimum proportionate income for validators, but not address the top-heavy distribution of stake.
  • UI solutions for encouraging staking with smaller validators and awareness of stake distribution, such as on Kujira.
  • Implementing a voting power hard-cap, in which validators cannot receive delegations after hitting a given threshold of voting power. A similar mechanism exists on Ethereum (where each validator’s effective balance, and thus its weight, is capped at 32 ETH), though Ethereum does not cap the validator set size.
  • Working with liquid staking providers and community-owned treasuries to redistribute stake – this approach entails governance coordination between communities, liquid staking providers, and other entities that hold a lot of stake, to redistribute network stake more evenly in accordance with a set of conditions.
  • Modifications to how the Distribution Module allocates staking rewards (e.g., using additional criteria rather than being strictly stake-weighted; a sketch follows the list).
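
As a rough illustration of that last idea, the sketch below splits a reward pool into a stake-weighted portion and a flat portion shared equally among validators. The 80/20 split and the example stakes are hypothetical, not a proposed parameterization.

```python
# Hypothetical sketch of a modified distribution rule: part stake-weighted, part flat.
# The 80/20 split and the example stakes are illustrative assumptions, not a spec.

def allocate_rewards(reward_pool: float, stakes: dict[str, float],
                     stake_weighted_fraction: float = 0.8) -> dict[str, float]:
    """Split a reward pool between a stake-weighted bucket and an equal-share bucket."""
    total_stake = sum(stakes.values())
    flat_share = reward_pool * (1 - stake_weighted_fraction) / len(stakes)
    return {
        val: round(reward_pool * stake_weighted_fraction * stake / total_stake + flat_share, 1)
        for val, stake in stakes.items()
    }

stakes = {"big-val": 900_000, "mid-val": 90_000, "small-val": 10_000}
print(allocate_rewards(1_000.0, stakes))
# -> {'big-val': 786.7, 'mid-val': 138.7, 'small-val': 74.7}
```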

Planning a solid foundation

Consumer chains will take time to find product-market fit, but as Replicated Security scales, we need a way to bootstrap a dynamic network of consumer chains while maintaining a quality validator set. This means allowing small validators to access the upside of successful projects without unmanageable operating costs.

On the consumer chain side, something like the soft opt-out could address this: a small percentage of validators (e.g., the bottom 5%) could be excused from running a particular chain while still receiving rewards. The bottom 5% of the validator set by voting power currently comprises 60-75 validators, and letting them opt out would drastically reduce their costs.
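
A minimal sketch of how that cut-off could be computed, assuming a made-up, top-heavy set of 175 validators; on the real Hub the voting powers would come from the staking module.

```python
# Toy computation of the "bottom 5% by voting power" cut-off. The voting powers are
# made up; on the real Hub they would come from the staking module.

def soft_opt_out_set(voting_powers: list[float], threshold: float = 0.05) -> list[int]:
    """Indices of the smallest validators whose cumulative share of power stays under `threshold`."""
    total = sum(voting_powers)
    order = sorted(range(len(voting_powers)), key=lambda i: voting_powers[i])  # smallest first
    eligible, cumulative = [], 0.0
    for i in order:
        cumulative += voting_powers[i] / total
        if cumulative > threshold:
            break
        eligible.append(i)
    return eligible

# A top-heavy toy set of 175 validators: 20 large, 155 small.
powers = [600_000] * 20 + [10_000] * 155
print(len(soft_opt_out_set(powers)))  # -> 67, broadly in line with the 60-75 noted above
```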

However, this is a consumer chain fix, not one that the Hub controls. As Replicated Security scales, we need a Hub-centric solution that keeps our validator set healthy and decentralized enough to attract high quality consumer chain candidates.

Breaking ground

One method of support is subsidizing a base-level cost for validators that meet a certain set of criteria. This enables Cosmos Hub governance to shape the validator set by subsidizing smaller validators that perform their duties responsibly.

This solution would enable smaller validators that add value to the network to continue to operate, while longer-term issues such as stake distribution are addressed or consumer chains become more profitable.

Regardless of where funds come from, some criteria must also be identified to determine which validators receive a subsidy. Potential criteria include (a toy eligibility check follows the list):

  • Currently in the lower x% of the active set, as the intent is to produce a means for smaller members of the set to thrive rather than introducing new members or increasing rewards to larger operators.
  • Currently running nodes for all Hub consumer chains, because this basic income is meant to support validators who are contributing to the Hub’s Replicated Security offering.
  • Sufficiently high uptime.
  • Quick upgrade operations both for the Hub and consumer chains.
  • Reachable and responsive on Hub communication platforms such as Discord.
  • Participation in feature testing and software feedback, such as by being active on Cosmos Hub testnets.
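
To show how such criteria could be checked mechanically (whether by a person with a spreadsheet or, later, by code), here is a toy eligibility check. The field names and thresholds are hypothetical placeholders, not proposed values.

```python
# Illustrative eligibility check against criteria like those listed above.
# The dataclass fields, thresholds, and example record are all hypothetical.

from dataclasses import dataclass

@dataclass
class ValidatorReport:
    rank: int                     # position in the active set (1 = largest)
    runs_all_consumer_chains: bool
    uptime: float                 # fraction over the evaluation window
    upgraded_within_hours: float  # time to complete the last coordinated upgrade
    reachable: bool               # responsive on Hub communication platforms
    active_on_testnets: bool

def eligible_for_subsidy(v: ValidatorReport, set_size: int = 175,
                         bottom_fraction: float = 0.5) -> bool:
    """True if the validator meets every (hypothetical) criterion."""
    return (v.rank > set_size * (1 - bottom_fraction)   # lower portion of the set
            and v.runs_all_consumer_chains
            and v.uptime >= 0.95
            and v.upgraded_within_hours <= 24
            and v.reachable
            and v.active_on_testnets)

report = ValidatorReport(rank=150, runs_all_consumer_chains=True, uptime=0.99,
                         upgraded_within_hours=6, reachable=True, active_on_testnets=True)
print(eligible_for_subsidy(report))  # -> True
```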

The subsidy could be created by changing how tokens from the Hub’s distribution module are allocated to validators, and this could apply to consumer chain revenue only, or include the Hub’s block rewards too. Instead of being 100% stake-weighted, a portion could be allocated to a subsidy distributed based on a set of criteria. However, this essay will focus on a much simpler and more immediate method which doesn’t require additional engineering work: funds for the subsidy could come directly from the Cosmos Hub Community Pool, or from a pool of money contributed up-front by consumer chains onboarding to the Hub. This approach has benefits and tradeoffs, which we examine below from the perspective of the Cosmos Hub as a provider chain.

This method currently entails:

  • Deciding upon a set of criteria
  • Using governance to delegate authority to a group of people to monitor validators against those criteria and dispense subsidies appropriately
  • Making a community-pool-spend proposal on the Cosmos Hub to release funds to a multi-sig wallet that will dispense the subsidies

In the future, it may be possible to automate some aspects of the above process via smart contracts, but that is outside of the scope of this essay. The major differentiator between the Community Pool option and the Distribution Module option is the source of funding; the differences in how they are implemented are a side effect of that.

The benefits of this approach are that it can be employed immediately, without having to negotiate with a counterparty or develop new code for the Hub. It also enables the Cosmos Hub community to have fine-grained control over the conditions for receiving conditional basic income.

The downsides of this approach are that the subsidy comes at the cost of the provider chain, that it requires a hands-on and labor-intensive process, and that it is more prone to capture given the current state of DAO tooling. Additionally, this could create a certain amount of downward sell pressure on ATOM as operators need to liquidate tokens to cover costs, though it’s unclear what impact this would have, or whether this already occurs in regular validator operations.

Some open questions around this idea:

  • How much money should be allocated to a conditional basic income program? How long should we expect to subsidize costs?
  • What should the criteria be? How do we monitor and check them?
  • Who could sit on this multisig? Are there conflicts of interest that we need to be aware of?
  • How would conditional basic income interact with the soft opt-out? Does one preclude the other? Are there any synergies between the methods?

Structural support: sybil resistance

A major concern with this solution is that large validators could split up into many smaller nodes to become eligible for conditional basic income. It is worth noting that this risk is also present in the soft opt-out approach, which might enable mid-sized validators to split stake into smaller nodes to avoid the costs of running consumer chains but still reap the rewards.

In the short-term where a subsidy is human-controlled, the multisig responsible for evaluating validators would have to be aware of this risk.

This sybil behaviour becomes more problematic when there is no human step in the process. A fully automated rewards/subsidy system (or soft opt-out) could be gamed by a wealthy validator, and the highest risk is that of an individual whale with enough ATOM to keep several nodes in the set purely by self-delegation. However, in a community that values decentralization, there will likely be a strong watchdog culture that blows the whistle on sybil behaviour and discourages delegations to a validator who did something like this. We believe that the intangible social cost of a sybil attack would be sufficiently high to dissuade validators from risking their reputation in order to offset the cost of running an additional node.

Conclusion

The above solutions are short-term but viable responses to the centralizing effects that replicated security might have on the Cosmos Hub validator set. It will take community effort and coordination to determine which solution (if any) the community believes is most viable. We’ve put together these ideas as a way of exploring the problem space in a Hub-focused way, and plan to evolve this work into a signalling proposal in accordance with community feedback, in order to accommodate the needs of the growing ATOM economic zone and its stakeholders.

Ultimately, we would recommend that consumer chains also think about this problem as they come to launch on the Cosmos Hub.

15 Likes

As a smaller validator, we find this discussion a very important one. Many consumer chains would definitely hurt our sustainability. As it stands, our validating on the Hub is barely profitable. I think a UBI of some sort makes sense to cover the expenses of new chains.

Well thought out @lexa. Perhaps consumer chains can help carry the burden somehow.

5 Likes

Our team recognizes the potential benefits of a UBI to help cover additional expenses associated with consumer chains. We have explored various strategies to address increased server costs and time management requirements, including raising commission rates.

Additionally, we hope that an “opt-out” provision can be [quickly] implemented for chains that fail to meet our team’s standards, as being obligated to run all consumer chains (under penalty of slashing) is less than ideal.

Ultimately, we support the concept of a UBI and appreciate your efforts to spark this discussion in the community :+1:

4 Likes

You’re welcome, all. Actually, I support this proposal, and it has been approved by a supporting vote of our community membership.

Thanks again for the fantastic work! There is a lot to talk about :+1:

  • I can’t wrap my head around the soft opt-out. I’ve asked for its definition a couple of times already on a couple of threads, and it doesn’t seem to match this paper.

    From my understanding, it was ONLY about protecting the bottom x validators (based on cumulative VP) from slashing and jailing for not running consumer chains, so they can avoid the additional costs.

    If it is in fact about allowing said validators to not run the chain and still get the same rewards they would have gotten, then I am all for it UNTIL Stride and Neutron are running and we have real feedback on costs from bottom validators. In that second version, I do not think it is a good idea to keep it long term, due to the risks described in the essay (sybil).

  • Breaking ground section:
    The analysis of the problem is IMO right: it is the spread of delegations/VP. We simply can’t have such a difference between the top validators and the bottom ones.
    Again, my bias is that everything will eventually balance out by itself and we should all think about sustainability.
    I think that what is described, and the way it is heading, is extremely dangerous and harmful. In no way should we even think about such action. What is described reminds me very much of local politics dictating what’s good and what’s not.
    I think the direction we should head towards is incentivising, without taking actions that have a direct impact on validators, since the main issue is the way delegators behave.

Breaking Ground Alternative route:

One idea that I already shared (I must not be the first one) would be to force a minimum staking fee (commission) on our top validators, higher than the chain’s default parameter.

The purpose of it is to incentivise delegators to move their delegations away from a to-be-defined top x% of validators / x% of VP.

The reasoning behind it:

Since what I believe to be the root cause is delegator behaviour, this translates into:

  • Lots of new cosmonauts by default go through the list from top to bottom and from lowest to highest fees.

  • Same goes for huge capital addresses that only delegate to top validators, perhaps for ‘security’ or ‘laziness’ reasons. If they are only here to farm and do not care about the health of the ecosystem, this could be a great way to incentivize re-delegation.

Afterthought risks:

One potential risk could be mass unbonding/redelegation; again, this is pretty hard to keep an eye on, and the incentive’s parameters would be hard to adjust.

Diff. w/ the essay:

On the bright side, this action would never point the finger directly at validators, which are core to our ecosystem, and would never directly redirect part of their hard-earned revenue or contest their position in the ecosystem, which to me is detrimental.

“Improvements”:

This mechanism could also be activated only once a certain value of a to-be-defined “concentration metric” is reached (e.g., the top 50 vals hold more than 40% VP).

Final thoughts:

I believe everything balances out by nature. Right now what we need is only a little stimulus, which again should depend on the real-time health of the Hub and is not permanent by nature.
I am strongly against what’s described in the essay, which sounds like taking money from the ‘rich’ and using the big ol’ common pot to keep unsustainable behaviours/practices above water.
Let’s take the bull by the horns and promote/incentivise good delegator practice without impacting other players.

PS: I’d like to add, since I am learning myself, that it is quite hard to find exact/accurate definitions of concepts in the ecosystem. We should start a wiki page with a definition of each of them as a single source of knowledge, instead of having to weigh the different understandings of people replying in a post.

We’ve been testing the soft opt-out on multiple recent Neutron testnets so it is extremely likely that mainnet will ship with it :slightly_smiling_face:

1 Like

The current soft opt-out implementation merely protects the bottom 5% of validators from getting slashed/jailed for not running the consumer node. They’re still considered « part of the set », but the chain does not send slashing/jailing packets against them. It doesn’t affect reward distribution either, so validators who opt not to run an additional node thanks to the soft opt-out still get rewards proportionally to their voting power.

This implementation is not meant to be permanent: a better implementation would be on the provider side and probably would affect the distribution of rewards, but it would most likely require a Hub upgrade, which is why the current implementation was preferred for now.

5 Likes

So with ICS v1 comes a new way to get slashed, which is not running a consumer node as a validator.
Not getting slashed = the choice not to run the consumer chain.

Got it :+1:

1 Like

Well no, at least, not right now. To get slashed, you would have to double sign on a consumer chain, then there would need to be a specific proposal type on the hub called an « equivocation proposal » that votes based on on-chain evidence to confirm that you should be slashed for double signing, and only then would you actually get slashed.

But yeah otherwise you’re correct, the soft opt-out removes penalties which returns choice to the validators.

1 Like

Many thanks @lexa and @ala.tusz.am for your initiative and leadership in bringing this very interesting and important essay to the community for discussion, so that together we can find the best solution to move forward. Here is my feedback and ideas:

1. Problem introduced by consumer chains

  • Consumer chains being onboarded increase the overall costs for a large number of smaller validators, at least for an initial period until the revenue the consumer chains bring in is higher than the costs. Without any action, a large number of smaller validators would quickly be driven to bankruptcy by these large additional costs, which would lead to even further centralization. Also, as mentioned, the soft opt-out is a solution from the consumer chain side, not the Cosmos Hub itself; if most consumer chains don’t add the soft opt-out like Neutron has, then this solution wouldn’t work.

  • Delegators care about maximizing rewards; they don’t care about reducing centralization, as is obvious from the current stake distribution in the Cosmos Hub. So delegators would be happy to onboard as many consumer chains as possible, since they don’t cover any costs and just get the rewards. We therefore need to design a solution in which delegators also contribute to the costs of consumer chains. One way is your subsidy idea, meaning delegators are taxed to the community pool and part of this is used for the subsidy. Another idea is to modify the distribution module to incentivize them to care about decentralization.

2. Sybil attacks

In your essay, the highest risk mentioned is a large ATOM whale launching several smaller validators with their own stake to create several sybil validators. Since this is the highest risk, if we can find a solution for it then sybil attacks overall would be very low risk. This is a possible solution: the 175 validators in the Cosmos Hub are well known, and it is very easy to see when a new validator enters the active set because their governance participation would be very low, so identifying new validators joining the set is easy. Also, there aren’t many large ATOM whales; quoting from the Neutron tokenomics/airdrop blogpost, ‘Maximum stake: 1,000,000 ATOM. This maximum threshold only concerns the ~25 wealthiest accounts’. It wouldn’t be hard to link movements in these few whale accounts to new validators joining the active set. Large validators without many ATOM of their own cannot really launch sybils and join the active set, since the threshold to join the active set is quite high now.
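
A minimal sketch of the monitoring heuristic described above: flag recent entrants to the active set that have unusually low governance participation. The data structure and thresholds are assumptions for illustration; real inputs would come from chain data.

```python
# Sketch of the monitoring heuristic above: flag validators that recently joined
# the active set and have very low governance participation. Field names and
# thresholds are hypothetical; real data would come from chain indexers.

from dataclasses import dataclass

@dataclass
class ValidatorInfo:
    moniker: str
    days_in_active_set: int
    governance_participation: float  # fraction of recent proposals voted on

def possible_sybils(validators: list[ValidatorInfo],
                    max_age_days: int = 90, min_participation: float = 0.2) -> list[str]:
    """Return monikers of recent joiners with unusually low governance participation."""
    return [v.moniker for v in validators
            if v.days_in_active_set <= max_age_days
            and v.governance_participation < min_participation]

watchlist = possible_sybils([
    ValidatorInfo("long-standing-val", 900, 0.8),
    ValidatorInfo("new-val-a", 30, 0.0),
    ValidatorInfo("new-val-b", 45, 0.6),
])
print(watchlist)  # -> ['new-val-a']
```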

Let’s imagine several scenarios:

  1. An ATOM whale that runs its own validator: why would they do a sybil attack for the soft opt-out with several smaller validators? The soft opt-out is just to avoid the costs of running consumer chain nodes, but with just one big validator they would have lower costs than with several sybils. Also, this incentive is super low for such a whale considering their revenues.
  2. If a modification to the distribution module for consumer chain rewards is introduced with a multiplier for smaller validators, or your subsidy idea, would this whale do a sybil attack? Again, the higher costs of running several sybil validators may already be larger than the potential gains from the subsidy or the multiplier.
  3. Let’s assume that, regardless, this whale decides to do a sybil attack. This would be immediately detected by observing new validators that recently joined the active set, and actions could be taken to fix it.
  4. The whale could think about doing the sybil attack via other existing smaller validators by delegating to them, but this wouldn’t really be a sybil attack, just an increase in decentralization? Also, this probably wouldn’t happen either, because the whale would lose a portion of rewards to the other validators’ commissions, which is the whole reason some ATOM whales decide to launch their own validators in the first place: to avoid any commissions.

I think Informal and other teams with the resources to do computer simulations of different scenarios should do so ASAP and present the results here, so we can have discussions that are more grounded in data. A range of multipliers in the distribution module for consumer chains, with different numbers of consumer chains, could be simulated to see how the decentralization of the Cosmos Hub could improve. Also, thresholds for when it would be profitable for whales to do sybil attacks could be added, including in the simulation the roughly 25 largest accounts identified by Neutron. This computational simulation should be done ASAP so that we can make more informed decisions about the next steps.

3. Criteria for the subsidy

  • I think governance voting and participation in forum discussions should be amongst the top criteria. Of the 175 active-set validators in the Cosmos Hub, many have never voted or read this forum, and many have voted on just a few proposals; this also includes many of the larger validators.

  • Having to run consumer chain nodes to receive the subsidy seems illogical given the soft opt-out. If the subsidy is similar to the additional costs, then validators would just soft opt out; that seems simpler than running the consumer chain nodes, incurring the additional costs, and then waiting for the subsidy to cover them.

  • I don’t like the idea of having a group of people controlling the subsidy; while this may seem like a simpler/faster solution initially, people can introduce delays, biases and more. I think modifying the distribution module with a multiplier for the consumer chain rewards could be best. Or, for example, the tax to the community pool could be reduced and part of this tax used for the subsidy, with criteria that are on-chain and objective, such as governance participation, uptime, upgrade efficiency, voting power and so on. This could lead not only to more decentralization but also to more governance participation, more uptime and so on, since all of this would be directly incentivized.

A UBI model doesn’t seem appropriate in this context. At its core, the limited set of 175 active validator slots is actually an open public competition.

What if we turn it around, and instead of subsidising, we asymmetrically incentivize the bottom 5%?

The consumer chain that applies for replicated security should be responsible for this incentive, and it should be part of their proposal.
They will then incentivize the bottom 5% much more than the rest of the active set, in order to get those validators to participate. The asymmetrical incentive could be both monetary and time-sensitive.

For example:
The bottom 5% gets 2x more rewards than the rest.
But for the first 6 months from launch of the cchain, they will get 3x more.
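
A small sketch of how such an asymmetric, time-limited multiplier could work, using the 2x/3x and six-month figures from the example above; the base reward is a made-up number.

```python
# Sketch of the asymmetric-incentive idea: the consumer chain pays the bottom 5%
# a larger multiple of the base reward, with an extra boost for the first months.
# The multipliers and 6-month window mirror the example above; everything else is assumed.

def reward_multiplier(is_bottom_5pct: bool, months_since_launch: int) -> float:
    if not is_bottom_5pct:
        return 1.0
    return 3.0 if months_since_launch < 6 else 2.0

base_reward = 100.0  # hypothetical per-validator base reward from the consumer chain
for months in (3, 9):
    print(months, base_reward * reward_multiplier(True, months),
          base_reward * reward_multiplier(False, months))
# 3 300.0 100.0   -> early boost for the bottom 5%
# 9 200.0 100.0   -> ongoing 2x after the first 6 months
```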

The soft opt-out feature is currently being tested - I would not be surprised if it is already going to be an option for consumer chains launching :slight_smile:

One thing that I want to point out to all respondents so far is that I am totally on-board with code-based solutions in the long-term! My heart lives in that ‘surveying the site’ section where we talk about code-based ideas for solving the problem over time. But no argument will convince me that the development work of designing, creating, and testing a new feature (new param, new way for a module to work, etc) is fast enough to address the issue I see with scaling up Replicated Security.

I want something short-lived and renewable if need be: something like a 4-month tranche controlled by a trusted multisig until we actually have the information from seeing these consumer chains in action. We can’t possibly know what things will look like until it’s active, and I do not believe we can develop a code-based solution fast enough to protect our validator set.

@gh0st - remember that the penalty for downtime on a consumer chain is jailing, not slashing! Not that this is necessarily a consolation if you are opposed to a project itself but I want to be very clear about which penalties apply to what!

@ephemeral_25 -

Again, my bias is that everything will eventually balance out by itself and we should all think about sustainability. […] I believe everything balances out by nature.

I don’t understand this perspective - if the set will eventually balance itself out, why hasn’t it? It seems like it is growing even more centralized, imo, and we see that in more traditional economies as well.

I like this alternate route you’re proposing, but I would put it more in the long-term strategy group than in the patch solution that ‘breaking ground’ is intended to be. Anything involving engineering or new features/parameters inevitably involves a spec, dev work, testing. I think this potential solution is up there with changing the UI, imposing a delegation cap, etc. All good ideas, all things that take time in which the smaller validators might no longer be around to benefit from it.

So what do we do in the meantime to tide our validator set over until we have a longer and more sustainable solution?

sounds like taking money from the ‘rich’,

It’s our community pool - it belongs to ATOM holders and we just raised the tax to provide even more resources for that pool. We can disagree about whether the raised tax was a good idea or ‘taking from the rich’ but it happened, and now we have money we are all able to suggest a use for.

I don’t think the CBI idea is ‘taking’ money from anyone at all. I’d like to use it to help the validators who keep the Hub alive while we get the Hub’s main product offering (Replicated Security) off the ground.

@Cosmic_Validator - I don’t agree with these criteria necessarily. My hope is that CBI is specifically to benefit validators who are invested in Replicated Security and working to contribute to this feature.

A team that is not incurring the additional costs of Replicated Security (by not running consumer nodes) doesn’t need a subsidy. They are already getting consumer chain rewards without running a node. To me, this seems like a sensible either/or: You get the consumer chain rewards either way but if you run a node, the Hub helps you out with a subsidy because you’re contributing to the feature.

And I will refer to what I wrote to Ephemeral - there are tons of good longer term solutions. But good engineering takes time and we have two consumer chains looking to launch in the next few months. Engineering introduces delays as well, as we have to write specs, design a way to check all these criteria, build the feature, and thoroughly test it out. Developers are people - the dev process is just as full of delays as a multisig distribution.

But the things you’ve raised as criteria are just as easily checkable by a person as by a piece of code. If I were on the multisig (which I don’t really want to be, because multisig transactions are such a pain lol), I could put up a spreadsheet with a timestamp and say “This is when I checked all the criteria and this is what I found.” That person can check the exact same things that code checks, but the difference is that a human can already do it, whereas the code doesn’t exist yet, hasn’t been designed, hasn’t been tested, and would need to be developed and then deployed onto the Hub (which takes a long time). For something like a tranche of 4 months of CBI while we wait to see how our first consumer chains do, that seems very reasonable and low risk to me.

I don’t disagree that something like modifying the distribution module is a good idea - it’s one of the other things we proposed in the essay. But it happens on a different timeline than the community spend idea, and I think it’s important to have one fast thing while we figure out slower solutions.

@adintium - Consumer chains do have a fix on their end, it’s the soft opt-out that Neutron is already testing out. As an ATOM holder, I’m not interested in solely relying on consumer chains to solve a Hub-centric issue. We’re the ones offering Replicated Security as a product feature and I think it’s our job to make sure it’s sustainable for us. As I said - it benefits the Hub to have a healthy and diverse validator set.

Abra and I talked a lot about whether an open public competition is really the best method for strengthening the Hub’s product offering. Hopefully he’ll chime in - he had some really good thoughts on the matter.

Hi Lexa!

In my ideal Cosmos Hub setup, there would be a re-delegating mechanism. Essentially, there is a limit to how many delegations a validator can hold. Hence, for the surplus, the validator can re-delegate the delegation… except the validator can only re-delegate to other validators in the active set (plus the top 5 of the inactive set).

I would find this to be more of a long-term solution for the Hub-centric issue. :slight_smile:

On the consumer chain side… The soft opt-out is a good solution… I agree it is fair that they are not penalised, but I don’t see why validators who opt out should still get rewarded.
As there is no real incentive to opt-in in the future and the validator is also rewarded for inaction, I cannot yet see how this is beneficial for the Hub or atom holders in the long-term.

If we have a situation where validators #100-#175 aren’t willing to validate for consumer chains (because of the understandable cost issues), but inactive validators #175-#180 are willing to take the cash flow risk and provide this service… then I would like to encourage validators #175-#180 to be productive and contribute to network security. But the dynamics of such a situation would be hampered if the soft opt-out validators continue to be rewarded despite not validating on consumer chains.

1 Like

Hey!

Yes, this is my ideal vision as well! Strong re-delegation mechanism, voting power hard cap. Feels really clean to me, but long-term for sure.

As there is no real incentive to opt-in in the future and the validator is also rewarded for inaction, I cannot yet see how this is beneficial for the Hub or atom holders in the long-term.

I think it’s a strong business move to participate once it’s profitable. I would expect keen delegators to fully expect and pressure their validators to actually participate. To me, it seems long-term beneficial to the Hub because it improves our product offering and attracts more high quality consumer chains.

1 Like

It is extremely frustrating to debate such important topics on a forum :sweat_smile: I do miss local meetups.

  • On the ‘everything balances out by nature’ part:

    I agree with what you are saying; my point is that we should not have such a disruptive approach. Just observe nature’s ecosystems: every time man touches them thinking he’s going to fix them, it leads to more damage, either ST or LT; the impact is always negative.
    Another parallel could be Adam Smith’s invisible hand theory, in which, at the end of the day, if all the actors act out of personal interest, it benefits wealth and the common good.

    We should make use of that invisible hand, which by nature keeps a natural balance, by creating incentives such as the one I described, without taking strong, direct, hard-to-measure/balance/define actions such as the one you described.

    Surely yours has the benefit of being applicable ST, but it takes additional risks and is not sustainable. The kind of proposal you are drafting would ‘yet’ again put power in the hands of a few with a multisig etc… which again gives a terrible look following past proposals, some of which were total robbery. I know this one is not, but if decentralization is what we want, that’s not it.

    The problem only exists in a ST timeframe if we think about expanding our validator set, which puts the choice in our hands. The problems aren’t gonna magically appear tomorrow; everything is monitored. I think we do have enough time to act carefully and in a sustainable and healthy manner.

  • On the ‘taking money from the rich’ part:

    I don’t like the idea of redirecting the consumer chains’ revenue to small validators, but I think it has more to do with personal beliefs, perhaps political opinion.
    First of all, it’s their job to be financially stable; they are companies and it’s their responsibility to make money.

    The problem is that for too long we opened the gates of the active validator set without thinking ahead; now we indeed foresee a risk for the ‘bottom’ validators.

    Let’s say we use the CP to help small validators: it leaves a huge gap between the validators just outside the criteria (which very surely would not necessarily be the top x or xx) and the ones fitting them. All validators SHOULD also have different infrastructure; by providing a fixed subsidy, not only is there the sybil risk that you perfectly described, but all validators fitting the criteria may also shift towards the same infrastructure that fits the subsidy budget in order to make more money.
    I also think the CP should ONLY be used for innovative and ‘step-forward’ tech, ideas, teams, etc… but this is a rule (perhaps only mine) that we broke a long time ago.

Haha, I definitely do not think I could be this coherent in real time! Anyone debating this with me over a table would have to wait 30 min while I think quietly to myself lmao

I don’t understand the criticism of ‘not sustainable’ when applied to a ST solution. It’s not supposed to be sustainable - it’s supposed to bridge the gap between an untenable circumstance and a LT solution. I am thinking about it like a patch while we go beneath the hood and work on something that fixes the core problem. Without knowing anything about the exact financial situation, I would rather not risk a LT solution arriving too late to make a difference when a patch is readily available.

The problem only exists in a ST timeframe if we think about expanding our validator set, which puts the choice in our hands.

The problems aren’t gonna magically appear tomorrow

I disagree with these quite heartily. I’m anti-expansion, but this is a projected problem for validators currently in the set, not just ones who would be joining after an expansion. The problem of incurring an additional cost with few additional rewards is one that happens quite rapidly as well. Things may be monitored, but what do we do when we see it happening? I don’t see any quick options available.

First of all, it’s their job to be financially stable; they are companies and it’s their responsibility to make money.

Yeah, I think we have differing political opinions here and that’s fine :slight_smile: This ‘invisible hand’ theory is elegant, but I don’t think that exposing a new feature directly to the forces of ‘nature’ is the best way to foster a feature we want (?) to become the cornerstone of our economic zone.

From my perspective, the Hub (and its tokenholders) are responsible for creating an environment in which it’s possible for validator ‘companies’ to make money and be financially stable, especially given that we need them for Replicated Security to succeed. If it comes down to it, I’d rather subsidize validators and have RS scale and succeed than let RS struggle on the principle of not helping validators bridge the gap. We are doing a disservice to our long-term economy if we create an environment where small businesses collapse in the interim period before they’re able to profit.

However - this part is not my expertise. I know Abra’s offline for a bit but I’ll just pass the baton to him for when he gets back. I’m tapping out!

I’m on board with past criticism - expanding the set too aggressively, for example. But I also don’t want to be the person who puts up a proposal to reduce the validator set size and I can’t imagine that prop would pass anyway lol! We have 175 validators, and we can only move forward imo.

I think some of the latter thoughts are focused on assumptions that may not actually be there. A fixed subsidy might incentivize uniform infrastructure, sure. What if we make it a proportionate subsidy?

And for long-term exploits like changing a whole infra setup or dividing into sybils, is it really worth it for something like a 4-month cost-covering subsidy? There are levels of game theory here where the short-term obvious choice is to make whatever move is necessary to get the most money tomorrow, but the human effort of actually going through the work to make a few extra hundred bucks a month? Maybe it’s just a difference of opinion, but I don’t think people will do that.

There are creative ways to structure distribution in this idea and I would love to see energy focused on spinning out proposed criteria, pros/cons, suggestions for how to make it work. At the early stage of an idea, it’s very easy to see faults everywhere but hard to actually build it into something where we have a hope of fixing any of the relevant faults.

And of course, I’d love to see people take their favourite long-term solution and go start making it happen :slight_smile:

Hahaha, I’d discuss it in small pieces at a time. Surely having a couple of drinks while debating around the table would not help with the growing headaches after tens of minutes in :joy:

On the whole it seems we share the same analysis of the problem, which is kind of comforting to me since I am just a small delegator without ties to any party.

I’ll keep an eye out for Abra’s reaction and how this ST solution unfolds over time.

Edit: I would never think about reducing the validator set. First, it would never pass. Second, it goes against everything I said earlier about not having a direct impact and counting on the invisible hand :wink:

New debate strat - tell my conversation partner that I need to think and give them an hour to get drunk before I respond lmao.

2 Likes

I don’t think it’s about the number of delegations but more about the amount delegated (VP).
A simpler approach to what you are describing could be to just cap the max bonded tokens. Delegators would just have to delegate elsewhere, since validators could be ‘full’.

Allowing a validator to redelegate for its delegators is pretty risky: let’s say the cap is 5% VP but the validator is managing an extra 20% to redelegate because of the cap; one single hack could have huge repercussions.

But again, I’d consider this a die-trying, survival-mode type of measure, and it goes against what I was describing earlier.

As far as I can remember, the opt-out is not a permanent feature.
It needs to be implemented so that current bottom validators can still offer delegators the same diversity of tokens despite not being able to validate the consumer chain (because it would not be economically viable for them).
If you were to have validators outside ICS only offering $ATOM as staking rewards, I doubt any new delegators would put their stake there, and perhaps the ones already positioned would re-delegate, which would create a major void in our current validator set and a pending bankruptcy for ALL the bottom validators, diminishing even more the spread of VP and accentuating even more the concentration at the top.

But I agree this should not be LT; perhaps it’s up to delegators to take action accordingly.

Once again I think that’s not ours to take care of in any way, shape or form.

If validators are inactive, they should remain so and not be forcibly put into the active set in place of previously active validators.
A validator should only be in the active set in the first place because they are financially stable, therefore providing continuous and sustainable security to the Hub.
Swapping them around does not solve anything; you are just replacing one weak link with another faulty one.

And let’s say an inactive validator is still WILLING to do so: then they should already be in the active set, by providing the needed tokens themselves and proving themselves financially stable to the rest of the Hub.

I appreciate the summary provided on the issue of validator funding and its potential negative impact on smaller validators. While I agree with the findings, I am not convinced that subsidizing costs and micromanaging the non-permission space is the best solution.

If we were to subsidize the bottom 75 validators (5% of VP) at a monthly cost of $400 for one consumer chain, the expense would come to roughly $360,000 per year from the community pool for that chain alone. Given that more chains may join ICS and increase the subsidy burden, a handful of chains could quickly push this into the millions, and the tradeoff between subsidy and total benefit from the consumer chains may be a net negative.
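
For reference, the back-of-the-envelope arithmetic, using the figures from this post as assumptions:

```python
# Back-of-the-envelope budget for the subsidy discussed above. The validator count,
# monthly cost, and chain counts are the figures from this post, used as assumptions.

def annual_subsidy_usd(validators: int = 75, monthly_cost: float = 400.0,
                       consumer_chains: int = 1, months: int = 12) -> float:
    return validators * monthly_cost * consumer_chains * months

print(annual_subsidy_usd())                    # -> 360000.0  (one chain)
print(annual_subsidy_usd(consumer_chains=6))   # -> 2160000.0 (six chains)
```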

The urgency of the matter and the limited solutions available to address this problem are rightly pointed out by the OP. Therefore, it is advisable to work simultaneously on both short-term and long-term solutions. While a soft opt-out may be a short-term solution, it is not a reliable one and should not be solely relied upon.

One potential solution is to set a maximum cap on delegations, which could redistribute voting power within 4-6 quarters (previous attempts such as increasing the validator set, changing the staking dashboard UI, and raising awareness have proven inefficient in the past). This proposal could be challenging to pass. For instance, if we set the maximum cap at 1% VP, the 27 top validators currently holding 67% of the voting power would no longer receive delegations. If we set the cap at 2%, 11 validators holding 44% of the voting power would be unable to receive more delegations. Either version may be hard to pass, as the affected validators may vote no.
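
A toy sketch of how such a cap interacts with a top-heavy distribution; the synthetic shares below do not reproduce the actual Hub figures quoted above, they only illustrate the mechanics.

```python
# Sketch of how a voting-power cap interacts with a concentrated distribution.
# The share distribution is synthetic; the 1% / 2% caps are the ones discussed above.

def validators_over_cap(vp_shares: list[float], cap: float) -> tuple[int, float]:
    """Count validators at/above the cap and the share of total VP they hold."""
    over = [s for s in vp_shares if s >= cap]
    return len(over), sum(over)

# Synthetic top-heavy distribution across 175 validators (shares sum to 1.0):
shares = [0.04] * 10 + [0.015] * 20 + [0.30 / 145] * 145
for cap in (0.01, 0.02):
    count, held = validators_over_cap(shares, cap)
    print(f"cap {cap:.0%}: {count} validators (holding {held:.0%}) stop receiving delegations")
# cap 1%: 30 validators (holding 70%) stop receiving delegations
# cap 2%: 10 validators (holding 40%) stop receiving delegations
```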

But if this proposal passes, we are on the right track to redistributing VP.