[Proposal] [Draft] Proportional Slashing

IMO this proposal is PLACEBO :no_mouth:

1 Like

I don’t think this proposal would have the effect of flattening the voting power distribution. In fact, it might have the opposite effect, and drive further consolidation of stake to a smaller number of large, well capitalized entities.

Increasing slashing penalties for larger validators favors sophisticated and well capitalized entities who can afford to build infrastructure that is very unlikely to fault. Smaller validators are less able to invest in high quality infrastructure and technical operations, and are viewed as higher risk operations. A simple example is the use of HSM-based key management, which even in a simple configuration reduces the risk of a double sign by a considerable degree. Many small operators are not able to afford (or choose not to pay) the cost of the physical infrastructure required, and instead use local software signing with plaintext keys on disk. Because the risk of a slashing event can be mitigated by the larger operators, even with a higher cost per fault their relative risk of loss may be lower than that of a smaller and less sophisticated operator. It is rational to select a 1% risk of a 10% loss over a 3% risk of a 5% loss (numbers for illustration only).
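
For concreteness, treating those illustrative percentages as a fault probability and a slash size, the expected loss per unit staked works out as:

```latex
% Illustrative numbers from the sentence above
E[\text{loss}_{\text{large operator}}] = 0.01 \times 0.10 = 0.001 = 0.10\%
\qquad
E[\text{loss}_{\text{small operator}}] = 0.03 \times 0.05 = 0.0015 = 0.15\%
```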

The need to introduce an anti-Sybil measure further disadvantages small operators. For many reasons it is difficult, perhaps impossible, to reason about the risk of correlated slashing faults between validators. Small operators are more likely to have similar infrastructure, deployed in similar configurations, with identical software stacks. Many small operators do not use hardware-based key management, leaving them all vulnerable to similar risks, which will correlate across diverse cloud infrastructures. While small operators can make claims about their infrastructure and operational skills, they are less able to invest in things like third-party audits to verify those claims. If a delegator cannot confidently assess the risk of correlated faults causing larger slashing events, they will assign higher risk to smaller operators.

Finally, large operators will be better able to insure against slashing losses. As markets mature, sophisticated and well funded operators are likely to be able to acquire third-party insurance against slashing, negating the increased slashing penalties. The cost structures inherent in offering financial products of this type advantage larger operators, despite the larger slashing penalties. It will be more difficult for smaller operators to qualify for and afford such coverage, leaving them less able to compete. Well capitalized entities, such as centralized exchanges, can self-insure and provide full guarantees against loss. Very few operators other than large centralized exchanges have the available capital to offer meaningful guarantees of this nature.

In addition to the negative consequences of the anti-Sybil measure, it is also unlikely to actually work. The incremental cost to large operators of splitting their operation into a number of smaller validators is small, even if they do so on diverse infrastructure. Many large operators already operate hardware in multiple physical locations and spread cloud-based operations across multiple providers. Relative to the high operating costs of these entities, the additional cost of splitting their operations would be small.

1 Like

Thanks for your work on this, @sunnya97

A few initial questions about your design choices:

  1. Are you relying upon slashing events to drive behavioural changes? This solution only appears to make delegators directly feel the negative externality of centralization in the case that their validator gets slashed.
  2. How can participants understand and manage risk, given a variable slashing rate that may change often and/or very quickly?
  3. I’m having a hard time seeing how this is Sybil-resistant. Is equivocation a correlated fault? ie. if I’m running multiple validators and one of them double-signs, is there an increased likelihood that my other validators will do the same?

I find the different points you are raising interesting.
Given that I am working on a slashing insurance product, maybe I can answer one of them:

DeFi and building products on the blockchain open the door for all validators to access “sophisticated products”. With the current formula, small validators would actually be advantaged: they would have a lower slashing percentage, and even if the cost of insurance is high relative to their size, coverage could be offered to them at a much lower cost than to bigger validators.
The example of the 100th validator being at 0.04% means its slashing percentage would be 0.04% when slashed alone. When insuring against this risk, the gross cost will be lower for the 100th validator than for one with a 0.5% slashing risk (taking into account that there won’t be a 10x difference between the risk prices of different validators). But once the 100th validator moves to a higher position, the gross cost will increase.
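
As a rough illustration (my own assumption, not part of the proposal): if an actuarially fair premium per staked atom is approximately the fault probability q times the slashing fraction, then:

```latex
% Hypothetical fair-premium comparison, q = assumed annual fault probability
\text{premium}_{100\text{th}} \approx q \times 0.0004
\qquad
\text{premium}_{\text{larger}} \approx q \times 0.005
```

Under that assumption, coverage for the 100th validator is roughly 12.5x cheaper per staked atom unless the market prices its fault probability much higher, which is the point about there not being a 10x difference in risk price.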

So from a level 1 slashing risk perspective, I think the current solution empowers the weak ones.

For level 2 slashing (two validators being slashed at the same time), I would consider it the small validators’ responsibility to decorrelate: once they have the means to gain more voting power, they should commit to their role and make sure they can secure the network.
It does come with more investment, but at least those investments have a better chance of paying off in the long run than in the current situation.

1 Like

If a small validator’s fault is correlated with a large validator, as I understand it the small validator would be subjected to a large slash.

e.g. validator A has 0.1% and validator B has 10%. If they have a correlated fault:
(sqrt(0.001) + sqrt(0.1))^2 ≈ 12.1%

This small validator just suffered a very large slash. This seems to be the opposite of the stated goal. Or am I misunderstanding?
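
For reference, here is a minimal sketch of the formula as quoted above; the function name and structure are illustrative, not the actual Cosmos SDK slashing code:

```go
// Sketch of the correlated slash fraction discussed in this thread:
// slash = (sqrt(p_1) + sqrt(p_2) + ... + sqrt(p_n))^2,
// where each p_i is a faulting validator's share of total voting power.
package main

import (
	"fmt"
	"math"
)

// slashFraction returns the fraction of stake slashed from every
// validator in a set of correlated faults.
func slashFraction(powerShares []float64) float64 {
	sumRoots := 0.0
	for _, p := range powerShares {
		sumRoots += math.Sqrt(p)
	}
	return sumRoots * sumRoots
}

func main() {
	// Validator A at 0.1% and validator B at 10% faulting together.
	fmt.Printf("%.1f%%\n", 100*slashFraction([]float64{0.001, 0.1})) // prints 12.1%
}
```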

:smiley:
Love the way you think. Everytime I am on this forum I get so excited about the COSMOS network.

1 Like

Yes, this is true, and this math should force small validators to decorrelate.

Basically if this proposal passes and both delegators and validators don’t take it into account (meaning delegators don’t start redelegating to smaller validators and small validators don’t decorrelate) then a case like this might happen.
But I get your point regarding the small validator moving from a very low slashing percentage to a very high one while the big validator sees a smaller change.

1 Like

Hey @sunnya97, curious to know where you’re seeing the Gini coefficient of validator voting power.
I just went through my records, and I’m sort of seeing a bit of the opposite, i.e. consensus power appears to have decentralized somewhat.

I’ve shown the Lorenz curve for April, July, August, September, and now October. Let me know if you’d like to see more in-depth data. Here are the table summaries:

[images: Lorenz curve table summaries for April, July, August, September, and October]

As an aside, it appears that only Sikka’s power has been increasing steadily and rapidly. Using gov power charts because gov power is equivalent to consensus power:
[images: governance power charts]

2 Likes

Basically if this proposal passes and both delegators and validators don’t take it into account (meaning delegators don’t start redelegating to smaller validators and small validators don’t decorrelate) then a case like this might happen.

I don’t think it’s been shown that small validators can effectively decorrelate, and even if they could, they have no way to reliably signal this to delegators. Since it is impossible to accurately estimate correlation risk, a rational delegator will evaluate the slashing risk as though validators are highly correlated.

But I get your point regarding the small validator moving from a very low slashing percentage to a very high one while the big validator sees a smaller change.

Given this, the proposal fails to achieve its stated intent.

3 Likes

I have to agree with @mattharrop here. It’s difficult not only to decorrelate but also to signal and externalize this. Amongst smaller validators, there are only so many varying infrastructure setups and cloud providers.

1 Like

Great point. Admittedly, I didn’t actually calculate the Gini change over time. I guess I meant more that there is a high Gini coefficient, rather than necessarily an increasing one. I removed reference to Gini coefficient altogether from the proposal.

1 Like

Imo, this is exactly the type of validator we want to incentivize. Validators should be investing into their hardware and security setups.

And by doing this, they are imposing a massive negative externality on the network in the form of decreased resilience. This mechanism is intended to incentivize them not to do that.

If they do this, it is beneficial to the network, as it is helping make the network more resilient.

Yeah, that is exactly my intention. The harm that centralization poses is when validators fault. And so we should harm the contributors to centralization when they fault.

Redelegate whenever they feel that the risk profiles of their validators have changed. Delegators are expected to be active participants in the network.

If someone hacks your setup, then sure, they’d try to make both your keys fault.

Only if the slash is correlated. Then all correlated validators get the same slash. The standard default is that the small validator gets a smaller slash.

Well, they should figure out how. Because if not, they’re just burdening the network with additional signatures while not contributing to security by increasing the resilience of the network.

They probably shouldn’t run their entire setup on a single cloud provider. And there are many different infrastructure setups: different HSMs, data centers, KMS services, cloud providers (there are many), servers, Tendermint implementations, even OSs. I believe @mdyring was even planning on using BSD originally!

2 Likes

My answer might sound a bit light on this one, but I am a strong believer in free and transparent markets. And transparency is what DeFi brings.
One of the use cases of the slashing insurance is actually to derive a risk score for each validator, calculated from the premiums bought for that particular validator. The higher the insurance premium price, the riskier the validator (from a slashing risk perspective). For a validator, one way of signaling is to make sure that the market (especially the risk buyers) is aware that they are doing everything needed to decorrelate, and to publish and showcase what they have implemented. This will push the market to price that particular validator’s risk lower. In the same way that prices in traditional markets are seen as incorporating all available information, the premium prices will incorporate all the information validators make available.
All of this data would also be available to delegators in a transparent manner.

1 Like

Isn’t slashing already proportional (i.e. 5%)? What is your goal in redistributing the slashing amount? Is it actually to drive a change in the Gini coefficient? Do you have a Gini coefficient target in mind?

1 Like

Heads up that the ADR has been updated:

It now allows the “root” used in correlation punishment to be governance specified, rather than requiring the use of the square root.
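
For intuition, here is a rough sketch of what a governance-specified root r could look like, assuming the generalization takes the form slash = (p_1^(1/r) + … + p_n^(1/r))^r; the parameter and function names are mine, not the ADR’s:

```go
// Hypothetical generalization of the square-root formula: with root = 2
// this reduces to the version discussed earlier in the thread; larger
// roots punish correlated faults more heavily.
package main

import (
	"fmt"
	"math"
)

func slashFractionWithRoot(powerShares []float64, root float64) float64 {
	sum := 0.0
	for _, p := range powerShares {
		sum += math.Pow(p, 1/root)
	}
	return math.Pow(sum, root)
}

func main() {
	shares := []float64{0.001, 0.1} // the 0.1% + 10% example from above
	fmt.Printf("root=2: %.1f%%\n", 100*slashFractionWithRoot(shares, 2)) // ~12.1%
	fmt.Printf("root=3: %.1f%%\n", 100*slashFractionWithRoot(shares, 3)) // ~18.0%
}
```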

1 Like

I also believe it would not have the desired effect of achieving more decentralization. IMHO, paying a flat amount per node as a percentage of the overall inflation is the better solution. Rather than inducing fear through slashing risk, I believe a higher reward for staking on a low-staked node is a truly scalable and workable solution. It solves the centralization and unsustainable-node problem, and it also provides a kind of masternode price tag that gives value to Atoms.

1 Like

What stops someone from just running many nodes?

Nice to see some discussion going on. Here are a couple of our concerns:

  1. If you’re taking all this at face value, isn’t this making decisions based on nothing but assumptions? Based on the time frame they failed in, you’re essentially relying on guesswork to slash validators that are likely (!) to run the same setups. I think we can agree that this is not ideal. Actually, there might not be an ideal solution to this, who knows.

  2. What stops large validators from running multiple validators with different setups? Let’s assume you are one of the currently 125 validators in the set. One of these 125 setups is known to you, namely your own. You don’t know anything about the setups of the other 124 validators. By splitting your validator into two validators, you’ve increased the number of known setups to two, decreased the number of unknown setups to 123, pushed a small validator out of the set (thereby slightly increasing everyone’s voting power), and reduced your risk of having correlated setups. If I now have one setup that is unique and one that many others have, doesn’t this reduce the amount I’m getting slashed? (See the rough worked comparison after this list.)

  3. How do you think the number of different setups to choose from scales with an ever-increasing validator set? Wouldn’t this disincentivize the use of open-source software like the TMKMS? You can’t expect everyone to develop their own in-house solution for every single problem.

  4. The proposal would only make the network more resilient if you assume all setups will be flawed in some way for eternity. If I know there is a setup that many others use but is stable and is known to never fail, why wouldn’t I want to go with that? This goes against the idea of running different setups from other validators.
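
To make point 2 concrete, here is a rough worked comparison using the square-root formula quoted earlier in the thread; the specific numbers are my own assumptions. Suppose the rest of the correlated, faulting group has a square-root sum of 0.3 (e.g. a single validator with 9% of power), and you control 2% of voting power:

```latex
% Unsplit: your full 2% faults with the group
(0.3 + \sqrt{0.02})^2 \approx 0.195
\;\Rightarrow\; \text{loss} \approx 0.195 \times 2\% \approx 0.39\% \text{ of total stake}

% Split into 1% + 1%, and only the common-setup half faults
(0.3 + \sqrt{0.01})^2 = 0.16
\;\Rightarrow\; \text{loss} = 0.16 \times 1\% = 0.16\% \text{ of total stake}
```

Under those assumed numbers the split operator does lose less, which is exactly the concern raised in point 2.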

2 Likes

It’s interesting to think about the true cost to large validators of splitting up to get around this, and whether that cost would be less than the benefit gained by small validators. It would be good to hear from small validators whether they think this will really be helpful, or whether it will just further penalize them due to the cost of decorrelating. Obviously decorrelating is something we should aim for, but I wonder if forcing it like this might have unintended consequences. Of course, for this to help small validators in the first place, the penalty for slashing when it’s just them faulting must be less than it is currently.

Wonder if there’s other ways to work towards validator decentralization we could consider. Some random ideas (may or may not be good):

  • smaller rewards for larger validators (the penalty would have to be small enough that it’s not worth it for them to split into multiple smaller validators …)
  • higher cost to delegating to large validators (maybe you have to burn some atoms as a function of the validator’s voting power, or pay into the community pool, or pay a higher fee)
  • forced distribution of delegation, i.e. some fraction of a delegation to a validator is distributed as delegations to all smaller validators (might close the gap between the biggest and smallest vals, but could have weird side effects like raising the minimum bond to be an active validator and creating incentives for 100% commission …)
  • incentives for re-delegating to smaller validators (maybe paid out of the community pool, or taken from the rewards for the larger validators, or paid out of atoms that are burned in some other mechanism?)

What else? Are any of these good?

It’s already happening, to some degree - infrastructure-as-a-service companies that run whitelabel validators on behalf of others are essentially a single validator in terms of network security since they control the consensus keys. Once it’s fully automated, it’s trivial to deploy additional instances.

The cheapest and most correlated way to run a lot of validators would be a modification to the node software that votes on behalf of multiple validators.