[Proposal] [Draft] Proportional Slashing

Imo, this is exactly the type of validator we want to incentivize. Validators should be investing in their hardware and security setups.

And by doing this, they are imposing a large negative externality on the network in the form of reduced resilience. This procedure is meant to incentivize them not to do that.

If they do this, it is beneficial to the network, as it helps make the network more resilient.

Yeah, that is exactly my intention. The harm that centralization poses materializes when validators fault. And so we should harm the contributors to centralization when they fault.

They should redelegate whenever they feel that the risk profiles of their validators have changed. Delegators are expected to be active participants in the network.

If someone hacks your setup, then sure, they’d try to make both your keys fault.

Only if the slash is correlated; in that case all correlated validators get the same slash. The default is that the small validator gets a smaller slash.
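
To make that concrete, here is a minimal sketch of the idea, assuming the slash fraction simply scales with the combined voting-power share of everything that faults in the same window. The constant k, the linear sum, and the function names are illustrative assumptions, not the exact formula from the ADR:

```go
package main

import "fmt"

// slashFraction returns the fraction of stake slashed for every validator in a
// correlated-fault group, assuming the slash scales with the group's combined
// share of total voting power (a simplification of proportional slashing).
func slashFraction(k float64, faultingPowers []float64) float64 {
	total := 0.0
	for _, p := range faultingPowers {
		total += p // p is a share of total voting power, e.g. 0.01 for 1%
	}
	return k * total
}

func main() {
	k := 3.0 // hypothetical proportionality constant

	// A 1% validator faulting alone is slashed far less than a 10% validator.
	fmt.Printf("%.2f\n", slashFraction(k, []float64{0.01})) // 0.03 -> 3% slash
	fmt.Printf("%.2f\n", slashFraction(k, []float64{0.10})) // 0.30 -> 30% slash

	// If both fault in the same window, they are treated as one correlated
	// group and both receive the same, larger slash.
	fmt.Printf("%.2f\n", slashFraction(k, []float64{0.01, 0.10})) // 0.33 for each
}
```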

Well they should figure out how. Cause if not, they’re just burdening the network with additional signatures while not contributing to security by increasing the resilience of the network.

They probably shouldn’t run their entire setup on a single cloud provider. And there are many different infrastructure setups: different HSMs, data centers, KMS services, cloud providers (there are many), servers, Tendermint implementations, even OSs. I believe @mdyring was even planning on using BSD originally!


My answer might sound a bit light on this one, but I am a strong believer in free and transparent markets. And transparency is what DeFi brings.
One of the use cases of the slashing insurance is actually to have a risk score for each validator, calculated from the premiums bought for that particular validator. The higher the insurance premium price, the riskier the validator (from a slashing-risk perspective).
For a validator, one way of signaling is to make sure the market (especially the risk buyers) is aware that they are doing everything needed to decorrelate, and to publish/showcase what they implemented. This will push the market to price that particular validator’s risk lower. The same way prices in traditional markets are seen as taking all available information into account, the premium prices will take into account all the information the validators make available.
All the data would also be available to delegators in a transparent manner.


Isn’t slashing already proportional (i.e. 5%)? What is your goal in redistributing the slashing amount? Is it actually to drive a change in the Gini coefficient? Do you have a Gini coefficient target in mind?


Heads up that the ADR has been updated:

It now allows the “root” used in correlation punishment to be governance specified, rather than requiring the use of the square root.
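
For anyone trying to follow what that parameter changes, here is a rough sketch of the correlated-slash computation with a configurable root. The parameter names and the constant k are placeholders, and the exact formula in the ADR may differ:

```go
package main

import (
	"fmt"
	"math"
)

// correlatedSlashFraction combines the faulting validators' power shares using a
// governance-specified root r before scaling: k * (p_1^(1/r) + ... + p_n^(1/r))^r.
// r = 2 gives the square-root form; r = 1 reduces to a plain sum.
func correlatedSlashFraction(k, r float64, faultingPowers []float64) float64 {
	sum := 0.0
	for _, p := range faultingPowers {
		sum += math.Pow(p, 1/r)
	}
	return k * math.Pow(sum, r)
}

func main() {
	k := 3.0 // hypothetical proportionality constant

	// A single 10% validator faulting alone: same slash regardless of the root.
	fmt.Printf("%.2f\n", correlatedSlashFraction(k, 2, []float64{0.10})) // 0.30

	// The same stake split into two 5% validators that fault together is
	// slashed harder under r = 2, which is what discourages fake splits.
	fmt.Printf("%.2f\n", correlatedSlashFraction(k, 2, []float64{0.05, 0.05})) // 0.60
	fmt.Printf("%.2f\n", correlatedSlashFraction(k, 1, []float64{0.05, 0.05})) // 0.30
}
```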


I also believe it would not have a sufficient effect toward achieving more decentralization. IMHO, paying a flat amount per node as a percentage of the overall inflation is the better solution. Rather than inducing fear of slashing risk, I believe a higher reward for staking on a low-staked node is a truly scalable and workable solution. It solves both the centralization problem and the unsustainable-node problem, and it also provides a kind of masternode price tag that gives value to Atoms.
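
To illustrate why a flat per-node payment favors delegating to low-staked nodes, here is a minimal sketch; the per-node amount and the stake figures are made-up numbers, not part of any proposal:

```go
package main

import "fmt"

// perTokenReward returns the reward earned per staked token on a node, assuming
// (hypothetically) that a fixed slice of inflation is split equally per node.
func perTokenReward(flatPoolPerNode, nodeStake float64) float64 {
	return flatPoolPerNode / nodeStake
}

func main() {
	flatPoolPerNode := 1000.0 // hypothetical per-node share of inflation

	// The same flat payment yields a much higher return per token on a
	// low-staked node than on a heavily staked one.
	fmt.Printf("%.4f\n", perTokenReward(flatPoolPerNode, 100000.0))  // 0.0100
	fmt.Printf("%.4f\n", perTokenReward(flatPoolPerNode, 5000000.0)) // 0.0002
}
```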


What stops someone from just running many nodes?

Nice to see some discussion going on. Here are a couple of our concerns:

  1. If you’re taking all this at face value, isn’t this making decisions based on nothing but assumptions? Based on the time frame they failed in, you’re essentially relying on guesswork to slash validators that are likely (!) to run the same setups. I think we can agree that this is not ideal. Actually, there might not be an ideal solution to this, who knows.

  2. What stops large validators from running multiple validators with different setups? Let’s assume you are one of the current 125 validators in the set. One of these 125 setups is known to you: your own. You don’t know anything about the setups of the other 124 validators. By splitting your validator into two, you’ve increased the number of known setups to two, decreased the number of unknown setups to 123, pushed a small validator out of the set (thereby slightly increasing everyone’s voting power), and reduced the risk of having correlated setups. If I now have one setup that is unique and one that many others have, doesn’t this reduce the amount I’m getting slashed? (See the worked example after this list.)

  3. How do you think the number of different setups to choose from scales with an ever-increasing validator set? Wouldn’t this disincentivize the use of open-source software like the TMKMS? You can’t expect everyone to develop their own in-house solution for every single problem.

  4. The proposal would only make the network more resilient if you assume all setups will be flawed in some way for eternity. If I know there is a setup that many others use but is stable and is known to never fail, why wouldn’t I want to go with that? This goes against the idea of running different setups from other validators.
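
On point 2, here is a rough worked example under the square-root form of the correlation penalty (k is the proportionality constant and p the validator’s power share; the numbers are illustrative). A single validator with power p that faults alone is slashed k * (sqrt(p))^2 = k * p. Split into two validators of p/2 each that fault in the same window, the slash becomes k * (sqrt(p/2) + sqrt(p/2))^2 = 2 * k * p, i.e. twice as much. If only the half with the unique setup faults, the slash is k * p/2. So splitting only pays off to the extent the two halves really are decorrelated, which appears to be the intended incentive.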


Interesting to think about the true cost of large validators splitting up to get around this, and whether that cost would be less than the benefit gained by small validators. It would be good to hear from small validators whether they think this will really be helpful, or whether it will just further penalize them due to the cost of decorrelating. Obviously decorrelating is something we should aim for, but I wonder if forcing it like this might have unintended consequences. Of course, for this to help small vals in the first place, the penalties for slashing when it’s just them faulting must be less than they are currently.

I wonder if there are other ways to work towards validator decentralization that we could consider. Some random ideas (may or may not be good):

  • smaller rewards for larger validators (the penalty would have to be small enough that it’s not worth it for them to split into multiple smaller validators …)
  • higher cost to delegating to large validators (maybe you have to burn some atoms as a function of the validator’s voting power, or pay into the community pool, or pay a higher fee; one possible shape is sketched after this list)
  • forced distribution of delegation, i.e. some fraction of a delegation to a validator is distributed as delegations to all smaller validators (might close the gap between the biggest and smallest vals, but could have weird side effects like raising the minimum bond to be an active validator and creating incentives for 100% commission …)
  • incentives for re-delegating to smaller validators (maybe paid out of the community pool, or taken from the rewards for the larger validators, or paid out of atoms that are burned in some other mechanism?)
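
As one purely hypothetical shape for the second bullet (burning atoms as a function of voting power), here is a sketch; the quadratic curve, the rate, and every name in it are invented for illustration:

```go
package main

import "fmt"

// delegationBurn returns how much of a delegation would be burned, assuming
// (hypothetically) a surcharge that grows with the square of the target
// validator's share of total voting power.
func delegationBurn(amount, validatorPowerShare, baseRate float64) float64 {
	return amount * baseRate * validatorPowerShare * validatorPowerShare
}

func main() {
	baseRate := 10.0 // hypothetical scaling constant

	// Delegating 1,000 atoms to a 1% validator vs. a 10% validator.
	fmt.Printf("%.2f\n", delegationBurn(1000, 0.01, baseRate)) // 1.00 atom burned
	fmt.Printf("%.2f\n", delegationBurn(1000, 0.10, baseRate)) // 100.00 atoms burned
}
```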

What else? Are any of these good?

It’s already happening to some degree: infrastructure-as-a-service companies that run whitelabel validators on behalf of others are essentially a single validator in terms of network security, since they control the consensus keys. Once it’s fully automated, it’s trivial to deploy additional instances.

The cheapest and most correlated way to run a lot of validators would be a modification to the node software that votes on behalf of multiple validators.