Imo, this is exactly the type of validator we want to incentivize. Validators should be investing in their hardware and security setups.
And by doing this, they are imposing a massive negative externality on the network in the form of reduced resilience. This procedure is meant to incentivize them not to do that.
If they do this, it benefits the network, as it helps make the network more resilient.
Yeah, that is exactly my intention. The harm that centralization poses materializes when validators fault. And so we should harm the contributors to centralization when they fault.
Redelegate whenever they feel that the risk profiles of their validators have changed. Delegators are expected to be active participants in the network.
If someone hacks your setup, then sure, they'd try to make both your keys fault.
Only if the slash is correlated. Then all correlated validators get the same slash. The standard default is that the small validator gets a smaller slash.
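To make that concrete, here is a minimal sketch of the kind of penalty curve I have in mind. The quadratic shape and the constant `k` are my own assumptions for illustration, not a formula from the spec: the slash fraction grows with the combined voting power of the validators that fault in the same window, so a small validator faulting alone pays much less than the same validator faulting alongside a large, correlated one.

```go
package main

import "fmt"

// Validator is a simplified, assumed view of a validator's stake.
type Validator struct {
	Name        string
	VotingPower float64 // fraction of total voting power, e.g. 0.02 == 2%
}

// correlatedSlashFraction is a hypothetical penalty curve: the slash fraction
// grows quadratically with the combined voting power of all validators that
// faulted in the same window, capped at 100%. k is an illustrative parameter.
func correlatedSlashFraction(faulting []Validator, k float64) float64 {
	var combined float64
	for _, v := range faulting {
		combined += v.VotingPower
	}
	frac := k * combined * combined // quadratic in combined faulting power (assumption)
	if frac > 1 {
		frac = 1
	}
	return frac
}

func main() {
	small := Validator{"small-val", 0.005}
	large := Validator{"large-val", 0.10}

	// A small validator faulting alone is slashed far less than the same
	// validator faulting in the same window as a large, correlated one.
	fmt.Printf("alone:      %.4f%%\n", 100*correlatedSlashFraction([]Validator{small}, 50))
	fmt.Printf("correlated: %.4f%%\n", 100*correlatedSlashFraction([]Validator{small, large}, 50))
}
```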
Well, they should figure out how. Because if not, they're just burdening the network with additional signatures while not contributing to security by increasing the resilience of the network.
They probably shouldn't run their entire setup on a single cloud provider. And there are many different infrastructure setups: using different HSMs, data centers, KMS services, cloud providers (there are many), servers, Tendermint implementations, even OSs. I believe @mdyring was even planning on using BSD originally!
My answer might sound a bit light on this one but I am a strong believer in free and transparent markets. And transparency is what DeFi brings.
One of the use cases of the slashing insurance is actually to have a risk score for each validator, calculated from the premiums that were bought for that particular validator. The higher the insurance premium price, the riskier the validator (from a slashing-risk perspective). For a validator, one way of signaling is to make sure that the market (especially the risk buyers) is aware that they are doing everything needed to decorrelate, and to publish/showcase what they implemented. This will push the market to price that particular validator's risk lower. The same way prices in traditional markets are seen as taking all available information into account, the premium prices will take into account all the information the validators make available.
All of this data would also be available to delegators in a transparent manner.
Isn't slashing already proportional (i.e. 5%)? What is your goal in redistributing the slashing amount? Is it actually to drive a change in the Gini coefficient? Do you have a Gini coefficient target in mind?
I also believe it would not have enough of the desired effect in terms of achieving more decentralization. IMHO, paying a flat amount per node as a percentage of the overall inflation is the better solution. Rather than inducing fear of slashing risk, I believe earning a higher reward for staking on a low-staked node is a truly scalable and workable solution. It solves both the centralization problem and the unsustainable-node problem, and it also provides a kind of masternode price tag that gives value to Atoms.
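A minimal sketch of what that could mean for a delegator (the split between proportional and flat components and all numbers are assumptions for illustration only): most of inflation is still paid proportionally to stake as today, but a fixed slice is split equally across active nodes, which mechanically raises the effective yield on low-staked validators.

```go
package main

import "fmt"

// delegatorYield returns the assumed annual yield for a delegator on a given
// validator, when a fraction of total inflation is paid out as a flat amount
// per active node and the rest is paid proportionally to stake.
func delegatorYield(validatorStake, totalStake, totalInflation, flatShare float64, numValidators int) float64 {
	proportional := totalInflation * (1 - flatShare) * validatorStake / totalStake
	flat := totalInflation * flatShare / float64(numValidators)
	return (proportional + flat) / validatorStake
}

func main() {
	const (
		totalStake     = 200_000_000.0 // illustrative
		totalInflation = 14_000_000.0  // illustrative, ~7% of total stake
		flatShare      = 0.10          // 10% of inflation paid flat per node (assumption)
		numValidators  = 125
	)

	// The flat component makes the small validator's effective yield higher,
	// giving delegators a direct reason to stake on low-staked nodes.
	fmt.Printf("large validator (10%% of stake):  %.2f%% yield\n",
		100*delegatorYield(20_000_000, totalStake, totalInflation, flatShare, numValidators))
	fmt.Printf("small validator (0.1%% of stake): %.2f%% yield\n",
		100*delegatorYield(200_000, totalStake, totalInflation, flatShare, numValidators))
}
```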
Nice to see some discussion going on. Here are a couple of our concerns:
If you're taking all this at face value, isn't this making decisions based on nothing but assumptions? Based on the time frame they failed in, you're essentially relying on guesswork to slash validators that are likely (!) to run the same setups. I think we can agree that this is not ideal. Actually, there might not be an ideal solution to this, who knows.
What stops large validators from running multiple validators with different setups? Let's assume you are one of the currently 125 validators in the set. One of these 125 setups is known to you, namely your own. You don't know anything about the setups of the other 124 validators. By splitting up your validator into two validators, you've increased the number of known setups to two, decreased the number of unknown setups to 123, pushed out a small validator from the set (thereby slightly increasing everyone's voting power), and reduced the risk of having correlated setups. If I now have one setup that is unique and one that many others have, doesn't this reduce the amount I'm getting slashed?
How do you think the number of different setups to choose from scales with an ever-increasing validator set? Wouldn't this disincentivize the use of open-source software like the TMKMS? You can't expect everyone to develop their own in-house solution for every single problem.
The proposal would only make the network more resilient if you assume all setups will be flawed in some way for eternity. If I know there is a setup that many others use but that is stable and known to never fail, why wouldn't I want to go with that? This runs counter to the idea of running a setup different from other validators'.
Interesting to think about the true cost of large validators splitting up to get around this, and whether the cost to do this would be less than the benefit gained by small validators. Would be good to hear from small validators whether they think this will really be helpful, or just further penalize them due to the cost of decorrelating. Obviously decorrelating is something we should aim for, but I wonder if forcing it like this might have unintended consequences. Of course, for this to help small vals in the first place, the penalties for slashing when it's just them faulting must be less than they are currently.
Wonder if there are other ways to work towards validator decentralization we could consider. Some random ideas (may or may not be good):
smaller rewards for larger validators (the penalty would have to be small enough that it's not worth it for them to split into multiple smaller validators … see the sketch after this list)
higher cost to delegating to large validators (maybe you have to burn some atoms as a function of the validator's voting power, or pay into the community pool, or pay a higher fee)
forced distribution of delegation, i.e. some fraction of a delegation to a validator is distributed as delegations to all smaller validators (might close the gap between the biggest and smallest vals but could have weird side effects like bringing up the minimum bond to be an active validator and creating incentives for 100% commission …)
incentives for re-delegating to smaller validators (maybe paid out of the community pool, or taken from the rewards for the larger validators, or paid out of atoms that are burned in some other mechanism?)
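For the first idea, here is a rough sketch of how rewards could be discounted above some voting-power threshold. The curve shape, threshold, slope, and floor are made-up parameters, not a worked-out proposal, and the floor is there precisely because of the caveat above: the discount has to stay mild enough that splitting into several validators isn't an obvious win.

```go
package main

import "fmt"

// rewardMultiplier is a hypothetical penalty curve for the "smaller rewards
// for larger validators" idea: full rewards up to a voting-power threshold,
// then a linear discount above it, floored so the penalty stays mild enough
// that splitting into several validators isn't obviously profitable.
func rewardMultiplier(votingPower, threshold, slope, floor float64) float64 {
	if votingPower <= threshold {
		return 1.0
	}
	m := 1.0 - slope*(votingPower-threshold)
	if m < floor {
		return floor
	}
	return m
}

func main() {
	// Made-up parameters: no penalty below 1% voting power, then a gentle
	// discount, never dropping below 90% of the base reward.
	for _, vp := range []float64{0.005, 0.02, 0.08} {
		fmt.Printf("voting power %.1f%% -> reward multiplier %.3f\n",
			100*vp, rewardMultiplier(vp, 0.01, 2.0, 0.90))
	}
}
```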
It's already happening, to some degree - infrastructure-as-a-service companies that run whitelabel validators on behalf of others are essentially a single validator in terms of network security, since they control the consensus keys. Once it's fully automated, it's trivial to deploy additional instances.
The cheapest and most correlated way to run a lot of validators would be a modification to the node software that votes on behalf of multiple validators.