The paper mentions that the frequency of being selected as a proposer depends on voting power, but as I understand it, if proposers are selected in a round-robin fashion, each validator gets its turn to be proposer once?
I wanted more clarity on this; maybe I am perceiving it wrongly.
Just found a doc which might help: https://github.com/tendermint/tendermint/blob/master/docs/spec/reactors/consensus/proposer-selection.md
If I write something wrong, please correct me. But I think the following function in the Tendermint repo makes this quite easy to follow. It also explains the meaning of Accum, which has often been discussed in the chat.
Every validator starts with their personal Voting Power and an Accum of 0. Every round, the IncrementAccum function is called and increases each validator's Accum by its Voting Power. Then the validator with the highest Accum is selected as the proposer for the round, and its Accum is decreased by the total voting power of all validators, therefore most likely becoming negative.
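In code, one round of that update might look roughly like this (a minimal JavaScript sketch of the idea, not the actual Go from validator_set.go; the function name is mine):

function selectProposer(validators) {
  // total voting power of the whole set
  const totalPower = validators.reduce((sum, v) => sum + v.power, 0)
  // 1. every validator's Accum grows by its own voting power
  validators.forEach(v => { v.accum = (v.accum || 0) + v.power })
  // 2. the validator with the highest Accum proposes this round
  let proposer = validators[0]
  for (const v of validators) {
    if (v.accum > proposer.accum) proposer = v
  }
  // 3. the proposer "pays" the total voting power, usually going negative
  proposer.accum -= totalPower
  return proposer
}

Calling this repeatedly on the four validators below reproduces the rounds in the example.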
Let’s do a little example and say the total voting power is 100. The example is quite long and a little simplified, but I hope this makes sense and helps people to understand it.
Validator A has 10 Steak Voting Power
Validator B has 20 Steak Voting Power
Validator C has 30 Steak Voting Power
Validator D has 40 Steak Voting Power
In the first round, the Accum of each Validator will be increased, resulting in the following Accums:
A -> 10
B -> 20
C -> 30
D -> 40
Now, D has the highest Accum and will be the Proposer for the round. His Accum will be decreased by the total voting power (100), resulting in this:
A -> 10
B -> 20
C -> 30
D -> -60 (40 - 100 = -60)
We enter round 2, and the Accum of each validator is increased by their voting power again.
A -> 20
B -> 40
C -> 60
D -> -20
This time, C has the highest Accum and will be the proposer, resulting in the following:
A -> 20
B -> 40
C -> -40 (60 - 100 = -40)
D -> -20
Entering round 3, increasing Accum by voting power again:
A -> 30
B -> 60
C -> -10
D -> 20
Now, B will be the proposer for the round:
A -> 30
B -> -40 (60 - 100 = -40)
C -> -10
D -> 20
Round 4:
A -> 40
B -> -20
C -> 20
D -> 60
Now D is again at the top and will be the proposer. A has not been the proposer yet because its voting power is too small. A will be at the top at some point, but D will be the proposer much more often because of its higher voting power.
Thanks @katernoir for the explanation.
I quickly coded what you said, based on validator_set.go:
function sortValidators(validators) {
  validators
    // sort by address first so that accumulator ties are broken by address
    // (relies on the stable sort of modern JS engines)
    .sort((v1, v2) => (v1.address < v2.address ? -1 : v1.address > v2.address ? 1 : 0))
    // then order by accumulator, highest first
    .sort((v1, v2) => v2.accumulator - v1.accumulator)
}

function getProposer(validators, times = 1) {
  // every validator's accumulator grows by its voting power
  validators.forEach(validator => {
    validator.accumulator =
      (validator.accumulator || 0) + validator.power * times
  })
  sortValidators(validators)
  // calculate total voting power
  const totalVotingPower = validators.reduce((result, validator) => {
    return result + validator.power
  }, 0)
  // the validator with the highest accumulator proposes
  // and gets the total voting power subtracted
  const mostest = validators[0]
  mostest.accumulator = mostest.accumulator - totalVotingPower
  return mostest.address
}
function getProposers(validators, rounds = 20) {
  const result = []
  for (let index = 0; index < rounds; index++) {
    result.push(getProposer(validators))
  }
  return result
}
// This prints
// p0 p1 p2 p3 p0 p0 p1 p2 p3 p0 p0 p1 p2 p3 p0 p0 p1 p2 p3 p0
console.log(
  getProposers(
    [
      { address: 'p0', power: 4 },
      { address: 'p1', power: 2 },
      { address: 'p2', power: 2 },
      { address: 'p3', power: 2 }
    ],
    20
  ).join(' ')
)

// This prints
// p0 p1 p0 p2 p3 p0 p0 p1 p0 p2 p3 p0
console.log(
  getProposers(
    [
      { address: 'p0', power: 3 },
      { address: 'p1', power: 1 },
      { address: 'p2', power: 1 },
      { address: 'p3', power: 1 }
    ],
    12
  ).join(' ')
)
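To tie this back to the original question about frequency: counting how often each address comes out of getProposers over many rounds shows that the share of proposals tracks voting power. A quick sketch built on the functions above (the countProposals helper is mine):

// Count how often each validator proposes over many rounds.
// With powers 4/2/2/2 out of 10, p0 should propose about 40% of the time.
function countProposals(validators, rounds = 1000) {
  const counts = {}
  getProposers(validators, rounds).forEach(address => {
    counts[address] = (counts[address] || 0) + 1
  })
  return counts
}

// Expected to print something like { p0: 400, p1: 200, p2: 200, p3: 200 }
console.log(
  countProposals(
    [
      { address: 'p0', power: 4 },
      { address: 'p1', power: 2 },
      { address: 'p2', power: 2 },
      { address: 'p3', power: 2 }
    ],
    1000
  )
)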
Thank you for the clear explanation!
But I am not sure why Tendermint does not use some kind of random, probabilistic approach.
If selection is deterministic, it has an extremely weak point: everyone knows who the next
proposer is, so an attacker only needs to attack the current proposer for every block.
Hence it is much easier to break the chain.
If we used a random number with a probability distribution, an attacker could not get any clue
who the next proposer is; they would only know the probabilities, which weakens the attack.
Any thoughts on this?
There have been many discussions about ways to do randomized proposer selection via a VDF. These aren't prelaunch. VRF-style proposals end up being unsafe in asynchrony, e.g. Algorand. That is an extremely undesirable property, hence the inclination towards VDFs.
However, my take on the whole randomized proposer selection question is that it really is not an issue. We're in a p2p network, not the classic client-server model. This means it is quite viable for a validator to hide their real IP, making them resilient to attacks. We heavily recommend that everyone use a sentry node. This means that to DDoS the actual validator, you would have to hack the sentry (or be the sentry's ISP) in order to glean the info for the real validator. But suppose you did DDoS the sentry. As a validator, I would have a "slow" sentry, which broadcasts my proposed blocks 2-3 round trips later than my main sentry. This makes it harder to figure out which one is my main sentry when beginning the DoS attack. (You would then have to retarget it.) You could even put this one behind Cloudflare.
Additionally, you would likely have private sentries / relay nodes to make this DDoS attack even less viable. (So even if you did DDoS all public-facing sentries, it won't really cause a problem.)
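For reference, the sentry setup described above is usually wired up in each node's config.toml under [p2p]. A rough sketch (the node IDs, IPs, and ports are placeholders; check the Tendermint docs for the full set of options):

# validator's config.toml: never gossip peers, only connect to your own sentries
[p2p]
pex = false
persistent_peers = "<sentry_node_id>@<sentry_ip>:26656"

# sentry's config.toml: gossip normally, but never reveal the validator to other peers
[p2p]
pex = true
persistent_peers = "<validator_node_id>@<validator_private_ip>:26656,<other_sentries>"
private_peer_ids = "<validator_node_id>"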