Seeking Opinions on Quantifying Validator Stability Factors

Hello, I am an undergraduate student working on a university project to represent the stability of validators as a numerical score. However, I am unsure which data to use and how to weight it for this quantification, so I am writing this post to ask for your opinions.

The data to be quantified are as follows:

  • Self-Unbonded amount and its trend
  • Commission trend
  • Uptime trend
  • Average number of blocks
  • Amount of bonded tokens
  • Whether slashed in the past 6 months
  • Proposal voting rate (how many of the last 50 proposals were voted on)

Among these data points, I would appreciate your help prioritizing which ones are essential when evaluating the stability of validators. Moreover, I would be even more grateful if you could explain the reasoning behind whichever data point you would rank first.
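
To give a sense of what I have in mind, here is a rough sketch in Python of how these factors might be combined into one score. The weights, and the assumption that every input is already normalized to the range [0, 1], are placeholders I made up for discussion, not values I have validated:

    # Rough sketch of a weighted validator stability score.
    # All weights and normalizations below are placeholders for discussion.
    def stability_score(v):
        # every input in "v" is assumed to be pre-normalized to [0, 1]
        weights = {
            "self_bond_trend": 0.20,   # placeholder weight
            "commission_trend": 0.10,
            "uptime_trend": 0.25,
            "avg_block_fill": 0.10,
            "bonded_tokens": 0.15,
            "not_slashed_6m": 0.10,    # 1.0 if not slashed in the past 6 months, else 0.0
            "gov_vote_rate": 0.10,     # proposals voted on / last 50 proposals
        }
        return sum(weights[k] * v[k] for k in weights)

    example = {
        "self_bond_trend": 0.8, "commission_trend": 0.9, "uptime_trend": 0.99,
        "avg_block_fill": 0.7, "bonded_tokens": 0.4, "not_slashed_6m": 1.0,
        "gov_vote_rate": 46 / 50,
    }
    print(round(stability_score(example), 2))  # roughly 0.82 for these made-up inputs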

Thank you.


Hey, I would be really happy to help you with that, including having a chat. You can send me a DM on Twitter, that’s twitter.com/gadikian, and I’ll gladly take you through it.

There’s also something that I would like you to be aware of: it’s my opinion that most attempts (in fact, every attempt I have seen so far) at creating a quantitative scoring system for validators have failed, in the sense that they lead to false conclusions. Another person I think you would likely want to talk to about this is @Spaydh from P2P Validator. If I had to guess, they have the most detailed set of analytics and metrics of anybody. One thing you should note, though, is that from conversations with P2P, I know that a great deal of their assessment approach is qualitative instead of quantitative.

Looking forward to talking with you more about this soon.

Hi @shk9946, welcome to the Cosmos Hub forum! You picked a really interesting topic to study; I hope you’ll get all the support you need. I’m happy to have a chat, although I’m probably far from being as sophisticated as @jacobgadikian paints me to be.
Could you define what you mean by “stability”? Are you assessing it from the perspective of the infrastructure (how efficiently and reliably blocks are processed) or from the perspective of the business’ financials (how economically sustainable validation is)? Thanks in advance for clarifying :slightly_smiling_face:


Also, do check out https://observatory.zone/ as they may have interesting data points and weights for you to explore. I believe they’re pretty open to discussion too, so feel free to reach out to them if you want to know more.


I regard P2P’s organizational stances on these matters as the most sophisticated in the industry, and would also recommend that @shk9946 note how rapidly we shifted from quantitative to qualitative.

I think that in the end you’d find that most of this is qualitative.

Agreed about https://observatory.zone.

In summary, I’ve found that the qualitative information very frequently invalidates the quantitative information, and I’d like to provide an example of this:

On Evmos, Notional misses around 5% of blocks. It isn’t up to our quality standards for missed blocks at all. Here’s a screenshot:

Looks bad, right?

[screenshot: Notional’s missed-block statistics on Evmos]

We miss so many blocks on Evmos because:

  • the network has short block times
  • we self-host our node in Hanoi, Vietnam

Conversations with the Evmos team, the Evmos DAO, and our Evmos delegators led to the decision to self-host despite the high miss rate. I think that @mircea could also chime in here; he was instrumental in getting the quantitative fixation out of my mind, though it took a long time to sink in.

Most of the stated factors are common across all validators. Here are some observations.

In Cosmos, validators rarely change their commission %. Most chains have a minimum of 5%, and the Hub itself is very competitive, so validators don’t change it much.

The uptime trend is also similar across validators, mostly 99-100%; only on a few chains like Evmos and Rebus are blocks missed.

Average number of blocks: can you explain what you mean by this? If these are proposed blocks, the count is tied to voting power (VP), so the factor is effectively the same for all validators once VP is accounted for.
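
To illustrate (a simplified sketch, not anyone’s actual methodology): Tendermint’s proposer rotation is weighted by voting power, so over a long window a validator’s share of proposed blocks roughly tracks its share of VP.

    # Simplified illustration: over a long enough window, the number of blocks a
    # validator proposes is roughly proportional to its share of voting power.
    def expected_proposed_blocks(validator_vp, total_vp, blocks_in_window):
        return blocks_in_window * validator_vp / total_vp

    # e.g. 1% of voting power over a 100,000-block window -> ~1,000 proposals
    print(expected_proposed_blocks(1_000_000, 100_000_000, 100_000))  # 1000.0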

On the other hand, the factors which might differ from validator to validator:

  • Slash history
  • Governance participation

Untrackable:

  • Self-bonded assets, because validators use proxy addresses for various reasons to delegate to their own validator.

Thank you for your response.

I understand that a qualitative approach is necessary, but since it is difficult to determine what makes a validator qualitatively good based on block data alone, we may have no choice but to use a quantitative approach to evaluate validators.

Additionally, to explain more about my project: we want to provide insight that helps delegators and service providers find healthy validators.


Thank you for your response.

Yes, you are right. I’d like to assess stability in both of the senses you mentioned.

To explain more about my project: we want to provide insight that helps delegators and service providers find healthy validators.


Okay, I got it. But my project is based on on-chain data; how can I take a qualitative approach? When creating insights from on-chain data, it is difficult to reflect qualitative information.

And if the network connection is unstable because of the node’s geographical location and blocks are being missed, isn’t it possible to solve that to some extent by using cloud services? I don’t think delegators or service providers are interested in distinguishing between good and bad validators as such; what matters to them is finding a highly available validator and chain. They are only interested in validators that benefit them. I aim to show people such validators with a quantitative score.

And you said it’s the approach your delegators like, so can I ask why?


Thank you for your reply.

In fact, I’m in an environment where I can get the seven pieces of data listed above as time series, so it doesn’t matter to me even if it’s untrackable.

Average number of blocks: I misspoke. It means the number of transactions in a block (or the amount of gas in a block) when the validator proposed it. The reason I think this has room for utilization is that a validator can maliciously propose blocks that contain few or no transactions.
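
For reference, here is a rough sketch of how I’m thinking of computing it. The data shape and field names are placeholders for whatever my indexer returns, not a real API:

    # Rough sketch: average transaction count over the blocks a validator proposed.
    # "blocks" is assumed to be a list of dicts like
    # {"proposer": "<validator address>", "tx_count": <int>} from some indexer.
    def avg_txs_per_proposed_block(blocks, validator_addr):
        proposed = [b["tx_count"] for b in blocks if b["proposer"] == validator_addr]
        if not proposed:
            return None  # the validator proposed nothing in this window
        return sum(proposed) / len(proposed)

A value far below the chain-wide average over the same window could flag validators that consistently propose near-empty blocks.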
