Replicated Security Reimagined: Jeonse and Package Differentiation Proposal

Greetings Cosmos Community,

The author of this post is Taron, a researcher at A41, an APAC-based organization specializing in blockchain infrastructure services, with particular strengths in ecosystem growth and technical understanding of protocols.

Abstract

This paper critically analyzes the challenges of replicated security in the Cosmos ecosystem, particularly the financial sustainability of smaller validators. Replicated security, viewed as a type of Service Level Agreement (SLA), presents certain obstacles, such as time lag and uncertainty of payments. Existing strategies, including Conditional Basic Income and Soft opt-out, are evaluated for their effectiveness and potential side effects. While these strategies provide relevant solutions, they do not fully address fundamental problems.

To mitigate these problems, we propose novel solutions inspired by real-world models, such as the South Korean ‘Jeonse’ leasing system and a differentiated package system for validators based on their risk profiles. The ‘Jeonse’ model suggests a sizable upfront deposit from consumer chains, redistributed among validators to offset their operational burdens. The Package Differentiation approach offers different revenue packages tailored to the risk tolerance of validators, thereby optimizing their preferences.

Although these suggestions show potential, further validation of underlying assumptions is necessary. This paper aims to stimulate discussion towards achieving efficiency and financial viability within the Cosmos ecosystem’s replicated security framework.

Preamble

A Current Gaze at Interchain Security (ICS)

With the successful deployment of Neutron as the inaugural app chain leveraging Replicated Security, and the recent addition of Stride as the second at block height #4616678, Duality is poised to become the third, pending governance approval. The discourse surrounding the current iteration of Interchain Security (ICS) - represented by Replicated Security - has been vibrant, especially around the challenges experienced by smaller validators.

These challenges include the financial strain of validating additional consumer chains, the cost of relayers, centralization concerns, and the observed low participation rate of validators in replicated security (noted in both Neutron and Stride, the latter struggling to achieve 66% voting power during its initial phase). These, and several other issues, have been the key discussion points within our community.

Indeed, several solution proposals have surfaced in this forum, with some, like the ‘soft opt-out’, finding their way into real-world application. Other solutions, such as providing a conditional basic income for small validators, advancing the implementation of ICS to V2 (Opt-in) or V3 (Layered) via fraud votes, managing on/offboarding of consumer chains even in M&A format, and permitting validators to define their commission rates for each consumer chain have also been tabled.

As we contemplate the future, it’s crucial to understand that our scalability considerations extend beyond merely adding three or four consumer chains to the Cosmos Hub - we’re envisioning significant scaling. Thus, we must earnestly consider the scalability of our current Replicated Security model. While major protocol upgrades like V2 or V3 Replicated Security might require considerable time (potentially over a year), it’s incumbent on us to vigorously explore and evaluate potential solutions and shortcuts that can help mitigate the current and future challenges related to this approach. It’s equally crucial that we analyze the potential side effects of each proposed solution and reach consensus on the best course of action through governance.

Therefore, this paper aims to shed light on the challenges encountered with Replicated Security, delve into the root causes of these issues, and stimulate discussion on the refinement of current strategies and potential novel solutions. This exploration seeks to offer insights and recommendations to aid us in overcoming the issues we face.

Identification of Key Issues

Replicated Security, although a cause for concern for many, is already in use by two live appchains. From these real-world implementations, two main issues have been identified: (1) the added burden on small validators of managing an extra consumer chain, and (2) the cost related to relayers involved in implementing Replicated Security. Since the latter issue is tied to the frequency of relaying VSC-related packets, resulting in a heavy workload for relayers, it is already being addressed as part of the Cosmos Hub’s OKRs. Hence, this discussion will focus on, and delve deeper into, the former issue.

At its core, the economic viability of a validator is built on the principle that revenue should outweigh costs. In the context of Replicated Security, this straightforward calculation encounters two issues: (1) A time lag between costs incurred and revenue received, as costs are spent on day one, while revenue trickles in much later, and (2) uncertainty concerning the generation and amount of revenue from the consumer chain. To address these challenges, it may be beneficial to categorize Replicated Security as a product - either as a Service Level Agreement (SLA) or an investment.
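
To make the time-lag point concrete, the rough sketch below (with purely hypothetical numbers) shows how a validator that pays fixed costs from day one but only starts receiving consumer-chain revenue after a delay takes months to break even. It illustrates the cash-flow shape only; it is not a claim about actual validator economics.

```go
// A minimal sketch of the time-lag problem: costs accrue from day one, while
// consumer-chain revenue only starts after a delay. All figures are
// hypothetical placeholders, not measurements.
package main

import "fmt"

func main() {
	const (
		dailyCostUSD    = 50.0 // assumed fixed infra cost for one consumer chain
		dailyRevenueUSD = 80.0 // assumed average daily revenue once fees/MEV start flowing
		revenueStartDay = 90   // assumed lag before meaningful revenue arrives
		horizonDays     = 720
	)

	cumulative := 0.0
	for day := 1; day <= horizonDays; day++ {
		cumulative -= dailyCostUSD
		if day >= revenueStartDay {
			cumulative += dailyRevenueUSD
		}
		if cumulative >= 0 {
			fmt.Printf("break-even on day %d\n", day)
			return
		}
	}
	fmt.Println("no break-even within the horizon")
}
```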

If Replicated Security is viewed as an SLA, the provider chain offers a certain level of service to the consumer chain for a corresponding price. Here, the provider must ensure a stable quality of service, and the consumer must pay the stipulated price.

If Replicated Security is deemed an investment, the provider grants benefits to the consumer chain (which may vary based on the investor’s characteristics - be it a strategic investor (SI) or a financial investor (FI)). In return, the consumer chain gives a certain percentage (usually in the form of shares) of its revenue to the provider. In this case, the provider assumes the inherent risk of uncertainty, and no direct causality exists between the provider’s service quality and the rewards from the consumer.

Similar products to consider include open-source tools assisting in L2 or L3 development (like OP Stack, ZK Stack, Orbit, Supernets, etc.) and services akin to RaaS (Rollup-as-a-Service) like Saga. From the former perspective, the costs are largely operational, while profits come from sequencer fees and MEV, mirroring the rollup business model. The latter perspective, represented by services like Saga, clearly falls into the SLA category, where users deposit $SAGA to run a rollup and a portion of the deposit is deducted (given to the provider) each time the chain operates.

Considering these examples, we can conclude that Replicated Security also functions as a type of SLA, offering ‘security’ to the consumer and earning revenue (in fees/MEV) from the consumer chain. However, one might wonder why issues related to small validators and centralization unique to Replicated Security haven’t surfaced elsewhere. I believe it’s due to Cosmos’ higher degree of decentralization compared to other networks, as the sequencer of most rollups is highly centralized. The issues we’re solving aren’t currently problems in other ecosystems. But for a genuinely decentralized, interconnected ecosystem, it’s crucial to address these challenges and persistently search for solutions.

With the clarification that Replicated Security operates as an SLA, our problems become transparent: (1) how to maintain a stable quality of service, where service quality is synonymous with the decentralization of the provider chain, and (2) how to address the time lag between the provision of service (incurring costs) and the reception of fees from the consumer chain.

Existing strategies can be classified into two categories: (1) increasing revenue from day one, which aligns with discussions around a conditional basic income, and (2) reducing costs from day one, where solutions like soft opt-out are positioned.

Resolution

In terms of addressing the key issues outlined above, the focus of this paper is to minimize the additional burden on the Cosmos development side. Given that protocol complexity may increase, and that significant time is required for an upgrade of the current protocol, such as transitioning from ICS to its V2 or V3 versions (i.e., a year or more), the discussions that follow will center on exploring the optimal solutions within the existing structure of Replicated Security. In addition, this paper places significant emphasis on contemplating a scenario where the number of consumer chains is scaled up substantially, rather than being limited to just 3-4 chains. The solutions explored here are therefore geared towards facilitating a larger, more complex network of consumer chains within the Replicated Security framework.

Navigating the Future: Refining Discussed Strategies

A myriad of discussions have been held in the forum, and all the proposals put forward have presented a clear problem statement, along with appropriate approaches for addressing the corresponding issues. The following discussions highlight the potential side-effects of each solution, while shedding light on aspects that could enable these solutions to effectively tackle the problems they target.

  1. Conditional Basic Income

The Basic Income approach falls under the category of increasing revenue from day one. The discussion surrounding this solution needs to delve deeper into specifics - how much subsidy is required, where it will come from, how it will be distributed, the criteria for eligible validators, etc. But first and foremost, we mustn’t lose sight of the problem this approach is designed to solve, which is to address the time lag and uncertainty of payment and costs from the perspective of small validators. Bearing this in mind, the subsidy should work as efficiently as possible, targeting those most in need. Specifically, the subsidy (1) should be allocated to small validators that might struggle with the financial burden of scaling up Replicated Security, and (2) the usage of such funds should contribute to a socially efficient state, in terms of both network and ecosystem health.

There are two aspects that this paper will highlight that could potentially cater to the two subsidy requirements mentioned above. Firstly, the establishment of an application process for receiving subsidies. A study investigating the relationship between subsidy application processes and the productivity of Small and Medium Enterprises (SMEs) in Japan found that having such processes in place improves their productivity. This approach could satisfy both requirements in that (1) this process could act as a screening procedure to bridge the information gap between the validator and governance participants regarding the validator’s business aspects, and (2) as the aforementioned study suggests, such processes might lead to a socially effective (or possibly optimal) state by enhancing the productivity of individual validators. Secondly, the subsidy should be set as a variable, rather than a fixed amount. Since validators’ income is in the form of tokens (subject to volatility) and not fiat currency, while costs (typically paid in fiat) are more stable, adjusting the subsidy to account for these fluctuations can make the process more efficient.
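
As a rough illustration of the "variable rather than fixed" subsidy idea, the sketch below computes a top-up that covers only the gap between a validator's fiat-denominated cost and the fiat value of its token revenue at the current price. The function name and all figures are hypothetical assumptions; the actual source, cadence, and eligibility criteria for such a subsidy would be decided through governance.

```go
// A hedged sketch of a variable subsidy: the payout adjusts to the gap between
// a validator's fiat operating cost and the fiat value of its token revenue.
// Names and parameters are illustrative, not a concrete spec.
package main

import "fmt"

// variableSubsidy returns the subsidy in ATOM needed to cover the remaining
// fiat cost gap at the current ATOM price; 0 if revenue already covers cost.
func variableSubsidy(monthlyCostUSD, revenueATOM, atomPriceUSD float64) float64 {
	gapUSD := monthlyCostUSD - revenueATOM*atomPriceUSD
	if gapUSD <= 0 {
		return 0
	}
	return gapUSD / atomPriceUSD
}

func main() {
	// Example: $600/month cost, 40 ATOM earned, ATOM at $9 -> top up the rest.
	fmt.Printf("subsidy: %.2f ATOM\n", variableSubsidy(600, 40, 9)) // ~26.67 ATOM
}
```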

  2. Soft Opt-out

This approach falls under the category of reducing costs to zero from day one. Under the current implementation, a validator below the threshold does not receive slashing packets even when down, which effectively allows those validators to opt out of validating the consumer chain without weakening their profitability. Since these validators still receive rewards from the consumer chain, the approach also reduces the risk of delegations concentrating on the validators that are actually validating the consumer chain. Observing the Neutron case, this approach seems effective: many validators below the threshold indeed opted out of validating the consumer chain (approximately 25% of the validators in the lower 5% of voting power).
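
For clarity, here is a simplified reconstruction of how such a threshold can be evaluated: validators are ranked by voting power, and those falling within the bottom fraction of cumulative power may opt out without being slashed. The 5% figure mirrors the discussion above; this is not the actual consumer-chain implementation, which may differ in details.

```go
// A simplified reconstruction (not the production consumer-chain code) of a
// soft opt-out check: the smallest validators whose cumulative voting power
// stays below a threshold fraction of total power may opt out.
package main

import (
	"fmt"
	"sort"
)

type Validator struct {
	Moniker string
	Power   int64
}

// softOptOutSet returns the monikers of validators allowed to opt out.
func softOptOutSet(vals []Validator, threshold float64) []string {
	sorted := append([]Validator(nil), vals...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].Power < sorted[j].Power })

	var total int64
	for _, v := range sorted {
		total += v.Power
	}

	var cum int64
	var eligible []string
	for _, v := range sorted {
		cum += v.Power
		if float64(cum)/float64(total) > threshold {
			break
		}
		eligible = append(eligible, v.Moniker)
	}
	return eligible
}

func main() {
	vals := []Validator{{"A", 5000}, {"B", 300}, {"C", 150}, {"D", 90}, {"E", 60}}
	fmt.Println(softOptOutSet(vals, 0.05)) // only the smallest validators qualify
}
```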

However, problems emerge when we consider a scenario where n consumer chains have launched (for a large n). As n grows larger, (1) the opt-out threshold is likely to increase since a higher n implies higher costs to each validator (as additional cost is the sum of costs to validate all consumer chains) and (2) it is highly probable that each consumer chain will select a different threshold or even a completely different strategy other than opting out. As stated earlier, with Replicated Security functioning as a Service Level Agreement (SLA), the provider must offer an adequate level of service (i.e., security) to the consumer chain. However, in the outlined scenario, these assumptions break down, suggesting that the solution could lead to an overall deterioration of the product. One major cause of this issue is that the soft opt-out is a solution implemented from the consumer chain’s side, not the provider’s. Given that changing the overall protocol of the provider isn’t realistic under the current circumstances, it might be effective to allow each validator to set its own commission rate for each consumer chain, thereby potentially filling the cost gap by increasing the rate, if a single validator chooses to do so. However, it remains uncertain whether this kind of marginal increase in rates could genuinely offset the costs, particularly during the early stages of the consumer chain’s life cycle - as observed in the Neutron case.

Outside the Box: Suggesting Novel Solutions for Existing Challenges

This paper aims to suggest alternative solutions to existing challenges within the Cosmos ecosystem. The fundamental consideration in this regard is Replicated Security as a Service Level Agreement (SLA), for which we must consider two points: (1) providing a stable quality of service, and (2) resolving the time lag and uncertainty associated with payment.

However, the strategies discussed previously - the Conditional Basic Income and the Soft Opt-out approach - do not sufficiently resolve these issues. The Basic Income approach does not resolve the time lag; it merely shifts it from sitting between the validators and the consumer chain to sitting between the community pool and the consumer chain. The Soft Opt-out approach, on the other hand, targets the payment issue but does not primarily aim at maintaining or enhancing the quality of security.

To resolve the time lag and uncertainty, it is logical to reflect risk and time value into the preference (i.e., utility function) of the participants of replicated security. With this in mind, two types of solutions emerge - those reflecting these factors on the consumer chain and those doing so on the validators of the provider chain.

  1. Diversify options for consumer chains: Jeonse

Jeonse is a unique leasing system in South Korea where tenants provide a substantial deposit, typically 50-80% of the property’s value, instead of paying monthly rent. Drawing inspiration from this model, we propose a similar approach for replicated security. In essence, the consumer chain would deposit a certain amount of $ATOM, which would be subsequently redelegated to each validator of the provider chain. This would boost the profitability of smaller validators, mitigating the burden of validating additional consumer chains. Such an approach would facilitate value accrual to $ATOM and provide immediate profit for validators and delegators. However, there may be concerns about this acting as a barrier for consumer chains to form an AEZ (Atom Economic Zone).
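
As a sketch of how the deposit could flow, the example below splits a consumer chain's ATOM deposit evenly across provider-chain validators, which proportionally benefits smaller operators more than a stake-weighted split would. The allocation rule, the deposit size, and the validator names are assumptions for illustration only; the proposal itself does not fix any of them.

```go
// A minimal sketch of the 'Jeonse'-style deposit flow under one assumed
// allocation rule: the consumer chain's ATOM deposit is delegated evenly
// across provider-chain validators. Even splitting is only one option;
// stake-weighted or inverse-weighted splits are equally possible.
package main

import "fmt"

type Delegation struct {
	Validator string
	Amount    float64 // ATOM
}

// allocateJeonseDeposit splits the deposit evenly across validators.
func allocateJeonseDeposit(depositATOM float64, validators []string) []Delegation {
	out := make([]Delegation, 0, len(validators))
	share := depositATOM / float64(len(validators))
	for _, v := range validators {
		out = append(out, Delegation{Validator: v, Amount: share})
	}
	return out
}

func main() {
	deposit := 500_000.0 // hypothetical deposit from a consumer chain
	vals := []string{"val-a", "val-b", "val-c", "val-d"}
	for _, d := range allocateJeonseDeposit(deposit, vals) {
		fmt.Printf("%s receives a delegation of %.0f ATOM\n", d.Validator, d.Amount)
	}
}
```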

  2. Diversify options for validators: Package Differentiation

We propose a package differentiation strategy to cater to validators with different risk premiums. It considers the revenue and cost dynamics specific to validators in the Replicated Security framework. Revenue in replicated security is typically a function of time and risk (including voting power), while costs remain relatively constant. Under this assumption, revenue should be discounted to its present value (PV) using a risk premium.

The risk premium is assumed to be inversely proportional to a validator’s voting power, suggesting smaller validators tend to be more risk-averse than their larger counterparts. Taking these assumptions into account, we design revenue packages catered to validators with varying risk premiums.
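
The sketch below expresses these assumptions numerically: each validator discounts the same expected consumer-chain revenue stream at a rate whose risk premium grows as its voting-power share shrinks. The functional form (a base rate plus a constant divided by voting-power share) and all parameters are illustrative assumptions, not part of the proposal.

```go
// A hedged sketch of the discounting assumption: the risk premium is taken to
// be inversely proportional to voting-power share, so small validators value
// the same future revenue stream at a lower present value.
package main

import (
	"fmt"
	"math"
)

// presentValue discounts a constant monthly cash flow over n months at monthly rate r.
func presentValue(monthlyCashFlow float64, months int, r float64) float64 {
	pv := 0.0
	for t := 1; t <= months; t++ {
		pv += monthlyCashFlow / math.Pow(1+r, float64(t))
	}
	return pv
}

// riskAdjustedRate assumes premium = k / votingPowerShare on top of a base rate.
func riskAdjustedRate(baseRate, k, votingPowerShare float64) float64 {
	return baseRate + k/votingPowerShare
}

func main() {
	// A large (5% share) vs a small (0.1% share) validator valuing the same
	// expected 100 ATOM/month from a consumer chain over 12 months.
	for _, share := range []float64{0.05, 0.001} {
		r := riskAdjustedRate(0.01, 0.0001, share) // hypothetical parameters
		fmt.Printf("share %.3f -> monthly rate %.3f, PV = %.1f ATOM\n",
			share, r, presentValue(100, 12, r))
	}
}
```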

The current structure of replicated security includes token inflation (if the consumer chain has its own native token), fees, and Maximal Extractable Value (MEV) as primary revenue sources. Although grants or a conditional basic income could also be included in these packages, this paper specifically aims to address time lag issues at their core, hence these elements are not considered. Each revenue component can be categorized based on its uncertainty, with tokens presenting the highest uncertainty and fees the lowest. However, this uncertainty analysis may vary depending on numerous human, system, and capital factors, such as protocol implementation and type of service. Therefore, there could be scenarios where token inflation carries much lower uncertainty than fees.

Given that several consumer chains have already adopted the soft opt-out strategy, it is also included as a package option. The ultimate goal is to incentivize smaller validators to avoid opting for soft opt-out, thereby stabilizing the overall quality of security. The diversified package options, therefore, consider the distinct revenue sources, risk profiles, and strategic choices of different validators.

  • Package A: Aimed at larger validators with a lower risk premium. It comprises a high percentage of tokens (50%), MEV (35%), and a low percentage of fees (15%).
  • Package B: Aimed at smaller validators who tend to be more risk-averse. It includes a low percentage of tokens (5%), MEV (20%), and a high percentage of fees (75%).
  • Package C: Aimed at opt-out validators who don’t pay additional fixed costs. It simply offers a small percentage of fees (10%).
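
To show how such packages might be represented, the sketch below encodes the weights listed above and blends a consumer chain's realized per-source revenue into a payout figure for each package. The struct layout and the payout formula are assumptions about a possible distribution-module change, not a concrete implementation, and the per-validator pro-rata split within each package is omitted for brevity.

```go
// A sketch of the package idea using the A/B/C weights listed above. Given a
// consumer chain's realized revenue per source, it blends the sources into a
// payout figure per package. Illustrative only; no pro-rata split shown.
package main

import "fmt"

type Package struct {
	Name        string
	TokenWeight float64 // share of token/inflation rewards
	MEVWeight   float64
	FeeWeight   float64
}

var packages = []Package{
	{"A (large, low risk premium)", 0.50, 0.35, 0.15},
	{"B (small, risk-averse)", 0.05, 0.20, 0.75},
	{"C (soft opt-out)", 0.00, 0.00, 0.10},
}

// payout blends the chain's realized per-source revenue using the package weights.
func payout(p Package, tokenRev, mevRev, feeRev float64) float64 {
	return p.TokenWeight*tokenRev + p.MEVWeight*mevRev + p.FeeWeight*feeRev
}

func main() {
	// Hypothetical realized revenue from one consumer chain over an epoch
	// (expressed in ATOM-equivalent units).
	token, mev, fees := 1000.0, 400.0, 800.0
	for _, p := range packages {
		fmt.Printf("package %-28s -> %.1f\n", p.Name, payout(p, token, mev, fees))
	}
}
```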

The varying levels of uncertainty associated with each revenue component can be adjusted according to the specific consumer chain. This approach could grant consumer chains the flexibility to create more sophisticated strategies (in Duality’s case, for instance, fees are paid in $ATOM, which reduces uncertainty significantly, potentially to a level comparable to a conditional basic income). Implementing these solutions might necessitate changes to the distribution module. One potential concern is that it could increase the selection complexity from the perspectives of both validators and delegators. However, it could also contribute to decentralization of the DPoS algorithm, since the reflection of risk and time on the validator side would be propagated to the delegator side.

It’s important to note that these assumptions will need further validation, and continuous improvements - like how to allocate each portion depending on the package selected by validators - will be required. It might be worthwhile to consider a game-theoretic approach to reach efficiency for validators, such as an analysis using the folk theorem, but this should be considered when the discussion has matured further.

Epilogue

As we circulate this paper across the forums, we hope it will engage validators, delegators, Cosmos development teams, and the broader Cosmos community in discussions that will ultimately lead to an optimized, decentralized, and resilient network. While the potential solutions presented herein are based on robust assumptions, their actual application will require extensive deliberation, understanding, and a collective agreement from the entire Cosmos community.

Our aim is not to dictate changes but to inspire thoughts, foster debates, and stimulate creativity in finding the best path forward. It is our hope that this paper encourages the Cosmos community to come together to discuss, refine, and ultimately implement strategies that maintain the robustness and decentralization of the Cosmos network while supporting the invaluable role of small validators.

Let’s continue our journey towards creating an accessible, secure, and sustainable Cosmos ecosystem. Your thoughts, feedback, and active participation will undoubtedly help shape the future of replicated security in Cosmos, and bring us closer to our vision of an inclusive, decentralized digital cosmos.


Hello and thank you for that very interesting post.

I have just 2 questions:

How does “Jeonse” differ from the model Polkadot applies to its parachain slot acquisition? Even if DOT is locked and not redistributed to validators, wouldn’t this kind of deposit create the same barriers to entry that that ecosystem experienced (hence why they are currently pivoting to another model)?

How to determine the “property’s value” of a consumer chain before its launch?

Thank you again. (:

Just to note that DOT is trying to get away from the auction model by the looks of it

Until now, in replicated security, the consumer chain has only been able to select token rewards. As mentioned in the above proposal, I believe that granting the packaging option to select the ‘Jeonse’ model using the ATOM token can bring many benefits, depending on which stage of its growth phase the consumer chain is in, what level of security it needs, and how it rewards that security.

By the way, the article mentioned that this implementation should be done in a way that does not increase the complexity of the protocol, but I wonder how it can be achieved.

Great post. The thorough consideration of the current circumstances and of the scalability/decentralization of the Cosmos ecosystem is quite touching.

Two questions:

Combining Jeonse and package differentiation, and settling them as an RS onboarding option, could be a novel approach - even considering the Polkadot case, since it differs from auctions for slots in that it may propagate ‘preference’ both directly and indirectly to validators and delegators. But, same question as above: it seems quite complex, so is it really easier than upgrading ICS?

Regarding that - what is your perspective on Opt-in and Mesh Security? It seems like the approach of this paper (as mentioned) is not targeted to coexist with further improvements in ICS.

Thank you for the overall summary and fresh insight!


Hi tom, thank you for your questions.

Indeed, the “Jeonse” model and the parachain auction model serve different objectives and are characterized by different elements. While the “Jeonse” model serves as a solution for time lag and uncertainty, aligning with the Service Level Agreement (SLA) perspective, the parachain auction model addresses resource scarcity and, given its competitive nature, resembles the investment perspective.

In the “Jeonse” model, we propose that the consumer chain makes a deposit, which, when redistributed, can help offset additional costs incurred by validators in securing the consumer chain. This approach directly supports the quality of service by enhancing decentralization and economic stability. Nevertheless, such a deposit can present an entry barrier for consumer chains wishing to onboard. Therefore, while imposing a large deposit may not be feasible in the short term, it could become a viable option over the longer term.

Taking into account recent discussions (1/2) in the Polkadot forum, the shift away from the auction model largely stems from its decreasing demand and the investment-centric nature of parachain auctions. It’s important to underline that the “Jeonse” model and the concept of Replicated Security operate under different paradigms.

When determining the value associated with the “Jeonse” deposit, it’s important to clarify that it isn’t directly tied to the “property’s value” of the consumer chain itself. Instead, the deposit is reflective of the security burden or the value of the security provided by the provider chain.

It’s worth noting that the precise value of a consumer chain pre-launch is challenging to ascertain. This complexity inherently generates elements of time lag and uncertainty. Thus, the approach we advocate for is the propagation of these time and risk factors across the consumer chain, provider chain, and delegators. This approach aims to mitigate uncertainty and foster a balanced ecosystem where security responsibilities and risks are appropriately shared.


The concern about maintaining protocol simplicity while implementing the changes is valid. In real-world application, the specific development could deviate from the theoretical plan, adding complexity. Our perspective is that focusing most of the necessary modifications on the Cross-Chain Validation (CCV) module could mitigate this.

Thanks a lot!


For the first question, I believe the answer above helps. Feel free to ask if there is any further issue to resolve.

For the other:

While our paper primarily addresses the issues that have emerged from the current architecture, it is intended to be adaptable and isn’t incompatible with future enhancements to the ICS protocol. Many of the discussions and refinements happening within the governance forums relate to overall incentives for participants in ICS, making them concurrent with the issues we address.

As for Opt-in and Mesh Security, we see them holding significant potential for a sustainable and scalable future, ultimately strengthening the security and uniqueness of the Cosmos ecosystem. We align with Informal Systems’ perspective that the subset problem needs comprehensive analysis before implementing these security models. It is critical to sidestep temporary solutions that don’t wholly tackle these issues.


Thanks a lot!

btw fascinating expression. Thank you again for your analysis!

One of the goals of Replicated Security was to ease the launch of new chains, bringing calculable potential value to ATOM and its stakeholders. The first option somewhat counters that by obliging the CC to cover infra costs with its own funds. Should the CC, in this case, apply for a donation from the Community Pool, to be delegated to validators under a special delegation program with periodic revision?

The 2nd option sounds reasonable, but validators need the possibility to pick a model for themselves and change it periodically (e.g., once a week). That option still leaves room to opt out for validators who simply cannot scale at the moment, while leaving ready-to-scale small validators the option to pick their preferred revenue model and take higher risks. Otherwise, linking the revenue models to voting power leaves small validators no choice but to opt out, since 10% of fees on new consumer chains doesn’t really look promising. With the current two CCs and their profitability it may not work perfectly, but it could work in the long run with more CCs. To work properly, the second option requires full automation and regular revision without passing proposals, which can be complicated, especially if it leads to regular chain upgrades.

The considered options can also be supplemented with a third:
The Hub’s community pool may be used as a fund for a “delegation” lottery, available only to the small validators who support consumer chains. For example, each month a defined bond from the pool would be delegated evenly to a few random validators, significantly increasing their profits. When the lottery happens the next month, the previous winners are excluded from the participation list to avoid repeated delegation and to improve the winning chances of other validators. Once all validators have participated and eventually won the delegation, the lottery repeats. The chance of winning may be modified by the percentage of consumer chains a validator supports and by uptime; participation may also require decent uptime over the previous 2-3 months to avoid late jump-ins. This is just a concept to explain the idea of an additional support arm, and it would need to be properly structured in detail.
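
For concreteness, here is a minimal sketch of the rotation described above; eligibility filters such as uptime or the share of consumer chains supported are omitted, and every parameter is hypothetical.

```go
// A minimal sketch of the delegation-lottery rotation: each round, a fixed
// bond is split evenly among a few randomly chosen small validators, and
// previous winners are excluded until everyone has won once. Illustrative
// concept only; not a concrete module design.
package main

import (
	"fmt"
	"math/rand"
)

func runLottery(candidates []string, winnersPerRound int, bond float64) {
	remaining := append([]string(nil), candidates...)
	round := 1
	for len(remaining) > 0 {
		rand.Shuffle(len(remaining), func(i, j int) {
			remaining[i], remaining[j] = remaining[j], remaining[i]
		})
		n := winnersPerRound
		if n > len(remaining) {
			n = len(remaining)
		}
		winners := remaining[:n]
		remaining = remaining[n:] // past winners sit out until the cycle restarts
		fmt.Printf("round %d: %v receive %.0f ATOM each\n", round, winners, bond/float64(n))
		round++
	}
}

func main() {
	smallVals := []string{"v1", "v2", "v3", "v4", "v5", "v6"}
	runLottery(smallVals, 2, 10_000) // hypothetical monthly bond from the community pool
}
```
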
The economics here really needs fresh ideas and changes from our angle of view.

The issue with most suggestions above (I mean all of them) is that they disregard the “appetite”, or rather the costs, of the validator in question.

“All small validators spend X on that” - no they don’t. The solutions try to take into account spending rather than project economics (which is impossible to do). A project with 2 people on the team and 2 networks can easily stay afloat for months, even if it has no revenue streams. A project with 15 people on board, if it is monetized via validation, cannot. I’m unsure how we plan to randomly incentivize small validators with different sums.

Thank you all for your insightful opinions and ideas.

We agree that our first option may seem to increase the burden for Consumer Chains (CC) by requiring them to cover infrastructural costs. We should stress that this approach is intended as a long-term solution and will be implemented gradually, not instantly. A special delegation program, as suggested, could be an effective interim measure, allowing validators to receive funds while the CC prepares to take on these costs. It can be a bridge solution, offering a more sustainable model compared to the current situation.

We appreciate the idea of allowing validators to select and adjust their revenue model. We initially thought of this on a monthly basis, as operational costs are typically calculated and incurred monthly. However, we understand that different validators have different capabilities and needs, so offering the option to make changes on a shorter timescale (e.g., weekly) could be beneficial. We need to ensure that our model offers enough flexibility for validators to manage their risks and margins effectively.

The suggestion to use the Hub’s community pool for a “delegation lottery” for small validators who support consumer chains is an interesting one. While we see the potential short-term appeal, we’re also considering long-term, scalable solutions for our ecosystem. This proposal could be further explored in combination with a basic income approach and the implementation of time-risk preferences in validator selection. However, it should be subjected to an in-depth discussion within the community to ensure we make balanced and sustainable decisions.

Thanks a lot!

We appreciate the critical view and the emphasis on the heterogeneous nature of validators’ costs. Certainly, the financial reality of a validator operation can vary greatly. Small operations, with minimal staffing and infrastructure, can sustain themselves differently compared to larger ones with multiple networks and a larger team. This diversity is something we should consider when we think about validator incentives and overall ecosystem health.

It’s true that estimating each validator’s costs ‘objectively’ is a challenge, given the variables involved. However, our approach seeks to incorporate the ‘subjective’ view from each validator’s perspective. We believe the ‘one-size-fits-all’ model may not necessarily account for the reality on the ground. So, the question becomes: how do we tailor solutions to better meet the diverse needs of validators?

To this end, the refinement we proposed for the Conditional Basic Income approach – an application process for receiving subsidies – is intended to better accommodate validators’ unique circumstances. This process aims not to enforce a fixed cost model but to provide a mechanism where validators can justify and express their individual needs.

Moreover, our proposed time-risk preference and ‘Jeonse’ approach are also designed to incorporate these individual perspectives. They offer ways to navigate the complexities of each validator’s economic circumstances, including aspects of time, risk, and financial commitments. The intent is to move towards a more personalised model of validator subsidies, where the complexity and diversity of operating costs can be better reflected and understood.

Thanks for your reply. I don’t mean to be offensive, so apologies if this comes out wrong in writing, but I have re-read your reply about 4 times and I really do not see anything in it that tackles the issues I’m talking about. It (the reply) is mega-generalizing and has roughly zero to do with reality.

An example: you mention that small operations, with minimal staffing and infrastructure, can sustain themselves differently compared to larger ones with multiple networks and a larger team. What about small operators with a large team and many projects? What about large validators with 2-3 people (that have been running since inception on old reputation, but stopped giving back to the network 2 years ago)? My point is: your reply, as respectful as it is, has no context and isn’t attempting to solve the issues raised.

Hope my reply doesn’t come off as aggressive; it’s not meant to be. Just pointing out what I see here.
