[Proposal] $2,000,000 Community Fund Proposal to hire Microsoft to make Cosmos transaction processing more Distributed and Scalable

Introduction

The Cosmos Blockchain Ecosystem is a decentralized network of interconnected blockchains that relies on the Atom binary for consensus and transaction processing. However, the binary's current monolithic architecture poses challenges for scalability and distribution. This proposal outlines a plan to introduce distributed ordering and processing using Apache Kafka and a load balancer, enabling a more scalable and resilient architecture that aligns with the decentralized nature of the Cosmos network and increases the value of ATOM for its holders.

Current Architecture and Challenges

The Atom binary currently operates in a monolithic manner, where a single node handles transaction ordering, processing, and consensus. This approach faces limitations as the Cosmos network expands:

  • Scalability Limitations: The monolithic architecture can become overwhelmed as transaction volume grows, leading to performance degradation and potential network congestion.
  • Monolithic Processing: The monolithic model introduces a bottleneck, making the network vulnerable to spamming or disruption if nodes become saturated with transactions.

Proposed Solution with Kafka and Load Balancer

To address these challenges and enhance the scalability and distribution of the Atom binary, we propose utilizing Apache Kafka and a load balancer. Apache Kafka is a distributed streaming platform that excels in high-throughput, low-latency message processing. A load balancer can efficiently distribute incoming transactions across multiple Kafka nodes for parallel processing, aligning with the decentralized nature of the Cosmos network.

Implementation Details

  1. Kafka Integration: Integrate the Atom binary with Apache Kafka to enable distributed transaction ordering and processing. This involves creating Kafka topics for transaction messages and implementing consumer groups to process and order transactions in a distributed manner while maintaining the decentralized consensus mechanism (a minimal sketch follows this list).
  2. Load Balancer Integration: Employ a load balancer to distribute incoming transactions across multiple Kafka nodes. The load balancer should weigh node availability, load, and performance metrics so that transactions are processed efficiently across the distributed network.
  3. Atom Binary Modifications: Modify the Atom binary to use Kafka for transaction ordering and processing. This involves adapting the consensus mechanism to work with distributed transaction data and ensuring compatibility with the Kafka infrastructure.
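
As a rough sketch of the Kafka side of item 1, here is a minimal consumer-group loop, assuming the open-source segmentio/kafka-go client; the broker address, topic, and group names are illustrative assumptions, not anything that exists in the Atom binary today.

package main

import (
  "context"
  "fmt"

  "github.com/segmentio/kafka-go"
)

func main() {
  // Consumers that share a GroupID split the topic's partitions between
  // them, which is what gives the distributed ordering and processing
  // described in item 1. All names here are illustrative.
  r := kafka.NewReader(kafka.ReaderConfig{
    Brokers: []string{"localhost:9092"}, // illustrative broker address
    GroupID: "atom-tx-processors",       // one group, many processing nodes
    Topic:   "atom-transactions",        // illustrative transaction topic
  })
  defer r.Close()

  for {
    // Within a partition, messages arrive in a stable order, so
    // consistently keyed transactions are processed in sequence.
    msg, err := r.ReadMessage(context.Background())
    if err != nil {
      break
    }
    fmt.Printf("processing tx at offset %d: %s\n", msg.Offset, msg.Value)
  }
}

Running several copies of this process with the same GroupID is what spreads partitions, and therefore transaction processing, across nodes; the load balancer in item 2 would sit in front of the producers.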

Benefits of the Proposed Solution

  1. Enhanced Scalability: The distributed Kafka-based architecture lets the Atom binary handle growing transaction volume efficiently, preventing performance bottlenecks and network congestion without compromising decentralization.
  2. Distributed Control: Spreading ordering and processing across multiple nodes removes a single point of saturation while preserving the decentralized nature of the Cosmos network.

Conclusion

Adopting Apache Kafka and a load balancer will address the scalability, resilience, and distribution limitations of the current architecture. This enhanced architecture will pave the way for a more robust, scalable, and distributed Atom Economic Zone, preserving the decentralized principles of the network.

Conversation starter. Soliciting feedback from experts outside the core Cosmos teams - Microsoft, in this case - seems like something that could be healthy for the community, since biases exist in the good ole' boy relationship dynamics of those who have contributed heavily to the development of the Cosmos SDK.

Not that the various core contributors haven't developed a worthwhile product, but this is an opportunity to engage the resources of an established, tech-heavy organization to improve the Atom Economic Zone product. The proposal here is specifically to expand the transaction processing capacity of the Atom Hub, but it might be a time when opportunities such as integration with artificial intelligence or quantum hardening could come into scope.

As a template for ideation - because the protocol design would be different - check out the white paper linked here: The Regulated Liability Network (RLN) Whitepaper on Scalability and Performance - SETL

Furthermore - just to add to my own mess here, and not to get off topic - I broached the subjects of artificial intelligence and quantum hardening for a purpose, mainly because the addition of a transaction queue is a place where AI can perform well. I understand how IBM Hyperledger integrates IBM Watson. I also previously broached the subject of quantum hardening of the platform, along with a solution to accomplish it; the solution I suggested is one of a couple that are possibly viable: 1) an authorization token issued by an auth server, with an auth gate out in front of the transactional endpoint, perhaps with two-factor authentication for an H2M (Human to Machine) transaction - there is also M2M (Machine to Machine) auth gating that can be implemented; 2) some flavor of zero-knowledge proofs. A minimal sketch of the first option follows.
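
To make option 1 concrete, here is a minimal sketch of such an auth gate as a plain Go HTTP middleware; the endpoint path, token format, and validation logic are illustrative assumptions, not an existing Cosmos, Auth0, or Microsoft API.

package main

import (
  "net/http"
)

// validToken checks the bearer token presented by the client.
// Hypothetical: a real gate would verify a signed token (e.g. a JWT)
// issued by the auth server after 2FA, not compare a static string.
func validToken(tok string) bool {
  return tok == "Bearer demo-token-issued-after-2fa"
}

// authGate sits in front of the transactional endpoint and rejects
// any request, H2M or M2M, that lacks a valid authorization token.
func authGate(next http.Handler) http.Handler {
  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    if !validToken(r.Header.Get("Authorization")) {
      http.Error(w, "unauthorized", http.StatusUnauthorized)
      return
    }
    next.ServeHTTP(w, r) // token accepted; forward to the endpoint
  })
}

func main() {
  tx := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("transaction accepted\n")) // stand-in for a real tx handler
  })
  http.Handle("/tx", authGate(tx))
  http.ListenAndServe(":8080", nil)
}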

All these subjects are worth exploring, and worth gauging feedback from the community on how they would want to maximize the value of their holdings.

Here is a blog post on a Universal Basic Income paradigm I call the Universal Basic Income Feedback Loop. Essentially, the hypothesis is that the international implementation of AI, an increasing reliance on robotics, and the deployment of blockchain technology will reduce the cumulative number of hours of human labor required across a number of industries.

That being said, an area of opportunity is blockchain payments - taxation - in robotic technologies to subsidize a Universal Basic Income. This is another area worth exploring and engaging Microsoft in as a partner. Cosmos technology already exists that allows frictionless integration into robotics. Implementation details would necessitate a broader discussion of the proper “taxation” of robotics.

Life is getting better and technology is improving. I recognize that some hold the opinion that traditional establishments - governments, corporations, banks - are evil, but I think this perspective falls short in recognizing the amount of investment derived from governments, corporations, and banks that has the effect of improving quality of life. Swapping one mature, imperfect system - upon which an undercurrent of revolt is often impressed - for a relatively immature, imperfect system where similar corruptions take place is, for lack of better words, insane. People tend not to have the cognitive bandwidth to think for themselves critically and in an informed manner, mostly because - who has the time? The philosophical and technical information one should have, for even the most rudimentary ability to make informed decisions, might eclipse some people's capacity to assess that information. OK, you have the information, even in condensed form - what does it mean?

Basically, the proposition is multi-faceted - an ice-breaker that could allow other objectives to propagate.

Why is this the first I have heard about these issues? Can we get some context from other Cosmos validators and engineers? Also, is this solution already built, or is it something we will need to build out and implement?

This type of thinking seems to me to be visionary and way ahead of its time. I warn you, however, that you will receive pushback in the form of people scoffing at and ignoring these ideas, because if something doesn't affect them directly and immediately, then it is someone else's problem. Good luck, though - I hope you will be successful.

I’ve been a Cosmos supporter since before the Cosmos white paper was published. Essentially the value of “sharding” a chain with the “app chain” thesis was apparent very early on, and in my opinion it’s a good solution to problems that existed at that time in the blockchain space.

There is a difference between distributed systems and decentralized systems, but some of the mechanics are similar: multiple nodes holding state, etc. Fallbacks, redundancy, and even latency are design concerns in distributed systems.

What I've raised here is modularizing the monolithic processing capacity of the Atom Economic Zone by introducing some tools from distributed systems, similar to what is spec'd in the Regulated Liability Network paper. Hypothetically, this would allow Atom to handle the load of its varied services without necessitating fractured app-chain states.

There are some visionary intents, mainly to do with Universal Basic Income, a future civilization with a large labor force of mature AI and robotics, and a world where quantum computers are a threat to public-key cryptography. This is a key motivation for raising this Community Fund Proposal. Having the expertise of not only the Cosmos developer community but also Microsoft and other Microsoft partners (OpenAI) in exploring this possibility could lead to an optimal design that achieves the stated objectives.

This modification is not complete, nor do I know of any effort to experiment with this kind of design in the Cosmos SDK with the various design requirements addressed. As for the current functioning of different Cosmos SDK chains, I do not know specifics on performance, so feedback from other validators and engineers in this regard is welcome.

As an afterthought, in relation to OpenAI and the additional alignment that might exist there: the first quantum hardening solution suggested above can be achieved with Auth0. “Sign in with Worldcoin” is now available on Okta's Auth0 Marketplace - Worldcoin has an Auth0 integration. They are also working on the Universal Basic Income strategy.

I take the approach of being optimistically cautious about different technologies. The Orb is one of those; however, it might suit some people's flavor of security. The beauty of Auth0 is that there are multiple providers that could be implemented:

  1. Sign in with Ethereum - an example Auth0 implementation (server not hosted at the moment)
  2. Two-factor authentication
  3. Social sign-in

No matter what one's level of skepticism is, an Auth0 auth gate would provide a layer of security toward quantum hardening of the underlying PKC, but there are other common goals where synergies emerge.

Dude, seriously? The TPS on the Cosmos Hub is 1. Can we stop shooting bazookas at birds? For now, a slingshot is fine; when we have a scalability problem, then we will deal with it. Of all the chains I am looking at now, Osmosis has the highest TPS at 7. Sei has a TPS of 50, but that is a custom chain whose consensus mechanism has been overridden and made faster. In any case, 50 TPS is not a reason to do any of this. When you get to a TPS of 800-900 and start reaching the 1,000 limit, then you start looking for solutions. Because the ecosystem is sharded, 1,000 TPS per app-chain should basically do the job for almost all chains, with the exception of 2-3 situations.

I don't mind research, but practically speaking, we would be funding a solution to a problem that doesn't exist.

Thank God for people in the community such as yourself - genuinely a reasonable perspective. A subject of this funding proposal is exploratory in nature and is directed toward robotics and humans interfacing with robotics. It's not addressing the paradigm as it is, but as we would like it to be.

Since the SDK is open source, there are teams that could hypothetically take up this task. The funding proposal would in turn be an investment from the community to get an ROI in some way, shape, or form. Economics suggests that protocol revenue is essential to drive demand. When Ethereum first got started, there was an ambitious project that was going to fund hardware/software/automation tools: The DAO. Core development of the Cosmos tech stack, including the consensus improvements developed by Sei, are all pieces that would be accessed in a final product.

Let competition assume responsibility, or active investment from the community.

Is this for real or did April come early?

No, this is for real, but the joke must be in good nature.

Here is a prior contribution that I offered up to the community: ISO-20022 - Cosmos Ecosystem Financial Messaging Standardization - Miscellaneous - Cosmos Hub Forum. This is no more or less serious than the value that could be extracted from that.

Instead of value, maybe credibility among institutions that have high standards and expectations of doing business is a better perspective, which in turn equates to value.

this should be split into separate proposals.

we've got 1) a proposal to modularize the architecture (a reasonable thing to discuss) and 2) a proposal to spend $2M to hire Microsoft consultants (kek?)

Discussions around architecture are always helpful.

the additional points around AI, quantum computing, and using 2FA to gatekeep transactions are ridiculous imo

Point taken.

  1. No harm in exploring architecture.
  2. Yes, this is an investment from the community that could lead to other business relationships.

I know of some fairly unsophisticated robotics solutions where public transactions for the services of the robotic products would be acceptable. Using something like DAO DAO to manage teams around the world, with the human managers overseeing the business processes of the robotics, is also in scope.

The implementation of this carries much less technical debt than modularizing the architecture. Quantum computing is a credible threat, and the earlier it is addressed the better, especially if additional businesses flourish around this approach. Two-factor authentication is a security measure that gives people more time to decide whether they want to make a transaction, and a registered IP ensures that the device requesting the transaction is associated with the account. Together these significantly reduce the attack surface. A minimal sketch of such a check follows.
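
Here is that sketch, assuming the pquerna/otp library for TOTP validation; the account shape, registered IP, and secret are illustrative assumptions, not part of any existing chain code.

package main

import (
  "fmt"
  "net"

  "github.com/pquerna/otp/totp"
)

// account pairs a registered device IP with a TOTP secret. Hypothetical
// shape: a real registry would live in a store keyed by account address.
type account struct {
  registeredIP string
  totpSecret   string
}

// authorizeTx allows a transaction only when the request originates from
// the account's registered IP and carries a valid one-time passcode.
func authorizeTx(acct account, remoteAddr, passcode string) error {
  ip, _, err := net.SplitHostPort(remoteAddr)
  if err != nil {
    return fmt.Errorf("bad remote address: %w", err)
  }
  if ip != acct.registeredIP {
    return fmt.Errorf("device %s is not registered to this account", ip)
  }
  if !totp.Validate(passcode, acct.totpSecret) {
    return fmt.Errorf("invalid 2FA passcode")
  }
  return nil // both factors passed; hand the tx to the endpoint
}

func main() {
  acct := account{registeredIP: "203.0.113.7", totpSecret: "JBSWY3DPEHPK3PXP"}
  if err := authorizeTx(acct, "203.0.113.7:51423", "123456"); err != nil {
    fmt.Println("rejected:", err)
  }
}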

The AI aspect of what I've broached here - I honestly do not know how it will apply, but I have an idea of what I think would be worthwhile to experiment with, in a DAO first, to better understand how people utilize it. He proposes other ways to mature the technology before it is deployed in more mission-critical roles, like politics or the management of the world's resources.

I've previously suggested this too, and Microsoft owns GitHub. Essentially, I'm persisting with what I think would be beneficial for Gaia. Few will understand.

[Proposal ##][DRAFT] Signalling Proposal - Move Governance to Github - Hub Proposals / Signaling/Text - Cosmos Hub Forum

Microsoft is in the title, yet the body of the message does not mention Microsoft even once. A proposal with such a title should not appear online; otherwise, this is just an attempt at hype.

On topic: the current TPS is ~1, and this is not necessary.

It's not hype. There is no need for Microsoft to be mentioned in the body; the ways in which the business dynamics are being approached should be inferred from context.

Thank you for sharing your opinion though.

Correct title - “$2,000,000 Community Fund Proposal: Integrate Apache Kafka & Load Balancer in Cosmoshub to make transaction processing more Distributed and Scalable”

“no_with_veto” from me if this proposal goes online with your title, because many people read only the title, and the title you suggested does not correspond to the content. Thanks for your suggestions to improve Cosmoshub.

The title can be changed, but the title alone does not capture the full scope of the milestones to improve the Hub. Adding the additional objectives to the body of the content, instead of leaving them as comments, would be more in alignment with the overall intent. The body of the content is the primary objective, though.

Here is a diagram that describes an architecture in which zones connect via a Kafka queue (millions of messages per second) and are routed to a central zone and storage. One of the first objectives I recall for the Hub was to be a flywheel of commerce connecting all the zones - opt-in, of course.

This architecture would be similar to what I envisioned the initial objectives of Cosmos to be. I think there was a limit on what the Hub could process as the central router of Cosmos, and as there were exponentially more zones, the complexity of handling all those connections increased. This kind of architecture would only connect to the queue. The packets contain all the routing information, and the data flow is bi-directional. As of now, there are limits on how many consumer chains and how many transactions the Hub can process.

This is just one example of different architectures that could be evaluated; a minimal sketch of the queue side follows.
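
To illustrate, here is a sketch of a zone publishing a self-routing packet, assuming the open-source segmentio/kafka-go client; the packet shape, topic, and broker address are illustrative stand-ins, loosely modeled on an IBC packet rather than taken from any existing hub or SDK code.

package main

import (
  "context"
  "encoding/json"
  "log"

  "github.com/segmentio/kafka-go"
)

// zonePacket carries its own routing information so the hub-side consumer
// can fan messages out to the destination zone. Hypothetical shape.
type zonePacket struct {
  SourceChain string `json:"source_chain"`
  DestChain   string `json:"dest_chain"`
  Data        []byte `json:"data"`
}

func main() {
  // Zones publish to a shared topic; hashing on the message key routes
  // every packet for a given destination chain to the same partition.
  w := &kafka.Writer{
    Addr:     kafka.TCP("localhost:9092"), // illustrative broker address
    Topic:    "zone-packets",              // illustrative shared topic
    Balancer: &kafka.Hash{},               // partition by message key
  }
  defer w.Close()

  pkt := zonePacket{SourceChain: "zone-a", DestChain: "cosmoshub-4", Data: []byte("tx bytes")}
  b, err := json.Marshal(pkt)
  if err != nil {
    log.Fatal(err)
  }
  if err := w.WriteMessages(context.Background(), kafka.Message{
    Key:   []byte(pkt.DestChain),
    Value: b,
  }); err != nil {
    log.Fatal(err)
  }
}

Keying by destination chain ID keeps each zone's packets ordered within their partition, which is the property the central routing would rely on.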

The SDK can be modified to create/save to a database for every zone. The data packet contains all the data needed to create/save to a DB associated with each zone. I'm not sure whether it's tables, or how the separation of concerns for all the connections is currently stored - anyone have a 2-second brief on that?

if (exists) {
  sdk.push()
} else {
  createDB()
  sdk.push()
}

More or less.

Chain ID on a table:
ibc/spec/client/ics-007-tendermint-client at main · cosmos/ibc (github.com)

interface ClientState {
  chainID: string
  trustLevel: Rational
  trustingPeriod: uint64
  unbondingPeriod: uint64
  latestHeight: Height
  frozenHeight: Maybe<uint64>
  upgradePath: []string
  maxClockDrift: uint64
  proofSpecs: []ProofSpec
}

Perhaps creating DBs for each zone isn't necessary. With a data bus, there wouldn't be a handshake connection between the systems. To copy data between Kafka and another system, users instantiate Kafka Connectors for the systems they want to pull data from or push data to. Connectors come in two flavors: SourceConnectors, which import data from another system, and SinkConnectors, which export data to another system. A sketch of registering one follows.
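
As a rough illustration of the wiring, here is a sketch that registers a sink connector through the Kafka Connect REST API, assuming a Connect worker on localhost:8083; the stock FileStreamSinkConnector and the topic name are illustrative stand-ins, not an existing Cosmos integration.

package main

import (
  "bytes"
  "fmt"
  "net/http"
)

func main() {
  // Kafka Connect accepts connector definitions as JSON over its REST API.
  // FileStreamSinkConnector ships with Kafka; a real deployment would point
  // at a purpose-built SinkConnector for the zone's datastore.
  cfg := []byte(`{
    "name": "zone-packets-sink",
    "config": {
      "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
      "tasks.max": "1",
      "topics": "zone-packets",
      "file": "/tmp/zone-packets.out"
    }
  }`)

  resp, err := http.Post("http://localhost:8083/connectors", "application/json", bytes.NewReader(cfg))
  if err != nil {
    fmt.Println("connect worker unreachable:", err)
    return
  }
  defer resp.Body.Close()
  fmt.Println("connector registered:", resp.Status)
}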

What are the general thoughts on this?
Guide for Kafka Connector Developers | Confluent Documentation
ibc/spec/core/ics-004-channel-and-packet-semantics/README.md at main · cosmos/ibc (github.com)

On the host state machine - or just in the client app - having a list of target machines, or the ability for the user to manually input a target machine and save it in the app cache, would initiate the transfer to the target machine through a sink connector whenever a state update occurs on the host machine.
Is that a workable description of how this system would function with this architecture? Acknowledgement packets for successful updates on the target machine, or timeouts, would still apply.
A workable description of how this system would function with this architecture??? Acknowledgement packets for successful updates on the target machine or timeout still applicable.