Minimum RAM Requirements for Validator and Sentry

Hey Robert, I’m sorry that I missed your question. It varies wildly, and that’s one of the reasons I opposed Proposal 69.

What I can say is that Osmosis has pretty much doubled its memory consumption since CosmWasm (CW) was enabled on it, even though, to the best of my knowledge, there aren’t currently any contracts deployed on Osmosis.

I’ve seen this doubling pretty consistently. We also saw it with the recent Dig upgrade.

Also, and I apologize that this gets a little bit hand-wavy, I did make a post recently where I shared some screenshots from our actual servers.

Please note that these are our relay machines, and by necessity they are much higher capacity than what’s actually needed to validate. We do have a standardized validation spec, and I can share that with you as well:

[attached image: standardized validation spec]

These machines perform extremely well, and so far they have been able to validate every chain we have thrown at them, though I do have to say it’s quite possible that Juno will soon need 128 GB of RAM. Typically, for validator nodes we run the most restricted configuration possible: we turn off indexing, we run either from state sync or from a snapshot that has been truncated using state sync, and we close every port other than p2p.
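For reference, here’s a rough sketch of what that restricted profile looks like in a node’s config.toml and app.toml. The values are illustrative assumptions, not our literal files, so adjust per chain:

```toml
# config.toml — restricted validator profile (illustrative values)

[tx_index]
# turn transaction indexing off entirely
indexer = "null"

[rpc]
# RPC stays bound to localhost only; nothing public
laddr = "tcp://127.0.0.1:26657"

[p2p]
# p2p is the only port we leave reachable
laddr = "tcp://0.0.0.0:26656"

[statesync]
# start from state sync instead of replaying the whole chain
# (also requires rpc_servers, trust_height and trust_hash to be set)
enable = true
```

And the matching app.toml side, pruning aggressively and turning off the query-serving endpoints:

```toml
# app.toml — restricted validator profile (illustrative values)

# keep only a small window of recent state
pruning = "custom"
pruning-keep-recent = "100"
pruning-interval = "10"

[api]
enable = false

[grpc]
enable = false
```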

Relayers, as you can see in the “spot the CW-enabled chain” post, run pretty much the opposite configuration. They need to keep at least one unbonding period of state, and we usually go with much more than that. This just keeps things stable, happy, and running well. Additionally, relaying has spurred research into database performance, because relayers make a lot of queries and that gets hard on the database itself.
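To make that concrete, a relayer-facing node flips most of those settings. Another illustrative sketch, not our exact files; the pruning window just has to cover at least one unbonding period of blocks for the chain in question, which depends on its block time and unbonding length:

```toml
# config.toml — relayer-facing node (illustrative values)

[tx_index]
# relayers query events, so indexing has to stay on
indexer = "kv"

[rpc]
# RPC must be reachable by the relayer process
laddr = "tcp://0.0.0.0:26657"

# swapping the backing database (e.g. goleveldb vs rocksdb, if the binary
# was built with it) is one of the knobs we look at for query performance
db_backend = "goleveldb"
```

```toml
# app.toml — relayer-facing node (illustrative values)

# keep at least one unbonding period of state; we keep much more
pruning = "custom"
pruning-keep-recent = "400000"
pruning-interval = "100"

[grpc]
# relayers like Hermes also use the gRPC endpoint
enable = true
```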
