csknk
July 1, 2021, 8:48pm
1
I’m having trouble compiling and using Tendermint with cleveldb.
I followed the instructions for compiling with cleveldb; the build completed without obvious errors.
I then amended the db_backend field in my app’s config. On starting a fresh chain I get this error:
unknown db_backend cleveldb, expected one of memdb,goleveldb
In the initDBs() function I can see that the config has the correct value, i.e. config.DBBackend == "cleveldb".
The app works fine with “goleveldb” selected in config.toml, even though the Tendermint binary was compiled with the cleveldb option, so I’m guessing it has not compiled properly.
If I have compiled TM with cleveldb support, should the app be able to start with the goleveldb option?
Is there a way to check that I have compiled TM with cleveldb properly?
I’m relatively new to Go and I’m confused that there are no cleveldb-related build tags in the TM codebase itself. Can someone explain the purpose of the ‘cleveldb’ build tags in the Make recipe?
Would appreciate some help - thanks in advance.
There is actually a build tag for that as far as I know.
Let me see if I can dig it up for you.
https://docs.tendermint.com/master/introduction/install.html#compile-with-cleveldb-support
Were you reading that? Is that helpful and what you were looking for?
The purpose of those tags is to tell the go tool that it needs to invoke cgo and compile some C code as well as the Go.
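To make that concrete: the cleveldb backend lives in a file guarded by a `// +build cleveldb` constraint, so its registration only compiles in when you build with `-tags cleveldb`; otherwise the backend name is simply absent from the registry and you get exactly the "unknown db_backend" error above. Here is a rough, self-contained sketch of that registry pattern (names loosely modelled on tm-db's registerDBCreator; this is an illustration, not tm-db's actual code):

```go
package main

import "fmt"

// dbCreator constructs a database backend (illustrative signature).
type dbCreator func(name, dir string) (interface{}, error)

// backends maps a db_backend config value to its constructor.
var backends = map[string]dbCreator{}

func registerDBCreator(backend string, creator dbCreator) {
	backends[backend] = creator
}

func init() {
	// goleveldb is pure Go, so its registration has no build tag
	// and is always compiled in.
	registerDBCreator("goleveldb", func(name, dir string) (interface{}, error) {
		return nil, nil
	})
	// In tm-db, the cleveldb registration lives in a separate file
	// guarded by a build constraint. Without -tags cleveldb that file
	// is never compiled, its init() never runs, and the name stays
	// unknown -- which reproduces the error from the original post.
}

// newDB fails for any backend name that was never registered.
func newDB(backend string) error {
	if _, ok := backends[backend]; !ok {
		return fmt.Errorf("unknown db_backend %s, expected one of memdb,goleveldb", backend)
	}
	return nil
}

func main() {
	fmt.Println(newDB("goleveldb")) // prints <nil>
	fmt.Println(newDB("cleveldb"))  // prints the unknown-backend error
}
```

So a binary built with cleveldb support can still run with goleveldb selected, since the pure-Go backend is always present; the reverse is what fails.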
I’m also fairly sure that we may be standardizing around BadgerDB; I think there’s an issue on this.
It would basically allow us to drop the tm-db library.
opened 11:55AM - 02 Feb 21 UTC
S:proposal
T:tracking
## Summary
There have been a few discussions already surrounding the topic. This is an issue that tries to capture the current landscape and offer some guidance toward heading forward. Essentially the crux of the topic is more of a question: how involved does Tendermint want to be with the database layer that it operates above? The core value of Tendermint resides in BFT state machine replication. This means modules such as consensus, p2p, mempool and so forth. Because of this focus, we should slowly reduce the large maintenance surface area that Tendermint currently holds at the database layer. In its place, we should look to settle on leading and established technologies that we can leverage.
Specifically, that points to three issues:
- Reduce the number of databases supported, eventually converging on a single database that best fits Tendermint's use case.
- Implement an established query engine to use for the Tx Indexer (such as sqlite or postgres)
- Reassess the write ahead log
## Converge to a single database
Currently, tm-db supports five different databases (GoLevelDB, LevelDB, BoltDB, RocksDB and BadgerDB). This offers a greater degree of flexibility to application developers, but it's not clear whether such flexibility provides enough gain to offset the maintenance burden of continual support. Wrapping them all in a common interface restricts the amount of value we can extract from each of the respective databases and means that we can't exploit some of the invariants of Tendermint (append-only workloads, high write volume and so forth).
In order to move forward, we will need to undergo a process of comparing the various technologies out there, writing them all up in a document, conversing with SDK and other users and eventually reach a consensus on the db engine of choice.
Following from this, another document, this time more implementation-focused and most likely an ADR, will need to be sketched out. Given a single database engine, we will need to review how it should interface with Tendermint. This could lead to a large rewrite of some of the internal code. Another question that would then arise: do we want to use only one, or perhaps two, databases instead of four different ones? This may offer greater atomicity of transactions.
## Implement an established query engine for the Tx Indexer
The Tx Indexer is a secondary index used to query and find particular Txs. Whilst the current implementation as a kv store suffices, there is already a wealth of highly performant query engines that we should consider using (sqlite, postgres). This would allow us to piggyback off established technology whilst only needing to maintain a much smaller codebase (the wrapper around the engine).
## Reassess the write ahead log
The WAL records all messages that the consensus engine produces so that, in the event of a crash, the node can replay them on startup back to the point of failure. This is another, although perhaps smaller, area where we might be able to reduce the amount of code we need to upkeep. It seems, at least to me, that we only need a write-ahead log for the votes that we sign, in order to prevent double signing; for everything else we could rely on the existing "catch up" mechanism to receive all the votes and blocks that have happened since the crash and restore the node to the most recent height.
csknk
July 19, 2021, 1:13pm
3
Thanks for the response Jacob - yes, that is the doc guide I was following.
I’ll check the issue you shared and report back.