Prep for async-raft@0.6.0 & memstore@0.2.0 releases.

Anthony Dodd 2021-01-20 22:56:57 -06:00
parent 0af8f72f97
commit cf823a1dc3
4 changed files with 24 additions and 27 deletions


@@ -4,7 +4,22 @@ This changelog follows the patterns described here: https://keepachangelog.com/e
## [unreleased]
## async-raft 0.6.0-alpha.2 && memstore 0.2.0-alpha.2
## async-raft 0.6.0
The big news for this release is that we are now based on Tokio 1.0! A big shoutout to @xu-cheng for doing all of the heavy lifting on that migration, along with many of the other changes that are part of this release.
It is important to note that 0.6.0 does include two breaking changes from 0.5: the new `RaftStorage::ShutdownError` associated type, and Tokio 1.0. Both are purely code-level changes, and they are not expected to negatively impact running systems.
### changed
- Updated to Tokio 1.0!
- **BREAKING:** this introduces a `RaftStorage::ShutdownError` associated type. This allows the Raft system to differentiate between fatal storage errors, which should cause the system to shut down, and errors which should be propagated back to the client for application-specific error handling. These changes only apply to the `RaftStorage::apply_entry_to_state_machine` method (a brief sketch follows this list).
- A small change to Raft startup semantics. When a node comes online and successfully recovers state (the node was already part of a cluster), the node will start with a 30-second election timeout, ensuring that it does not disrupt a running cluster.
- [#89](https://github.com/async-raft/async-raft/pull/89) removes the `Debug` bounds requirement on the `AppData` & `AppDataResponse` types.
- The `Raft` type can now be cloned. The clone is very cheap and helps to facilitate async workflows while feeding client requests and Raft RPCs into the Raft instance.
- The `Raft.shutdown` interface has been changed slightly. Instead of returning a `JoinHandle`, the method is now async and simply returns a result.
- The `ClientWriteError::ForwardToLeader` error variant has been modified slightly. It now directly exposes the data (the generic type `D`) of the original client request. This ensures that the data can actually be used for forwarding, if that is what the parent app wants to do; see the forwarding sketch below.
- Implemented [#12](https://github.com/async-raft/async-raft/issues/12). This is a pretty old issue and a pretty solid optimization. The previous implementation of this algorithm would go to storage (typically disk) every time entries needed to be replicated to the state machine. Now, entries are cached as they come in from the leader, and only that cache is used as the source of data. Only a few simple measures are needed to ensure this is correct, as the leader's entry replication protocol takes care of most of the work for us in this case.
- Updated / clarified the interface for log compaction. See the guide or the updated `do_log_compaction` method docs for more details.
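A minimal sketch of the error split the `ShutdownError` entry above describes, using illustrative stand-in types (only the `ShutdownError` associated-type name comes from the entry; consult the crate docs for the exact trait shape):

```rust
use thiserror::Error; // thiserror already appears in this repo's dependency set

/// Illustrative fatal error: returning something like this from
/// `apply_entry_to_state_machine` signals that the node must shut down.
#[derive(Debug, Error)]
#[error("unrecoverable storage failure: {0}")]
pub struct StorageShutdownError(pub String);

/// Illustrative non-fatal error: propagated back to the client for
/// application-specific handling instead of stopping the node.
#[derive(Debug, Error)]
#[error("request rejected: {0}")]
pub struct ClientFacingError(pub String);

fn main() {
    let fatal = StorageShutdownError("WAL fsync failed".into());
    let recoverable = ClientFacingError("duplicate request serial".into());
    println!("shut down on: {fatal}");
    println!("return to client: {recoverable}");
}
```

In an application's `RaftStorage` implementation, the fatal type would then be wired in as the `ShutdownError` associated type.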
### added
- [#97](https://github.com/async-raft/async-raft/issues/97) adds the new `Raft.current_leader` method. This is a convenience method which builds upon the Raft metrics system to quickly and easily identify the current cluster leader.
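Combined with the reshaped `ClientWriteError::ForwardToLeader` variant above, the new leader lookup enables a simple forwarding flow. A hypothetical, self-contained sketch (the types and the `current_leader` stand-in are illustrative, not async-raft's real definitions):

```rust
type NodeId = u64;

#[derive(Debug)]
struct ClientRequest {
    serial: u64,
    payload: String,
}

/// Stand-in for the reshaped error: the variant now hands back the
/// original request data so the caller can forward it.
enum WriteError {
    ForwardToLeader(ClientRequest),
    Other(String),
}

/// Stand-in for a metrics-backed leader lookup.
fn current_leader() -> Option<NodeId> {
    Some(3)
}

fn submit_locally(req: ClientRequest) -> Result<(), WriteError> {
    // Pretend this node is a follower and cannot accept writes.
    Err(WriteError::ForwardToLeader(req))
}

fn forward_to(node: NodeId, req: ClientRequest) {
    println!("forwarding request {} to node {node}: {}", req.serial, req.payload);
}

fn main() {
    let req = ClientRequest { serial: 1, payload: "set x = 1".into() };
    match submit_locally(req) {
        Ok(()) => println!("applied locally"),
        // Because the error carries the request data, the caller needs no
        // extra bookkeeping to retry the write against the leader.
        Err(WriteError::ForwardToLeader(data)) => match current_leader() {
            Some(leader) => forward_to(leader, data),
            None => eprintln!("no known leader yet; retry later"),
        },
        Err(WriteError::Other(msg)) => eprintln!("write failed: {msg}"),
    }
}
```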
@@ -12,25 +27,9 @@ This changelog follows the patterns described here: https://keepachangelog.com/e
- Fixed [#98](https://github.com/async-raft/async-raft/issues/98) where heartbeats were being passed along into the log consistency check algorithm. This had the potential to cause a Raft node to go into shutdown under some circumstances.
- Fixed a bug where the timestamp of the last received heartbeat from a leader was not being stored, resulting in degraded cluster stability under some circumstances.
## memstore 0.2.0
### changed
- **BREAKING:** this introduces a `RaftStorage::ShutdownError` associated type. This allows the Raft system to differentiate between fatal storage errors, which should cause the system to shut down, and errors which should be propagated back to the client for application-specific error handling. These changes only apply to the `RaftStorage::apply_entry_to_state_machine` method.
- A small change to Raft startup semantics. When a node comes online and successfully recovers state (the node was already part of a cluster), the node will start with a 30-second election timeout, ensuring that it does not disrupt a running cluster.
- [#89](https://github.com/async-raft/async-raft/pull/89) removes the `Debug` bounds requirement on the `AppData` & `AppDataResponse` types.
## async-raft 0.6.0-alpha.1
### changed
- The `Raft` type can now be cloned. The clone is very cheap and helps to facilitate async workflows while feeding client requests and Raft RPCs into the Raft instance (a sketch of this handle pattern follows this list).
- The `Raft.shutdown` interface has been changed slightly. Instead of returning a `JoinHandle`, the method is now async and simply returns a result.
- The `ClientWriteError::ForwardToLeader` error variant has been modified slightly. It now directly exposes the data (the generic type `D`) of the original client request. This ensures that the data can actually be used for forwarding, if that is what the parent app wants to do.
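A hypothetical sketch of the cheap-clone `Raft` handle and async `shutdown` described in this list; it mirrors the pattern, not async-raft's internals:

```rust
use tokio::sync::mpsc;

/// Hypothetical handle: cloning it only clones a channel sender, so handing
/// a copy to every task that feeds client requests or Raft RPCs is cheap.
#[derive(Clone)]
struct RaftHandle {
    tx: mpsc::UnboundedSender<String>,
}

impl RaftHandle {
    /// Mirrors the new interface shape: shutdown is async and returns a
    /// Result instead of handing back a JoinHandle.
    async fn shutdown(self) -> Result<(), &'static str> {
        drop(self.tx);
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel();
    let core = tokio::spawn(async move {
        while let Some(msg) = rx.recv().await {
            println!("core received: {msg}");
        }
    });

    let handle = RaftHandle { tx };
    let rpc_side = handle.clone(); // cheap clone for another task

    handle.tx.send("client write".into()).unwrap();
    rpc_side.tx.send("append-entries rpc".into()).unwrap();

    handle.shutdown().await.unwrap();
    rpc_side.shutdown().await.unwrap();
    let _ = core.await; // the core task exits once every sender is dropped
}
```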
## memstore 0.2.0-alpha.0
### changed
- Updated the async-raft dependency to `0.6.0-alpha.0` & updated the storage interface as needed.
## async-raft 0.6.0-alpha.0
### changed
- Implemented [#12](https://github.com/async-raft/async-raft/issues/12). This is a pretty old issue and a pretty solid optimization. The previous implementation of this algorithm would go to storage (typically disk) every time entries needed to be replicated to the state machine. Now, entries are cached as they come in from the leader, and only that cache is used as the source of data. Only a few simple measures are needed to ensure this is correct, as the leader's entry replication protocol takes care of most of the work for us in this case.
- Updated / clarified the interface for log compaction. See the guide or the updated `do_log_compaction` method docs for more details.
- Updated async-raft dependency to `0.6.0` & updated storage interface as needed.
### fixed
- Fixed [#76](https://github.com/async-raft/async-raft/issues/76) by moving the process of replicating log entries to the state machine off of the main task. This ensures that the process never blocks the main task. This also includes a few nice optimizations mentioned below.


@@ -1,6 +1,6 @@
[package]
name = "async-raft"
version = "0.6.0-alpha.2"
version = "0.6.0"
edition = "2018"
authors = ["Anthony Dodd <Dodd.AnthonyJosiah@gmail.com>"]
categories = ["algorithms", "asynchronous", "data-structures"]
@@ -28,7 +28,7 @@ tracing-futures = "0.2.4"
[dev-dependencies]
maplit = "1.0.2"
memstore = { version="0.2.0-alpha.0", path="../memstore" }
memstore = { version="0.2.0-alpha.2", path="../memstore" }
tracing-subscriber = "0.2.10"
[features]


@@ -792,15 +792,13 @@ impl<'a, D: AppData, R: AppDataResponse, N: RaftNetwork<D>, S: RaftStorage<D, R>
let mut pending_votes = self.spawn_parallel_vote_requests();
// Inner processing loop for this Raft state.
let timeout_fut = sleep_until(self.core.get_next_election_timeout());
tokio::pin!(timeout_fut);
loop {
if !self.core.target_state.is_candidate() {
return Ok(());
}
let timeout_fut = sleep_until(self.core.get_next_election_timeout());
tokio::select! {
_ = &mut timeout_fut => break, // This election has timed-out. Break to outer loop, which starts a new term.
_ = timeout_fut => break, // This election has timed-out. Break to outer loop, which starts a new term.
Some((res, peer)) = pending_votes.recv() => self.handle_vote_response(res, peer).await?,
Some(msg) = self.core.rx_api.recv() => match msg {
RaftMsg::AppendEntries{rpc, tx} => {

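For context on the hunk above: `tokio::select!` takes its futures by value, so keeping one election-timeout timer alive across loop iterations means pinning it once outside the loop and polling it by mutable reference. A standalone sketch of that pattern (the channel and durations are made up for illustration):

```rust
use std::time::Duration;
use tokio::time::{sleep_until, Instant};

#[tokio::main]
async fn main() {
    // Pin the timeout once, outside the loop, so the same timer is polled
    // on every iteration instead of being re-created each pass.
    let timeout_fut = sleep_until(Instant::now() + Duration::from_millis(300));
    tokio::pin!(timeout_fut);

    let (tx, mut rx) = tokio::sync::mpsc::channel::<u32>(8);
    tokio::spawn(async move {
        for i in 0..3u32 {
            let _ = tx.send(i).await;
            tokio::time::sleep(Duration::from_millis(50)).await;
        }
    });

    loop {
        tokio::select! {
            // Polling by `&mut` re-borrows the pinned future rather than
            // moving it, which is what lets it persist across iterations.
            _ = &mut timeout_fut => {
                println!("election timed out");
                break;
            }
            Some(n) = rx.recv() => println!("handled message {n}"),
        }
    }
}
```

In the old form, the timer future was re-created on every pass through the loop; the new form creates and pins it once, matching the standard Tokio pattern for reusing a `Sleep` inside `select!`.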

@@ -1,6 +1,6 @@
[package]
name = "memstore"
version = "0.2.0-alpha.2"
version = "0.2.0"
edition = "2018"
categories = ["algorithms", "asynchronous", "data-structures"]
description = "An in-memory implementation of the `async-raft::RaftStorage` trait."
@@ -14,7 +14,7 @@ readme = "README.md"
[dependencies]
anyhow = "1.0.32"
async-raft = { version="0.6.0-alpha.2", path="../async-raft" }
async-raft = { version="0.6", path="../async-raft" }
serde = { version="1.0.114", features=["derive"] }
serde_json = "1.0.57"
thiserror = "1.0.20"