Project author: tidwall

Project description: High Availability Framework for Happy Data
Primary language: Go
Repository: git://github.com/tidwall/uhaha.git
Created: 2019-03-21T19:42:24Z
Project community: https://github.com/tidwall/uhaha

License: MIT License

uhaha




High Availability Framework for Happy Data

Uhaha is a framework for building highly available Raft-based data applications in Go.
This is basically an upgrade to the Finn project, but has an updated API, better security features (TLS and auth passwords),
customizable services, deterministic time, recalculable random numbers, simpler snapshots, a smaller network footprint, and more.
Under the hood it utilizes hashicorp/raft, tidwall/redcon, and syndtr/goleveldb.

Features

  • Simple API for quickly creating a custom Raft-based application.
  • Deterministic monotonic time that does not drift and stays in sync with the internet.
  • APIs for building custom services such as HTTP and gRPC.
    Supports the Redis protocol by default, so most Redis client libraries will work with Uhaha.
  • TLS and Auth password support.
  • Multiple examples to help jumpstart integration, including
    a Key-value DB,
    a Timeseries DB,
    and a Ticket Service.

Example

Below is a simple example of a service for monotonically increasing tickets.

  package main

  import "github.com/tidwall/uhaha"

  type data struct {
      Ticket int64
  }

  func main() {
      // Set up a uhaha configuration
      var conf uhaha.Config

      // Give the application a name. All servers in the cluster should use the
      // same name.
      conf.Name = "ticket"

      // Set the initial data. This is the state of the data when the first
      // server in the cluster starts for the first time ever.
      conf.InitialData = new(data)

      // Since we are not holding onto much data we can use the built-in JSON
      // snapshot system. You just need to make sure all the important fields in
      // the data are exportable (capitalized) to JSON. In this case there is
      // only the one field "Ticket".
      conf.UseJSONSnapshots = true

      // Add a command that will change the value of a Ticket.
      conf.AddWriteCommand("ticket", cmdTICKET)

      // Finally, hand off all processing to uhaha.
      uhaha.Main(conf)
  }

  // TICKET
  // help: returns a new ticket that has a value that is at least one greater
  // than the previous TICKET call.
  func cmdTICKET(m uhaha.Machine, args []string) (interface{}, error) {
      // Get the current data from the machine
      data := m.Data().(*data)

      // Increment the ticket
      data.Ticket++

      // Return the new ticket to the caller
      return data.Ticket, nil
  }
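
The example above only registers a write command. Uhaha also distinguishes read commands, which do not modify the state machine (see the --openreads option further below). The following is a minimal sketch, assuming a conf.AddReadCommand method that mirrors AddWriteCommand; it adds a read-only command that returns the current ticket without incrementing it.

  // Registered in main() alongside the write command, assuming AddReadCommand
  // takes the same arguments as AddWriteCommand:
  //
  //     conf.AddReadCommand("current", cmdCURRENT)

  // CURRENT
  // help: returns the most recently issued ticket without changing it.
  func cmdCURRENT(m uhaha.Machine, args []string) (interface{}, error) {
      // Read the current data from the machine without modifying it.
      data := m.Data().(*data)
      return data.Ticket, nil
  }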

Building

Using the source file from the examples directory, we’ll build an application
named “ticket”.

  go build -o ticket examples/ticket/main.go

Running

It’s ideal to have three, five, or seven nodes in your cluster.

Let’s create the first node.

  ./ticket -n 1 -a :11001

This will create a node named 1 and bind it to the address :11001.

Now let’s create two more nodes and add them to the cluster.

  ./ticket -n 2 -a :11002 -j :11001
  ./ticket -n 3 -a :11003 -j :11001

Now we have a fault-tolerant three-node cluster up and running.

Using

You can use any Redis-compatible client, such as the redis-cli, telnet,
or netcat.

I’ll use the redis-cli in the example below.

Connect to the leader. This will probably be the first node you created.

  redis-cli -p 11001

Send the server a TICKET command and receive the first ticket.

  > TICKET
  "1"

From here on, every TICKET command is guaranteed to generate a value larger
than the previous TICKET command.

  > TICKET
  "2"
  > TICKET
  "3"
  > TICKET
  "4"
  > TICKET
  "5"

Built-in Commands

There are a number of built-in commands for managing and monitoring the cluster.

  VERSION                                  # show the application version
  MACHINE                                  # show information about the state machine
  RAFT LEADER                              # show the address of the current raft leader
  RAFT INFO [pattern]                      # show information about the raft server and cluster
  RAFT SERVER LIST                         # show all servers in cluster
  RAFT SERVER ADD id address               # add a server to cluster
  RAFT SERVER REMOVE id                    # remove a server from the cluster
  RAFT SNAPSHOT NOW                        # make a snapshot of the data
  RAFT SNAPSHOT LIST                       # show a list of all snapshots on server
  RAFT SNAPSHOT FILE id                    # show the file path of a snapshot on server
  RAFT SNAPSHOT READ id [RANGE start end]  # download all or part of a snapshot

And also some client commands.

  QUIT            # close the client connection
  PING            # ping the server
  ECHO [message]  # echo a message to the server
  AUTH password   # authenticate with a password
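
These built-in and client commands travel over the same Redis protocol, so they can also be issued from a client library. Here is a small sketch, again assuming the go-redis client from the earlier example; note that multi-word commands such as RAFT LEADER are sent as separate arguments.

  package main

  import (
      "context"
      "fmt"

      "github.com/redis/go-redis/v9"
  )

  func main() {
      ctx := context.Background()
      rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:11001"})
      defer rdb.Close()

      // Multi-word commands are passed as separate arguments.
      leader, err := rdb.Do(ctx, "RAFT", "LEADER").Result()
      if err != nil {
          panic(err)
      }
      fmt.Println("raft leader:", leader)

      version, err := rdb.Do(ctx, "VERSION").Result()
      if err != nil {
          panic(err)
      }
      fmt.Println("version:", version)
  }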

Network and security considerations (TLS and Auth password)

By default a single Uhaha instance is bound to the local 127.0.0.1 IP address. Thus nothing outside that machine, including other servers in the cluster or machines on the same local network, will be able to communicate with this instance.

Network security

To open up the service you will need to provide an IP address that can be reached from the outside.
For example, let’s say you want to set up three servers on a local 10.0.0.0 network.

On server 1:

  ./ticket -n 1 -a 10.0.0.1:11001

On server 2:

  ./ticket -n 2 -a 10.0.0.2:11001 -j 10.0.0.1:11001

On server 3:

  ./ticket -n 3 -a 10.0.0.3:11001 -j 10.0.0.1:11001

Now you have a Raft cluster running on three distinct servers in the same local network. This may be enough for applications that only require a network security policy. Basically any server on the local network can access the cluster.

Auth password

If you want to lock down the cluster further you can provide a secret auth, which is more or less a password that the cluster and client will need to communicate with each other.

  ./ticket -n 1 -a 10.0.0.1:11001 --auth my-secret

All the servers will need to be started with the same auth.

  ./ticket -n 2 -a 10.0.0.2:11001 --auth my-secret -j 10.0.0.1:11001
  ./ticket -n 3 -a 10.0.0.3:11001 --auth my-secret -j 10.0.0.1:11001

The client will also need the same auth to talk with the cluster. Most Redis clients support an auth password, for example:

  redis-cli -h 10.0.0.1 -p 11001 -a my-secret

This may be enough if you keep all your machines on the same private network but don’t want every machine or application to have unfettered access to the cluster.

TLS

Finally, you can use TLS, which I recommend combining with an auth password.

In this example a custom cert and key are created using the mkcert tool.

  mkcert uhaha-example
  # produces uhaha-example.pem, uhaha-example-key.pem, and a rootCA.pem

Then create a cluster using the cert and key files, along with an auth.

  ./ticket -n 1 -a 10.0.0.1:11001 --tls-cert uhaha-example.pem --tls-key uhaha-example-key.pem --auth my-secret
  ./ticket -n 2 -a 10.0.0.2:11001 --tls-cert uhaha-example.pem --tls-key uhaha-example-key.pem --auth my-secret -j 10.0.0.1:11001
  ./ticket -n 3 -a 10.0.0.3:11001 --tls-cert uhaha-example.pem --tls-key uhaha-example-key.pem --auth my-secret -j 10.0.0.1:11001

Now you can connect to the server from a client that has the rootCA.pem.
You can find the location of your rootCA.pem file by running ls "$(mkcert -CAROOT)/rootCA.pem".

  redis-cli -h 10.0.0.1 -p 11001 --tls --cacert rootCA.pem -a my-secret
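
The same TLS connection can be made programmatically. The sketch below is only illustrative: it assumes the go-redis client, a copy of the mkcert rootCA.pem in the current directory, and that the certificate was issued for the name uhaha-example.

  package main

  import (
      "context"
      "crypto/tls"
      "crypto/x509"
      "fmt"
      "os"

      "github.com/redis/go-redis/v9"
  )

  func main() {
      ctx := context.Background()

      // Trust the mkcert root CA so the server certificate verifies.
      caPEM, err := os.ReadFile("rootCA.pem")
      if err != nil {
          panic(err)
      }
      pool := x509.NewCertPool()
      if !pool.AppendCertsFromPEM(caPEM) {
          panic("failed to parse rootCA.pem")
      }

      rdb := redis.NewClient(&redis.Options{
          Addr:     "10.0.0.1:11001",
          Password: "my-secret", // same value passed to --auth
          TLSConfig: &tls.Config{
              RootCAs:    pool,
              ServerName: "uhaha-example", // assumed name on the mkcert certificate
          },
      })
      defer rdb.Close()

      ticket, err := rdb.Do(ctx, "TICKET").Result()
      if err != nil {
          panic(err)
      }
      fmt.Println("ticket:", ticket)
  }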

Command-line options

Below are all of the command line options.

  Usage: my-uhaha-app [-n id] [-a addr] [options]

  Basic options:
    -v               : display version
    -h               : display help, this screen
    -a addr          : bind to address  (default: 127.0.0.1:11001)
    -n id            : node ID  (default: 1)
    -d dir           : data directory  (default: data)
    -j addr          : leader address of a cluster to join
    -l level         : log level  (default: info) [debug,verb,info,warn,silent]

  Security options:
    --tls-cert path  : path to TLS certificate
    --tls-key path   : path to TLS private key
    --auth auth      : cluster authorization, shared by all servers and clients

  Networking options:
    --advertise addr : advertise address  (default: network bound address)

  Advanced options:
    --nosync         : turn off syncing data to disk after every write. This
                       leads to faster write operations but opens up the chance
                       for data loss due to catastrophic events such as power
                       failure.
    --openreads      : allow followers to process read commands, but with the
                       possibility of returning stale data.
    --localtime      : have the raft machine time synchronized with the local
                       server rather than the public internet. This will run
                       the risk of time shifts when the local server time is
                       drastically changed during live operation.
    --restore path   : restore a raft machine from a snapshot file. This will
                       start a brand new single-node cluster using the snapshot
                       as initial data. The other nodes must be re-joined. This
                       operation is ignored when a data directory already
                       exists. Cannot be used with the -j flag.