Project author: randree

Project description: Postgres high-availability multiple-database connector for GORM
Language: Go
Repository: git://github.com/randree/multibase.git
Created: 2021-05-03T17:21:05Z
Project community: https://github.com/randree/multibase

License: MIT License

Multinode database connector for GORM

A simple module for accessing multiple (Postgres or other) database nodes. Databases are divided into ONE write (master) node and MANY read (slave) nodes. Replication is handled by the database itself and is not part of this module; you can, for example, use bitnami/bitnami-docker-postgresql Docker containers, which take care of replication.

Losing the connection to any node (including the master) does not end in a panic. Instead, that node is marked as offline. If all read nodes are offline, the load is redirected to the master. If the master is down, no query can be processed, but as soon as the nodes reconnect, everything returns to normal.

The distribution to read nodes is done randomly.
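
The routing policy described above can be sketched roughly like this (a hypothetical illustration, not multibase's actual code; pickNode and the online flags are invented for the example):

    import (
        "math/rand"

        "gorm.io/gorm"
    )

    // pickNode illustrates the policy: choose a random online read node;
    // if all read nodes are offline, fall back to the write node.
    func pickNode(write *gorm.DB, reads []*gorm.DB, online []bool) *gorm.DB {
        healthy := make([]*gorm.DB, 0, len(reads))
        for i, db := range reads {
            if online[i] {
                healthy = append(healthy, db)
            }
        }
        if len(healthy) == 0 {
            return write // all read nodes down: redirect the load to the master
        }
        return healthy[rand.Intn(len(healthy))]
    }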

Under the hood, this package uses GORM hooks, similar to go-gorm/dbresolver.
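
For a rough idea of what such a hook looks like, here is a minimal sketch using GORM's callback API (the callback name and the pickReadPool helper are illustrative, not part of multibase):

    // installReadRouting registers a callback that runs before GORM executes a
    // query and swaps the connection pool, so the SELECT runs on a read node.
    func installReadRouting(db *gorm.DB, pickReadPool func() gorm.ConnPool) error {
        return db.Callback().Query().Before("gorm:query").Register("example:route_reads", func(tx *gorm.DB) {
            tx.Statement.ConnPool = pickReadPool() // hypothetical helper returning a healthy read pool
        })
    }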

Example

Let's start with one write node and two read nodes.

    import (
        "fmt"
        "log"
        "os"
        "time"

        "github.com/randree/multibase/v2"
        "gorm.io/gorm/logger"
    )

    func main() {
        logger := logger.New(
            log.New(os.Stdout, "\r\n", log.LstdFlags), // io writer
            logger.Config{
                SlowThreshold:             time.Second,   // Slow SQL threshold
                LogLevel:                  logger.Silent, // Log level
                IgnoreRecordNotFoundError: true,          // Ignore ErrRecordNotFound error for logger
                Colorful:                  false,         // Disable color
            },
        )

        // WRITE NODE
        nodeWrite := &multibase.NodeConf{
            Host:              "mycomputer",
            Port:              9000,
            User:              "database_user",
            Password:          "database_password",
            Sslmode:           "disable",
            TimeZone:          "Asia/Shanghai",
            Db:                "testdb",
            DbMaxOpenConns:    20,
            DbMaxIdleConns:    8,
            DbConnMaxLifetime: 1 * time.Hour,
            DbLogger:          logger,
        }

        // READ NODE 1
        nodeRead1 := &multibase.NodeConf{
            Host:              "mycomputer",
            Port:              9001,
            User:              "database_user", // User must be the master.
            Password:          "database_password",
            Sslmode:           "disable",
            TimeZone:          "Asia/Shanghai",
            Db:                "testdb",
            DbMaxOpenConns:    20,
            DbMaxIdleConns:    8,
            DbConnMaxLifetime: 1 * time.Hour,
            DbLogger:          logger,
        }

        // READ NODE 2
        nodeRead2 := &multibase.NodeConf{
            Host:              "mycomputer",
            Port:              9002,
            User:              "database_user",
            Password:          "database_password",
            Sslmode:           "disable",
            TimeZone:          "Asia/Shanghai",
            Db:                "testdb",
            DbMaxOpenConns:    20,
            DbMaxIdleConns:    8,
            DbConnMaxLifetime: 1 * time.Hour,
            DbLogger:          logger,
        }

        // OpenNode uses gorm.Open with DisableAutomaticPing: true.
        // You can replace it with any other GORM opener;
        // the result should be a *gorm.DB instance.
        dbWrite, _ := multibase.OpenNode(nodeWrite) // Feel free to check err
        dbRead1, _ := multibase.OpenNode(nodeRead1)
        dbRead2, _ := multibase.OpenNode(nodeRead2)

        // Initiate multibase.
        // At this stage NO actual connection is made.
        mb := multibase.New(dbWrite, dbRead1, dbRead2)

        // The most important node is the write node. The following loop
        // pings the write node and connects to it. Even if no connection
        // can be established, no panic occurs.
        for {
            err := mb.ConnectWriteNode()
            if err != nil {
                fmt.Println(err)
            } else {
                break
            }
            time.Sleep(time.Millisecond * 1000) // You can choose the interval
        }

        // After the write node is set up, it is time to connect the read nodes.
        err := mb.ConnectReadNodes()
        if err != nil {
            fmt.Println(err)
        }

        // GetDatabaseReplicaSet binds all nodes to one GORM database.
        // All read queries are forwarded to the read nodes.
        db := mb.GetDatabaseReplicaSet()

        // StartReconnector runs a goroutine that checks the connections and
        // reconnects to the nodes if necessary.
        mb.StartReconnector(time.Second * 1)

        // Now we can use db as usual.
        type User struct {
            ID   int `gorm:"primarykey"`
            Name string
        }
        db.AutoMigrate(&User{})

        user := &User{}
        db.FirstOrInit(user, User{Name: "Jackx"})

        // ...

        // To get some statistics use GetStatistics.
        statistics := mb.GetStatistics()
        fmt.Println(statistics)
    }
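
Once the replica set is in place, routing is transparent and plain GORM calls are enough (reusing the User model from the example above):

    // Writes are executed on the write node.
    db.Create(&User{Name: "Alice"})

    // Reads are executed on a randomly chosen read node
    // (or on the write node if all read nodes are offline).
    var users []User
    db.Find(&users)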

Each statistic is a struct of the following shape:

    type Statistic struct {
        online               bool
        queryCount           int64
        errorCount           int64
        errorConnectionCount int64
    }

The output is similar to

    map[read0:{true 91214 3 0} read1:{true 98232 2 0} write:{true 234 0 0}]
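
Since the fields are unexported, the returned map is mainly useful for logging, for example (assuming the map shape shown above):

    stats := mb.GetStatistics()
    for node, s := range stats {
        fmt.Println(node, s) // e.g. "write {true 234 0 0}"
    }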

Try it out

Use the docker-compose.yml to test the example.

  1. Open two consoles.
  2. In the first one, spin up the nodes with

        $ docker-compose up -d

  3. Write a main.go like the one above and add

        for {
            fmt.Print("\033[H\033[2J") // clear the console after each cycle
            user := []User{}
            db.Find(&user)
            // Print out the statistics
            fmt.Println(mb.GetStatistics())
            // map[read0:{true 1125 0 0} read1:{true 1087 0 0} write:{true 0 0 0}]
            time.Sleep(time.Millisecond * 100) // refresh every 100 ms
        }

  4. In the second console, run

        $ go run main.go

     The output should be similar to

        map[read0:{true 91214 3 0} read1:{true 98232 2 0} write:{true 234 0 0}]
  5. While the Go program is running, stop a read node with

        $ docker stop read2

     and see that all queries are forwarded to read0 (node numbering starts at 0, so read0 is the read1 container) while read1 (the read2 container) is marked as offline.

  6. Start read2 again with

        $ docker start read2

     and the count is distributed between both read nodes again.

  7. Play around: stop the write node, stop all nodes, then start them all with docker start write read1 read2.

The program should never stop or panic.

Replication

Note that replication takes time in some cases. Avoid writing and then immediately reading the same data set in one routine.
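
For example, a sequence like the following can return stale data, because the read may be served by a replica that has not yet applied the write (a minimal illustration reusing the User model from above):

    db.Create(&User{Name: "Jackx"}) // executed on the write node

    var u User
    // This SELECT is routed to a read node; if replication lags,
    // the row may not have arrived there yet.
    db.First(&u, "name = ?", "Jackx")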