doushi9856 2017-01-25 11:13

Centralized database model for microservices

Currently we have some microservices; each has its own database model and migrations provided by the GORM Go package. We have a big, old MySQL database, which goes against microservice principles, but we can't replace it. I'm afraid that as the number of microservices grows, we will get lost among the many database models. When I add a new column in a microservice, I just type service migrate in the terminal (there is a CLI for the run and migrate commands) and it refreshes the database.

What is the best practice for managing this? For example, if I have 1000 microservices, no one is going to type service migrate into each of them whenever someone changes the models. I'm thinking about a centralized database service where we just add a new column and it stores all the models with all the migrations. The only problem is how the services will find out about database model changes. This is how we store, for example, a user in a service:

package main

import "database/sql"

// User maps the service's model onto the legacy "users" table. The gorm
// tags name the columns; the sql tags carry the column types used by the migration.
type User struct {
    ID       uint           `gorm:"column:id;not null" sql:"AUTO_INCREMENT"`
    Name     string         `gorm:"column:name;not null" sql:"type:varchar(100)"`
    Username sql.NullString `gorm:"column:username;not null" sql:"type:varchar(255)"`
}

// TableName overrides GORM's default pluralized table name.
func (u *User) TableName() string {
    return "users"
}
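
For reference, a migrate command like this typically amounts to a GORM AutoMigrate call. A minimal sketch, assuming the GORM v1 API of that era (github.com/jinzhu/gorm), a placeholder DSN, and the User model above in the same package:

package main

import (
    "log"

    "github.com/jinzhu/gorm"
    _ "github.com/jinzhu/gorm/dialects/mysql"
)

func main() {
    // The DSN is a placeholder; real credentials come from service config.
    db, err := gorm.Open("mysql", "user:pass@tcp(db-host:3306)/app?parseTime=true")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // AutoMigrate adds missing tables and columns but never drops or
    // alters existing ones, which is why each service has to re-run it
    // after every model change.
    if err := db.AutoMigrate(&User{}).Error; err != nil {
        log.Fatal(err)
    }
}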

3 answers

  • dtotuki47568 2017-01-26 02:36

    If I'm understanding your question correctly, you want to keep using one MySQL instance across many microservices.

    There are a few ways to make an SQL-backed system work here:

    1. You could create a microservice type that handles all data inserts/reads from the database and takes advantage of connection pooling (see the pooling sketch after this list), and have the rest of your services do all their data reads/writes through these services. This will definitely add a bit of extra latency to all your reads/writes and will likely be problematic at scale.

    2. You could look for a multi-master SQL solution (e.g. CitusDB) that scales easily, so you can use a central schema for your database; just make sure to handle edge cases for data insertion (de-duping, etc.).

    3. You can use a data-streaming architecture like Kafka or AWS Kinesis to transfer your data to your microservices and make sure they only deal with data through these streams. This way, you decouple your database from your data.
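
    For #1, the pooling knobs live on Go's database/sql handle. A minimal sketch for the data-access service; the driver, DSN, and limits are illustrative assumptions, not recommendations:

        package main

        import (
            "database/sql"
            "time"

            _ "github.com/go-sql-driver/mysql"
        )

        // openPool opens the one shared MySQL handle a data-access service
        // would own; every other service talks to this service, not to MySQL.
        func openPool(dsn string) (*sql.DB, error) {
            db, err := sql.Open("mysql", dsn)
            if err != nil {
                return nil, err
            }
            db.SetMaxOpenConns(50)                  // cap concurrent connections to MySQL
            db.SetMaxIdleConns(10)                  // keep a few warm for reuse
            db.SetConnMaxLifetime(30 * time.Minute) // recycle long-lived connections
            return db, db.Ping()
        }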

    The best way to approach it in my opinion is #3. This way, you won't have to think about your storage at the computation layer of your microservice architecture.
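
    To make #3 concrete, and to address how services would learn about model changes, here is a minimal sketch of a consumer reading change events from a Kafka topic. The client library (segmentio/kafka-go), broker address, group ID, and topic name are all assumptions:

        package main

        import (
            "context"
            "log"

            "github.com/segmentio/kafka-go"
        )

        func main() {
            // Broker address, group ID, and topic name are illustrative.
            r := kafka.NewReader(kafka.ReaderConfig{
                Brokers: []string{"kafka:9092"},
                GroupID: "user-service",
                Topic:   "schema-changes",
            })
            defer r.Close()

            for {
                // Each message would announce a model change published by
                // whoever owns the schema (e.g. a centralized database service).
                m, err := r.ReadMessage(context.Background())
                if err != nil {
                    log.Fatal(err)
                }
                log.Printf("schema change event: %s", m.Value)
                // React here, e.g. by re-running this service's migration.
            }
        }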

    Not sure what service you're using for your microservices, but StdLib enforces a few conventions (e.g. only transferring data through HTTP) that help folks wrap their heads around it all. AWS Lambda also works very well with Kinesis as an event source to launch functions, which could help with the #3 approach.

    Disclaimer: I'm the founder of StdLib.

    Accepted by the asker as the best answer.
