As mentioned in the PHP microservices post, once a system grows large enough and serves enough users, it becomes necessary to split it into separate services. After the split, those services need to be managed, discovered, and governed, and doing that by hand is tedious. Fortunately there is an excellent component for exactly this job, and it is the protagonist of this post: Consul.
Consul is a distributed, highly available tool for service discovery and configuration sharing, with support for multiple datacenters.
Some of Consul's key features:
- Service discovery: Consul makes it easy for services to register themselves and to discover other services, via either DNS or an HTTP interface. External services can be registered as well.
- Health checking: health checks let Consul quickly alert operators to problems in the cluster. Integrated with service discovery, they prevent requests from being routed to failed services.
- Key/value storage: a system for storing dynamic configuration, exposed through a simple HTTP API that can be used from anywhere (a quick sketch follows this list).
- Web UI: Consul ships with a web UI. A few clicks and you can see at a glance how your services are doing right now, which is very friendly for both dev and ops.
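For instance, the key/value store really is plain HTTP end to end. A minimal sketch, assuming a local agent on the default port 8500 and a made-up key config/db_host:

consul kv put config/db_host 10.0.0.5   # write a key
consul kv get config/db_host            # read it back; prints 10.0.0.5
# the same key over the raw HTTP API (the Value field comes back base64-encoded)
curl http://127.0.0.1:8500/v1/kv/config/db_host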
What does Consul's architecture look like? The official docs provide a very intuitive diagram, along with the following terminology:
- Agent: an agent is the long-running daemon on every member of the Consul cluster, started by running `consul agent`. An agent runs in either client or server mode. Since every node must run an agent, it is simpler to refer to a node as a client or a server, but there are other instances of the agent. All agents can run the DNS or HTTP interfaces, and are responsible for running checks and keeping services in sync.
- Client: a client is an agent that forwards all RPCs to a server. The client is relatively stateless. The only background activity a client performs is taking part in the LAN gossip pool, which has minimal resource overhead and consumes only a small amount of network bandwidth.
- Server: a server is an agent with an expanded set of responsibilities, including participating in the Raft quorum, maintaining cluster state, responding to RPC queries, exchanging WAN gossip with other datacenters, and forwarding queries to leaders or to remote datacenters.
- Datacenter: while the definition of a datacenter seems obvious, there are subtle details that must be considered. For example, on EC2, should multiple availability zones be considered a single datacenter? We define a datacenter to be a networking environment that is private, low latency, and high bandwidth. This excludes communication that traverses the public internet, but for our purposes, multiple availability zones within a single EC2 region can be considered part of one datacenter.
- Consensus: in this documentation, consensus means agreement on the elected leader and on the ordering of transactions. Since these transactions are applied to a finite-state machine, consensus implies the consistency of the replicated state machine.
- Gossip: Consul is built on top of Serf, which provides a full gossip protocol used for multiple purposes. Serf provides membership, failure detection, and event broadcasting; more detail is in the gossip documentation. It is enough to know that gossip involves random node-to-node communication, primarily over UDP.
- LAN gossip: the LAN gossip pool contains all nodes located in the same LAN or datacenter.
- WAN gossip: the WAN gossip pool contains only servers. These servers are mostly located in different datacenters and typically communicate over the internet or a WAN.
- RPC: remote procedure call. A request/response mechanism that allows a client to make requests of a server.
Within each datacenter there is a mix of clients and servers. Three to five servers are generally recommended; this balances availability in the face of failure against performance, since consensus gets progressively slower as more machines are added. There is no limit on the number of clients, however, and they can easily scale into the thousands or tens of thousands.
All nodes in a datacenter must take part in the gossip protocol, which means the gossip pool contains every node in that datacenter. This serves several purposes. First, there is no need to configure server addresses on the clients; discovery is done automatically. Second, the work of detecting node failures is not placed on the servers but is distributed, which makes failure detection far more scalable than naive heartbeating. Third, gossip is used as a messaging layer to notify of events, such as when a leader election takes place.
The servers in each datacenter are all part of a single Raft peer set. This means they work together to elect a leader, a server with extra duties: the leader is responsible for processing all queries and transactions. As part of the consensus protocol, transactions must also be replicated to all the other servers. Because of this requirement, when a non-leader server receives an RPC request, it forwards the request to the cluster leader.
The server nodes also take part in a WAN gossip pool. This pool differs from the LAN pool in that it is optimized for the higher latency of the internet, and it contains only Consul server nodes. The purpose of this pool is to let datacenters discover each other in a low-touch way, so a new datacenter can easily join an existing WAN gossip pool. Because the servers all run in this pool, it also enables cross-datacenter requests: when a server receives a request for another datacenter, it forwards it to a random server in the correct datacenter, and that server may then forward it to its local leader.
This keeps the coupling between datacenters very low, while failure detection, connection caching, and multiplexing keep cross-datacenter requests relatively fast and reliable.
Installing Consul is very simple: the official site provides convenient precompiled binaries on its downloads page.
Download the zip for your system, unpack it, and it is ready to run:
[root@izbp1acp86oa3ixxw4n1dpz update]# wget https://releases.hashicorp.com/consul/1.6.2/consul_1.6.2_linux_armhfv6.zip
[root@izbp1acp86oa3ixxw4n1dpz update]# unzip consul_1.6.2_linux_armhfv6.zip
[root@izbp1acp86oa3ixxw4n1dpz update]# ./consul --version
Consul v1.6.2
Easy, right? Then move consul into a directory on your PATH and you can run it from anywhere:
[root@izbp1acp86oa3ixxw4n1dpz update]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
[root@izbp1acp86oa3ixxw4n1dpz update]# mv consul /usr/local/bin/consul
[root@izbp1acp86oa3ixxw4n1dpz update]# consul --version
Consul v1.6.2
In production, Consul runs in either server or client mode. Each Consul datacenter must have at least one server, which is responsible for maintaining Consul's state: information about the other Consul servers and clients, the services available for discovery, and which services are allowed to talk to which other services.
The official recommendation is not to run a single-node Consul in production! As the architecture diagram above shows, a Consul deployment consists of a leader server plus several standby servers. In fact, when starting Consul you can configure the minimum number of servers the cluster should wait for; once enough servers have joined, Consul elects one of them as leader, and the others replicate the data locally in case the leader fails.
In order to make sure that Consul's state is preserved even if a server fails, you should always run either three or five servers in production. The odd number of servers (and no more than five of them) strikes a balance between performance and failure tolerance.
Non-servers run in client mode. Clients register services, run health checks, and forward queries to the servers. A client must run on every node in the Consul datacenter that runs services, because clients are the source of truth about service health.
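To make the two modes concrete, here is a hedged sketch of what starting a real server and client might look like; the addresses, node names, and data directory are my own placeholders, and `-bootstrap-expect` is the flag that tells a server how many servers to wait for before electing a leader:

# on a server node (placeholder LAN address 10.0.0.1)
consul agent -server -bootstrap-expect=3 -data-dir=/opt/consul -node=server-1 -bind=10.0.0.1
# on a client node, joining the cluster through any known member
consul agent -data-dir=/opt/consul -node=client-1 -bind=10.0.0.21 -retry-join=10.0.0.1

The rest of this post sticks to development mode, which skips all of this.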
Use `consul agent -dev` to run Consul in development mode. Never run Consul with `-dev` in production; it is only used here as a getting-started reference.
[root@izbp1acp86oa3ixxw4n1dpz update]# consul agent -dev
==> Starting Consul agent... # starting the Consul agent
Version: 'v1.6.2' # version number
Node ID: 'db1c9773-317f-d713-ec47-dfcc176f1503' # node ID
Node name: 'izbp1acp86oa3ixxw4n1dpz' # node name
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: false) # whether this agent is a server; Consul is split into server and client roles
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600) # port 8500 speaks HTTP and serves the API and web UI
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302) # LAN gossip communication uses port 8301
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
2019/12/21 19:36:30 [DEBUG] agent: Using random ID "db1c9773-317f-d713-ec47-dfcc176f1503" as node ID
2019/12/21 19:36:30 [DEBUG] tlsutil: Update with version 1
2019/12/21 19:36:30 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
2019/12/21 19:36:30 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:db1c9773-317f-d713-ec47-dfcc176f1503 Address:127.0.0.1:8300}] # port 8300 is used by server nodes; clients call the servers over RPC on this port
2019/12/21 19:36:30 [INFO] serf: EventMemberJoin: izbp1acp86oa3ixxw4n1dpz.dc1 127.0.0.1
2019/12/21 19:36:30 [INFO] serf: EventMemberJoin: izbp1acp86oa3ixxw4n1dpz 127.0.0.1
2019/12/21 19:36:30 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2019/12/21 19:36:30 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
2019/12/21 19:36:30 [INFO] consul: Adding LAN server izbp1acp86oa3ixxw4n1dpz (Addr: tcp/127.0.0.1:8300) (DC: dc1)
2019/12/21 19:36:30 [INFO] consul: Handled member-join event for server "izbp1acp86oa3ixxw4n1dpz.dc1" in area "wan"
2019/12/21 19:36:30 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2019/12/21 19:36:30 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2019/12/21 19:36:30 [INFO] agent: started state syncer
==> Consul agent running!
2019/12/21 19:36:30 [INFO] agent: Started gRPC server on 127.0.0.1:8502 (tcp)
2019/12/21 19:36:30 [WARN] raft: Heartbeat timeout from "" reached, starting election
2019/12/21 19:36:30 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
2019/12/21 19:36:30 [DEBUG] raft: Votes needed: 1
2019/12/21 19:36:30 [DEBUG] raft: Vote granted from db1c9773-317f-d713-ec47-dfcc176f1503 in term 2. Tally: 1
2019/12/21 19:36:30 [INFO] raft: Election won. Tally: 1
2019/12/21 19:36:30 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
2019/12/21 19:36:30 [INFO] consul: cluster leadership acquired
2019/12/21 19:36:30 [INFO] connect: initialized primary datacenter CA with provider "consul"
2019/12/21 19:36:30 [DEBUG] consul: Skipping self join check for "izbp1acp86oa3ixxw4n1dpz" since the cluster is too small
2019/12/21 19:36:30 [INFO] consul: member 'izbp1acp86oa3ixxw4n1dpz' joined, marking health alive
2019/12/21 19:36:30 [INFO] consul: New leader elected: izbp1acp86oa3ixxw4n1dpz
2019/12/21 19:36:30 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2019/12/21 19:36:30 [INFO] agent: Synced node info
2019/12/21 19:36:32 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2019/12/21 19:36:32 [DEBUG] agent: Node info in sync
2019/12/21 19:36:32 [DEBUG] agent: Node info in sync
2019/12/21 19:36:32 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
Use the `consul members` command to check the members of the Consul datacenter:
[root@izbp1acp86oa3ixxw4n1dpz laravel-blog]# consul members
Node Address Status Type Build Protocol DC Segment
izbp1acp86oa3ixxw4n1dpz 127.0.0.1:8301 alive server 1.6.2 2 dc1 <all>
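As an aside, the servers in the WAN gossip pool described earlier can be listed the same way with the `-wan` flag:

consul members -wan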
You can stop Consul the quick-and-dirty way with `Ctrl+C`, or exit gracefully with the dedicated leave command.
With `Ctrl+C`, the agent catches the interrupt signal and exits:
^C 2019/12/21 19:41:35 [INFO] agent: Caught signal: interrupt
2019/12/21 19:41:35 [INFO] agent: Graceful shutdown disabled. Exiting
2019/12/21 19:41:35 [INFO] agent: Requesting shutdown
2019/12/21 19:41:35 [INFO] consul: shutting down server
2019/12/21 19:41:35 [WARN] serf: Shutdown without a Leave
2019/12/21 19:41:35 [WARN] serf: Shutdown without a Leave
2019/12/21 19:41:35 [INFO] manager: shutting down
2019/12/21 19:41:35 [INFO] agent: consul server down
2019/12/21 19:41:35 [INFO] agent: shutdown complete
2019/12/21 19:41:35 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (tcp)
2019/12/21 19:41:35 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (udp)
2019/12/21 19:41:35 [INFO] agent: Stopping HTTP server 127.0.0.1:8500 (tcp)
2019/12/21 19:41:35 [INFO] agent: Waiting for endpoints to shut down
2019/12/21 19:41:35 [INFO] agent: Endpoints down
2019/12/21 19:41:35 [INFO] agent: Exit code: 1
With `consul leave`:
2019/12/21 19:44:25 [INFO] consul: server starting leave
2019/12/21 19:44:25 [INFO] serf: EventMemberLeave: izbp1acp86oa3ixxw4n1dpz.dc1 127.0.0.1
2019/12/21 19:44:25 [INFO] consul: Handled member-leave event for server "izbp1acp86oa3ixxw4n1dpz.dc1" in area "wan"
2019/12/21 19:44:25 [INFO] manager: shutting down
2019/12/21 19:44:28 [INFO] serf: EventMemberLeave: izbp1acp86oa3ixxw4n1dpz 127.0.0.1
2019/12/21 19:44:28 [INFO] consul: Removing LAN server izbp1acp86oa3ixxw4n1dpz (Addr: tcp/127.0.0.1:8300) (DC: dc1)
2019/12/21 19:44:28 [WARN] consul: deregistering self (izbp1acp86oa3ixxw4n1dpz) should be done by follower
2019/12/21 19:44:30 [ERR] autopilot: Error updating cluster health: error getting server raft protocol versions: No servers found
2019/12/21 19:44:31 [INFO] consul: Waiting 5s to drain RPC traffic
2019/12/21 19:44:32 [ERR] autopilot: Error updating cluster health: error getting server raft protocol versions: No servers found
2019/12/21 19:44:34 [ERR] autopilot: Error updating cluster health: error getting server raft protocol versions: No servers found
2019/12/21 19:44:36 [WARN] consul: deregistering self (izbp1acp86oa3ixxw4n1dpz) should be done by follower
2019/12/21 19:44:36 [ERR] autopilot: Error updating cluster health: error getting server raft protocol versions: No servers found
2019/12/21 19:44:36 [ERR] autopilot: Error promoting servers: error getting server raft protocol versions: No servers found
2019/12/21 19:44:36 [INFO] agent: Requesting shutdown
2019/12/21 19:44:36 [INFO] consul: shutting down server
2019/12/21 19:44:36 [INFO] agent: consul server down
2019/12/21 19:44:36 [INFO] agent: shutdown complete
2019/12/21 19:44:36 [DEBUG] http: Request PUT /v1/agent/leave (11.001163248s) from=127.0.0.1:58024
2019/12/21 19:44:36 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (tcp)
2019/12/21 19:44:36 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (udp)
2019/12/21 19:44:36 [INFO] agent: Stopping HTTP server 127.0.0.1:8500 (tcp)
2019/12/21 19:44:36 [INFO] agent: Waiting for endpoints to shut down
2019/12/21 19:44:36 [INFO] agent: Endpoints down
2019/12/21 19:44:36 [INFO] agent: Exit code: 0
Graceful leave complete
The graceful leave is, of course, the recommended way to stop Consul. Killing the process with `Ctrl+C` makes the other Consul agents in the datacenter treat the node as failed rather than left. When a node fails, its health is marked as "critical", but it is not removed from the catalog, and Consul will keep trying to reconnect to it. If the agent is running as a server, always use a graceful `consul leave` to avoid the risk of a potential availability outage.
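If a node has already died without leaving, Consul also provides `consul force-leave` to transition it from failed to left, so the cluster stops trying to reconnect to it. A sketch using the node name from above:

consul force-leave izbp1acp86oa3ixxw4n1dpz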
One of Consul's main use cases is service discovery. Consul provides a DNS interface that downstream services can use to look up the IP addresses of their upstream dependencies.
Every service registers itself with Consul through the local client. Services can be registered manually, by configuration management tools at deploy time, or automatically by an integrated container orchestration platform.
Services can be defined either through configuration files or through the official REST-style API.
Let's first take a quick look at defining a service with a configuration file; since in most cases services change over time, I won't go into much detail on the file-based approach.
Configuration file docs: Services
Step one: create a directory to hold the configuration files:
[root@izbp1acp86oa3ixxw4n1dpz update]# mkdir config.d
Next, write a configuration file for a service. Suppose a service named "test" is running on port 80. Create a file named test.json in the configuration directory; it will contain the service definition: name, port, and optional tags.
[root@izbp1acp86oa3ixxw4n1dpz update]# cd config.d/
[root@izbp1acp86oa3ixxw4n1dpz config.d]# echo '{"service": {"name": "test", "tags": ["testTag"], "port": 80 } }' > test.json
Run Consul:
[root@izbp1acp86oa3ixxw4n1dpz config.d]# consul agent -dev -enable-script-checks -config-dir=./
==> Starting Consul agent...
......
==> Log data will now stream in as it occurs:
......
2019/12/21 20:05:56 [INFO] agent: Synced service "test" # the "test" service has been synced
2019/12/21 20:05:56 [DEBUG] agent: Node info in sync
2019/12/21 20:05:57 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2019/12/21 20:05:57 [DEBUG] agent: Service "test" in sync
2019/12/21 20:05:57 [DEBUG] agent: Node info in sync
2019/12/21 20:05:57 [DEBUG] agent: Service "test" in sync
2019/12/21 20:05:57 [DEBUG] agent: Node info in sync
2019/12/21 20:05:58 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
You can see in the log that the "test" service has been synced.
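With the service synced, you can already resolve it through the DNS interface on port 8600 that appeared in the startup log; service names take the form NAME.service.consul. A quick check with dig:

dig @127.0.0.1 -p 8600 test.service.consul SRV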
The official REST-style API covers a lot of ground; the docs are here: Service - Agent HTTP API.
To register a service through the API, use the `/agent/service/register` endpoint:
Method | Path | Produces |
---|---|---|
PUT | /agent/service/register | application/json |
Blocking Queries | Consistency Modes | Agent Caching |
---|---|---|
NO | none | none |
The parameters are described in the API documentation linked above. Here is a test payload:
{
"ID": "redis1",
"Name": "redis",
"Tags": [
"primary",
"v1"
],
"Address": "127.0.0.1",
"Port": 80,
"Meta": {
"redis_version": "4.0"
},
"EnableTagOverride": false,
"Check": {
"DeregisterCriticalServiceAfter": "90m",
"Args": ["/usr/local/bin/check_redis.py"],
"Interval": "10s",
"Timeout": "5s"
},
"Weights": {
"Passing": 10,
"Warning": 1
}
}
Note: if you want to interact with the agent's API from outside the host after starting Consul, add `-client 0.0.0.0` at startup so it listens on all IPs, and add `-enable-script-checks` to enable script checks.
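As a sketch, calling the endpoint with curl, assuming the test payload above has been saved to a file named redis.json (my placeholder name):

curl -X PUT --data @redis.json http://127.0.0.1:8500/v1/agent/service/register

After the call, the agent log shows the registration being synced: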
2019/12/21 20:34:49 [DEBUG] http: Request PUT /v1/agent/service/register (1.143084ms) from=183.197.19.174:9762
2019/12/21 20:34:49 [DEBUG] agent: Service "redis1" in sync
2019/12/21 20:34:49 [DEBUG] agent: Check "service:redis1" in sync
2019/12/21 20:34:49 [DEBUG] agent: Node info in sync
2019/12/21 20:34:49 [DEBUG] agent: Service "redis1" in sync
2019/12/21 20:34:49 [DEBUG] agent: Check "service:redis1" in sync
2019/12/21 20:34:49 [DEBUG] agent: Node info in sync
The service named "redis1" has been synced.
Use `/v1/catalog/services` to list the registered services:
[root@izbp1acp86oa3ixxw4n1dpz config.d]# curl http://127.0.0.1:8500/v1/catalog/services
{
"consul": [],
"redis": [
"primary",
"v1"
]
}
Use `/v1/catalog/service/[serviceName]` to query a service's details:
[root@izbp1acp86oa3ixxw4n1dpz config.d]# curl http://127.0.0.1:8500/v1/catalog/service/redis
[
{
"ID": "dd0196d3-443e-34c9-d6fa-a24f53f4d017",
"Node": "izbp1acp86oa3ixxw4n1dpz",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"wan": "127.0.0.1"
},
"NodeMeta": {
"consul-network-segment": ""
},
"ServiceKind": "",
"ServiceID": "redis1",
"ServiceName": "redis",
"ServiceTags": [
"primary",
"v1"
],
"ServiceAddress": "127.0.0.1",
"ServiceWeights": {
"Passing": 10,
"Warning": 1
},
"ServiceMeta": {
"redis_version": "4.0"
},
"ServicePort": 80,
"ServiceEnableTagOverride": false,
"ServiceProxy": {
"MeshGateway": {},
"Expose": {}
},
"ServiceConnect": {},
"CreateIndex": 79,
"ModifyIndex": 79
}
]
Use `/v1/health/service/[serviceName]` to view a service's health checks:
[
{
"Node": {
"ID": "8286435d-93a9-d95b-b37e-5fb6684eb8b4",
"Node": "izbp1acp86oa3ixxw4n1dpz",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"wan": "127.0.0.1"
},
"Meta": {
"consul-network-segment": ""
},
"CreateIndex": 9,
"ModifyIndex": 10
},
"Service": {
"ID": "bulletScreen-1",
"Service": "bulletScreen",
"Tags": [
"primary"
],
"Address": "172.100.0.11",
"Meta": null,
"Port": 8001,
"Weights": {
"Passing": 1,
"Warning": 1
},
"EnableTagOverride": false,
"Proxy": {
"MeshGateway": {},
"Expose": {}
},
"Connect": {},
"CreateIndex": 11,
"ModifyIndex": 80
},
"Checks": [
{
"Node": "izbp1acp86oa3ixxw4n1dpz",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Type": "",
"Definition": {},
"CreateIndex": 9,
"ModifyIndex": 9
},
{
"Node": "izbp1acp86oa3ixxw4n1dpz",
"CheckID": "service:bulletScreen-1",
"Name": "Service 'bulletScreen' check",
"Status": "critical", # critical = 异常(致命)
"Notes": "",
"Output": "dial tcp 172.100.0.11:8000: connect: connection refused",
"ServiceID": "bulletScreen-1",
"ServiceName": "bulletScreen",
"ServiceTags": [
"primary"
],
"Type": "tcp",
"Definition": {},
"CreateIndex": 11,
"ModifyIndex": 89
}
]
},
{
"Node": {
"ID": "8286435d-93a9-d95b-b37e-5fb6684eb8b4",
"Node": "izbp1acp86oa3ixxw4n1dpz",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"wan": "127.0.0.1"
},
"Meta": {
"consul-network-segment": ""
},
"CreateIndex": 9,
"ModifyIndex": 10
},
"Service": {
"ID": "bulletScreen-2",
"Service": "bulletScreen",
"Tags": [
"primary"
],
"Address": "172.100.0.11",
"Meta": null,
"Port": 9503,
"Weights": {
"Passing": 1,
"Warning": 1
},
"EnableTagOverride": false,
"Proxy": {
"MeshGateway": {},
"Expose": {}
},
"Connect": {},
"CreateIndex": 24,
"ModifyIndex": 24
},
"Checks": [
{
"Node": "izbp1acp86oa3ixxw4n1dpz",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing", # passing = 正常
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Type": "",
"Definition": {},
"CreateIndex": 9,
"ModifyIndex": 9
},
{
"Node": "izbp1acp86oa3ixxw4n1dpz",
"CheckID": "service:bulletScreen-2",
"Name": "Service 'bulletScreen' check",
"Status": "passing",
"Notes": "",
"Output": "TCP connect 172.100.0.11:8001: Success",
"ServiceID": "bulletScreen-2",
"ServiceName": "bulletScreen",
"ServiceTags": [
"primary"
],
"Type": "tcp",
"Definition": {},
"CreateIndex": 24,
"ModifyIndex": 25
}
]
}
]
Append `?passing` to the same endpoint, i.e. `/v1/health/service/[serviceName]?passing`, to return only instances whose health checks are passing:
[
{
"Node": {
"ID": "8286435d-93a9-d95b-b37e-5fb6684eb8b4",
"Node": "izbp1acp86oa3ixxw4n1dpz",
"Address": "127.0.0.1",
"Datacenter": "dc1",
"TaggedAddresses": {
"lan": "127.0.0.1",
"wan": "127.0.0.1"
},
"Meta": {
"consul-network-segment": ""
},
"CreateIndex": 9,
"ModifyIndex": 10
},
"Service": {
"ID": "bulletScreen-2",
"Service": "bulletScreen",
"Tags": [
"primary"
],
"Address": "172.100.0.11",
"Meta": null,
"Port": 9503,
"Weights": {
"Passing": 1,
"Warning": 1
},
"EnableTagOverride": false,
"Proxy": {
"MeshGateway": {},
"Expose": {}
},
"Connect": {},
"CreateIndex": 24,
"ModifyIndex": 24
},
"Checks": [
{
"Node": "izbp1acp86oa3ixxw4n1dpz",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"ServiceTags": [],
"Type": "",
"Definition": {},
"CreateIndex": 9,
"ModifyIndex": 9
},
{
"Node": "izbp1acp86oa3ixxw4n1dpz",
"CheckID": "service:bulletScreen-2",
"Name": "Service 'bulletScreen' check",
"Status": "passing",
"Notes": "",
"Output": "TCP connect 172.100.0.11:8001: Success",
"ServiceID": "bulletScreen-2",
"ServiceName": "bulletScreen",
"ServiceTags": [
"primary"
],
"Type": "tcp",
"Definition": {},
"CreateIndex": 24,
"ModifyIndex": 25
}
]
}
]
This post walked through the simplest possible flow: download -> install -> start (dev mode) -> register a service -> query it. The next post will probably cover configuring a Consul cluster.
Downloads: the official Consul downloads page
Configuration file docs: Services
HTTP API docs: Service - Agent HTTP API
This is an original article by 龚学鹏 (Gong Xuepeng). You may repost it without contacting me, but please credit it to my blog, http://www.noobcoder.cn