Deploying Applications with Docker: A Collection
Deploying Tomcat
(1) Pull the image
docker image pull tomcat
(2) Create the container
docker run -id --name tomcat666 -p 8081:8080 -v /usr/local/docker/tomcat1:/usr/local/tomcat/webapps tomcat
(3) Verify in a browser
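If a browser is not handy, the same check can be done from the shell (a minimal sketch; port 8081 follows the mapping above, and a 404 is expected until an application is placed in the mounted webapps directory):
curl -I http://localhost:8081/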
Deploying Nginx
(1) Pull the image
docker pull nginx
(2) Create the Nginx container
docker run --name nginx-test -p 8080:80 -d nginx
(3) Verify in a browser
(4) Mount selected Nginx directories (remove the nginx-test container first if it is still holding port 8080)
docker run -id -p 8080:80 --name nginx \
-v /usr/local/docker/nginx/index:/usr/share/nginx/html \
-v /usr/local/docker/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
-v /usr/local/docker/nginx/logs:/var/log/nginx nginx
Note: when mounting, errors can occur if the host files and directories are not created by hand beforehand. For example, if nginx.conf does not already exist on the host, Docker creates it as a directory; delete that nginx.conf directory and recreate it as a file with touch nginx.conf.
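One way to prepare those host paths in advance is sketched below (paths assumed from the command above; the default nginx.conf is copied out of a throwaway container so the file mount does not turn into a directory):
mkdir -p /usr/local/docker/nginx/index /usr/local/docker/nginx/conf /usr/local/docker/nginx/logs
docker run --rm nginx cat /etc/nginx/nginx.conf > /usr/local/docker/nginx/conf/nginx.conf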
Deploying MySQL
(1) Pull the MySQL image
docker pull mysql
(2) Create the container
-e adds an environment variable; MYSQL_ROOT_PASSWORD is the login password for the root user
docker run -di --name=mysql8 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql
(3) Log in to MySQL remotely
Enter the container
docker exec -it mysql8 /bin/bash
Log in to MySQL
mysql -u root -p123456
Run status to view MySQL information, then switch root to the mysql_native_password plugin and flush privileges (MySQL 8 defaults to caching_sha2_password, which many older clients cannot authenticate with):
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
flush privileges;
Log in remotely
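From any machine with the MySQL client installed, the connection can then be opened directly (a sketch; replace IP with the Docker host's address):
mysql -h IP -P 3306 -u root -p123456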
(4) Map some database directories to the host
docker run -di -p 3306:3306 --name mysql \
-v /usr/local/docker/mysql/conf:/etc/mysql \
-v /usr/local/docker/mysql/logs:/var/log/mysql \
-v /usr/local/docker/mysql/data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=123456 mysql
Deploying Redis
(1) Pull the image
docker pull redis
(2) Create the container
docker run -id --name=redis666 -p 6379:6379 redis
(3) Connect to Redis remotely
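A quick connectivity check, either through the container or from a remote host with redis-cli installed (a sketch; replace IP with the Docker host's address):
docker exec -it redis666 redis-cli ping
redis-cli -h IP -p 6379 ping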
Deploying a Redis Cluster
Create the containers
Note: the Redis documentation requires host networking (--net host) when building a Redis cluster with Docker.
docker create --name redis-node01 --net host -v redis-node01:/data redis --cluster-enabled yes --cluster-config-file nodes-node-01.conf --port 6379
docker create --name redis-node02 --net host -v redis-node02:/data redis --cluster-enabled yes --cluster-config-file nodes-node-02.conf --port 6380
docker create --name redis-node03 --net host -v redis-node03:/data redis --cluster-enabled yes --cluster-config-file nodes-node-03.conf --port 6381
Start the containers
docker start redis-node01 redis-node02 redis-node03
Enter any one of the containers
docker exec -it redis-node01 /bin/bash
Build the cluster
Note: if the process hangs at "Waiting for the cluster to join", recreate the cluster using the server's internal (private) IP addresses instead.
root@administrator:/data# redis-cli --cluster create IP:6379 IP:6380 IP:6381 --cluster-replicas 0
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: 3a14f73f21646f4e659e7f963378216912cf444a 119.23.62.62:6379
slots:[0-5460] (5461 slots) master
M: 0087956b2d447ddc404372c33284b1e81eb4d755 119.23.62.62:6380
slots:[5461-10922] (5462 slots) master
M: 7de5e3e07654ebdc928f96dffd029bd0f7bf45d6 119.23.62.62:6381
slots:[10923-16383] (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...............................................................................................................
root@administrator:/data# redis-cli --cluster create 172.17.0.1:6379 172.17.0.1:6380 172.17.0.1:6381 --cluster-replicas 0
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: 3a14f73f21646f4e659e7f963378216912cf444a 172.17.0.1:6379
slots:[0-5460] (5461 slots) master
M: 0087956b2d447ddc404372c33284b1e81eb4d755 172.17.0.1:6380
slots:[5461-10922] (5462 slots) master
M: 7de5e3e07654ebdc928f96dffd029bd0f7bf45d6 172.17.0.1:6381
slots:[10923-16383] (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
>>> Performing Cluster Check (using node 172.17.0.1:6379)
M: 3a14f73f21646f4e659e7f963378216912cf444a 172.17.0.1:6379
slots:[0-5460] (5461 slots) master
M: 0087956b2d447ddc404372c33284b1e81eb4d755 172.18.255.237:6380
slots:[5461-10922] (5462 slots) master
M: 7de5e3e07654ebdc928f96dffd029bd0f7bf45d6 172.18.255.237:6381
slots:[10923-16383] (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@administrator:/data#
View the cluster information
root@administrator:/data# redis-cli
127.0.0.1:6379> cluster nodes
0087956b2d447ddc404372c33284b1e81eb4d755 172.18.255.237:6380@16380 master - 0 1630076325108 2 connected 5461-10922
7de5e3e07654ebdc928f96dffd029bd0f7bf45d6 172.18.255.237:6381@16381 master - 0 1630076326120 3 connected 10923-16383
3a14f73f21646f4e659e7f963378216912cf444a 172.17.0.1:6379@16379 myself,master - 0 1630076325000 1 connected 0-5460
127.0.0.1:6379>
Test
root@administrator:/data# redis-cli -c
127.0.0.1:6379> set test 123
-> Redirected to slot [6918] located at 172.18.255.237:6380
OK
172.18.255.237:6380> get test
"123"
172.18.255.237:6380>
Deploying a Web Application
(1) Pull the image
docker image pull tomcat
(2) Create the container
docker run -id --name tomcat666 -p 8081:8080 -v /usr/local/docker/tomcat1:/usr/local/tomcat/webapps tomcat
(3) Upload the WAR package to the mounted directory
(4) Verify
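Once the WAR file has been copied into /usr/local/docker/tomcat1, Tomcat unpacks and deploys it automatically; a shell check might look like this (myapp is a hypothetical context name matching your WAR file):
curl http://localhost:8081/myapp/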
Deploying Node
(1) Pull the image
docker pull node
(2) Create the container
docker run -id --name node node
(3) Enter the container
docker exec -it node /bin/bash
(4) Check the Node version
node -v
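The runtime can also be exercised without opening a shell in the container, for example by running a one-line script (a minimal sketch using the container name created above):
docker exec node node -e "console.log('node', process.version)"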
Deploying RabbitMQ
(1) Search for and pull the image
docker search rabbitmq
docker pull rabbitmq (image without the management console)
docker pull rabbitmq:management (image with the management console)
(2) Create the container
5672: the RabbitMQ service port
15672: the RabbitMQ management console port
docker run -id --name mq -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin123 -p 15672:15672 -p 5672:5672 rabbitmq:management
-e RABBITMQ_DEFAULT_USER=admin sets the administrator username
-e RABBITMQ_DEFAULT_PASS=admin123 sets the administrator password
(3) Verify in a browser
http://x.x.x.x:15672/
RabbitMQ's built-in account is username guest, password guest, but logging in with guest from a remote host fails.
Since the container was started with RABBITMQ_DEFAULT_USER=admin and RABBITMQ_DEFAULT_PASS=admin123, log in with those credentials instead.
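The management plugin also exposes an HTTP API, which allows a quick check without a browser (a sketch using the credentials configured above):
curl -u admin:admin123 http://localhost:15672/api/overview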
Installing ActiveMQ
1. Search for the image
docker search activemq
2. Pull the image
docker pull webcenter/activemq
3. Create the container
docker run -d --name activemq -p 61617:61616 -p 8162:8161 webcenter/activemq
4. Access from a browser
Address: http://IP:8162/ (the console's port 8161 is mapped to host port 8162 above)
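The console prompts for credentials; the stock ActiveMQ web console usually ships with admin/admin (verify for this image). A reachability check from the shell might look like:
curl -I -u admin:admin http://IP:8162/admin/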
Deploying RocketMQ
Pull the image
docker pull foxiswho/rocketmq
Enter the working directory
/usr/local/program/docker/rocketmq/
Note: every host directory that will be mapped into a container must be given 777 permissions, otherwise startup fails (create the directories first if they do not exist, e.g. mkdir -p logs conf store).
chmod 777 logs
chmod 777 conf
chmod 777 store
Create and start the nameserver container
docker run -d -v $(pwd)/logs:/home/rocketmq/logs --name rmqnamesrv -e "JAVA_OPT_EXT=-Xms512M -Xmx512M -Xmn128m" -p 9876:9876 foxiswho/rocketmq:4.8.0 sh mqnamesrv
Create the conf directory, enter it, and create the broker.conf file
vim /usr/local/program/docker/rocketmq/conf/broker.conf
brokerIP1=IP
namesrvAddr=IP:9876
brokerName=broker_name
Create and start the broker container
docker run -d -v $(pwd)/logs:/home/rocketmq/logs -v $(pwd)/store:/home/rocketmq/store -v $(pwd)/conf:/home/rocketmq/conf --name rmqbroker -e "NAMESRV_ADDR=IP:9876" -e "JAVA_OPT_EXT=-Xms512M -Xmx512M -Xmn128m" -p 10911:10911 -p 10912:10912 -p 10909:10909 foxiswho/rocketmq:4.8.0 sh mqbroker -c /home/rocketmq/conf/broker.conf
Pull the RocketMQ management console
docker pull styletang/rocketmq-console-ng
Create and start the rocketmq-console container
docker run -d --name rocketmq-console -e "JAVA_OPTS=-Drocketmq.namesrv.addr=IP:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false -Duser.timezone='Asia/Shanghai' " -v /etc/localtime:/etc/localtime -p 8082:8080 -t styletang/rocketmq-console-ng
Open the management console in a browser at http://IP:8082/ (host port 8082 is mapped to the console's port 8080 above).
Deploying the MinIO object storage service
Deploying ZooKeeper
1. Pull the image
docker pull zookeeper
2. Create the container
docker run -id --name zookeeper -p 2181:2181 zookeeper
3. Verify
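One way to verify is with the zkCli client bundled in the official image (a sketch; the container name follows the command above), then run ls / at the zk prompt to list the root znodes:
docker exec -it zookeeper zkCli.sh -server 127.0.0.1:2181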
Deploying dubbo-admin
1. Pull the image
docker pull apache/dubbo-admin
2. Create the container
The ZooKeeper address must be specified explicitly; the default 127.0.0.1 will not work and results in connection timeouts.
docker run -it --name dubbo-admin -e admin.registry.address=zookeeper://IP:2181 -e admin.config-center=zookeeper://IP:2181 -e admin.metadata-report.address=zookeeper://IP:2181 -p 8080:8080 apache/dubbo-admin
3. Access from a browser
Deploying Portainer
Portainer is a visual management tool for Docker. It provides a status dashboard, quick deployment from application templates, basic operations on containers, images, networks and data volumes (including pulling images and creating containers), event logs, a container console, centralized management of Swarm clusters and services, and user and access control.
Search for and pull the image
docker search portainer
docker pull portainer/portainer
Create and start the container
docker run -d -p 9000:9000 --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /usr/local/docker/portainer:/data --name portainer portainer/portainer
On first access you are asked to set a username and password.
If this fails with the error "Failure Unable to create administrator user":
1. Stop and remove the portainer container
2. Recreate the container, specifying the username and password directly
docker run -d -p 9000:9000 --env ADMIN_USERNAME=Administrator --env ADMIN_PASS=Administrator -v /var/run/docker.sock:/var/run/docker.sock -v /usr/local/docker/portainer:/data --name portainer portainer/portainer
3. Log in with the username and password given on the command line
Manage the local Docker host
Deploying Canal
Pull the Canal image
docker pull canal/canal-server:v1.1.5
Start the container
docker run -p 11111:11111 --name canal -id canal/canal-server:v1.1.5
Enter the canal container
docker exec -it canal bash
Edit the instance configuration inside the container
vi canal-server/conf/example/instance.properties
Modify three settings:
# any value that does not conflict with the MySQL server_id
canal.instance.mysql.slaveId=666
# database address
canal.instance.master.address=IP:3306
# the account and password created for canal
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
[root@064fb8202b09 admin]# cat canal-server/conf/example/instance.properties
#################################################
## mysql serverId , v1.0.26+ will autoGen
# any value that does not conflict with the MySQL server_id
canal.instance.mysql.slaveId=666
# enable gtid use true/false
canal.instance.gtidon=false
# position info
# database address
canal.instance.master.address=IP:3306
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=
# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=
# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal
#canal.instance.standby.address =
#canal.instance.standby.journal.name =
#canal.instance.standby.position =
#canal.instance.standby.timestamp =
#canal.instance.standby.gtid=
# the account and password created for canal
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==
# table regex
canal.instance.filter.regex=.*\\..*
# table black regex
canal.instance.filter.black.regex=mysql\\.slave_.*
# table field filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter(format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch
# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#################################################
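The dbUsername/dbPassword configured above must already exist on the source MySQL with replication privileges, and the source database needs binary logging enabled in ROW format. A typical grant, executed on the MySQL side, looks roughly like this (a sketch, not part of the original steps; adjust the password to your environment):
CREATE USER 'canal'@'%' IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;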
Restart the canal container
docker restart canal
View the logs (inside the container)
tail -n60 -f canal-server/logs/example/example.log
Deploying MongoDB
Pull the image
docker pull mongo
Create the container
docker create --name mongodb -p 27017:27017 -v mongodb:/data/db mongo
Or, to have the container restart automatically with the Docker daemon:
docker create --name mongodb --restart=always -p 27017:27017 -v mongodb:/data/db mongo
Start the container
docker start mongodb
Enter the container
docker exec -it mongodb /bin/bash
Test
root@b3079ea93c31:/# mongo
MongoDB shell version v5.0.2
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("c1b0e71c-23f9-4440-9bce-398dddb96a8a") }
MongoDB server version: 5.0.2
================
Warning: the "mongo" shell has been superseded by "mongosh",
which delivers improved usability and compatibility.The "mongo" shell has been deprecated and will be removed in
an upcoming release.
We recommend you begin using "mongosh".
For installation instructions, see
https://docs.mongodb.com/mongodb-shell/install/
================
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
https://community.mongodb.com
---
The server generated these startup warnings when booting:
2021-08-28T08:23:05.880+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
2021-08-28T08:23:05.880+00:00: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
>
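A quick write/read test from the same shell (testdb and demo are arbitrary names used only for illustration):
> use testdb
> db.demo.insertOne({ name: "docker" })
> db.demo.find()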
Deploying standalone Nacos
docker pull nacos/nacos-server:1.4.2
# MODE: cluster mode / standalone mode
docker run --name nacos -e MODE=standalone -p 8848:8848 -d nacos/nacos-server:1.4.2
Access: IP:8848/nacos/index.html
Log in with username nacos and password nacos
Deploying Sentinel
docker pull bladex/sentinel-dashboard:1.8.0
docker run --name sentinel -d -p 8858:8858 bladex/sentinel-dashboard:1.8.0
Access: http://112.74.96.150:8858
Log in with username/password sentinel/sentinel
Deploying HAProxy
1. Pull the haproxy image
docker pull haproxy:2.6
2. Create the haproxy configuration file: vim /usr/local/program/haproxy/haproxy.cfg
Note: the configuration file extension must be .cfg
# Global, process-level configuration
global
    # Logging: local0 is the log facility, info is the log level
    log 127.0.0.1 local0 info
    # haproxy working directory
    #chroot /usr/local/program/haproxy
    # Path of the pid file written after haproxy starts
    #pidfile /usr/local/program/data/haproxy.pid
    # Maximum number of concurrent connections per haproxy process
    maxconn 4000
    #user haproxy
    #group haproxy
    # Number of processes created at startup; nbproc was removed in HAProxy 2.5+, so keep it commented out on the 2.6 image
    #nbproc 1
    # Run haproxy in the background
    daemon

# Default parameters
defaults
    mode tcp
    log global
    option abortonclose
    option redispatch
    # Number of retries when a connection to a backend server fails; after 3 failures the server is marked unavailable
    retries 3
    # Maximum wait time for a successful connection to a server; milliseconds by default, other units may be given
    timeout connect 10000
    # Maximum wait time for the client to send data; milliseconds by default, other units may be given
    timeout client 1m
    # Maximum wait time for the server to respond to the client; milliseconds by default, other units may be given
    timeout server 1m
    # Timeout for backend health checks; milliseconds by default, other units may be given
    timeout check 10s
    # Maximum number of connections
    maxconn 3000

# Virtual service named "proxy_status"
# haproxy proxies the two Mycat instances
listen proxy_status
    # Listen on port 8086
    bind 0.0.0.0:8086
    # TCP mode
    mode tcp
    # Round-robin between mycat_1 and mycat_2
    balance roundrobin
    # Real Mycat IP:port
    server mycat_1 IP:8066 check inter 10s
    server mycat_2 IP:8066 check inter 10s

# Virtual service named "admin_stats"
# haproxy statistics page
frontend admin_stats
    # Listening address and port
    bind *:8085
    # HTTP mode
    mode http
    # haproxy actively closes the TCP connection after each request/response
    option httpclose
    # Pass the client's real IP to the backend by adding the "X-Forwarded-For" header
    option forwardfor
    # Enable HTTP request logging; by default only TCP logging is done
    option httplog
    maxconn 10
    stats enable
    stats refresh 30s
    # Statistics page URI
    stats uri /admin
    # Username and password for the statistics page
    stats auth admin:123123
    stats hide-version
    stats admin if TRUE
3. Create the container
docker create --name haproxy --net host -v /usr/local/program/haproxy:/usr/local/etc/haproxy haproxy:2.6
4. Start the container
docker start haproxy
5. View the container logs
[root@administrator ~]# docker logs haproxy
[NOTICE] (1) : New worker (8) forked
[NOTICE] (1) : Loading success.
[WARNING] (8) : Server proxy_status/mycat_1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
6. Open a browser
Browse to http://IP:8085/admin and enter the configured username and password.
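If haproxy.cfg is edited later, the official 2.x image runs in master-worker mode, so the configuration can usually be reloaded without recreating the container (a sketch; verify against the image documentation):
docker kill -s HUP haproxy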
Deploying GitLab
GitLab is an open-source code repository management system that uses Git for version control and provides a web-based service platform built on top of it.
Note: GitLab officially recommends at least 4 GB of RAM; with less it may be sluggish or run very slowly.
By default, pull fetches the latest stable version.
Note: ce is the free Community Edition, ee is the paid Enterprise Edition.
docker pull gitlab/gitlab-ce
Create and start the container
docker run -d \
--name gitlab \
--restart always \
-p 8001:443 -p 8000:80 -p 8002:22 \
-v /etc/localtime:/etc/localtime:ro \
-v /usr/local/program/gitlab/config:/etc/gitlab \
-v /usr/local/program/gitlab/logs:/var/log/gitlab \
-v /usr/local/program/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce
View the default password generated by GitLab
[root@localhost ~]# docker exec -it e0b5bb0c02f5657d37b2b344e14999b25924440f0fec9ff0030165e92de64649 grep 'Password:' /etc/gitlab/initial_root_password
Password: 3FqypH7A8sN1yZVGh9uXeggYxAq/eo26nyIlkAo6reo=
Log in with the username root and the password obtained above.
If that password does not work, enter the container and reset it:
root@e0b5bb0c02f5:/# gitlab-rake "gitlab:password:reset[root]"
Enter password:
Confirm password:
Password successfully updated for user with username root.
root@e0b5bb0c02f5:/#
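A quick way to confirm all GitLab services are up is the bundled omnibus tool (a sketch using the container name created above):
docker exec -it gitlab gitlab-ctl status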
Deploying other applications
As the sections above show, deploying applications with Docker is simple and efficient, and deploying other applications follows the same pattern.