Installing Common Software on a Linux Server
Installing Git
Quick install
yum install -y git
Build from source
First remove any yum-installed Git:
yum remove git
Install the build dependencies:
yum install -y curl-devel expat-devel gettext-devel openssl-devel zlib-devel
yum install -y gcc perl-ExtUtils-MakeMaker
Download Git (all releases: https://mirrors.edge.kernel.org/pub/software/scm/git/):
wget https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.36.0.tar.gz
Compile and install:
tar -zxvf git-2.36.0.tar.gz
cd git-2.36.0
make prefix=/usr/local/git all
make prefix=/usr/local/git install
Configure the environment variable (append to /etc/profile):
vi /etc/profile
export PATH=$PATH:/usr/local/git/bin
source /etc/profile
Verify:
git --version
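Editing /etc/profile by hand is easy to get wrong if the step is re-run: the export line ends up duplicated. A small sketch of an idempotent helper (append_once is a hypothetical name, demonstrated here against a scratch file rather than the real /etc/profile):

```shell
#!/bin/sh
# Append a line to a profile file only if it is not already present.
append_once() {
    # grep -qxF: quiet, exact whole-line, fixed-string match
    grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

# Demonstrated on a scratch copy; on a real server the target
# would be /etc/profile (run as root).
profile=$(mktemp)
append_once "$profile" 'export PATH=$PATH:/usr/local/git/bin'
append_once "$profile" 'export PATH=$PATH:/usr/local/git/bin'   # no duplicate added
grep -c 'git/bin' "$profile"   # prints 1
```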
Installing Maven
Maven releases: https://archive.apache.org/dist/maven/maven-3/
Download, extract, and rename:
wget https://archive.apache.org/dist/maven/maven-3/3.6.2/binaries/apache-maven-3.6.2-bin.tar.gz
tar -zxvf apache-maven-3.6.2-bin.tar.gz
mv apache-maven-3.6.2 maven
Edit the configuration file (conf/settings.xml)
Set the local repository path:
<localRepository>/usr/local/maven/repository</localRepository>
Configure mirrors of the central repository (Maven uses the first mirror whose mirrorOf pattern matches):
<mirrors>
<!-- Aliyun mirror -->
<mirror>
<id>alimaven</id>
<mirrorOf>central</mirrorOf>
<name>aliyun maven</name>
<url>https://maven.aliyun.com/nexus/content/repositories/central/</url>
</mirror>
<!-- Central repository mirror 1 -->
<mirror>
<id>repo1</id>
<mirrorOf>central</mirrorOf>
<name>Maven Central mirror 1</name>
<url>https://repo1.maven.org/maven2/</url>
</mirror>
<!-- Central repository mirror 2 -->
<mirror>
<id>repo2</id>
<mirrorOf>central</mirrorOf>
<name>Maven Central mirror 2</name>
<url>https://repo2.maven.org/maven2/</url>
</mirror>
</mirrors>
Configure environment variables:
vim /etc/profile
export MAVEN_HOME=/usr/local/maven
export PATH=$MAVEN_HOME/bin:$PATH
Apply the changes: source /etc/profile
Verify the installation: mvn -v
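A quick sanity check that settings.xml actually contains the two entries above can be scripted. This is only a sketch: the settings path is assumed from the layout used in this article, and check is a hypothetical helper that greps for a fixed string:

```shell
#!/bin/sh
# Verify settings.xml declares a local repository and the aliyun mirror (sketch).
SETTINGS=${SETTINGS:-/usr/local/maven/conf/settings.xml}

check() {
    if grep -q "$1" "$SETTINGS" 2>/dev/null; then
        echo "OK: $1"
    else
        echo "MISSING: $1"
    fi
}

check '<localRepository>'
check '<id>alimaven</id>'
```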
Configuring Passwordless SSH Login
Edit the /etc/hosts file
Run vim /etc/hosts and add each node's internal (private network) IP address:
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.29.234.1 node001 node001
172.29.234.2 node002 node002
172.29.234.3 node003 node003
Distribute this file to the other nodes:
[root@node001 ~]# scp /etc/hosts node002:/etc/
The authenticity of host 'node002 (172.29.234.2)' can't be established.
ECDSA key fingerprint is SHA256:Z6I7zKpDTCOJr6RVl7HQBIURUh6C1+YYW5HZ0xGwwmk.
ECDSA key fingerprint is MD5:59:7f:7c:dd:12:98:78:2a:c4:ae:9c:c4:4d:b3:47:f9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node002,172.29.234.2' (ECDSA) to the list of known hosts.
root@node002's password:
hosts 100% 237 1.2MB/s 00:00
[root@node001 ~]# scp /etc/hosts node003:/etc/
The authenticity of host 'node003 (172.29.234.3)' can't be established.
ECDSA key fingerprint is SHA256:Ub9y+VfZtLqzon1++014jnb3AqfX45mL6w7D+pow/5k.
ECDSA key fingerprint is MD5:a4:29:3d:4e:aa:f4:32:c8:f3:07:d6:ca:5a:2b:72:9e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node003,172.29.234.3' (ECDSA) to the list of known hosts.
root@node003's password:
hosts 100% 237 1.1MB/s 00:00
[root@node001 ~]#
Generate a key pair
By default, SSH keys are kept in the ~/.ssh directory:
id_rsa: the private key file
id_rsa.pub: the public key file
[root@node001 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:1WjdrYEBcxls2FXgsRY1ywsTgyJh5useAPYfGZTQWLw root@node001
The key's randomart image is:
+---[RSA 2048]----+
| .**o o==*==o|
| .== ..B==+=o|
| o oo =.+o*o.|
| . o E+o .oo.|
| o +S .. |
| + . |
| + |
| . . |
| . |
+----[SHA256]-----+
[root@node001 ~]# ls ./.ssh/
authorized_keys id_rsa id_rsa.pub known_hosts
Copy the public key to the other nodes
Send node001's public key to each node, where ssh-copy-id appends it to that node's ~/.ssh/authorized_keys file:
[root@node001 ~]# ssh-copy-id node001
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node001 (172.29.234.1)' can't be established.
ECDSA key fingerprint is SHA256:YJklM93EYQBcynZgIMKgh3rmAu4iQWoPRCCpTaQH390.
ECDSA key fingerprint is MD5:ca:e2:66:60:51:fa:c1:fd:34:4b:d9:d0:ef:89:73:7b.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node001's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node001'"
and check to make sure that only the key(s) you wanted were added.
[root@node001 ~]# ssh-copy-id node002
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node002 (172.29.234.2)' can't be established.
ECDSA key fingerprint is SHA256:Z6I7zKpDTCOJr6RVl7HQBIURUh6C1+YYW5HZ0xGwwmk.
ECDSA key fingerprint is MD5:59:7f:7c:dd:12:98:78:2a:c4:ae:9c:c4:4d:b3:47:f9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node002's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node002'"
and check to make sure that only the key(s) you wanted were added.
[root@node001 ~]# ssh-copy-id node003
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node003's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node003'"
and check to make sure that only the key(s) you wanted were added.
On node002, inspect node001's public key:
[root@node002 ~]# cat ./.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJhq6CTpErwAxBs1Hp0SUx19FXuqAF6r3sJekoZyU8/lGKPk2o4sukRGTq7eV3gsKD1JMNYuXyHpJ/upWhL2i+Bh9foaE8DYH5i2lk---------------------PkWRew4OYiCsW9XXU9/+UpeHgXu28pd9PpRlJxwfDoUTJRnMv/c2ptN6NeX6bocffdCbYu6S2SMNpt6p3GNFKKE0ARRxRVSAr4CH72StpOSNqXZfIe1QucMtriTdd6qXj7iYGv6fmciyW2dZ2SqeXA8dYP+upyA2LbUQ5P/RlNxlp+yxXzwSitqDYpepzwFZyBS90/1Mu9v5ko9htcVbl8yzBQDN63+F root@node001
Test passwordless login
Using ssh node002 now logs in without a password prompt:
[root@node001 ~]# ssh node002
Last login: Sun Apr 10 20:33:02 2022 from 172.29.234.1
Welcome to Alibaba Cloud Elastic Compute Service !
[root@node002 ~]#
After the configuration above, node001 can log in to node002 and node003 without a password.
The key point: for nodes to log in to each other without passwords, each node must send its own public key to every node it wants to reach (repeat ssh-keygen and ssh-copy-id on each node for full mutual access).
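The per-node ssh-copy-id calls above can be wrapped in a single loop. A sketch (node names taken from the /etc/hosts example; copy_keys is a hypothetical helper) that prints the commands in dry-run form first:

```shell
#!/bin/sh
# Push this node's public key to every cluster node (sketch).
NODES="node001 node002 node003"

copy_keys() {
    # Pass "echo" for a dry run; call with no argument to actually run ssh-copy-id.
    for host in $NODES; do
        $1 ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
    done
}

copy_keys echo   # dry run: prints the three commands it would execute
```

Once the dry-run output looks right, `copy_keys` with no argument performs the actual key distribution (each host will prompt for the root password once).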
Installing the JDK
Remove the bundled JDK
Uninstall the server's preinstalled OpenJDK:
# List installed JDK packages
rpm -qa | grep jdk
# Remove them (substitute the exact package names from the query)
rpm -e --nodeps java-openjdk
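The query-then-remove pair can be combined so every matching package is removed in one pass. A sketch (purge_openjdk is a hypothetical helper; pass "echo" to preview the rpm commands before running them for real):

```shell
#!/bin/sh
# Remove every installed package matching "openjdk" (sketch).
purge_openjdk() {
    # Pass "echo" for a dry run that only prints the rpm commands.
    rpm -qa 2>/dev/null | grep -i openjdk | while read -r pkg; do
        $1 rpm -e --nodeps "$pkg"
    done
}

purge_openjdk echo   # dry run
```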
Download and install
JDK downloads: https://www.oracle.com/java/technologies/downloads/#java8
Create a directory for the software:
mkdir /usr/local/program
cd /usr/local/program
Upload or download the archive, then extract and rename it:
tar -zxvf jdk-8u321-linux-i586.tar.gz
mv jdk1.8.0_321 jdk8
Configure environment variables
Add the following to /etc/profile:
vim /etc/profile
#JAVA_HOME
export JAVA_HOME=/usr/local/program/jdk8
export PATH=$PATH:$JAVA_HOME/bin
Apply the configuration:
[root@node001 program]# source /etc/profile
Verify:
[root@node001 program]# java -version
-bash: /usr/local/program/jdk8/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
The 32-bit (i586) JDK build cannot find the 32-bit loader on a 64-bit system; install 32-bit glibc (or use the linux-x64 JDK archive instead and skip this step):
[root@node001 program]# yum install glibc.i686
[root@node001 program]# java -version
java version "1.8.0_321"
Java(TM) SE Runtime Environment (build 1.8.0_321-b07)
Java HotSpot(TM) Server VM (build 25.321-b07, mixed mode)
[root@node001 program]#
Distribute the JDK to the other nodes
Copy the JDK from node001 to the other cluster machines, configure the JDK environment variables on each, and finally run source /etc/profile to apply the configuration:
[root@node001 program]# scp -r jdk8 node002:/usr/local/program/jdk8
[root@node001 program]# scp -r jdk8 node003:/usr/local/program/jdk8
[root@node001 program]# scp /etc/profile node002:/etc/
[root@node001 program]# scp /etc/profile node003:/etc/
[root@node002 ~]# yum install glibc.i686
[root@node002 ~]# source /etc/profile
[root@node002 ~]# java -version
java version "1.8.0_321"
Java(TM) SE Runtime Environment (build 1.8.0_321-b07)
Java HotSpot(TM) Server VM (build 25.321-b07, mixed mode)
[root@node003 ~]# yum install glibc.i686
[root@node003 ~]# source /etc/profile
[root@node003 ~]# java -version
java version "1.8.0_321"
Java(TM) SE Runtime Environment (build 1.8.0_321-b07)
Java HotSpot(TM) Server VM (build 25.321-b07, mixed mode)
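Rather than logging in to each node, the version check can be driven from node001 in one loop. A sketch (node names as above; version_ok is a hypothetical helper that just looks for the expected quoted version string in the `java -version` output):

```shell
#!/bin/sh
# Check each node reports the expected `java -version` (sketch).
EXPECTED="1.8.0_321"

version_ok() {
    # $1 = full `java -version` output (it is written to stderr, hence 2>&1 below)
    case "$1" in
        *"\"$EXPECTED\""*) return 0 ;;
        *) return 1 ;;
    esac
}

for host in node001 node002 node003; do
    out=$(ssh -o BatchMode=yes -o ConnectTimeout=3 "$host" 'java -version' 2>&1)
    if version_ok "$out"; then
        echo "$host: OK ($EXPECTED)"
    else
        echo "$host: unexpected version or unreachable"
    fi
done
```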
Installing ZooKeeper
Official site: https://zookeeper.apache.org/
Download and install
[root@node001 zookeeper]# wget https://downloads.apache.org/zookeeper/zookeeper-3.7.0/apache-zookeeper-3.7.0-bin.tar.gz
Extract and rename:
[root@node001 zookeeper]# tar -zxvf apache-zookeeper-3.7.0-bin.tar.gz
[root@node001 zookeeper]# mv apache-zookeeper-3.7.0-bin zookeeper
Configuration
Create the data directory:
[root@node001 zookeeper]# cd zookeeper
[root@node001 zookeeper]# mkdir data
Create a myid file containing this node's ID (node001 ==> 1); each node's ID must be unique:
[root@node001 zookeeper]# cd data
[root@node001 zookeeper]# vim myid
1
Configure zoo.cfg:
[root@node001 zookeeper]# cd conf/
[root@node001 zookeeper]# mv zoo_sample.cfg zoo.cfg
[root@node001 zookeeper]# vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# changed: data storage path
dataDir=/usr/local/program/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
# added: cluster configuration
server.1=node001:2888:3888
server.2=node002:2888:3888
server.3=node003:2888:3888
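The server.N lines and the myid files must stay consistent: the N in server.N is exactly the number stored in that host's myid. A small generator makes the pairing explicit (a sketch; host names and ports taken from the config above, gen_servers is a hypothetical helper):

```shell
#!/bin/sh
# Emit the server.N lines for zoo.cfg from an ordered host list (sketch).
# The Nth host gets id N, and that same N must be written to the host's myid file.
HOSTS="node001 node002 node003"

gen_servers() {
    i=1
    for h in $HOSTS; do
        echo "server.$i=$h:2888:3888"
        i=$((i+1))
    done
}

gen_servers
# Then on each node: echo <its id> > /usr/local/program/zookeeper/data/myid
```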
Distribute ZooKeeper to the other nodes:
scp -r zookeeper node002:/usr/local/program/zookeeper
scp -r zookeeper node003:/usr/local/program/zookeeper
On the machines that received ZooKeeper, change myid to 2 and 3 respectively:
[root@node002 zookeeper]# vim data/myid
2
[root@node003 zookeeper]# vim data/myid
3
Start the cluster:
[root@node001 zookeeper]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/program/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node002 zookeeper]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/program/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node003 zookeeper]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/program/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
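With passwordless SSH in place, the three start commands can be issued from node001 in one loop. A sketch (paths and node names assumed from this article; start_all is a hypothetical helper, shown in dry-run form):

```shell
#!/bin/sh
# Start ZooKeeper on every node from one machine (sketch).
ZK=/usr/local/program/zookeeper

start_all() {
    # Pass "echo" for a dry run; call with no argument to actually start the nodes.
    for host in node001 node002 node003; do
        $1 ssh "$host" "$ZK/bin/zkServer.sh start"
    done
}

start_all echo   # dry run: prints the ssh commands it would execute
```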
Check the cluster status:
[root@node001 zookeeper]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/program/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
[root@node002 zookeeper]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/program/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
[root@node003 zookeeper]# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/program/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
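A healthy three-node ensemble reports exactly one leader. Parsing the "Mode:" line of the status output above can be sketched as follows (node names and path assumed from this article; mode_of is a hypothetical helper):

```shell
#!/bin/sh
# Parse `zkServer.sh status` output and count leaders (sketch).
mode_of() {
    # $1 = full status output; prints the value of the "Mode:" line
    printf '%s\n' "$1" | sed -n 's/^Mode: //p'
}

leaders=0
for host in node001 node002 node003; do
    out=$(ssh -o BatchMode=yes -o ConnectTimeout=3 "$host" \
        '/usr/local/program/zookeeper/bin/zkServer.sh status' 2>/dev/null)
    m=$(mode_of "$out")
    echo "$host: ${m:-unreachable}"
    if [ "$m" = leader ]; then leaders=$((leaders+1)); fi
done
echo "leaders: $leaders (a healthy ensemble has exactly 1)"
```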
Check the processes:
[root@node001 hadoop]# jps
18946 QuorumPeerMain
19675 Jps
[root@node002 hadoop]# jps
21860 QuorumPeerMain
22095 Jps
[root@node003 hadoop]# jps
25607 Jps
25547 JournalNode