[Cloud Native | Kubernetes in Action] 05. Advanced Pods: Scheduling Strategies Based on Taints, Tolerations, and Affinity (Part 1)


Table of Contents

1. Labels

1.1 What Are Labels?

1.2 Labeling Pod Resources

1.3 Viewing Resource Labels

2. Node Selectors

2.1 nodeName: run the pod on a specific node

2.2 nodeSelector: schedule the pod to nodes with a specific label

2.3 Using nodeName and nodeSelector together

3. Node Affinity

3.1 Hard affinity: requiredDuringSchedulingIgnoredDuringExecution

3.2 Soft affinity: preferredDuringSchedulingIgnoredDuringExecution


1. Labels

1.1 What Are Labels?

A label is simply a key/value pair attached to an object, such as a Pod. Labels are meant to capture an object's distinguishing attributes, so that you can tell at a glance what a Pod is for, and they can be used to partition objects by traits such as version or service type. Labels can be defined when an object is created and modified at any time afterwards; an object can carry multiple labels, but each key must be unique on that object. Labels also make it easy to manage resources in groups: once pods are labeled, you can use the labels to view or delete exactly the pods you mean.

In Kubernetes, most resources can be labeled.
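
For example, labels can be declared in a manifest's metadata at creation time. Below is a minimal illustrative sketch (the pod name and label values are made up for this example):

apiVersion: v1
kind: Pod
metadata:
  name: label-demo        # illustrative pod name
  labels:                 # labels are arbitrary key/value pairs
    app: tomcat
    release: v1           # an object may carry many labels, but each key must be unique
spec:
  containers:
  - name: tomcat
    image: tomcat:latest
    imagePullPolicy: IfNotPresent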

1.2 Labeling Pod Resources

#1. Create the pod resource
[root@k8s-master01 pod-yaml]# kubectl apply -f pod-first.yaml

#2. List all pods in the default namespace
[root@k8s-master01 pod-yaml]# kubectl get pods 
NAME          READY   STATUS    RESTARTS   AGE
tomcat-test   1/1     Running   0          6s

#3. Show the labels of all pods in the default namespace
[root@k8s-master01 pod-yaml]# kubectl get pods --show-labels 
NAME          READY   STATUS    RESTARTS   AGE   LABELS
tomcat-test   1/1     Running   0          38s   app=tomcat

#4. Add a label to the pod tomcat-test
[root@k8s-master01 pod-yaml]# kubectl label pods tomcat-test release=v1 

#5. Show the labels of the specified pod
[root@k8s-master01 pod-yaml]# kubectl get pods tomcat-test --show-labels 
NAME          READY   STATUS    RESTARTS   AGE     LABELS
tomcat-test   1/1     Running   0          4m47s   app=tomcat,release=v1
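
Labels can also be changed after creation. Overwriting a key that already exists requires the --overwrite flag, otherwise kubectl refuses; a quick sketch (the new value v2 is illustrative):

# Change the value of the existing release label (requires --overwrite)
[root@k8s-master01 pod-yaml]# kubectl label pods tomcat-test release=v2 --overwrite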

1.3 Viewing Resource Labels

# Show the labels of all pods in a given namespace
[root@k8s-master01 pod-yaml]# kubectl get pods -n kube-system --show-labels

# Show all labels on a specific pod in the default namespace
[root@k8s-master01 pod-yaml]# kubectl get pods tomcat-test --show-labels

# List pods in the default namespace that have a release label key, without showing the labels
[root@k8s-master01 pod-yaml]# kubectl get pods -l release
NAME          READY   STATUS    RESTARTS   AGE
tomcat-test   1/1     Running   0          10m

# List pods whose release label equals v1, without showing the labels
[root@k8s-master01 pod-yaml]# kubectl get pods -l release=v1
NAME          READY   STATUS    RESTARTS   AGE
tomcat-test   1/1     Running   0          11m

# List all pods that have a release label and print its value in a dedicated column
[root@k8s-master01 pod-yaml]# kubectl get pods -L release
NAME          READY   STATUS    RESTARTS   AGE   RELEASE
tomcat-test   1/1     Running   0          12m   v1

# Print the values of multiple labels (release, app) as columns
[root@k8s-master01 pod-yaml]# kubectl get pods -L release,app
NAME          READY   STATUS    RESTARTS   AGE   RELEASE   APP
tomcat-test   1/1     Running   0          13m   v1        tomcat

# Show the labels of all pods across all namespaces
[root@k8s-master01 pod-yaml]# kubectl get pods --all-namespaces --show-labels
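
Besides the equality selectors shown above, -l also accepts set-based expressions such as in, notin and exists. Quote the expression so the shell does not interpret the parentheses, for example:

# List pods whose release label value is either v1 or v2
[root@k8s-master01 pod-yaml]# kubectl get pods -l 'release in (v1,v2)'

# List pods whose release label value is not v2 (pods with no release label also match)
[root@k8s-master01 pod-yaml]# kubectl get pods -l 'release notin (v2)'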

2. Node Selectors

When we create a pod, the scheduler (kube-scheduler) decides which node it runs on, and by default that can be any suitable worker node. What if we want the pod to land on a specific node, or on nodes that share a certain trait? The nodeName and nodeSelector fields in the pod spec let us do exactly that. Note that nodeName bypasses the scheduler entirely (the kubelet on the named node starts the pod directly), while nodeSelector is a constraint enforced by the scheduler.

2.1 nodeName: run the pod on a specific node

#1. Pull the tomcat and busybox images on node1 and node2
[root@k8s-node1 ~]# docker pull tomcat
[root@k8s-node1 ~]# docker pull busybox

[root@k8s-node2 ~]# docker pull tomcat
[root@k8s-node2 ~]# docker pull busybox

#2. Write the YAML file
[root@k8s-master01 pod-yaml]# vi pod-node.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  nodeName: k8s-node2    # pin the pod to node k8s-node2
  containers:
  - name:  tomcat-pod
    ports:
    - containerPort: 8080
    image: tomcat:latest
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:             # run a shell in the busybox container; -c passes the command string to execute
    - "/bin/sh"
    - "-c"
    - "sleep 3600"

#3. Create the pod resource
[root@k8s-master01 pod-yaml]# kubectl apply -f pod-node.yaml 
pod/demo-pod created
[root@k8s-master01 pod-yaml]# kubectl get pods 
NAME          READY   STATUS              RESTARTS   AGE
demo-pod      0/2     ContainerCreating   0          7s
tomcat-test   1/1     Running             0          32m

#4. Check which node the pods were scheduled to
[root@k8s-master01 pod-yaml]# kubectl get pods -o wide 
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
demo-pod      2/2     Running   0          53s   10.244.169.140   k8s-node2   <none>           <none>
tomcat-test   1/1     Running   0          33m   10.244.169.139   k8s-node2   <none>           <none>

2.2 nodeSelector: schedule the pod to nodes with a specific label

#1. Show the labels of all nodes in the cluster
[root@k8s-master01 pod-yaml]# kubectl get nodes --show-labels 
NAME           STATUS   ROLES           AGE   VERSION   LABELS
k8s-master01   Ready    control-plane   37d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1      Ready    work            37d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,node-role.kubernetes.io/work=work
k8s-node2      Ready    work            37d   v1.25.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,node-role.kubernetes.io/work=work

#2. Write the YAML file
[root@k8s-master01 pod-yaml]# vi pod-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-1
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  nodeSelector:
    disk: ceph    # only nodes labeled disk=ceph qualify
  containers:
  - name:  tomcat-pod-1
    ports:
    - containerPort: 8080
    image: tomcat:latest
    imagePullPolicy: IfNotPresent

#3. Create the pod resource
[root@k8s-master01 pod-yaml]# kubectl apply -f pod-1.yaml

#4. Check pod status: demo-pod-1 is Pending
[root@k8s-master01 pod-yaml]# kubectl get pods 
NAME          READY   STATUS    RESTARTS   AGE
demo-pod      2/2     Running   0          15m
demo-pod-1    0/1     Pending   0          6s
tomcat-test   1/1     Running   0          48m

#5. Describe the pod for details
[root@k8s-master01 pod-yaml]# kubectl describe pods demo-pod-1

As the events show, scheduling of demo-pod-1 failed: none of the three nodes is available, because no node carries the disk=ceph label:

(screenshot of the kubectl describe output omitted: a FailedScheduling event reports that no node matches the pod's node selector)

Solution:

#1. Label node1 with disk=ceph
[root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 disk=ceph

#2. Check pod status again
[root@k8s-master01 pod-yaml]# kubectl get pods 
NAME          READY   STATUS    RESTARTS   AGE
demo-pod      2/2     Running   0          25m
demo-pod-1    1/1     Running   0          10m
tomcat-test   1/1     Running   0          58m

#3. Check where the pods were scheduled
[root@k8s-master01 pod-yaml]# kubectl get pods -o wide 
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
demo-pod      2/2     Running   0          27m   10.244.169.140   k8s-node2   <none>           <none>
demo-pod-1    1/1     Running   0          11m   10.244.36.75     k8s-node1   <none>           <none>
tomcat-test   1/1     Running   0          60m   10.244.169.139   k8s-node2   <none>           <none>

#4. Check the node labels again
[root@k8s-master01 pod-yaml]# kubectl get nodes --show-labels

After finishing the experiment above, delete all pods in the default namespace with kubectl delete pods <pod-name>:

[root@k8s-master01 ~]# kubectl delete pods demo-pod demo-pod-1 tomcat-test 
pod "demo-pod" deleted
pod "demo-pod-1" deleted
pod "tomcat-test" deleted
[root@k8s-master01 ~]# kubectl get pods 
No resources found in default namespace.

# Remove the disk=ceph label from node1: appending '-' to the key deletes the label
[root@k8s-master01 ~]# kubectl label nodes k8s-node1 disk-

# Verify the label was removed
[root@k8s-master01 ~]# kubectl get nodes k8s-node1 --show-labels

2.3 Using nodeName and nodeSelector together

[root@k8s-master01 pod-yaml]# vi pod-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod-1
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  nodeName: k8s-node2
  nodeSelector:
    disk: ceph
  containers:
  - name:  tomcat-pod-1
    ports:
    - containerPort: 8080
    image: tomcat:latest
    imagePullPolicy: IfNotPresent

# Create the pod resource, then check its status
[root@k8s-master01 pod-yaml]# kubectl apply -f pod-1.yaml
[root@k8s-master01 pod-yaml]# kubectl get pods 
NAME         READY   STATUS         RESTARTS   AGE
demo-pod-1   0/1     NodeAffinity   0          7s
[root@k8s-master01 pod-yaml]# kubectl describe pods demo-pod-1 

The pod cannot be scheduled; the error reported by kubectl describe is shown below:

(screenshot of the kubectl describe error output omitted)

Conclusion: if a pod spec defines both nodeName and nodeSelector, both conditions must be satisfied; if either one is not met, scheduling fails.

# Label node2 with disk=ceph
[root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node2 disk=ceph

[root@k8s-master01 pod-yaml]# kubectl delete pods demo-pod-1

[root@k8s-master01 pod-yaml]# kubectl apply -f pod-1.yaml

# Scheduling succeeded
[root@k8s-master01 pod-yaml]# kubectl get pods -o wide 
NAME         READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
demo-pod-1   1/1     Running   0          5s    10.244.169.143   k8s-node2   <none>           <none>

Conclusion: once node2 carries the disk=ceph label, nodeName and nodeSelector point at the same node and are both satisfied, so the pod is scheduled successfully, which confirms the rule above.

3. Node Affinity

Node affinity scheduling is configured with the nodeAffinity field.

Official documentation: Assigning Pods to Nodes | Kubernetes

# Use kubectl explain to drill down the API, one level at a time
[root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity
KIND:     Pod
VERSION:  v1

RESOURCE: affinity <Object>

DESCRIPTION:
     If specified, the pod's scheduling constraints

     Affinity is a group of affinity scheduling rules.

FIELDS:
   nodeAffinity	<Object>
     Describes node affinity scheduling rules for the pod.

   podAffinity	<Object>
······

   podAntiAffinity	<Object>
······

[root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity.nodeAffinity
KIND:     Pod
VERSION:  v1

RESOURCE: nodeAffinity <Object>

DESCRIPTION:
     Describes node affinity scheduling rules for the pod.

     Node affinity is a group of node affinity scheduling rules.

FIELDS:
   preferredDuringSchedulingIgnoredDuringExecution	<[]Object>
     The scheduler will prefer to schedule pods to nodes that satisfy the
     affinity expressions specified by this field, but it may choose a node that
     violates one or more of the expressions. The node that is most preferred is
     the one with the greatest sum of weights, i.e. for each node that meets all
     of the scheduling requirements (resource request, requiredDuringScheduling
     affinity expressions, etc.), compute a sum by iterating through the
     elements of this field and adding "weight" to the sum if the node matches
     the corresponding matchExpressions; the node(s) with the highest sum are
     the most preferred.

   requiredDuringSchedulingIgnoredDuringExecution	<Object>
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to an update), the system may or may not try
     to eventually evict the pod from its node.

# preferred: soft affinity. The scheduler tries to find a node that satisfies the rule, but it is not a mandatory condition
# required: hard affinity. A node must satisfy the rule before the pod can be scheduled; it is a mandatory condition
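
The two rule types can be combined in one pod spec: the required rule acts as a hard filter first, and the preferred rule then ranks the nodes that passed the filter. A minimal sketch (the zone and disktype label keys are illustrative, not part of the experiments below):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # hard: the node must have a zone label
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: Exists
      preferredDuringSchedulingIgnoredDuringExecution:   # soft: among those, prefer disktype=ssd
      - weight: 50
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd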

[root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution
KIND:     Pod
VERSION:  v1

RESOURCE: requiredDuringSchedulingIgnoredDuringExecution <Object>

DESCRIPTION:
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some point
     during pod execution (e.g. due to an update), the system may or may not try
     to eventually evict the pod from its node.

     A node selector represents the union of the results of one or more label
     queries over a set of nodes; that is, it represents the OR of the selectors
     represented by the node selector terms.

FIELDS:
   nodeSelectorTerms	<[]Object> -required-
     Required. A list of node selector terms. The terms are ORed.

[root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms
KIND:     Pod
VERSION:  v1

RESOURCE: nodeSelectorTerms <[]Object>

DESCRIPTION:
     Required. A list of node selector terms. The terms are ORed.

     A null or empty node selector term matches no objects. The requirements of
     them are ANDed. The TopologySelectorTerm type implements a subset of the
     NodeSelectorTerm.

FIELDS:
   # match by node label expressions
   matchExpressions	<[]Object>
     A list of node selector requirements by node's labels.
  
   # match by node fields
   matchFields	<[]Object>
     A list of node selector requirements by node's fields.

[root@k8s-master01 pod-yaml]# kubectl explain pods.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms.matchFields
KIND:     Pod
VERSION:  v1

RESOURCE: matchFields <[]Object>

DESCRIPTION:
     A list of node selector requirements by node's fields.

     A node selector requirement is a selector that contains values, a key, and
     an operator that relates the key and values.

FIELDS:
   # the label key the selector checks
   key	<string> -required-
     The label key that the selector applies to.

   # the operator: equality/set membership (In, NotIn), existence, or comparison
   operator	<string> -required-
     Represents a key's relationship to a set of values. Valid operators are In,
     NotIn, Exists, DoesNotExist. Gt, and Lt.

     Possible enum values:
     - `"DoesNotExist"`
     - `"Exists"`
     - `"Gt"`
     - `"In"`
     - `"Lt"`
     - `"NotIn"`

   # the value(s) to compare against
   values	<[]string>
     An array of string values. If the operator is In or NotIn, the values array
     must be non-empty. If the operator is Exists or DoesNotExist, the values
     array must be empty. If the operator is Gt or Lt, the values array must
     have a single element, which will be interpreted as an integer. This array
     is replaced during a strategic merge patch.
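
As the field documentation above says, Exists and DoesNotExist take no values, while Gt and Lt take exactly one value that is interpreted as an integer. Two illustrative matchExpressions entries (the gpu and cpu-cores label keys are assumptions made up for this example):

matchExpressions:
- key: gpu          # match any node that has a gpu label, whatever its value
  operator: Exists
- key: cpu-cores    # match nodes whose cpu-cores label value is greater than 8
  operator: Gt
  values:
  - "8"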

3.1 Hard affinity: requiredDuringSchedulingIgnoredDuringExecution

# Require that the pod be scheduled onto a node whose zone label value is foo or bar
[root@k8s-master01 pod-yaml]# vim pod-nodeaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name:  pod-node-affinity-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
  containers:
  - name: myapp
    image: nginx:latest
    imagePullPolicy: IfNotPresent

[root@k8s-master01 pod-yaml]# kubectl apply -f pod-nodeaffinity-demo.yaml
[root@k8s-master01 pod-yaml]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
demo-pod-1               1/1     Running   0          38m
pod-node-affinity-demo   0/1     Pending   0          8s

[root@k8s-master01 pod-yaml]# kubectl describe pods pod-node-affinity-demo

The pod's status is Pending, meaning it has not been scheduled: no node has a zone label with value foo or bar, and because this is hard affinity, the condition must be met before the pod can be placed:

(screenshot of the Pending pod's FailedScheduling event omitted)

# Label node1 with zone=foo
[root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 zone=foo
node/k8s-node1 labeled

[root@k8s-master01 pod-yaml]# kubectl get pods 
NAME                     READY   STATUS    RESTARTS   AGE
demo-pod-1               1/1     Running   0          43m
pod-node-affinity-demo   1/1     Running   0          4m47s

# Check the placement: scheduling succeeded
[root@k8s-master01 pod-yaml]# kubectl get pods -o wide 
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
demo-pod-1               1/1     Running   0          43m     10.244.169.143   k8s-node2   <none>           <none>
pod-node-affinity-demo   1/1     Running   0          4m52s   10.244.36.79     k8s-node1   <none>           <none>

3.2 Soft affinity: preferredDuringSchedulingIgnoredDuringExecution

[root@k8s-master01 pod-yaml]# vim pod-nodeaffinity-demo-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-2
  namespace: default
  labels:
    app: myapp02
    tier: frontend
spec:
  containers:
  - name: myapp02
    image: nginx:latest
    imagePullPolicy: IfNotPresent
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions: 
          - key: zone1
            operator: In
            values:
            - foo1
            - bar1
        weight: 10
      - preference:
          matchExpressions:
          - key: zone2
            operator: In
            values:
            - foo2
            - bar2
        weight: 20

[root@k8s-master01 pod-yaml]# kubectl apply -f pod-nodeaffinity-demo-2.yaml

[root@k8s-master01 pod-yaml]# kubectl get pods
NAME                       READY   STATUS    RESTARTS        AGE
demo-pod-1                 1/1     Running   1 (9m39s ago)   24h
pod-node-affinity-demo     1/1     Running   1 (9m42s ago)   23h
pod-node-affinity-demo-2   1/1     Running   0               4s
[root@k8s-master01 pod-yaml]# kubectl get pods -o wide 
NAME                       READY   STATUS    RESTARTS        AGE   IP               NODE        NOMINATED NODE   READINESS GATES
demo-pod-1                 1/1     Running   1 (9m52s ago)   24h   10.244.169.144   k8s-node2   <none>           <none>
pod-node-affinity-demo     1/1     Running   1 (9m55s ago)   23h   10.244.36.81     k8s-node1   <none>           <none>
pod-node-affinity-demo-2   1/1     Running   0               17s   10.244.169.145   k8s-node2   <none>           <none>

This shows that with soft affinity the pod still runs: even though no node carries a zone1 or zone2 label, the pod is scheduled anyway (to whichever node the scheduler happens to pick).

Next, let's test the weight field. weight is a relative preference: the higher the weight of a matching term, the more likely the pod is to be scheduled onto the node that matches it.

# Label node1 and node2:
[root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 zone1=foo1

[root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node2 zone2=foo2

# Delete the old pod:
[root@k8s-master01 pod-yaml]# kubectl delete pods pod-node-affinity-demo-2

# Raise the weight of the zone1 term (matched by node1) to 40
[root@k8s-master01 pod-yaml]# vim pod-nodeaffinity-demo-2.yaml
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions: 
          - key: zone1
            operator: In
            values:
            - foo1
            - bar1
        weight: 40
      - preference:
          matchExpressions:
          - key: zone2
            operator: In
            values:
            - foo2
            - bar2
        weight: 20

# The pod now lands on node1; previously it had been placed on node2 at random
[root@k8s-master01 pod-yaml]# kubectl apply -f pod-nodeaffinity-demo-2.yaml 
pod/pod-node-affinity-demo-2 created
[root@k8s-master01 pod-yaml]# kubectl get pods -o wide 
NAME                       READY   STATUS    RESTARTS      AGE   IP               NODE        NOMINATED NODE   READINESS GATES
demo-pod-1                 1/1     Running   1 (20m ago)   24h   10.244.169.144   k8s-node2   <none>           <none>
pod-node-affinity-demo     1/1     Running   1 (20m ago)   23h   10.244.36.81     k8s-node1   <none>           <none>
pod-node-affinity-demo-2   1/1     Running   0             4s    10.244.36.83     k8s-node1   <none>           <none>

Conclusion: when several nodes satisfy a pod's node affinity preferences, the scheduler favors the node whose matching preference term carries the higher weight (all other scores being equal). With the original weights, node2's zone2=foo2 term weighed 20 against node1's 10, so node2 would have been preferred; after raising the zone1 term's weight to 40, node1 scores higher and wins.
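
For intuition, here is a simplified sketch of how the preference scores combine in this experiment (assuming all other scheduler scoring plugins rate the two nodes equally):

# node1 carries zone1=foo1 -> matches the first preference term  -> weight 40
# node2 carries zone2=foo2 -> matches the second preference term -> weight 20
# Sum of matching weights: node1 = 40 > node2 = 20, so node1 is preferred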

# Remove the labels and pods from this experiment, to prepare for the next one
[root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 zone1-
node/k8s-node1 unlabeled
[root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node1 zone-
node/k8s-node1 unlabeled
[root@k8s-master01 pod-yaml]# kubectl label nodes k8s-node2 zone2-
node/k8s-node2 unlabeled
[root@k8s-master01 pod-yaml]# kubectl delete pods demo-pod-1 pod-node-affinity-demo pod-node-affinity-demo-2

Previous article: [Cloud Native | Kubernetes in Action] 04. Kubernetes Namespaces and Resource Quotas

Next article: Advanced Pods: Scheduling Strategies Based on Taints, Tolerations, and Affinity (Part 2)
