kubebuilder init: create a Project

Why use kubebuilder to generate the project

In the past, developing an Operator meant creating a Go project by hand and writing go.mod, the Makefile, the Dockerfile, and so on ourselves. Using kubebuilder to create a scaffolded project generates all of these files for us, and also uses Kustomize to generate a set of default configuration files, making the Operator project easier to manage and deploy.

Initializing the Project with kubebuilder init

kubebuilder init --domain my.domain --repo my.domain/guestbook
  • --domain: the domain for your custom resources' API group, conventionally a reversed DNS name, e.g. domain.com reversed becomes com.domain

  • --repo: the module path (repository location) for the project's code
    The output is as follows:

  • First it generated the Kustomize manifests for the project

  • Then it generated the various scaffold files

  • Then it ran go get for controller-runtime and go mod tidy

  • Finally it reminds us that to create an API, we should run kubebuilder create api

    [root@master guestbook]# kubebuilder init --domain my.domain --repo my.domain/guestbook
    INFO Writing kustomize manifests for you to edit...
    INFO Writing scaffold for you to edit...
    INFO Get controller runtime:
    $ go get sigs.k8s.io/controller-runtime@v0.17.0
    INFO Update dependencies:
    $ go mod tidy
    Next: define a resource with:
    $ kubebuilder create api

    After the command finishes, which files and directories were generated?

    [root@master guestbook]# tree
    .
    ├── cmd
    │   └── main.go
    ├── config
    │   ├── default
    │   │   ├── kustomization.yaml
    │   │   ├── manager_auth_proxy_patch.yaml
    │   │   └── manager_config_patch.yaml
    │   ├── manager
    │   │   ├── kustomization.yaml
    │   │   └── manager.yaml
    │   ├── prometheus
    │   │   ├── kustomization.yaml
    │   │   └── monitor.yaml
    │   └── rbac
    │       ├── auth_proxy_client_clusterrole.yaml
    │       ├── auth_proxy_role_binding.yaml
    │       ├── auth_proxy_role.yaml
    │       ├── auth_proxy_service.yaml
    │       ├── kustomization.yaml
    │       ├── leader_election_role_binding.yaml
    │       ├── leader_election_role.yaml
    │       ├── role_binding.yaml
    │       ├── role.yaml
    │       └── service_account.yaml
    ├── Dockerfile
    ├── go.mod
    ├── go.sum
    ├── hack
    │   └── boilerplate.go.txt
    ├── Makefile
    ├── PROJECT
    ├── README.md
    └── test
        ├── e2e
        │   ├── e2e_suite_test.go
        │   └── e2e_test.go
        └── utils
            └── utils.go

    10 directories, 28 files
  • Directory overview

    • cmd: the main application code, i.e. the controller's entry point.
    • config: Kubernetes resource configuration, including the default resources, the manager, the monitoring resources, and the RBAC role bindings and service account.
    • Dockerfile: the Dockerfile used to build the container image.
    • go.mod and go.sum: Go module files that manage the project's dependencies.
    • hack: helper scripts and template files used when building or generating code.
    • Makefile: Make targets that simplify building and deploying the project.
    • PROJECT: the Kubebuilder project configuration file, recording the project's API versions and other metadata.
    • README.md: the project documentation, typically covering how to build and run the project.
    • test: test code, e.g. end-to-end tests and test utility functions.

kubebuilder create: create an API

Creating the API with kubebuilder create

The Project guestbook was created above, but judging from the directory tree, no Operator CRD resource or Controller exists yet.
Next, inside the guestbook project, create a GV (Group Version), and then a Kind for that GV.
Create the GVK webapp/v1/Guestbook:

kubebuilder create api --group webapp --version v1 --kind Guestbook
  • The output is as follows
    • Prompt: whether to create the Resource, i.e. the CRD
    • Prompt: whether to create the Controller
      [root@master guestbook]# kubebuilder create api --group webapp --version v1 --kind Guestbook
      INFO Create Resource [y/n]
      y
      INFO Create Controller [y/n]
      y
      INFO Writing kustomize manifests for you to edit...
      INFO Writing scaffold for you to edit...
      INFO api/v1/guestbook_types.go
      INFO api/v1/groupversion_info.go
      INFO internal/controller/suite_test.go
      INFO internal/controller/guestbook_controller.go
      INFO internal/controller/guestbook_controller_test.go
      INFO Update dependencies:
      $ go mod tidy
      INFO Running make:
      $ make generate
      mkdir -p /root/zgy/project/guestbook/bin
      Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.14.0
      /root/zgy/project/guestbook/bin/controller-gen-v0.14.0 object:headerFile="hack/boilerplate.go.txt" paths="./..."
      Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
      $ make manifests

After the command finishes, which files and directories were generated?

[root@master guestbook]# tree
.
├── api
│   └── v1
│       ├── groupversion_info.go
│       ├── guestbook_types.go
│       └── zz_generated.deepcopy.go
├── bin
│   └── controller-gen-v0.14.0
├── cmd
│   └── main.go
├── config
│   ├── crd
│   │   ├── kustomization.yaml
│   │   └── kustomizeconfig.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── guestbook_editor_role.yaml
│   │   ├── guestbook_viewer_role.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── role.yaml
│   │   └── service_account.yaml
│   └── samples
│       ├── kustomization.yaml
│       └── webapp_v1_guestbook.yaml
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── internal
│   └── controller
│       ├── guestbook_controller.go
│       ├── guestbook_controller_test.go
│       └── suite_test.go
├── Makefile
├── PROJECT
├── README.md
└── test
    ├── e2e
    │   ├── e2e_suite_test.go
    │   └── e2e_test.go
    └── utils
        └── utils.go

17 directories, 41 files

Edit the CRD types file

// GuestbookSpec defines the desired state of Guestbook
type GuestbookSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Quantity of instances
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=10
	Size int32 `json:"size"`

	// Name of the ConfigMap for GuestbookSpec's configuration
	// +kubebuilder:validation:MaxLength=15
	// +kubebuilder:validation:MinLength=1
	ConfigMapName string `json:"configMapName"`

	// +kubebuilder:validation:Enum=Phone;Address;Name
	Type string `json:"alias,omitempty"`
}

// GuestbookStatus defines the observed state of Guestbook
type GuestbookStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// PodName of the active Guestbook node.
	Active string `json:"active"`

	// PodNames of the standby Guestbook nodes.
	Standby []string `json:"standby"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:resource:scope=Cluster

// Guestbook is the Schema for the guestbooks API
type Guestbook struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   GuestbookSpec   `json:"spec,omitempty"`
	Status GuestbookStatus `json:"status,omitempty"`
}

make manifests: generate the resource manifests

Judging from the directory tree above, a types.go was generated for the GVK, namely api/v1/guestbook_types.go.
But no corresponding CRD YAML has been generated for the resource yet. Running make manifests generates it:

[root@master guestbook]# make manifests
/root/zgy/project/guestbook/bin/controller-gen-v0.14.0 rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

Running make manifests keeps the generated Kubernetes resource manifests in sync with the resource definitions in the code. So from now on, whenever you change api/v1/guestbook_types.go, always run make manifests afterwards.
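For the markers used in guestbook_types.go, the file generated under config/crd/bases will contain an OpenAPI v3 schema along these lines. This excerpt is abridged and hand-written for illustration; run make manifests and inspect the real file:

```yaml
# Abridged, illustrative sketch of config/crd/bases/webapp.my.domain_guestbooks.yaml
spec:
  group: webapp.my.domain
  names:
    kind: Guestbook
    plural: guestbooks
  scope: Cluster          # from +kubebuilder:resource:scope=Cluster
  versions:
    - name: v1
      schema:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                size:
                  type: integer
                  minimum: 1
                  maximum: 10
                configMapName:
                  type: string
                  minLength: 1
                  maxLength: 15
                alias:
                  type: string
                  enum: [Phone, Address, Name]
```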

Install the CRD

Once the Kind's types file is written and the Controller is implemented, run make manifests again to regenerate the resource manifests.

Install the CRD into the current Kubernetes cluster:

make install
The output is as follows:
[root@master guestbook]# make install
/root/zgy/project/guestbook/bin/controller-gen-v0.14.0 rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
Downloading sigs.k8s.io/kustomize/kustomize/v5@v5.3.0
go: downloading sigs.k8s.io/kustomize/kustomize/v5 v5.3.0
go: downloading sigs.k8s.io/kustomize/api v0.16.0
go: downloading sigs.k8s.io/kustomize/cmd/config v0.13.0
go: downloading sigs.k8s.io/kustomize/kyaml v0.16.0
go: downloading github.com/go-errors/errors v1.4.2
go: downloading golang.org/x/exp v0.0.0-20231006140011-7918f672742d
go: downloading golang.org/x/text v0.13.0
go: downloading k8s.io/kube-openapi v0.0.0-20230601164746-7562a1006961
go: downloading github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00
go: downloading github.com/xlab/treeprint v1.2.0
go: downloading github.com/imdario/mergo v0.3.13
go: downloading gopkg.in/evanphx/json-patch.v5 v5.6.0
go: downloading google.golang.org/protobuf v1.30.0
go: downloading go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5
go: downloading github.com/google/go-cmp v0.5.9
go: downloading github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
/root/zgy/project/guestbook/bin/kustomize-v5.3.0 build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/guestbooks.webapp.my.domain created

Note the last line of output: customresourcedefinition.apiextensions.k8s.io/guestbooks.webapp.my.domain created. The CRD has been created.

Verify the result

[root@master guestbook]# kubectl get crds
NAME                          CREATED AT
guestbooks.webapp.my.domain   2024-03-02T15:23:02Z

Run the Controller in the foreground

To quickly see the effect of the Controller you wrote and verify your code, you can first run it in the foreground:

make run

With this approach the terminal stays attached to the Controller's console; to do anything else, open another terminal.
The output is as follows:

[root@master guestbook]# make run
/root/zgy/project/guestbook/bin/controller-gen-v0.14.0 rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/root/zgy/project/guestbook/bin/controller-gen-v0.14.0 object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
api/v1/guestbook_types.go
go vet ./...
go run ./cmd/main.go
2024-03-02T23:42:47+08:00 INFO setup starting manager
2024-03-02T23:42:47+08:00 INFO controller-runtime.metrics Starting metrics server
2024-03-02T23:42:47+08:00 INFO controller-runtime.metrics Serving metrics server {"bindAddress": ":8080", "secure": false}
2024-03-02T23:42:47+08:00 INFO starting server {"kind": "health probe", "addr": "[::]:8081"}
2024-03-02T23:42:47+08:00 INFO Starting EventSource {"controller": "guestbook", "controllerGroup": "webapp.my.domain", "controllerKind": "Guestbook", "source": "kind source: *v1.Guestbook"}
2024-03-02T23:42:47+08:00 INFO Starting Controller {"controller": "guestbook", "controllerGroup": "webapp.my.domain", "controllerKind": "Guestbook"}
2024-03-02T23:42:47+08:00 INFO Starting workers {"controller": "guestbook", "controllerGroup": "webapp.my.domain", "controllerKind": "Guestbook", "worker count": 1}

The Controller for the webapp/v1/guestbook resource is now running; let's create a resource to verify it.

Create a CR to verify the Controller

Note: do not close the terminal running the Controller.
Open another terminal and edit config/samples/webapp_v1_guestbook.yaml:

apiVersion: webapp.my.domain/v1
kind: Guestbook
metadata:
  labels:
    app.kubernetes.io/name: guestbook
    app.kubernetes.io/instance: guestbook-sample
    app.kubernetes.io/part-of: guestbook
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: guestbook
  name: guestbook-sample
spec:
  size: 1
  configMapName: cm-test
  alias: Phone
Apply the file:
[root@master samples]# kubectl apply -f config/samples/webapp_v1_guestbook.yaml
guestbook.webapp.my.domain/guestbook-sample created
Check the resource:
[root@master samples]# kubectl get guestbook
NAME               AGE
guestbook-sample   7m5s

Because we have not modified the Controller's reconcile logic, nothing is printed in the terminal running the Controller. Once real business logic is added, it will print whatever the requirements call for.

Package the Controller project as an image and push it

The build-and-push command

Above we ran the Controller locally against the cluster, which is typical during development for verification and debugging.
Once the Controller is finished, we usually want to use it in other clusters. At that point we can package the project as an image and push it to an image registry.
Command format:

# Command format
make docker-build docker-push IMG=<some-registry>/<project-name>:tag

A real run:

  • Here we package the guestbook project as an image and push it to my Docker Hub registry
  • I created the repository gesang321/guestbook on Docker Hub beforehand
  • Docker Hub: https://hub.docker.com/explore
    [root@master guestbook]# make docker-build docker-push IMG=gesang321/guestbook:v1
    docker build --network host -t gesang321/guestbook:v1 .
    Sending build context to Docker daemon 132.6kB
    Step 1/18 : FROM golang:1.21 AS builder
    ---> 603d8d7f7de0
    Step 2/18 : ARG TARGETOS
    ---> Using cache
    ---> 482316cd42d0
    Step 3/18 : ARG TARGETARCH
    ---> Using cache
    ---> be05abbe45e6
    Step 4/18 : ENV GO111MODULE=on
    ---> Using cache
    ---> aa0c0397e236
    Step 5/18 : ENV GOPROXY=https://goproxy.cn
    ---> Using cache
    ---> 9d532d2a3eda
    Step 6/18 : WORKDIR /workspace
    ---> Using cache
    ---> 7bb2c0d4524b
    Step 7/18 : COPY go.mod go.mod
    ---> Using cache
    ---> 65d463ab60d2
    Step 8/18 : COPY go.sum go.sum
    ---> Using cache
    ---> 97a850a41214
    Step 9/18 : RUN go mod download
    ---> Using cache
    ---> 962faccef7ba
    Step 10/18 : COPY cmd/main.go cmd/main.go
    ---> Using cache
    ---> a91aaa2fe247
    Step 11/18 : COPY api/ api/
    ---> Using cache
    ---> d52a03f7523e
    Step 12/18 : COPY internal/controller/ internal/controller/
    ---> Using cache
    ---> 2daac03cea36
    Step 13/18 : RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager cmd/main.go
    ---> Using cache
    ---> 0e03838a9253
    Step 14/18 : FROM gcr.io/distroless/static:nonroot
    ---> 51a1a0f285f9
    Step 15/18 : WORKDIR /
    ---> Running in 697a5679fbb1
    Removing intermediate container 697a5679fbb1
    ---> 4e153340a2ca
    Step 16/18 : COPY --from=builder /workspace/manager .
    ---> c08825e79d7e
    Step 17/18 : USER 65532:65532
    ---> Running in 29e184d23ba2
    Removing intermediate container 29e184d23ba2
    ---> d202f2ddf6c0
    Step 18/18 : ENTRYPOINT ["/manager"]
    ---> Running in 2a0ac6725a8c
    Removing intermediate container 2a0ac6725a8c
    ---> 7bf4b9ede810
    Successfully built 7bf4b9ede810
    Successfully tagged gesang321/guestbook:v1
    docker push gesang321/guestbook:v1
    The push refers to repository [docker.io/gesang321/guestbook]
    31ac0a0f7a4f: Pushed
    be245a236de8: Mounted from katanomi/distroless-static
    00af80914b10: Mounted from katanomi/distroless-static
    67c1a9e72017: Mounted from katanomi/distroless-static
    v1: digest: sha256:47331b3faf3aee3619454ab6c21a98d1a441c8561613f247a7f52889a769def9 size: 1156

Once the push succeeds, the pushed image is visible on Docker Hub.

Problems you may hit while running the command

1. Stuck at: RUN go mod download

Problem description

  • The build may simply hang

  • Or, after a long time, it may fail with: go: github.com/beorn7/perks@v1.0.1: Get "https://proxy.golang.org/github.com/beorn7/perks/@v/v1.0.1.mod": dial tcp 142.250.72.145:443: connect: connection refused

    [root@master guestbook]# make docker-build docker-push IMG=gesang321/guestbook:v1
    docker build -t gesang321/guestbook:v1 .
    Sending build context to Docker daemon 132.6kB
    Step 1/16 : FROM golang:1.21 AS builder
    ---> 603d8d7f7de0
    Step 2/16 : ARG TARGETOS
    ---> Using cache
    ---> 482316cd42d0
    Step 3/16 : ARG TARGETARCH
    ---> Using cache
    ---> be05abbe45e6
    Step 4/16 : WORKDIR /workspace
    ---> Using cache
    ---> d9122045a4a3
    Step 5/16 : COPY go.mod go.mod
    ---> Using cache
    ---> b9d3a8104258
    Step 6/16 : COPY go.sum go.sum
    ---> Using cache
    ---> 5464452ac3dc
    Step 7/16 : RUN go mod download
    ---> Running in 29475cc7a5bd
  • Solution

    • cd into the guestbook directory, edit the Dockerfile, and add these two lines:
      # Build the manager binary
      FROM golang:1.21 AS builder
      ARG TARGETOS
      ARG TARGETARCH

      # These are the two added lines: configure a China-local Go module proxy
      ENV GO111MODULE=on
      ENV GOPROXY=https://goproxy.cn

      WORKDIR /workspace
      # Copy the Go Modules manifests
      COPY go.mod go.mod
      COPY go.sum go.sum
      ......
  • Then edit the Makefile and add --network host to the docker-build target, so the build uses the host network and can reach the internet

    # If you wish to build the manager image targeting other platforms you can use the --platform flag.
    # (i.e. docker build --platform linux/arm64). However, you must enable docker buildKit for it.
    # More info: https://docs.docker.com/develop/develop-images/build_enhancements/
    .PHONY: docker-build
    docker-build: ## Build docker image with the manager.
    $(CONTAINER_TOOL) build --network host -t ${IMG} .

Then rerun make docker-build docker-push IMG=gesang321/guestbook:v1

2. Stuck at RUN CGO_ENABLED=0…

  • I first hit the problem in 3.6.2.1; after fixing it, the build got stuck on this step
    Step 13/18 : RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager cmd/main.go
    ---> Using cache
    ---> 0e03838a9253

After a long wait this step succeeded too, so be patient; given enough time it completes.

3. FROM gcr.io/distroless/static:nonroot fails

  • gcr.io is Google's image registry and is not reachable from mainland China. If your machine cannot get around the block, this step fails: the registry cannot be contacted, so the image cannot be pulled
  • Workaround:
    • Someone in China downloaded gcr.io/distroless/static:nonroot, renamed it, and pushed it to Docker Hub
    • We only need to pull that Docker Hub image and tag it back to gcr.io/distroless/static:nonroot
    • When the Dockerfile runs, it finds gcr.io/distroless/static:nonroot locally, so no error occurs
      [root@master guestbook]# docker pull katanomi/distroless-static:nonroot
      [root@master guestbook]# docker tag katanomi/distroless-static:nonroot gcr.io/distroless/static:nonroot
      [root@master guestbook]# docker images | grep gcr
      gcr.io/distroless/static   nonroot   51a1a0f285f9   19 months ago   2.97MB

Then rerun make docker-build docker-push IMG=gesang321/guestbook:v1

Deploy the Controller to a Kubernetes cluster

The deploy command

With the image built and pushed, you can now deploy to any Kubernetes cluster that has pull access to the registry:

# Command format
make deploy IMG=<some-registry>/<project-name>:tag

A real run:

[root@master guestbook]# make deploy IMG=gesang321/guestbook:v1
/root/zgy/project/guestbook/bin/controller-gen-v0.14.0 rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && /root/zgy/project/guestbook/bin/kustomize-v5.3.0 edit set image controller=gesang321/guestbook:v1
/root/zgy/project/guestbook/bin/kustomize-v5.3.0 build config/default | kubectl apply -f -
namespace/guestbook-system created
customresourcedefinition.apiextensions.k8s.io/guestbooks.webapp.my.domain unchanged
serviceaccount/guestbook-controller-manager created
role.rbac.authorization.k8s.io/guestbook-leader-election-role created
clusterrole.rbac.authorization.k8s.io/guestbook-manager-role created
clusterrole.rbac.authorization.k8s.io/guestbook-metrics-reader created
clusterrole.rbac.authorization.k8s.io/guestbook-proxy-role created
rolebinding.rbac.authorization.k8s.io/guestbook-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/guestbook-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/guestbook-proxy-rolebinding created
service/guestbook-controller-manager-metrics-service created
deployment.apps/guestbook-controller-manager created

make deploy automatically creates a namespace named guestbook-system and, inside it, a Deployment named guestbook-controller-manager.

[root@master guestbook]# kubectl get deploy -n guestbook-system
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
guestbook-controller-manager   1/1     1            0           26m

[root@master guestbook]# kubectl get pods -n guestbook-system
NAME                                           READY   STATUS    RESTARTS   AGE
guestbook-controller-manager-6475ff77d-kqsfv   2/2     Running   0          28m

Problems you may hit while running the command

1. gcr.io/kubebuilder/kube-rbac-proxy fails to pull

Problem description

  • The Pod created by the deploy is not Running, but in ImagePullBackOff
  • kubectl describe shows that pulling the gcr.io/kubebuilder/kube-rbac-proxy image failed
    [root@master guestbook]# kubectl describe pods -n guestbook-system guestbook-controller-manager-6475ff77d-kqsfv
    ......
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled <unknown> default-scheduler Successfully assigned guestbook-system/guestbook-controller-manager-6475ff77d-kqsfv to master
    Normal Pulled 2m15s kubelet, master Container image "gesang321/guestbook:v1" already present on machine
    Normal Created 2m15s kubelet, master Created container manager
    Normal Started 2m15s kubelet, master Started container manager
    Warning Failed 59s (x3 over 2m15s) kubelet, master Failed to pull image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    Warning Failed 59s (x3 over 2m15s) kubelet, master Error: ErrImagePull
    Normal BackOff 19s (x6 over 2m15s) kubelet, master Back-off pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0"
    Warning Failed 19s (x6 over 2m15s) kubelet, master Error: ImagePullBackOff
    Normal Pulling 5s (x4 over 2m31s) kubelet, master Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0"
      This, again, is caused by the gcr.io registry being unreachable.
    

Solution

  • Option 1
    • Before building the image in 3.6, edit config/default/manager_auth_proxy_patch.yaml and change image: gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0 to image: kubebuilder/kube-rbac-proxy:v0.15.0, a Docker Hub mirror that is reachable from China; then rebuild, push, and deploy
  • Option 2
    • As in 3.6.2.3, first pull kubebuilder/kube-rbac-proxy:v0.15.0, then retag it with the gcr.io name; make deploy will then find the image locally and treat it as the one it needs
      [root@master guestbook]# docker pull kubebuilder/kube-rbac-proxy:v0.15.0
      v0.15.0: Pulling from kubebuilder/kube-rbac-proxy
      07a64a71e011: Pull complete
      fe5ca62666f0: Pull complete
      b02a7525f878: Pull complete
      fcb6f6d2c998: Pull complete
      e8c73c638ae9: Pull complete
      1e3d9b7d1452: Pull complete
      4aa0ea1413d3: Pull complete
      7c881f9ab25e: Pull complete
      5627a970d25e: Pull complete
      c9c9ec7a3926: Pull complete
      Digest: sha256:a3768b8f9d259df714ebbf176798c380f4d929216e656dc30754eafa03a74c41
      Status: Downloaded newer image for kubebuilder/kube-rbac-proxy:v0.15.0
      [root@master guestbook]# docker tag kubebuilder/kube-rbac-proxy:v0.15.0 gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0
      Then rerun the make deploy command.

Remove the CRD from the cluster

If the installed CRD is no longer needed, it can be uninstalled with make uninstall. Note that the make uninstall target is provided by the Makefile in the guestbook directory, so the command must be run from that directory.
A real run:

[root@master guestbook]# make uninstall
/root/zgy/project/guestbook/bin/controller-gen-v0.14.0 rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/root/zgy/project/guestbook/bin/kustomize-v5.3.0 build config/crd | kubectl delete --ignore-not-found=false -f -
customresourcedefinition.apiextensions.k8s.io "guestbooks.webapp.my.domain" deleted

[root@master guestbook]# kubectl get crds
No resources found in default namespace.

Note that this command also deletes every CR of this type that was created in the cluster:

[root@master project]# kubectl get guestbook
Error from server (NotFound): Unable to list "webapp.my.domain/v1, Resource=guestbooks": the server could not find the requested resource (get guestbooks.webapp.my.domain)

Remove the deployed Controller from the cluster

If neither the CRD nor its Controller is needed anymore, use make undeploy to uninstall the CRD and delete the Controller's Deployment. Note that the make undeploy target is provided by the Makefile in the guestbook directory, so the command must be run from that directory.
A real run:

[root@master guestbook]# make undeploy
/root/zgy/project/guestbook/bin/kustomize-v5.3.0 build config/default | kubectl delete --ignore-not-found=false -f -
namespace "guestbook-system" deleted
customresourcedefinition.apiextensions.k8s.io "guestbooks.webapp.my.domain" deleted
serviceaccount "guestbook-controller-manager" deleted
role.rbac.authorization.k8s.io "guestbook-leader-election-role" deleted
clusterrole.rbac.authorization.k8s.io "guestbook-manager-role" deleted
clusterrole.rbac.authorization.k8s.io "guestbook-metrics-reader" deleted
clusterrole.rbac.authorization.k8s.io "guestbook-proxy-role" deleted
rolebinding.rbac.authorization.k8s.io "guestbook-leader-election-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "guestbook-manager-rolebinding" deleted
clusterrolebinding.rbac.authorization.k8s.io "guestbook-proxy-rolebinding" deleted
service "guestbook-controller-manager-metrics-service" deleted
deployment.apps "guestbook-controller-manager" deleted

[root@master guestbook]# kubectl get crds
No resources found in default namespace.

[root@master guestbook]# kubectl get deploy -A
NAMESPACE     NAME      READY   UP-TO-DATE   AVAILABLE   AGE
default       nginx     1/1     1            1           27d
kube-system   coredns   2/2     2            2           27d

This command also deletes every CR of this type that was created in the cluster:

[root@master project]# kubectl get guestbook
Error from server (NotFound): Unable to list "webapp.my.domain/v1, Resource=guestbooks": the server could not find the requested resource (get guestbooks.webapp.my.domain)