Kubebuilder Hello World
Abstract: building a first kubebuilder program from scratch.

Table of contents:
0. Environment & Introduction (0.1 Environment; 0.2 What is kubebuilder?)
1. Installing Kubebuilder (1.1 Prerequisites; 1.2 Installing kubebuilder & kustomize)
2. Project Initialization (2.1 Create and enter the project directory; 2.2 kubebuilder init)
3. Creating the API
4. Fixing the port conflict (skip if you don't have one)
5. Installing the CRD (5.1 Entering the cluster; 5.2 Installing the CRD into the cluster)
6. Creating a resource instance (6.1 Summary so far; 6.2 Creating an instance; 6.3 Editing the instance)
7. Deleting the instance & stopping the controller
8. Building the image & deploying (8.1 Building the image; 8.2 Deploying; 8.3 Inspecting the deployment & further explanation)
9. Uninstalling (9.1 uninstall; 9.2 undeploy)
10. References

0. Environment & Introduction
0.1 Environment
My machine: Mac, amd64 (arm users will need to adapt). Before installing Kubebuilder, first read the Quick Start section of the official kubebuilder docs.
0.2 What is kubebuilder?
Omitted for now; I will write a dedicated post analyzing it later.
1. Installing Kubebuilder
1.1 Prerequisites
You need a Go environment, docker, and minikube (kind works too) to create a cluster. Install whatever is missing, although by the time you are learning kubebuilder you presumably know kubernetes fairly well, so you most likely have all of these already.
Note: I use minikube as the cluster environment here.
1.2 Installing kubebuilder & kustomize
brew install kubebuilder
brew install kustomize

2. Project Initialization
2.1 Create and enter the project directory
mkdir Helo
cd Helo

2.2 kubebuilder init
kubebuilder init --domain xxx.domain --repo Helo

--domain xxx.domain is our project domain; --repo Helo is the repository path.
If initialization succeeds you will see the following output, and you can move on to the kubebuilder create api step:
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.14.1
Update dependencies:
$ go mod tidy
Next: define a resource with:
$ kubebuilder create api

Use the tree command to look at the current directory layout ⬇️
tree
.
├── Dockerfile
├── Makefile
├── PROJECT
├── README.md
├── config
│ ├── default
│ │ ├── kustomization.yaml
│ │ ├── manager_auth_proxy_patch.yaml
│ │ └── manager_config_patch.yaml
│ ├── manager
│ │ ├── kustomization.yaml
│ │ └── manager.yaml
│ ├── prometheus
│ │ ├── kustomization.yaml
│ │ └── monitor.yaml
│ └── rbac
│ ├── auth_proxy_client_clusterrole.yaml
│ ├── auth_proxy_role.yaml
│ ├── auth_proxy_role_binding.yaml
│ ├── auth_proxy_service.yaml
│ ├── kustomization.yaml
│ ├── leader_election_role.yaml
│ ├── leader_election_role_binding.yaml
│ ├── role_binding.yaml
│ └── service_account.yaml
├── go.mod
├── go.sum
├── hack
│ └── boilerplate.go.txt
└── main.go

7 directories, 24 files

3. Creating the API
Run the create api command on the command line; the names can be chosen freely:
An API Group is a collection of related API functionality. Each Group has one or more Versions (GV), and each GV contains N API types, called Kinds; the same Kind may differ between Versions.
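The Group/Version/Kind triple above is what ends up in a manifest's apiVersion and kind fields. A minimal stdlib sketch of that mapping (the gvk struct here is a stand-in I wrote for illustration; the real type in apimachinery is schema.GroupVersionKind):

```go
package main

import "fmt"

// gvk mirrors the Group/Version/Kind triple described above;
// apimachinery models it as schema.GroupVersionKind.
type gvk struct{ Group, Version, Kind string }

// apiVersion builds the string that appears in a manifest's apiVersion field.
func (g gvk) apiVersion() string { return g.Group + "/" + g.Version }

func main() {
	t := gvk{Group: "apps.xxx.domain", Version: "v1", Kind: "Test"}
	fmt.Println(t.apiVersion()) // apps.xxx.domain/v1
	fmt.Println(t.Kind)         // Test
}
```

This is exactly why the sample manifest later in this post starts with `apiVersion: apps.xxx.domain/v1` and `kind: Test`.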
kubebuilder create api --group apps --version v1 --kind Test

If creation succeeds you will see the output below ⬇️ which tells us to generate the manifests:
kubebuilder create api --group apps --version v1 --kind Test
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing kustomize manifests for you to edit...
Writing scaffold for you to edit...
api/v1/test_types.go
controllers/test_controller.go
Update dependencies:
$ go mod tidy
Running make:
$ make generate
mkdir -p /Users/levi/wrksp/Helo/bin
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
Next: implement your new API and generate the manifests (e.g. CRDs,CRs) with:
$ make manifests

Running tree again shows that the directory structure changed after creating the api. The notable additions are ⬇️
- api/v1: the Go type definitions for the new API
- config/crd: the CRD manifests
- config/samples: a sample yaml for a Test-type resource instance
.
├── Dockerfile
├── Makefile
├── PROJECT
├── README.md
├── api
│ └── v1
│ ├── groupversion_info.go
│ ├── test_types.go
│ └── zz_generated.deepcopy.go
├── bin
│ └── controller-gen
├── config
│ ├── crd
│ │ ├── kustomization.yaml
│ │ ├── kustomizeconfig.yaml
│ │ └── patches
│ │ ├── cainjection_in_tests.yaml
│ │ └── webhook_in_tests.yaml
│ ├── default
│ │ ├── kustomization.yaml
│ │ ├── manager_auth_proxy_patch.yaml
│ │ └── manager_config_patch.yaml
│ ├── manager
│ │ ├── kustomization.yaml
│ │ └── manager.yaml
│ ├── prometheus
│ │ ├── kustomization.yaml
│ │ └── monitor.yaml
│ ├── rbac
│ │ ├── auth_proxy_client_clusterrole.yaml
│ │ ├── auth_proxy_role.yaml
│ │ ├── auth_proxy_role_binding.yaml
│ │ ├── auth_proxy_service.yaml
│ │ ├── kustomization.yaml
│ │ ├── leader_election_role.yaml
│ │ ├── leader_election_role_binding.yaml
│ │ ├── role_binding.yaml
│ │ ├── service_account.yaml
│ │ ├── test_editor_role.yaml
│ │ └── test_viewer_role.yaml
│ └── samples
│ └── apps_v1_test.yaml
├── controllers
│ ├── suite_test.go
│ └── test_controller.go
├── go.mod
├── go.sum
├── hack
│ └── boilerplate.go.txt
└── main.go

14 directories, 37 files

4. Fixing the port conflict (skip if you don't have one)
First run go run ./main.go to check for a port conflict. In my case the error is: error listening on :8080: listen tcp :8080: bind: address already in use
go run ./main.go
2023-04-17T10:22:01+08:00 INFO controller-runtime.metrics Metrics server is starting to listen {"addr": ":8080"}
2023-04-17T10:22:01+08:00 ERROR controller-runtime.metrics metrics server failed to listen. You may want to disable the metrics server or use another port if it is due to conflicts {"error": "error listening on :8080: listen tcp :8080: bind: address already in use"}
sigs.k8s.io/controller-runtime/pkg/metrics.NewListener
    /Users/levi/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.1/pkg/metrics/listener.go:48
sigs.k8s.io/controller-runtime/pkg/manager.New
    /Users/levi/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.14.1/pkg/manager/manager.go:407
main.main
    /Users/levi/wrksp/Helo/main.go:68
runtime.main
    /usr/local/Cellar/go/1.20.3/libexec/src/runtime/proc.go:250
2023-04-17T10:22:01+08:00 ERROR setup unable to start manager {"error": "error listening on :8080: listen tcp :8080: bind: address already in use"}
main.main
    /Users/levi/wrksp/Helo/main.go:88
runtime.main
    /usr/local/Cellar/go/1.20.3/libexec/src/runtime/proc.go:250
exit status 1

Use lsof to see what holds the port. The process shown as main is in fact another kubebuilder instance.
lsof -i:8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
main 4304 levi 7u IPv6 0xc681612e5637de5 0t0 TCP *:http-alt (LISTEN)

I don't want to kill the original process to free 8080, so instead I change the default 8080 in the code to 8280 (use lsof beforehand to pick any port that is not in use).
The places to change:
1) In main.go, change the default of metrics-bind-address from :8080 to :8280. The flag before the change:

flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")

2) In Helo/config/default/manager_auth_proxy_patch.yaml, the ports in --upstream and --metrics-bind-address likewise need to become 8280. The lines before the change:

- --upstream=http://127.0.0.1:8080/
- --metrics-bind-address=127.0.0.1:8080

If 8081 is also occupied, change it the same way; I replaced 8081 with 8281.
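Instead of trial-and-error with lsof, you can also check from Go whether an address is bindable before putting it into main.go. A minimal stdlib sketch (portFree is a hypothetical helper I wrote for illustration, not part of the project):

```go
package main

import (
	"fmt"
	"net"
)

// portFree reports whether the given TCP address can be bound right now,
// which is exactly what the manager tries to do for its metrics endpoint.
func portFree(addr string) bool {
	l, err := net.Listen("tcp", addr)
	if err != nil {
		return false
	}
	l.Close()
	return true
}

func main() {
	// ":0" asks the kernel for an arbitrary free port, so this prints true.
	fmt.Println(portFree(":0"))
	// Before editing main.go you could check a candidate, e.g. portFree(":8280").
}
```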
Once the port conflict is resolved, continue with the steps below.
5. Installing the CRD
5.1 Entering the cluster
Run the minikube update-context command:
# create the cluster
minikube start

# enter the cluster
minikube update-context

## expected output: "minikube" context has been updated to point to 127.0.0.1:50336. The current context is "minikube".

5.2 Installing the CRD into the cluster
# install the CRD into the cluster (you must be in the minikube context first)
make install
# run the controller
make run
# sample output:
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/Users/levi/wrksp/Helo/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go run ./main.go
2023-04-17T15:11:52+08:00 INFO controller-runtime.metrics Metrics server is starting to listen {"addr": ":8280"}
2023-04-17T15:11:52+08:00 INFO setup starting manager
2023-04-17T15:11:52+08:00 INFO Starting server {"path": "/metrics", "kind": "metrics", "addr": "[::]:8280"}
2023-04-17T15:11:52+08:00 INFO Starting server {"kind": "health probe", "addr": "[::]:8281"}
2023-04-17T15:11:52+08:00 INFO Starting EventSource {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test", "source": "kind source: *v1.Test"}
2023-04-17T15:11:52+08:00 INFO Starting Controller {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test"}
2023-04-17T15:11:52+08:00 INFO Starting workers {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test", "worker count": 1}
Note that the controller keeps running in the foreground; Ctrl+C will stop it.
6. Creating a resource instance
6.1 Summary so far
So far we have created a CRD for a custom resource of kind Test. Next we instantiate the custom resource, i.e. create an object of that kind in the cluster.
After the CRD is installed, kubebuilder has already generated a sample manifest for this type at ./config/samples/apps_v1_test.yaml.
Its content:
apiVersion: apps.xxx.domain/v1
kind: Test
metadata:
  labels:
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: test-sample
    app.kubernetes.io/part-of: helo
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: helo
  name: test-sample
spec:
  # TODO(user): Add fields here

6.2 Creating an instance
Open a new terminal, cd Helo, connect to the cluster (minikube update-context), and create the instance:
kubectl apply -f config/samples/
## output on success:
test.apps.xxx.domain/test-sample created

Check the test-sample instance of the custom Test type:
# list Test resource instances
kubectl get Test

## output:
NAME AGE
test-sample 15s

6.3 Editing the instance
kubectl edit Test test-sample

My edit appends the following at the bottom (mind the indentation):

spec:
  foo: bar

The result you should see:

test.apps.xxx.domain/test-sample edited

7. Deleting the instance & stopping the controller
Delete the instance:
kubectl delete -f config/samples/

## result:
test.apps.xxx.domain "test-sample" deleted

Stop the controller with Ctrl+C. You could uninstall the CRD with make uninstall, but there is no need to delete anything yet; we will reuse it in the experiments below.
8. Building the image & deploying
Note: this part mainly follows 《kubebuilder实战之二:初次体验kubebuilder》.
So far the controller has been running outside k8s; in production a controller usually runs inside the k8s cluster. All of the following commands are run in ./Helo/.
8.1 Building the image
# docker build and push: 2513686675 is my docker account, levitest is the image name, 001 is the tag (a version number; something like v0.0.1 is recommended)
make docker-build docker-push IMG=2513686675/levitest:001

## output:
make docker-build docker-push IMG=2513686675/levitest:001
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/Users/levi/wrksp/Helo/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
test -s /Users/levi/wrksp/Helo/bin/setup-envtest || GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
KUBEBUILDER_ASSETS="/Users/levi/wrksp/Helo/bin/k8s/1.26.0-darwin-amd64" go test ./... -coverprofile cover.out
? Helo [no test files]
? Helo/api/v1 [no test files]
ok Helo/controllers 1.568s coverage: 0.0% of statements
docker build -t 2513686675/levitest:001 .
[+] Building 2.6s (18/18) FINISHED
 => [internal] load build definition from Dockerfile                0.0s
 => => transferring dockerfile: 37B                                 0.0s
 => [internal] load .dockerignore                                   0.0s
 => => transferring context: 4.92kB                                 0.0s
 => [internal] load metadata for gcr.io/distroless/static:nonroot   1.1s
 => [internal] load metadata for docker.io/library/golang:1.19      2.5s
 => [auth] library/golang:pull token for registry-1.docker.io       0.0s
 => [internal] load build context                                   0.0s
 => => transferring context: 3.57kB                                 0.0s
 => [builder 1/9] FROM docker.io/library/golang:1.19@sha256:9f2dd04486e84eec72d945b077d568976981d9afed8b4e2aeb08f7ab739292b3  0.0s
 => [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:149531e38c7e4554d4a6725d7d70593ef9f9881358809463800669ac89f3b0ec  0.0s
 => CACHED [builder 2/9] WORKDIR /workspace                         0.0s
 => CACHED [builder 3/9] COPY go.mod go.mod                         0.0s
 => CACHED [builder 4/9] COPY go.sum go.sum                         0.0s
 => CACHED [builder 5/9] RUN go mod download                        0.0s
 => CACHED [builder 6/9] COPY main.go main.go                       0.0s
 => CACHED [builder 7/9] COPY api/ api/                             0.0s
 => CACHED [builder 8/9] COPY controllers/ controllers/             0.0s
 => CACHED [builder 9/9] RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go  0.0s
 => CACHED [stage-1 2/3] COPY --from=builder /workspace/manager .   0.0s
 => exporting to image                                              0.0s
 => => exporting layers                                             0.0s
 => => writing image sha256:f3aefc63f93dc5193e6e3e0c168b8c78ad2769e0a79ad018a3faaf34cb20198e  0.0s
 => => naming to docker.io/2513686675/levitest:001                  0.0s
docker push 2513686675/levitest:001
The push refers to repository [docker.io/2513686675/levitest]
da2a60d3ec35: Pushed
4cb10dd2545b: Pushed
d2d7ec0f6756: Pushed
1a73b54f556b: Pushed
e624a5370eca: Pushed
d52f02c6501c: Pushed
ff5700ec5418: Pushed
399826b51fcf: Pushed
6fbdf253bbc2: Pushed
d0157aa0c95a: Pushed
001: digest: sha256:efe1d1e17537b90f84cf005b95b2c3b7065d9f6dc4c5761c6d89103963b95507 size: 2402

8.2 Deploying
Deploy the controller image to kubernetes:

make deploy IMG=2513686675/levitest:001

Bug to solve: Unable to connect to the server: net/http: TLS handshake timeout

make deploy IMG=2513686675/levitest:001
## deploy output, ending in the error "Unable to connect to the server: net/http: TLS handshake timeout":
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
test -s /Users/levi/wrksp/Helo/bin/kustomize || { curl -Ss https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash -s -- 3.8.7 /Users/levi/wrksp/Helo/bin; }
cd config/manager && /Users/levi/wrksp/Helo/bin/kustomize edit set image controller=2513686675/levitest:001
/Users/levi/wrksp/Helo/bin/kustomize build config/default | kubectl apply -f -
Unable to connect to the server: net/http: TLS handshake timeout
make: *** [deploy] Error 1

The proxy is the likely culprit; unset it:

unset http_proxy
unset https_proxy

Output of a successful deploy:
make deploy IMG=2513686675/levitest:001
## the controller image deploys to k8s successfully:
test -s /Users/levi/wrksp/Helo/bin/controller-gen && /Users/levi/wrksp/Helo/bin/controller-gen --version | grep -q v0.11.1 || \
GOBIN=/Users/levi/wrksp/Helo/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.1
/Users/levi/wrksp/Helo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
test -s /Users/levi/wrksp/Helo/bin/kustomize || { curl -Ss https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash -s -- 3.8.7 /Users/levi/wrksp/Helo/bin; }
cd config/manager && /Users/levi/wrksp/Helo/bin/kustomize edit set image controller=2513686675/levitest:001
/Users/levi/wrksp/Helo/bin/kustomize build config/default | kubectl apply -f -
namespace/helo-system created
customresourcedefinition.apiextensions.k8s.io/tests.apps.xxx.domain unchanged
serviceaccount/helo-controller-manager created
role.rbac.authorization.k8s.io/helo-leader-election-role created
clusterrole.rbac.authorization.k8s.io/helo-manager-role created
clusterrole.rbac.authorization.k8s.io/helo-metrics-reader created
clusterrole.rbac.authorization.k8s.io/helo-proxy-role created
rolebinding.rbac.authorization.k8s.io/helo-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/helo-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/helo-proxy-rolebinding created
service/helo-controller-manager-metrics-service created
deployment.apps/helo-controller-manager created

8.3 Inspecting the deployment & further explanation
Looking at the deployment, the pod helo-controller-manager-655547d5dc-5rv59 has two containers (READY 2/2):
# list the pods
kubectl get pod --all-namespaces

## output:
NAMESPACE NAME READY STATUS RESTARTS AGE
default command-demo 0/1 Completed 0 6h13m
helo-system helo-controller-manager-655547d5dc-5rv59 2/2 Running 0 4m29s
kube-system coredns-787d4945fb-85skf 1/1 Running 4 (8h ago) 3d4h
kube-system etcd-minikube 1/1 Running 5 (8h ago) 3d4h
kube-system kube-apiserver-minikube 1/1 Running 5 3d4h
kube-system kube-controller-manager-minikube 1/1 Running 4 (8h ago) 3d4h
kube-system kube-proxy-v4gdc 1/1 Running 6 (8h ago) 3d4h
kube-system kube-scheduler-minikube 1/1 Running 6 (8h ago) 3d4h
kube-system storage-provisioner 1/1 Running 11 (8h ago) 3d4h
kubernetes-dashboard dashboard-metrics-scraper-5c6664855-w4bp2 1/1 Running 0 6h27m
kubernetes-dashboard kubernetes-dashboard-55c4cbbc7c-xg6t5 1/1 Running 0 6h27m

Use describe for the details; since the namespace is not default, it must be specified with -n. The Containers field shows two containers: kube-rbac-proxy and manager.
kubectl describe pod helo-controller-manager-655547d5dc-5rv59 -n helo-system

## result:
kubectl describe pod helo-controller-manager-655547d5dc-5rv59 -n helo-system
Name: helo-controller-manager-655547d5dc-5rv59
Namespace: helo-system
Priority: 0
Service Account: helo-controller-manager
Node: minikube/192.168.58.2
Start Time: Mon, 17 Apr 2023 19:54:30 +0800
Labels: control-plane=controller-manager
        pod-template-hash=655547d5dc
Annotations: kubectl.kubernetes.io/default-container: manager
Status: Running
IP: 10.244.0.10
IPs:
  IP: 10.244.0.10
Controlled By: ReplicaSet/helo-controller-manager-655547d5dc
Containers:
  kube-rbac-proxy:
    Container ID:  docker://a86bf7bbf28b6c56f7287915e8e9aacabe28e3cb94989998538963ea3229d8ab
    Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
    Image ID:      docker-pullable://gcr.io/kubebuilder/kube-rbac-proxy@sha256:d4883d7c622683b3319b5e6b3a7edfbf2594c18060131a8bf64504805f875522
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8280/
      --logtostderr=true
      --v=0
    State:          Running
      Started:      Mon, 17 Apr 2023 19:54:46 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:     5m
      memory:  64Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spmqg (ro)
  manager:
    Container ID:  docker://6c8bda780cfdbd91a1cf550bf9dc5c0c92bcb45cc12f65071c059590fbb0955d
    Image:         2513686675/levitest:001
    Image ID:      docker-pullable://2513686675/levitest@sha256:efe1d1e17537b90f84cf005b95b2c3b7065d9f6dc4c5761c6d89103963b95507
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --health-probe-bind-address=:8281
      --metrics-bind-address=127.0.0.1:8280
      --leader-elect
    State:          Running
      Started:      Mon, 17 Apr 2023 19:55:08 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     500m
      memory:  128Mi
    Requests:
      cpu:     10m
      memory:  64Mi
    Liveness:   http-get http://:8281/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8281/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spmqg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-spmqg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  16m   default-scheduler  Successfully assigned helo-system/helo-controller-manager-655547d5dc-5rv59 to minikube
  Normal  Pulling    16m   kubelet            Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1"
  Normal  Pulled     16m   kubelet            Successfully pulled image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1" in 14.99332536s (14.993403934s including waiting)
  Normal  Created    16m   kubelet            Created container kube-rbac-proxy
  Normal  Started    16m   kubelet            Started container kube-rbac-proxy
  Normal  Pulling    16m   kubelet            Pulling image "2513686675/levitest:001"
  Normal  Pulled     15m   kubelet            Successfully pulled image "2513686675/levitest:001" in 21.712144409s (21.71215601s including waiting)
  Normal  Created    15m   kubelet            Created container manager
  Normal  Started    15m   kubelet            Started container manager
Check the logs:
kubectl logs -f \
helo-controller-manager-655547d5dc-5rv59 \
-n helo-system \
-c manager

## output:
kubectl logs -f \
helo-controller-manager-655547d5dc-5rv59 \
-n helo-system \
-c manager
2023-04-17T11:55:08Z INFO controller-runtime.metrics Metrics server is starting to listen {"addr": "127.0.0.1:8280"}
2023-04-17T11:55:08Z INFO setup starting manager
2023-04-17T11:55:08Z INFO Starting server {"path": "/metrics", "kind": "metrics", "addr": "127.0.0.1:8280"}
2023-04-17T11:55:08Z INFO Starting server {"kind": "health probe", "addr": "[::]:8281"}
I0417 11:55:08.632800       1 leaderelection.go:248] attempting to acquire leader lease helo-system/9b9dc273.xxx.domain...
I0417 11:55:08.642969       1 leaderelection.go:258] successfully acquired lease helo-system/9b9dc273.xxx.domain
2023-04-17T11:55:08Z DEBUG events helo-controller-manager-655547d5dc-5rv59_41a3e337-336f-4613-9d0e-75aea8923543 became leader {"type": "Normal", "object": {"kind":"Lease","namespace":"helo-system","name":"9b9dc273.xxx.domain","uid":"0c373ec9-47ab-45b0-bc4b-dad462f8f174","apiVersion":"coordination.k8s.io/v1","resourceVersion":"38253"}, "reason": "LeaderElection"}
2023-04-17T11:55:08Z INFO Starting EventSource {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test", "source": "kind source: *v1.Test"}
2023-04-17T11:55:08Z INFO Starting Controller {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test"}
2023-04-17T11:55:08Z INFO Starting workers {"controller": "test", "controllerGroup": "apps.xxx.domain", "controllerKind": "Test", "worker count": 1}

9. Uninstalling
9.1 uninstall
Uninstall the CRD:
make uninstall

9.2 undeploy
Remove the deployment:
make undeploy

10. References
kubebuilder official docs: Quick Start
kubebuilder实战之一、之二 (kubebuilder in action, parts 1 and 2) — strongly recommended
使用kubebuilder开发operator详解 (a detailed guide to developing operators with kubebuilder)