Problem connecting a cluster to Rancher

Hello.
The command:
# curl --insecure -sfL https://172.16.102.164:8443/v3/import/qftm9qx7k8h99d46xvhx2d5qnsdt597rf7pzpmx6vbxlbmfw5pc8r6_c-m-mgz7rtjv.yaml | kubectl apply -f -
returns the message:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I tried installing kubectl and helm on the masters in the versions given in the installation guide, but that did not help.
In the list of listening ports there is no process bound to port 8080:

# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      1408/etcd           
tcp        0      0 127.0.0.1:2382          0.0.0.0:*               LISTEN      1408/etcd           
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      1408/etcd           
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      1408/etcd           
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      978/kubelet         
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      6156/kube-proxy     
tcp        0      0 127.0.0.1:10258         0.0.0.0:*               LISTEN      3432/cloud-controll 
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      1480/kube-scheduler 
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      6156/kube-proxy     
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      3927/kube-controlle 
tcp        0      0 127.0.0.1:9099          0.0.0.0:*               LISTEN      3047/calico-node    
tcp        0      0 172.16.102.111:2379     0.0.0.0:*               LISTEN      1408/etcd           
tcp        0      0 172.16.102.111:2380     0.0.0.0:*               LISTEN      1408/etcd           
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/init              
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      845/sshd: /usr/sbin 
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      784/systemd-resolve 
tcp        0      0 127.0.0.1:10010         0.0.0.0:*               LISTEN      966/containerd      
tcp        0      0 172.16.102.111:9345     0.0.0.0:*               LISTEN      858/rke2 server     
tcp6       0      0 :::9091                 :::*                    LISTEN      3047/calico-node    
tcp6       0      0 :::6443                 :::*                    LISTEN      1639/kube-apiserver 
tcp6       0      0 :::111                  :::*                    LISTEN      1/init              
tcp6       0      0 :::10250                :::*                    LISTEN      978/kubelet         
tcp6       0      0 :::10260                :::*                    LISTEN      3432/cloud-controll 
tcp6       0      0 :::22                   :::*                    LISTEN      845/sshd: /usr/sbin 
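
Side note: kubectl only falls back to localhost:8080 when it cannot find a kubeconfig, so the kube-apiserver listening on 6443 above is simply not being used. A minimal check, assuming a default RKE2 install where the admin kubeconfig is written to /etc/rancher/rke2/rke2.yaml:

export KUBECONFIG=/etc/rancher/rke2/rke2.yaml   # RKE2 default admin kubeconfig, readable by root
kubectl get nodes                               # should now reach :6443 instead of refusing localhost:8080

With that variable set, the curl | kubectl apply pipeline above should talk to the real API server.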

Hi,

I have the same error. Did you manage to find the cause?

Unfortunately, not yet.

Does kubectl get nodes return a result? Run the curl command with sudo.

$ kubectl get nodes
NAME         STATUS   ROLES                       AGE   VERSION
ezdrp-m-01   Ready    control-plane,etcd,master   58m   v1.28.3+rke2r2
ezdrp-m-02   Ready    control-plane,etcd,master   28m   v1.28.3+rke2r2
ezdrp-m-03   Ready    control-plane,etcd,master   21m   v1.28.3+rke2r2

Unfortunately, when I run the command with sudo it complains that kubectlapply is missing:

curl --insecure -sfL https://XXX.XXX.XXX.XXX:8443/v3/import/gqnwtcdnpp9kdvj74rzwvknqwzd2k9rv4k4wdhbv2x5vr66jk6vpln_c-m-xvcs2xhz.yaml |kubectlapply --validate=false -f -

bash: kubectlapply: command not found
Before running the command directly as root, I also had to set the KUBECONFIG variable first.

kubectl and apply got stuck together on me…

The command with sudo gives this result:
$ sudo curl --insecure -sfL https://172.16.102.164:8443/v3/import/gqnwtcdnpp9kdvj74rzwvknqwzd2k9rv4k4wdhbv2x5vr66jk6vpln_c-m-xvcs2xhz.yaml | kubectl apply --validate=false -f -
[sudo] password for ezd:
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver unchanged
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master unchanged
namespace/cattle-system unchanged
serviceaccount/cattle unchanged
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding unchanged
secret/cattle-credentials-74fc466 unchanged
clusterrole.rbac.authorization.k8s.io/cattle-admin unchanged
deployment.apps/cattle-cluster-agent configured
service/cattle-cluster-agent unchanged
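
Since the apply itself goes through, it may be worth checking whether the reconfigured deployment actually rolls out, e.g. (a sketch):

kubectl -n cattle-system rollout status deploy/cattle-cluster-agent --timeout=120s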

And what output does the curl below give?

curl --insecure -sfL https://172.16.102.164:8443/v3/import/gqnwtcdnpp9kdvj74rzwvknqwzd2k9rv4k4wdhbv2x5vr66jk6vpln_c-m-xvcs2xhz.yaml


---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: proxy-clusterrole-kubeapiserver
rules:
- apiGroups: [""]
  resources:
  - nodes/metrics
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  verbs: ["get", "list", "watch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: proxy-role-binding-kubernetes-master
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: proxy-clusterrole-kubeapiserver
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
---
apiVersion: v1
kind: Namespace
metadata:
  name: cattle-system

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cattle
  namespace: cattle-system

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cattle-admin-binding
  namespace: cattle-system
  labels:
    cattle.io/creator: "norman"
subjects:
- kind: ServiceAccount
  name: cattle
  namespace: cattle-system
roleRef:
  kind: ClusterRole
  name: cattle-admin
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: v1
kind: Secret
metadata:
  name: cattle-credentials-74fc466
  namespace: cattle-system
type: Opaque
data:
  url: "aHR0cHM6Ly8xNzIuMTYuMTAyLjE2NDo4NDQz"
  token: "Z3Fud3RjZG5wcDlrZHZqNzRyend2a25xd3pkMms5cnY0azR3ZGhidjJ4NXZyNjZqazZ2cGxu"
  namespace: ""

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cattle-admin
  labels:
    cattle.io/creator: "norman"
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cattle-cluster-agent
  namespace: cattle-system
  annotations:
    management.cattle.io/scale-available: "2"
spec:
  selector:
    matchLabels:
      app: cattle-cluster-agent
  template:
    metadata:
      labels:
        app: cattle-cluster-agent
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: node-role.kubernetes.io/controlplane
                operator: In
                values:
                - "true"
            weight: 100
          - preference:
              matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: In
                values:
                - "true"
            weight: 100
          - preference:
              matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: In
                values:
                - "true"
            weight: 100
          - preference:
              matchExpressions:
              - key: cattle.io/cluster-agent
                operator: In
                values:
                - "true"
            weight: 1
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: NotIn
                values:
                - windows
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - cattle-cluster-agent
              topologyKey: kubernetes.io/hostname
            weight: 100
      serviceAccountName: cattle
      tolerations:
      # No taints or no controlplane nodes found, added defaults
      - effect: NoSchedule
        key: node-role.kubernetes.io/controlplane
        value: "true"
      - effect: NoSchedule
        key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
      - effect: NoSchedule
        key: "node-role.kubernetes.io/master"
        operator: "Exists"
      containers:
      - name: cluster-register
        imagePullPolicy: IfNotPresent
        env:
        - name: CATTLE_IS_RKE
          value: "false"
        - name: CATTLE_SERVER
          value: "https://172.16.102.164:8443"
        - name: CATTLE_CA_CHECKSUM
          value: "fb5e5cd75cc997c93dd8ce3b323f573dda2195eaa49185479afeb217a140a0af"
        - name: CATTLE_CLUSTER
          value: "true"
        - name: CATTLE_K8S_MANAGED
          value: "true"
        - name: CATTLE_CLUSTER_REGISTRY
          value: ""
        - name: CATTLE_SERVER_VERSION
          value: v2.7.9
        - name: CATTLE_INSTALL_UUID
          value: 76b64437-4a01-4212-be53-b409a4aab342
        - name: CATTLE_INGRESS_IP_DOMAIN
          value: sslip.io
        image: rancher/rancher-agent:v2.7.9
        volumeMounts:
        - name: cattle-credentials
          mountPath: /cattle-credentials
          readOnly: true
      volumes:
      - name: cattle-credentials
        secret:
          secretName: cattle-credentials-74fc466
          defaultMode: 320
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1

---
apiVersion: v1
kind: Service
metadata:
  name: cattle-cluster-agent
  namespace: cattle-system
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 444
    protocol: TCP
    name: https-internal
  selector:
    app: cattle-cluster-agent

Were any IP changes made after Rancher was installed?
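
For reference, the Rancher URL the agent calls back to is base64-encoded in the cattle-credentials secret above, so it is easy to confirm which address is baked in:

echo 'aHR0cHM6Ly8xNzIuMTYuMTAyLjE2NDo4NDQz' | base64 -d   # prints https://172.16.102.164:8443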

Maybe try removing and re-creating the Rancher container:

docker stop rancher
docker rm rancher
docker run -d --restart=unless-stopped --name rancher -p 8081:80 -p 8443:443 --privileged rancher/rancher:v2.7.9

and then try importing the cluster again.
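
One caveat: docker rm on a container without a separate data volume throws away Rancher's state, including existing cluster registrations. A sketch of a gentler re-create, assuming a single-node Docker install where the container is named rancher (along the lines of Rancher's documented upgrade steps):

docker stop rancher
docker create --volumes-from rancher --name rancher-data rancher/rancher:v2.7.9   # keep /var/lib/rancher
docker rm rancher
docker run -d --restart=unless-stopped --name rancher --volumes-from rancher-data \
  -p 8081:80 -p 8443:443 --privileged rancher/rancher:v2.7.9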

No, it has been this IP from the start.

Are iptables and ufw disabled?

sudo curl --insecure -sfL https://172.16.102.164:8443/v3/import/hbr5vs8g4cdtfhdjzn7swnr2rppqr75nkz9zfkj8mzz4bljqnjp5m8_c-m-5bzx4plw.yaml | kubectl apply -f -
[sudo] password for ezd:
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver unchanged
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master unchanged
namespace/cattle-system unchanged
serviceaccount/cattle unchanged
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding unchanged
secret/cattle-credentials-26e91c8 created
clusterrole.rbac.authorization.k8s.io/cattle-admin unchanged
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/cattle-cluster-agent configured
service/cattle-cluster-agent unchanged

And in Rancher it is still pending…
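
It may help to check whether the agent pod actually starts and what it logs, e.g. (a sketch):

kubectl -n cattle-system get deploy,pods -o wide
kubectl -n cattle-system logs -l app=cattle-cluster-agent --tail=50   # the agent usually logs why it cannot reach CATTLE_SERVER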

Yes. ufw is not even installed.
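
Connectivity from a master to Rancher can also be checked directly; a sketch, assuming the /ping endpoint responds on this Rancher version:

curl -vk https://172.16.102.164:8443/ping   # expect HTTP 200 and the body "pong"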

Is the import being run as root, or as some other user?

I first ran it from the root account, and now via sudo.

The same message in both cases?

Yes, except that when I ran it as root I first had to set KUBECONFIG. Otherwise I got this message:

curl --insecure -sfL https://172.16.102.164:8443/v3/import/p256spp7pwnv6pd9l9zlksbx4srdf7qc9s2wlglzbvt49mzx9np8jl_c-m-4mgknxxj.yaml | kubectl apply -f -

error: error validating "STDIN": error validating data: failed to download openapi: Get "http://localhost:8080/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

And has step 3.8.2, "Creating a symlink to kubectl and its configuration in the home directory", been completed?
Is there a link in the .kube directory?
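
If the link is missing, something along these lines would recreate it (a sketch, assuming the RKE2 default kubeconfig path; the exact layout from step 3.8.2 of the guide may differ):

ls -l ~/.kube                                        # is there a config link at all?
mkdir -p ~/.kube
ln -sf /etc/rancher/rke2/rke2.yaml ~/.kube/config    # the file is root-owned, so kubectl may still need sudo
kubectl get nodes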