Hello,
I've been struggling for some time to get EZD RP running at our clinic; the installation of version nask-ezdrp-ha-19.4.15 fails. Could I ask for help with what might be causing it?
The installation stops on the error:
PersistentVolumeClaim is not bound: ezd/wpe-rest-storage
I'm installing according to the guide [Instrukcja instalacji aplikacji EZD RP – środowisko do 150 użytkowników – Podręcznik użytkownika systemu EZD RP](https://podrecznik.ezdrp.gov.pl/instrukcja-instalacji-aplikacji-ezd-rp-srodowisko-do-150-uzytkownikow/)
helm upgrade --history-max=5 --install=true --namespace=ezd --timeout=10m0s --values=/home/shell/helm/values-nask-ezdrp-ha-19.4.15.yaml --version=19.4.15 --wait=true ezdrpapp /home/shell/helm/nask-ezdrp-ha-19.4.15.tgz
Starting delete for “ezdrp” ServiceAccount
creating 1 resource(s)
Starting delete for “ezdrp-role” Role
creating 1 resource(s)
Starting delete for “ezdrp-rolebindng” RoleBinding
creating 1 resource(s)
Starting delete for “filerepohelm-cacert” Secret
Ignoring delete failure for “filerepohelm-cacert” /v1, Kind=Secret: secrets “filerepohelm-cacert” not found
creating 1 resource(s)
Starting delete for “filerepohelm-cacert” Secret
checking 56 resources for changes
Looks like there are no changes for Secret “sso-ezdrpservercert”
Looks like there are no changes for PersistentVolumeClaim “ezdrp-api-edoreczenia-storage”
Looks like there are no changes for PersistentVolumeClaim “ezdrp-api-epuap-storage”
Looks like there are no changes for PersistentVolumeClaim “ezdrp-api-kontakty-storage”
Looks like there are no changes for PersistentVolumeClaim “filerepo-api-storage”
Looks like there are no changes for PersistentVolumeClaim “sso-identityserver-storage”
Created a new PersistentVolumeClaim called “wpe-rest-storage” in ezd
Looks like there are no changes for Service “anonimizator-api”
Looks like there are no changes for Service “btm”
Looks like there are no changes for Service “cloudadmin”
Looks like there are no changes for Service “ezdrp-api”
Looks like there are no changes for Service “ezdrp-web”
Looks like there are no changes for Service “filerepo-api”
Looks like there are no changes for Service “ezdrp-forms”
Looks like there are no changes for Service “integrator-api”
Looks like there are no changes for Service “job-trigger”
Looks like there are no changes for Service “kuip-api”
Looks like there are no changes for Service “kuip-web”
Looks like there are no changes for Service “ocr-api”
Looks like there are no changes for Service “razor”
Looks like there are no changes for Service “sso-apigateway”
Looks like there are no changes for Service “sso-customexternalproviders”
Looks like there are no changes for Service “sso-identityserver”
Looks like there are no changes for Service “teryt”
Looks like there are no changes for Service “wpe-rest”
Patch Deployment “anonimizator-api” in namespace ezd
Patch Deployment “btm” in namespace ezd
Patch Deployment “cloudadmin” in namespace ezd
Patch Deployment “ezdrp-api” in namespace ezd
Patch Deployment “ezdrp-web” in namespace ezd
Patch Deployment “filerepo-api” in namespace ezd
Patch Deployment “ezdrp-forms” in namespace ezd
Patch Deployment “integrator-api” in namespace ezd
Patch Deployment “job-trigger” in namespace ezd
Patch Deployment “kuip-api” in namespace ezd
Looks like there are no changes for Deployment “kuip-web”
Patch Deployment “ocr-api” in namespace ezd
Patch Deployment “razor” in namespace ezd
Patch Deployment “sso-apigateway” in namespace ezd
Looks like there are no changes for Deployment “sso-customexternalproviders”
Looks like there are no changes for Deployment “sso-identityserver”
Patch Deployment “teryt” in namespace ezd
Created a new Deployment called “wpe-rest” in ezd
Looks like there are no changes for Ingress “anonimizator-api”
Looks like there are no changes for Ingress “ezdrp-api”
Looks like there are no changes for Ingress “ezdrp-web”
Looks like there are no changes for Ingress “filerepo-api”
Looks like there are no changes for Ingress “ezdrp-forms”
Looks like there are no changes for Ingress “integrator-api”
Looks like there are no changes for Ingress “kuip-api”
Looks like there are no changes for Ingress “kuip-web”
Looks like there are no changes for Ingress “ocr-api”
Looks like there are no changes for Ingress “sso-customexternalproviders”
Looks like there are no changes for Ingress “sso-identityserver”
Looks like there are no changes for Ingress “teryt”
Looks like there are no changes for Ingress “wpe-rest”
beginning wait for 56 resources with timeout of 10m0s
PersistentVolumeClaim is not bound: ezd/wpe-rest-storage
PersistentVolumeClaim is not bound: ezd/wpe-rest-storage
PersistentVolumeClaim is not bound: ezd/wpe-rest-storage
PersistentVolumeClaim is not bound: ezd/wpe-rest-storage
……
Error: UPGRADE FAILED: timed out waiting for the condition
Run kubectl -n describe pod wpe-rest-xxx; at the end of its output it should show something more about what it is having trouble with.
Also verify that the PersistentVolumeClaim actually got created, in Rancher → Storage → PersistentVolumeClaims.
root@jozezd:/home# kubectl -n describe pod wpe-rest-storage
Error: flags cannot be placed before plugin name: -n
wpe-rest-storage has been in the Pending state for 2.5 days
can anyone help? I'm running out of time at my institution to get this running
IMO that's a bit too little information to conclude anything definitively.
Try what was suggested above, the status of the wpe-rest-xxx pod (your command was incorrect; it should be:
# kubectl describe pod wpe-rest-xxx --namespace=<your namespace>
In place of xxx put what is in the pod's name; if you don't know what it is called, check in Rancher, or from the terminal:
# kubectl get pods --namespace=<your namespace> | grep wpe-rest
This may also be helpful:
# kubectl describe pvc --namespace=<your namespace>
The wpe-rest-storage section should contain the information you're after.
If you created the namespace according to the installation guide, it will simply be named ezd.
root@jozezd:/home/adminks# kubectl describe pod wpe-rest-6dbfdf96fb-vv7xd --namespace=ezd
Name: wpe-rest-6dbfdf96fb-vv7xd
Namespace: ezd
Priority: 0
Service Account: ezdrp
Node:
Labels: app=ezdrpapp-wpe-rest
environment=Development
pod-template-hash=6dbfdf96fb
role=backend
Annotations:
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/wpe-rest-6dbfdf96fb
Containers:
wpe-rest:
Image: hub.eadministracja.nask.pl/ezdrp/wpe-rest:19.4.15
Port:
Host Port:
Limits:
cpu: 2
memory: 8000Mi
Requests:
cpu: 200m
memory: 756Mi
Liveness: http-get http://:8080/http://wpe-rest/info delay=120s timeout=1s period=10s #success=1 #failure=4
Readiness: http-get http://:8080/http://wpe-rest/info delay=50s timeout=1s period=10s #success=1 #failure=2
Environment:
Mounts:
/data from wpe-rest-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mkpch (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
wpe-rest-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: wpe-rest-storage
ReadOnly: false
kube-api-access-mkpch:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Warning FailedScheduling 66s (x1327 over 4d14h) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling…
Name: wpe-rest-storage
Namespace: ezd
StorageClass: longhorn
Status: Pending
Volume:
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: ezdrpapp
meta.helm.sh/release-namespace: ezd
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: wpe-rest-6dbfdf96fb-vv7xd
Events:
Type Reason Age From Message
Warning ProvisioningFailed 3m31s (x26524 over 4d14h) persistentvolume-controller storageclass.storage.k8s.io “longhorn” not found
I'm no k8s expert, but I think this may be the key to solving the problem.
In the application settings, in the “persistent storage” section, did you change “longhorn” to “local-path” everywhere?
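For reference, the PVC the chart requests ends up looking roughly like the sketch below; it can only bind if `storageClassName` names a StorageClass that actually exists in the cluster, which is exactly what the ProvisioningFailed event is complaining about. Field values here are illustrative, not copied from the chart:

```yaml
# Sketch of the PVC the chart creates (values are illustrative, not from the chart).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wpe-rest-storage
  namespace: ezd
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path   # was "longhorn", which this cluster does not have
  resources:
    requests:
      storage: 1Gi               # assumed size; the chart defines the real value
```

You can list the StorageClasses the cluster actually offers with `kubectl get storageclass`.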
One of them must have slipped past me, thank you. I'll rerun the installation in a moment and we'll see whether it goes through.
next error
root@jozezd:/home/adminks# kubectl describe pod ezdrp-api-7c65d799c5-q9bps --namespace=ezd
Name: ezdrp-api-7c65d799c5-q9bps
Namespace: ezd
Priority: 0
Service Account: ezdrp
Node: jozezd/10.0.0.177
Start Time: Fri, 07 Jun 2024 09:06:47 +0000
Labels: app=ezdrpapp-ezdrp-api
environment=Development
pod-template-hash=7c65d799c5
role=backend
Annotations: cattle.io/timestamp: 2024-06-02T17:51:39Z
cni.projectcalico.org/containerID: 8452db28c61cb12a34802a45d531c35395f86612a053f4d0ed181fc5407de572
cni.projectcalico.org/podIP: 10.42.0.116/32
cni.projectcalico.org/podIPs: 10.42.0.116/32
Status: Running
IP: 10.42.0.116
IPs:
IP: 10.42.0.116
Controlled By: ReplicaSet/ezdrp-api-7c65d799c5
Init Containers:
waitfor:
Container ID: containerd://ebf6ca431de6509e8033e53b8fbe3662bbdae7edb3bc46a3b1772abd43c20fe6
Image: hub.eadministracja.nask.pl/nask/wait_for:01
Image ID: hub.eadministracja.nask.pl/nask/wait_for@sha256:cb71493d3b953afa4c3db18d51509365b6d99f9728c5eaa8d7fdd104e421924d
Port:
Host Port:
Command:
sh
-c
/docker-entrypoint.sh wait_for relationaldb:${relationalDbHost}:${relationalDbPort} redis:${redisHost}:${redisPort} redisAppend:${redisAppendHost}:${redisAppendPort} rabbit:${rabbitHost}:${rabbitPort} cloudadmin:${CLOUDADMIN_HOST}:${CLOUDADMIN_PORT} teryt:${TERYT_HOST}:${TERYT_PORT}
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 07 Jun 2024 09:06:49 +0000
Finished: Fri, 07 Jun 2024 09:06:49 +0000
Ready: True
Restart Count: 0
Environment:
relationalDbHost: database-postgresql
relationalDbPort: 5432
CLOUDADMIN_HOST: cloudadmin
CLOUDADMIN_PORT: 2000
TERYT_HOST: teryt
TERYT_PORT: 5000
redisPort: 6379
redisHost: database-redis-master
redisAppendPort: 6379
redisAppendHost: database-redisappend-master
rabbitPort: 5672
rabbitHost: database-rabbitmq
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knqsv (ro)
Containers:
ezdrp-api:
Container ID: containerd://f825676188a99a3dbd8375d4d87637c091ccc84e26c4eb4e48e46981a92c5e07
Image: hub.eadministracja.nask.pl/ezdrp/ezdrp-api:19.7.15
Image ID: hub.eadministracja.nask.pl/ezdrp/ezdrp-api@sha256:335cf5c0a5a0c7bceae01ccc51b54701b8aeec9589d7cff6eaf91e9bde629115
Port:
Host Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 139
Started: Fri, 07 Jun 2024 09:13:23 +0000
Finished: Fri, 07 Jun 2024 09:13:41 +0000
Ready: False
Restart Count: 6
Limits:
cpu: 6
memory: 4Gi
Requests:
cpu: 3
memory: 4Gi
Liveness: tcp-socket :5000 delay=40s timeout=1s period=10s #success=1 #failure=4
Readiness: tcp-socket :5000 delay=10s timeout=1s period=10s #success=1 #failure=2
Startup: tcp-socket :5000 delay=30s timeout=1s period=30s #success=1 #failure=600
Environment:
ICE_CLOUD_ADMIN_URL: http://cloudadmin:2000
ICE_CONFIG: ezdrpapi
REQUEST_STATISTIC: OFF
LONG_EXECUTION_STATISTIC: OFF
TEST_VERSION_NAME: Wersja Testowa
Ezdrp_Feature_Ai_Klasyfikacja: Disabled
Ezdrp_Feature_Ai_Metadane: Disabled
Ezdrp_Feature_Ai_Streszczenia: Disabled
Ezdrp_Feature_Bpmn: Disabled
Ezdrp_Feature_Ewd: Disabled
Ezdrp_Feature_Muas: Disabled
Ezdrp_Feature_NoweZadania: Disabled
Ezdrp_Feature_Ocr: Disabled
Ezdrp_Feature_PieczecElektroniczna: Disabled
Ezdrp_Feature_PrzydzielanieDostepow: Enabled
ice-logging-Serilog__Properties__SeqLogger: true
ice-logging-Serilog__Properties__ConsoleLogger: true
Ezdrp_Feature_CentrumStatystyk: Disabled
statistics-config-Instance__ApiKey:
statistics-config-Instance__InstallationId:
statistics-config-Workers__TaskRequest__DefaultInterval: 24:00:00
statistics-config-Workers__SendTaskResult__DefaultInterval: 24:00:00
Mounts:
/app/edoreczenia from ezdrp-api-edoreczenia (rw)
/app/epuap from ezdrp-api-epuap (rw)
/app/zewnetrznabazakontaktow from ezdrp-api-kontakty (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knqsv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
ezdrp-api-epuap:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ezdrp-api-epuap-storage
ReadOnly: false
ezdrp-api-edoreczenia:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ezdrp-api-edoreczenia-storage
ReadOnly: false
ezdrp-api-kontakty:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ezdrp-api-kontakty-storage
ReadOnly: false
kube-api-access-knqsv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 11m default-scheduler Successfully assigned ezd/ezdrp-api-7c65d799c5-q9bps to jozezd
Normal Pulled 11m kubelet Container image “hub.eadministracja.nask.pl/nask/wait_for:01” already present on machine
Normal Created 11m kubelet Created container waitfor
Normal Started 11m kubelet Started container waitfor
Normal Created 10m (x4 over 11m) kubelet Created container ezdrp-api
Normal Started 10m (x4 over 11m) kubelet Started container ezdrp-api
Normal Pulled 9m36s (x5 over 11m) kubelet Container image “hub.eadministracja.nask.pl/ezdrp/ezdrp-api:19.7.15” already present on machine
Warning BackOff 105s (x52 over 11m) kubelet Back-off restarting failed container ezdrp-api in pod ezdrp-api-7c65d799c5-q9bps_ezd(45c9bed3-425d-49bb-934c-5958f2060feb)
That's a different problem than the PVC now, and frankly it doesn't look good.
The internet says that exit code 139 means a segmentation fault, a memory allocation error, a hardware problem, or something serious.
Maybe you don't have enough resources for the application to start (a total shot in the dark).
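The 139 in the pod's Last State follows the usual shell convention of 128 + signal number, so 139 = 128 + 11 = SIGSEGV, i.e. the container's process died of a segmentation fault rather than exiting with an application-level error. A quick local demonstration:

```shell
# Exit code 139 = 128 + 11 (SIGSEGV): the process was killed by a
# segmentation fault signal, not by its own error handling.
sh -c 'kill -11 $$'   # child process sends SIGSEGV (signal 11) to itself
echo "exit code: $?"  # prints: exit code: 139
```

That only tells you *how* the process died, not why; `kubectl logs ezdrp-api-7c65d799c5-q9bps --previous --namespace=ezd` may show what it printed just before the crash.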
I changed the postgres configuration entries, i.e. I set its hostname to the server's address and added entries in the local DNS, and the log looks different now:
root@jozezd:/home/adminks# kubectl describe pod ezdrp-api-8597469458-mq54l --namespace=ezd
Name: ezdrp-api-8597469458-mq54l
Namespace: ezd
Priority: 0
Service Account: ezdrp
Node: jozezd/10.0.0.177
Start Time: Fri, 07 Jun 2024 09:37:00 +0000
Labels: app=ezdrpapp-ezdrp-api
environment=Development
pod-template-hash=8597469458
role=backend
Annotations: cattle.io/timestamp: 2024-06-02T17:51:39Z
cni.projectcalico.org/containerID: e0a5e692560e541e30b4d3d68baf46f35ab46a0e7642ce2b5984281319f5c932
cni.projectcalico.org/podIP: 10.42.0.136/32
cni.projectcalico.org/podIPs: 10.42.0.136/32
Status: Pending
IP: 10.42.0.136
IPs:
IP: 10.42.0.136
Controlled By: ReplicaSet/ezdrp-api-8597469458
Init Containers:
waitfor:
Container ID: containerd://e78a66f080c50adbe795557457f94521a120f474f7bb18911166b2109d6a7b32
Image: hub.eadministracja.nask.pl/nask/wait_for:01
Image ID: hub.eadministracja.nask.pl/nask/wait_for@sha256:cb71493d3b953afa4c3db18d51509365b6d99f9728c5eaa8d7fdd104e421924d
Port:
Host Port:
Command:
sh
-c
/docker-entrypoint.sh wait_for relationaldb:${relationalDbHost}:${relationalDbPort} redis:${redisHost}:${redisPort} redisAppend:${redisAppendHost}:${redisAppendPort} rabbit:${rabbitHost}:${rabbitPort} cloudadmin:${CLOUDADMIN_HOST}:${CLOUDADMIN_PORT} teryt:${TERYT_HOST}:${TERYT_PORT}
State: Running
Started: Fri, 07 Jun 2024 09:37:02 +0000
Ready: False
Restart Count: 0
Environment:
relationalDbHost: ezd.spzozjozefow.pl
relationalDbPort: 5432
CLOUDADMIN_HOST: cloudadmin
CLOUDADMIN_PORT: 2000
TERYT_HOST: teryt
TERYT_PORT: 5000
redisPort: 6379
redisHost: database-redis-master
redisAppendPort: 6379
redisAppendHost: database-redisappend-master
rabbitPort: 5672
rabbitHost: database-rabbitmq
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x7rmz (ro)
Containers:
ezdrp-api:
Container ID:
Image: hub.eadministracja.nask.pl/ezdrp/ezdrp-api:19.7.15
Image ID:
Port:
Host Port:
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 6
memory: 4Gi
Requests:
cpu: 3
memory: 4Gi
Liveness: tcp-socket :5000 delay=40s timeout=1s period=10s #success=1 #failure=4
Readiness: tcp-socket :5000 delay=10s timeout=1s period=10s #success=1 #failure=2
Startup: tcp-socket :5000 delay=30s timeout=1s period=30s #success=1 #failure=600
Environment:
ICE_CLOUD_ADMIN_URL: http://cloudadmin:2000
ICE_CONFIG: ezdrpapi
REQUEST_STATISTIC: OFF
LONG_EXECUTION_STATISTIC: OFF
TEST_VERSION_NAME: Wersja Testowa
Ezdrp_Feature_Ai_Klasyfikacja: Disabled
Ezdrp_Feature_Ai_Metadane: Disabled
Ezdrp_Feature_Ai_Streszczenia: Disabled
Ezdrp_Feature_Bpmn: Disabled
Ezdrp_Feature_Ewd: Disabled
Ezdrp_Feature_Muas: Disabled
Ezdrp_Feature_NoweZadania: Disabled
Ezdrp_Feature_Ocr: Disabled
Ezdrp_Feature_PieczecElektroniczna: Disabled
Ezdrp_Feature_PrzydzielanieDostepow: Enabled
ice-logging-Serilog__Properties__SeqLogger: true
ice-logging-Serilog__Properties__ConsoleLogger: true
Ezdrp_Feature_CentrumStatystyk: Disabled
statistics-config-Instance__ApiKey:
statistics-config-Instance__InstallationId:
statistics-config-Workers__TaskRequest__DefaultInterval: 24:00:00
statistics-config-Workers__SendTaskResult__DefaultInterval: 24:00:00
Mounts:
/app/edoreczenia from ezdrp-api-edoreczenia (rw)
/app/epuap from ezdrp-api-epuap (rw)
/app/zewnetrznabazakontaktow from ezdrp-api-kontakty (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x7rmz (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
ezdrp-api-epuap:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ezdrp-api-epuap-storage
ReadOnly: false
ezdrp-api-edoreczenia:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ezdrp-api-edoreczenia-storage
ReadOnly: false
ezdrp-api-kontakty:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: ezdrp-api-kontakty-storage
ReadOnly: false
kube-api-access-x7rmz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal Scheduled 11m default-scheduler Successfully assigned ezd/ezdrp-api-8597469458-mq54l to jozezd
Normal Pulled 11m kubelet Container image “hub.eadministracja.nask.pl/nask/wait_for:01” already present on machine
Normal Created 11m kubelet Created container waitfor
Normal Started 11m kubelet Started container waitfor
If anything, this looks to me like a step backward, because notice that the pod now stops during initialization at:
Normal Created 11m kubelet Created container waitfor
Normal Started 11m kubelet Started container waitfor
whereas in the first case initialization completed.
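The describe output shows waitfor still Running with Initialized: False, so the pod never gets past the dependency check: the init container blocks until every listed endpoint (relationaldb, redis, redisAppend, rabbit, cloudadmin, teryt) answers on TCP. Since relationalDbHost was just changed to ezd.spzozjozefow.pl, the likely suspect is that this host:port is not resolvable or reachable from inside the pod; `kubectl logs ezdrp-api-8597469458-mq54l -c waitfor --namespace=ezd` should show which endpoint it is still waiting for. In miniature, the kind of check waitfor performs looks like this (127.0.0.1:1 is a hypothetical endpoint that nothing listens on, so the probe fails):

```shell
# Probe a TCP endpoint the way a wait-for script does: try to open a
# connection within a timeout and report the result. Nothing listens on
# 127.0.0.1:1, so this prints "unreachable" -- the same situation the
# waitfor init container is stuck in if its database host is down.
if timeout 1 bash -c 'exec 3<>/dev/tcp/127.0.0.1/1' 2>/dev/null; then
  echo "reachable"
else
  echo "unreachable"   # prints: unreachable
fi
```

Running the same probe from a debug pod against ezd.spzozjozefow.pl:5432 would confirm whether the new hostname is the blocker.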