hotel-reservation failed to create shim task #343

Open
yinfangchen opened this issue Jun 18, 2024 · 6 comments

@yinfangchen commented Jun 18, 2024

I got the following error in the events when deploying hotel-reservation:


Normal   Created  1s (x2 over 3s)  kubelet  Created container hotel-reserv-frontend
Warning  Failed   1s (x2 over 3s)  kubelet  Error: failed to start container "hotel-reserv-frontend": Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "./frontend": stat ./frontend: no such file or directory: unknown


This error appears in every service: frontend, geo, profile, rate, recommendation, reservation, search, user.
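
One quick way to see where the binaries actually live is to start the published image with a shell and look around. This is only a sketch: it assumes the image ships /bin/sh, and the /go/bin path is a guess to verify.

# Sketch: inspect the published image to find the service binaries.
# Assumes /bin/sh exists in the image; adjust the paths if they differ.
docker run --rm --entrypoint /bin/sh \
  docker.io/deathstarbench/hotel-reservation:latest \
  -c 'pwd; ls -l; ls -l /go/bin'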

@marvin-steinke

I'm currently experiencing the same issue. Steps to reproduce:

helm install hotelreservation DeathStarBench/hotelReservation/helm-chart/hotelreservation \
    --namespace hotelreservation \
    --create-namespace

kubectl describe pod -l app=frontend-hotelreservation -n hotelreservation

Pod Description
Name:             frontend-hotelreservation-d7c56744d-v568z
Namespace:        hotelreservation
Priority:         0
Service Account:  default
Node:             fedora/192.168.2.155
Start Time:       Thu, 18 Jul 2024 13:18:08 +0200
Labels:           app=frontend-hotelreservation
                  app.kubernetes.io/instance=hotelreservation
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=frontend
                  app.kubernetes.io/version=0.1.0
                  helm.sh/chart=frontend-0.1.0
                  pod-template-hash=d7c56744d
                  service=frontend-hotelreservation
Annotations:      <none>
Status:           Running
IP:               10.42.0.52
IPs:
  IP:           10.42.0.52
Controlled By:  ReplicaSet/frontend-hotelreservation-d7c56744d
Containers:
  hotel-reserv-frontend:
    Container ID:  containerd://b573b727e25d44d49da6acbc7eee959e2b738dfef81b854bc982e4fb946bdf29
    Image:         docker.io/deathstarbench/hotel-reservation:latest
    Image ID:      docker.io/deathstarbench/hotel-reservation@sha256:488d8980c81eeae337d089185f2dd2643dec589809e102c2abf65a4d6e9bb436
    Port:          5000/TCP
    Host Port:     0/TCP
    Command:
      ./frontend
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       StartError
      Message:      failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "./frontend": stat ./frontend: no such file or directory: unknown
      Exit Code:    128
      Started:      Thu, 01 Jan 1970 01:00:00 +0100
      Finished:     Thu, 18 Jul 2024 13:19:47 +0200
    Ready:          False
    Restart Count:  4
    Environment:
      GC:                   100
      JAEGER_SAMPLE_RATIO:  0.01
      LOG_LEVEL:            INFO
      MEMC_TIMEOUT:         2
      TLS:                  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tlvx9 (ro)
      config.json from frontend-hotelreservation-config (rw,path="service-config.json")
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  frontend-hotelreservation-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      frontend-hotelreservation
    Optional:  false
  kube-api-access-tlvx9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  2m55s                 default-scheduler  Successfully assigned hotelreservation/frontend-hotelreservation-d7c56744d-v568z to fedora
  Normal   Pulled     78s (x5 over 2m55s)   kubelet            Container image "docker.io/deathstarbench/hotel-reservation:latest" already present on machine
  Normal   Created    77s (x5 over 2m54s)   kubelet            Created container hotel-reserv-frontend
  Warning  Failed     77s (x5 over 2m54s)   kubelet            Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "./frontend": stat ./frontend: no such file or directory: unknown
  Warning  BackOff    63s (x10 over 2m51s)  kubelet            Back-off restarting failed container hotel-reserv-frontend in pod frontend-hotelreservation-d7c56744d-v568z_hotelreservation(df1ec86e-b77a-4f66-a1fc-6e09e72d00c6)

K8s Version:

Client Version: v1.29.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3+k3s1

@abmuslim

Access the frontend pod, run which frontend to find the executable path inside the pod, and then update the frontend deployment file with this path under command:. That's how I solved my issue.
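
For example, something like the following can locate the binary. It's a sketch only: because the pod is crash-looping, it starts a throwaway pod from the same image instead of exec'ing into the failing one, and it assumes which is available in the image.

# Sketch: run a one-off pod from the same image to find the binary path.
# The pod name is arbitrary; drop -n if you deployed to a different namespace.
kubectl run find-frontend --rm -it --restart=Never \
  -n hotelreservation \
  --image=docker.io/deathstarbench/hotel-reservation:latest \
  --command -- which frontend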

@KonstantinosChanioglou

Access the frontend pod, run which frontend to find the executable path inside the pod, and then update the frontend deployment file with this path under command:. That's how I solved my issue.

This solved my case as well; now all the pods are running! Thanks @abmuslim!

@SeeYouStellar

I got the same error. Have you solved it, @yinfangchen?

@SeeYouStellar

I got the same error. Have you solved it, @yinfangchen?

The container's working directory is /workspace, but the container startup command in the geo-deployment.yaml file is ./geo, which results in a "file not found" error. The Go-compiled executables are installed under GOPATH/bin, which in my case is /go/bin, so I changed ./geo to /go/bin/geo. Additionally, I noticed that the container.image in the cloned geo-deployment.yaml is not the service image built in the previous steps, so I changed it to the geo service image that was built earlier.
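
For reference, the relevant part of geo-deployment.yaml ends up looking roughly like this. This is only a sketch: the container name follows the hotel-reserv-<service> pattern from the pod description above, and the image value is a placeholder for whatever you built locally.

# geo-deployment.yaml (sketch): only the fields that changed are shown.
spec:
  template:
    spec:
      containers:
        - name: hotel-reserv-geo             # name assumed from the naming pattern above
          image: <your-registry>/geo:latest  # placeholder: the geo image built earlier
          command:
            - /go/bin/geo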

Service deployed successfully

@JacksonArthurClark commented Oct 30, 2024

I ran into a similar issue when deploying with the Helm charts, along with an additional problem where the configMaps weren't being mounted where the services expect them.

Here is an example from hotelReservation/helm-chart/hotelreservation/charts/user/values.yaml

Before:

name: user

ports:
  - port: 8086
    targetPort: 8086
 
container:
  command: ./user
  image: deathstarbench/hotel-reservation
  name: hotel-reserv-user
  ports:
  - containerPort: 8086

configMaps:
  - name: service-config.json
    mountPath: config.json
    value: service-config

After:

name: user

ports:
  - port: 8086
    targetPort: 8086
 
container:
  command: /go/bin/user
  image: deathstarbench/hotel-reservation
  name: hotel-reserv-user
  ports:
  - containerPort: 8086

configMaps:
  - name: service-config.json
    mountPath: /workspace/config.json
    value: service-config 

The default paths the charts use for command and for mountPath do not work for me. For anyone else running into this issue, be sure to update these values for every service that is failing.
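
If several services need the same change, something like the following applies it across the subcharts in one go. It's a sketch: it assumes the repository layout shown above, GNU sed, and that every binary lives at /go/bin/<service>.

# Sketch: apply the command/mountPath fix to every Go service subchart.
# Assumes the layout hotelReservation/helm-chart/hotelreservation/charts/<svc>/values.yaml.
for svc in frontend geo profile rate recommendation reservation search user; do
  f="hotelReservation/helm-chart/hotelreservation/charts/$svc/values.yaml"
  sed -i "s|command: ./$svc|command: /go/bin/$svc|" "$f"
  sed -i "s|mountPath: config.json|mountPath: /workspace/config.json|" "$f"
done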
