
Default Kueue Usage Example

This example demonstrates how to use Kueue with HAMi vGPU resources. It provides a complete manifest that sets up a ResourceFlavor, a ClusterQueue, a Namespace, a LocalQueue, and a sample Deployment that requests vGPU resources.

Before applying this example, ensure that HAMi and Kueue are installed and that Kueue is configured with resource transformations enabled (see How to use Kueue on HAMi).
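A quick sanity check of the prerequisites can look like the following. This sketch assumes `kubectl` access to the cluster and a default Kueue installation in the `kueue-system` namespace; adjust the namespace and ConfigMap name if your installation differs:

```shell
# Kueue controller should be Running
kubectl get pods -n kueue-system

# Look for a resource transformation entry (e.g. mapping
# nvidia.com/gpucores to nvidia.com/total-gpucores) in the
# Kueue configuration
kubectl get configmap kueue-manager-config -n kueue-system -o yaml | grep -A 5 transformations
```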

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: hami-flavor
spec:
  nodeLabels:
    gpu: "on"
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: hami-queue
spec:
  namespaceSelector: {}
  resourceGroups:
  - coveredResources:
    - nvidia.com/gpu
    - nvidia.com/total-gpucores
    - nvidia.com/total-gpumem
    flavors:
    - name: hami-flavor
      resources:
      - name: nvidia.com/gpu
        nominalQuota: 20
      - name: nvidia.com/total-gpucores
        nominalQuota: 600
      - name: nvidia.com/total-gpumem
        nominalQuota: 20480
---
apiVersion: v1
kind: Namespace
metadata:
  name: kueue-test
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: user-queue
  namespace: kueue-test
spec:
  clusterQueue: "hami-queue"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-burn
  namespace: kueue-test
  labels:
    kueue.x-k8s.io/queue-name: user-queue
spec:
  replicas: 1
  selector:
    matchLabels:
      app-name: gpu-burn
  template:
    metadata:
      labels:
        app-name: gpu-burn
    spec:
      containers:
      - args:
        - while :; do /app/gpu_burn 300 || true; sleep 300; done
        command:
        - /bin/sh
        - -lc
        image: oguzpastirmaci/gpu-burn:latest
        imagePullPolicy: IfNotPresent
        name: main
        resources:
          limits:
            nvidia.com/gpu: "2"       # requesting 2 vGPU instances
            nvidia.com/gpucores: "30" # 30% of a GPU's cores per vGPU
            nvidia.com/gpumem: "1024" # 1024 MiB per vGPU
```
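To try the example, save the manifests above to a file and apply them, then confirm that the queues are ready and that the workload is admitted. The filename below is only an assumption for illustration:

```shell
# Apply all manifests (hypothetical filename)
kubectl apply -f default-kueue-usage.yaml

# The ClusterQueue and LocalQueue should report pending/admitted workloads
kubectl get clusterqueue hami-queue
kubectl get localqueue -n kueue-test

# Kueue creates a Workload object for the queued pod; once quota is
# available it is admitted and the pod starts
kubectl get workloads -n kueue-test
kubectl get pods -n kueue-test
```

If the pod stays pending, describe the Workload object to see why admission is blocked (for example, the requested `nvidia.com/total-gpucores` exceeding the remaining nominalQuota).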