Model Security

The lifecycle of a model container starts from a user-created model-package.zip, which contains the prerequisites for building a model container that encapsulates an inference server, preprocessing code, model weights, and a class-label map.
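
As an illustration, a model-package.zip might be laid out as follows; the file and directory names here are assumptions for this sketch, not a platform requirement:

model-package.zip
├── server/         # inference server implementation (hypothetical layout)
├── preprocess.py   # preprocessing code (hypothetical name)
├── weights/        # model weights
└── labels.json     # class-label map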

Inference Network Policy

Model inference is performed within an auditable, restricted network security context that protects data from exfiltration by malicious code running in model containers.

During an inference task, the platform encodes and transmits data to an inference endpoint served by the model-container version specified in the task definition. Model containers are launched in a separate cluster namespace, allowing network policies to be applied explicitly to inference workloads.
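
For example, each model container could run in a namespace of its own, kept separate from the platform services in mdai-apps. A minimal sketch, assuming a hypothetical namespace name:

apiVersion: v1
kind: Namespace
metadata:
  name: model-inference  # hypothetical name for one model's inference namespace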

The network path for inference tasks is enforced using a Kubernetes NetworkPolicy applied to the model's namespace. A representative manifest is shown below; the metadata values are illustrative:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-inference-traffic  # illustrative name
  namespace: model-inference        # illustrative; the model's namespace
spec:
  podSelector: {}                   # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: mdai-apps
  egress: []                        # Egress is governed but no rules are allowed: all outbound traffic is denied

This means inference workloads may receive inbound network requests only from platform services in the mdai-apps namespace, and because the policy governs Egress with an empty rule list, all outbound traffic from model containers is denied.
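
As a quick sanity check, the policy can be applied and inspected with kubectl; the file name, policy name, and namespace below are illustrative:

# Apply the policy to the model's namespace
kubectl apply -n model-inference -f inference-networkpolicy.yaml

# Inspect the resulting policy and its ingress/egress rules
kubectl describe networkpolicy restrict-inference-traffic -n model-inference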