Cloud Experts Documentation

Enabling the AWS EFS CSI Driver Operator on ROSA

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration. This guide has been validated on OpenShift 4.21. Operator CRD names, API versions, and console paths may differ on other versions.

Amazon Elastic File System (Amazon EFS) provides shared Network File System (NFS) storage that can be used by workloads running on Red Hat OpenShift Service on AWS (ROSA).

This guide shows how to enable the Red Hat-supported AWS EFS CSI Driver Operator on a ROSA cluster, create an EFS file system, dynamically provision a ReadWriteMany persistent volume claim (PVC), and validate shared access from multiple pods.

The flow in this guide covers:

  • creating an IAM role and policy for the AWS EFS CSI Driver Operator
  • installing the AWS EFS CSI Driver Operator from the OpenShift web console
  • creating the ClusterCSIDriver
  • creating an Amazon EFS file system and mount target
  • creating an EFS-backed StorageClass
  • dynamically provisioning a ReadWriteMany PVC
  • validating shared access from two pods
  • cleaning up the OpenShift, AWS EFS, security group, and IAM resources

The official supported installation instructions for the AWS EFS CSI Driver Operator on ROSA are available in the Red Hat OpenShift Service on AWS storage documentation.

Use AWS EFS CSI Driver Operator, not AWS EFS Operator. The AWS EFS CSI Driver Operator is the Red Hat-supported Operator. The AWS EFS Operator is a community Operator and is not supported by Red Hat.

Dynamic vs. static provisioning

The AWS EFS CSI driver supports both dynamic and static provisioning.

Dynamic provisioning

Dynamic provisioning creates new persistent volumes as subdirectories of a pre-existing EFS file system. The PVs are independent Kubernetes resources, but they share the same EFS file system.

For dynamic provisioning, the EFS CSI driver creates an AWS EFS Access Point for each dynamically provisioned PV. Due to AWS EFS Access Point limits, you can dynamically provision up to 1000 PVs from a single StorageClass and EFS file system.

EFS does not enforce the requested PVC size. For example, a PVC that requests 5Gi can store more than 5 GiB because the backing EFS file system is elastic. Monitor EFS usage and costs from AWS.

Static provisioning

Static provisioning mounts an existing EFS file system or access point as a persistent volume. This guide focuses on dynamic provisioning.

Prerequisites

You need:

  • A ROSA cluster using STS
  • The rosa CLI
  • The oc CLI
  • The AWS CLI
  • jq
  • AWS permissions to create IAM roles and policies
  • AWS permissions to create EFS file systems, mount targets, and security groups

This guide was validated on ROSA HCP. The same EFS CSI Driver Operator flow applies to ROSA Classic clusters that use STS, but subnet discovery, security group naming, and cluster metadata can differ.

Set environment variables

Set the cluster name and AWS region:

Example:
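A minimal sketch; the cluster name and region are placeholders:

```shell
export CLUSTER_NAME=my-rosa-cluster   # placeholder; use your cluster name
export AWS_REGION=us-east-2           # placeholder; use your cluster region
```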

Confirm that you are logged in to the correct ROSA cluster:
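One way to confirm, assuming the oc and rosa CLIs are already logged in (the JSON field names can vary by rosa version):

```shell
# API server of the cluster oc is logged in to
oc whoami --show-server

# Cluster name and region as ROSA sees them
rosa describe cluster -c "$CLUSTER_NAME" -o json | jq -r '.name, .region.id'
```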

Set the AWS account ID:
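For example:

```shell
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
echo "$AWS_ACCOUNT_ID"
```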

Get the cluster OIDC endpoint:
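The issuer URL can be read from the cluster authentication configuration:

```shell
export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster \
  -o jsonpath='{.spec.serviceAccountIssuer}')
echo "$OIDC_ENDPOINT"
```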

Remove the https:// prefix for IAM trust policy use:

Expected format:
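A sketch that strips the scheme so the value matches the IAM OIDC provider format:

```shell
# Remove the https:// prefix for use in the IAM trust policy
OIDC_PROVIDER=$(echo "$OIDC_ENDPOINT" | sed -e 's|^https://||')
echo "$OIDC_PROVIDER"
# Example format: oidc.example.openshiftapps.com/abcdefghijklmnop
```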

Create a scratch directory for the generated policy files:
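The directory path is this guide's choice:

```shell
export SCRATCH_DIR=/tmp/scratch-efs
mkdir -p "$SCRATCH_DIR"
```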

Create the IAM policy and role

The AWS EFS CSI Driver Operator needs an IAM role that can be assumed by the Operator and controller service accounts.

Create the IAM permissions policy:
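A sketch of the permissions policy covering the actions the EFS CSI driver uses for dynamic provisioning; verify the statements against the current ROSA storage documentation before use. The file name under $SCRATCH_DIR is this guide's choice:

```shell
cat <<EOF > "$SCRATCH_DIR/efs-policy.json"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:DescribeAccessPoints",
        "elasticfilesystem:DescribeFileSystems",
        "elasticfilesystem:DescribeMountTargets",
        "elasticfilesystem:TagResource",
        "ec2:DescribeAvailabilityZones"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["elasticfilesystem:CreateAccessPoint"],
      "Resource": "*",
      "Condition": {
        "StringLike": { "aws:RequestTag/efs.csi.aws.com/cluster": "true" }
      }
    },
    {
      "Effect": "Allow",
      "Action": "elasticfilesystem:DeleteAccessPoint",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/efs.csi.aws.com/cluster": "true" }
      }
    }
  ]
}
EOF
```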

Create the IAM trust policy:
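A sketch of the trust policy; it allows the Operator and controller service accounts in openshift-cluster-csi-drivers to assume the role through the cluster OIDC provider:

```shell
cat <<EOF > "$SCRATCH_DIR/efs-trust-policy.json"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": [
            "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-operator",
            "system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa"
          ]
        }
      }
    }
  ]
}
EOF
```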

Create the IAM role:
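For example; the role name is this guide's choice:

```shell
ROLE_ARN=$(aws iam create-role \
  --role-name "${CLUSTER_NAME}-efs-csi-role" \
  --assume-role-policy-document "file://${SCRATCH_DIR}/efs-trust-policy.json" \
  --query Role.Arn --output text)
echo "$ROLE_ARN"
```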

Create and attach the IAM policy:
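For example; the policy name is this guide's choice:

```shell
POLICY_ARN=$(aws iam create-policy \
  --policy-name "${CLUSTER_NAME}-efs-csi-policy" \
  --policy-document "file://${SCRATCH_DIR}/efs-policy.json" \
  --query Policy.Arn --output text)

aws iam attach-role-policy \
  --role-name "${CLUSTER_NAME}-efs-csi-role" \
  --policy-arn "$POLICY_ARN"
```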

Keep the role ARN available. The OpenShift web console prompts for this value when installing the Operator on STS clusters:

Install the AWS EFS CSI Driver Operator

This guide uses the OpenShift web console installation path because it aligns with the supported ROSA documentation and avoids stale CLI installation YAML.

  1. Log in to the OpenShift web console.

  2. Go to Ecosystem > Software Catalog (formerly known as OperatorHub).

  3. Search for AWS EFS CSI.

  4. Select AWS EFS CSI Driver Operator.

    Select AWS EFS CSI Driver Operator, not AWS EFS Operator.

  5. Click Install.

  6. In the role ARN field at the top of the install page, paste the value of ROLE_ARN.

    Example:

  7. Review or set the following installation options:

    • Update channel: stable
    • Version: the version that matches your OpenShift minor version
    • Installation mode: All namespaces on the cluster
    • Installed namespace: openshift-cluster-csi-drivers
    • Update approval: Manual or Automatic

    Manual approval is safer for STS clusters because future Operator versions might require updated IAM permissions before upgrade.

  8. Click Install.

Validate that the Operator installed successfully:
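One way to check, by filtering the CSVs and pods in the Operator namespace:

```shell
oc get csv -n openshift-cluster-csi-drivers | grep -i efs
oc get pods -n openshift-cluster-csi-drivers | grep -i efs
```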

Expected output includes a succeeded CSV and a running Operator pod:

The console installation creates the aws-efs-cloud-credentials secret in the openshift-cluster-csi-drivers namespace:
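Confirm the secret exists:

```shell
oc get secret aws-efs-cloud-credentials -n openshift-cluster-csi-drivers
```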

Create the ClusterCSIDriver

Create the ClusterCSIDriver resource:
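For example, applied directly from a heredoc; the resource name must be efs.csi.aws.com:

```shell
cat <<EOF | oc apply -f -
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  managementState: Managed
EOF
```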

Verify that the EFS CSI driver controller and node pods are running:
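For example:

```shell
# Expect controller pods plus one node pod per worker
oc get pods -n openshift-cluster-csi-drivers | grep aws-efs-csi-driver
```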

Expected output:

Check the ClusterCSIDriver conditions:
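One way to inspect the status conditions (condition names can vary by version):

```shell
oc get clustercsidriver efs.csi.aws.com -o json \
  | jq '.status.conditions[] | {type, status}'
```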

The driver is ready when the controller and node services are available:

Create an EFS file system

Create an encrypted EFS file system:
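For example; the Name tag value is this guide's choice:

```shell
FILE_SYSTEM_ID=$(aws efs create-file-system --region "$AWS_REGION" \
  --encrypted \
  --tags Key=Name,Value="${CLUSTER_NAME}-efs" \
  --query FileSystemId --output text)
echo "$FILE_SYSTEM_ID"
```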

Verify that the EFS file system is available:
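For example:

```shell
aws efs describe-file-systems --region "$AWS_REGION" \
  --file-system-id "$FILE_SYSTEM_ID" \
  --query 'FileSystems[0].LifeCycleState' --output text
# Expected: available
```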

Expected output:

Some AWS CLI versions might not support aws efs wait file-system-available. This guide uses describe-file-systems to verify the EFS file system state.

Find worker subnets, VPC, security groups, and IAM roles

The EFS mount targets must be reachable by the ROSA worker nodes over NFS port 2049. For a Multi-AZ data plane, create one EFS mount target in each Availability Zone where worker nodes run. The commands in this section keep multi-value results newline-separated and use while read loops so they work in shells that do not split variables on whitespace automatically.

Set the worker node private IPs from the OpenShift node list:
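For example, keeping the result newline-separated as described above:

```shell
NODE_IPS=$(oc get nodes -l node-role.kubernetes.io/worker \
  -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}')
echo "$NODE_IPS"
```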

Find the EC2 instance IDs for those workers:
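One way to map each private IP to its EC2 instance:

```shell
INSTANCE_IDS=$(while read -r ip; do
  [ -z "$ip" ] && continue
  aws ec2 describe-instances --region "$AWS_REGION" \
    --filters "Name=private-ip-address,Values=$ip" \
    --query 'Reservations[].Instances[].InstanceId' --output text
done <<< "$NODE_IPS")
echo "$INSTANCE_IDS"
```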

Get the worker subnets, worker security groups, VPC, and worker IAM roles:
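A sketch that collects deduplicated, newline-separated subnet IDs, security group IDs, the VPC ID, and worker IAM role names; the variable names are this guide's choices:

```shell
SUBNETS=""; SECURITY_GROUPS=""; WORKER_ROLES=""
while read -r ip; do
  [ -z "$ip" ] && continue
  INSTANCE_JSON=$(aws ec2 describe-instances --region "$AWS_REGION" \
    --filters "Name=private-ip-address,Values=$ip" \
    --query 'Reservations[0].Instances[0]')
  SUBNETS="$SUBNETS$(echo "$INSTANCE_JSON" | jq -r '.SubnetId')"$'\n'
  SECURITY_GROUPS="$SECURITY_GROUPS$(echo "$INSTANCE_JSON" | jq -r '.SecurityGroups[].GroupId')"$'\n'
  # Resolve the IAM role from the instance profile
  PROFILE=$(echo "$INSTANCE_JSON" | jq -r '.IamInstanceProfile.Arn' | awk -F/ '{print $NF}')
  WORKER_ROLES="$WORKER_ROLES$(aws iam get-instance-profile \
    --instance-profile-name "$PROFILE" \
    --query 'InstanceProfile.Roles[].RoleName' --output text)"$'\n'
done <<< "$NODE_IPS"

SUBNETS=$(echo "$SUBNETS" | sort -u | sed '/^$/d')
SECURITY_GROUPS=$(echo "$SECURITY_GROUPS" | sort -u | sed '/^$/d')
WORKER_ROLES=$(echo "$WORKER_ROLES" | sort -u | sed '/^$/d')

VPC_ID=$(aws ec2 describe-instances --region "$AWS_REGION" \
  --filters "Name=private-ip-address,Values=$(echo "$NODE_IPS" | head -1)" \
  --query 'Reservations[0].Instances[0].VpcId' --output text)
echo "$VPC_ID"
```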

Attach EFS permissions to the worker role

The worker nodes need EFS permissions to resolve and mount the EFS file system. Attach the AWS managed EFS CSI driver policy to each worker role:
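For example, assuming the AWS managed policy AmazonEFSCSIDriverPolicy is the intended policy:

```shell
while read -r role; do
  [ -z "$role" ] && continue
  aws iam attach-role-policy --role-name "$role" \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy
done <<< "$WORKER_ROLES"
```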

Create an EFS security group

Create a security group for the EFS mount target:
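For example; the group name is this guide's choice:

```shell
EFS_SG_ID=$(aws ec2 create-security-group --region "$AWS_REGION" \
  --group-name "${CLUSTER_NAME}-efs-sg" \
  --description "NFS access to EFS from ${CLUSTER_NAME} workers" \
  --vpc-id "$VPC_ID" \
  --query GroupId --output text)
echo "$EFS_SG_ID"
```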

Allow NFS traffic from the worker security groups to the EFS security group:
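For example, one ingress rule per worker security group:

```shell
while read -r sg; do
  [ -z "$sg" ] && continue
  aws ec2 authorize-security-group-ingress --region "$AWS_REGION" \
    --group-id "$EFS_SG_ID" \
    --protocol tcp --port 2049 \
    --source-group "$sg"
done <<< "$SECURITY_GROUPS"
```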

Create EFS mount targets

Create one EFS mount target in one worker subnet per Availability Zone. This covers both Single-AZ and Multi-AZ data planes:
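A sketch that iterates over the worker subnets; EFS allows only one mount target per Availability Zone, so a MountTargetConflict error for a second subnet in the same AZ can be ignored:

```shell
while read -r subnet; do
  [ -z "$subnet" ] && continue
  aws efs create-mount-target --region "$AWS_REGION" \
    --file-system-id "$FILE_SYSTEM_ID" \
    --subnet-id "$subnet" \
    --security-groups "$EFS_SG_ID" || true
done <<< "$SUBNETS"
```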

Verify that the mount target becomes available:
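For example:

```shell
aws efs describe-mount-targets --region "$AWS_REGION" \
  --file-system-id "$FILE_SYSTEM_ID" \
  --query 'MountTargets[].{Id:MountTargetId,State:LifeCycleState,Subnet:SubnetId}' \
  --output table
# Each State should reach: available
```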

Expected output:

Create the EFS StorageClass

Create a StorageClass that uses dynamic provisioning through EFS Access Points:
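A sketch of the StorageClass; the name, directory permissions, GID range, and base path are this guide's choices:

```shell
cat <<EOF | oc apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: ${FILE_SYSTEM_ID}
  directoryPerms: "700"
  gidRangeStart: "1000"
  gidRangeEnd: "2000"
  basePath: "/dynamic_provisioning"
EOF
```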

Verify the StorageClass:

Expected output:

Create a test project and PVC

Create a test project:
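For example; the project name is this guide's choice:

```shell
oc new-project efs-demo
```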

Create a ReadWriteMany PVC:
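For example; the PVC name and requested size are this guide's choices, and EFS does not enforce the size:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
  namespace: efs-demo
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
EOF
```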

Verify that the PVC is bound:
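For example:

```shell
oc get pvc efs-pvc -n efs-demo
# STATUS should be Bound
```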

Expected output:

Verify that the EFS CSI driver created an access point:
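For example:

```shell
aws efs describe-access-points --region "$AWS_REGION" \
  --file-system-id "$FILE_SYSTEM_ID" \
  --query 'AccessPoints[].{Id:AccessPointId,State:LifeCycleState}' \
  --output table
```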

Expected output:

Validate shared access from two pods

Create the first pod. This pod writes a file to the EFS-backed PVC.
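A sketch; the pod name, image, and file path are this guide's choices:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: efs-writer
  namespace: efs-demo
spec:
  containers:
  - name: writer
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["/bin/sh", "-c"]
    # Write a line to the shared volume, then stay running
    args: ["echo 'hello from efs-writer' >> /data/hello.txt; sleep infinity"]
    volumeMounts:
    - name: efs-volume
      mountPath: /data
  volumes:
  - name: efs-volume
    persistentVolumeClaim:
      claimName: efs-pvc
EOF
```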

Verify that the pod is running and can read the file:
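For example:

```shell
oc get pod efs-writer -n efs-demo
oc exec efs-writer -n efs-demo -- cat /data/hello.txt
# Expected: hello from efs-writer
```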

Expected output:

Create a second pod that mounts the same PVC, reads the file, and appends another line:
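A sketch using the same assumed names as the first pod:

```shell
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: efs-reader
  namespace: efs-demo
spec:
  containers:
  - name: reader
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["/bin/sh", "-c"]
    # Read the writer's file, append a second line, then stay running
    args: ["cat /data/hello.txt; echo 'hello from efs-reader' >> /data/hello.txt; sleep infinity"]
    volumeMounts:
    - name: efs-volume
      mountPath: /data
  volumes:
  - name: efs-volume
    persistentVolumeClaim:
      claimName: efs-pvc
EOF
```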

Verify the second pod logs:
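For example:

```shell
oc logs efs-reader -n efs-demo
# Expected: hello from efs-writer
```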

Expected output:

Verify that the first pod can see the second pod’s write:
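For example:

```shell
oc exec efs-writer -n efs-demo -- cat /data/hello.txt
# Expected:
# hello from efs-writer
# hello from efs-reader
```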

Expected output:

Final validation

Capture the final OpenShift state:
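For example:

```shell
oc get pvc,pods -n efs-demo
oc get pv | grep efs-sc
```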

Capture the final AWS EFS state:
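For example:

```shell
aws efs describe-file-systems --region "$AWS_REGION" \
  --file-system-id "$FILE_SYSTEM_ID" \
  --query 'FileSystems[0].LifeCycleState' --output text
aws efs describe-access-points --region "$AWS_REGION" \
  --file-system-id "$FILE_SYSTEM_ID" \
  --query 'AccessPoints[].LifeCycleState' --output text
```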

A successful validation shows:

Clean up

This section removes the sample application, dynamically provisioned EFS resources, the EFS file system, the security group, the EFS CSI Driver Operator, and the IAM role and policy used for STS.

Before deleting the EFS file system, confirm that the file system ID belongs to this test. Do not delete a shared or pre-existing EFS file system.

Remove the sample workload

Delete the test pods, PVC, project, and StorageClass:
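For example, using the names from this guide:

```shell
oc delete pod efs-writer efs-reader -n efs-demo
oc delete pvc efs-pvc -n efs-demo
oc delete project efs-demo
oc delete storageclass efs-sc
```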

Confirm that no EFS PV remains. If the PV remains in Released state, remove the finalizer so Kubernetes can finish deleting the dynamically provisioned resource:
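For example:

```shell
oc get pv
# If an EFS PV is stuck in Released, clear its finalizers (replace <pv-name>):
# oc patch pv <pv-name> --type=merge -p '{"metadata":{"finalizers":null}}'
```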

Confirm that the dynamically created EFS Access Point was removed. If an access point remains, delete it before deleting the file system:
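For example:

```shell
aws efs describe-access-points --region "$AWS_REGION" \
  --file-system-id "$FILE_SYSTEM_ID" \
  --query 'AccessPoints[].AccessPointId' --output text
# Delete any leftover access point before deleting the file system:
# aws efs delete-access-point --region "$AWS_REGION" --access-point-id <fsap-id>
```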

Remove the EFS mount target and file system

List the mount targets:
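For example:

```shell
aws efs describe-mount-targets --region "$AWS_REGION" \
  --file-system-id "$FILE_SYSTEM_ID" \
  --query 'MountTargets[].MountTargetId' --output text
```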

Delete the mount targets:
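For example:

```shell
for mt in $(aws efs describe-mount-targets --region "$AWS_REGION" \
  --file-system-id "$FILE_SYSTEM_ID" \
  --query 'MountTargets[].MountTargetId' --output text); do
  aws efs delete-mount-target --region "$AWS_REGION" --mount-target-id "$mt"
done
```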

Wait until no mount targets are returned:
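One way to poll until the count reaches zero:

```shell
until [ "$(aws efs describe-mount-targets --region "$AWS_REGION" \
  --file-system-id "$FILE_SYSTEM_ID" \
  --query 'length(MountTargets)' --output text)" = "0" ]; do
  sleep 10
done
```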

Delete the EFS file system:
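For example:

```shell
aws efs delete-file-system --region "$AWS_REGION" --file-system-id "$FILE_SYSTEM_ID"
```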

Verify that the file system is gone:
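For example:

```shell
aws efs describe-file-systems --region "$AWS_REGION" --file-system-id "$FILE_SYSTEM_ID"
# Expect a FileSystemNotFound error
```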

Remove the EFS security group

Delete the EFS security group:
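For example:

```shell
aws ec2 delete-security-group --region "$AWS_REGION" --group-id "$EFS_SG_ID"
```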

If the command fails because the security group has a dependency, wait a few minutes after deleting the EFS mount target and try again.

Verify that the security group is gone:
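For example:

```shell
aws ec2 describe-security-groups --region "$AWS_REGION" --group-ids "$EFS_SG_ID"
# Expect an InvalidGroup.NotFound error
```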

Remove the EFS CSI driver and Operator

Delete the ClusterCSIDriver:
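For example:

```shell
oc delete clustercsidriver efs.csi.aws.com
```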

Uninstall the AWS EFS CSI Driver Operator from the OpenShift web console:

  1. Go to Ecosystem > Installed Operators.
  2. Select the openshift-cluster-csi-drivers project.
  3. Select AWS EFS CSI Driver Operator.
  4. Click Actions > Uninstall Operator.

Alternatively, remove the Subscription and CSV by CLI:
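A sketch; the Subscription name created by the console is assumed to match the package name, so list the resources first if unsure:

```shell
oc get subscriptions,csv -n openshift-cluster-csi-drivers
oc delete subscription aws-efs-csi-driver-operator -n openshift-cluster-csi-drivers
# Delete the EFS CSV found in the listing above:
# oc delete csv <aws-efs-csi-driver-operator-csv-name> -n openshift-cluster-csi-drivers
```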

The Operator uninstall might not remove the credentials secret. Delete it explicitly if it remains:
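For example:

```shell
oc delete secret aws-efs-cloud-credentials \
  -n openshift-cluster-csi-drivers --ignore-not-found
```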

Remove the IAM roles and policies

If you attached the AWS managed EFS CSI driver policy to the worker roles only for this guide, detach it after removing the test workload:
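For example, mirroring the earlier attach step:

```shell
while read -r role; do
  [ -z "$role" ] && continue
  aws iam detach-role-policy --role-name "$role" \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy
done <<< "$WORKER_ROLES"
```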

Detach the IAM policy from the EFS CSI Operator role, then delete the role and policy:
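For example, using the role and policy names that this guide chose earlier:

```shell
POLICY_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${CLUSTER_NAME}-efs-csi-policy"
aws iam detach-role-policy --role-name "${CLUSTER_NAME}-efs-csi-role" --policy-arn "$POLICY_ARN"
aws iam delete-role --role-name "${CLUSTER_NAME}-efs-csi-role"
aws iam delete-policy --policy-arn "$POLICY_ARN"
```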

Remove local temporary files

Remove the IAM policy documents in the scratch directory:
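For example:

```shell
rm -rf "$SCRATCH_DIR"
```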

Final cleanup validation

Run the following commands to confirm that no test resources remain:
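A sketch using this guide's resource names; each command should return nothing, an empty list, or a NotFound error:

```shell
oc get project efs-demo
oc get storageclass efs-sc
oc get pv | grep -i efs
oc get clustercsidriver efs.csi.aws.com
oc get pods,subscriptions,csv -n openshift-cluster-csi-drivers | grep -i efs
oc get secret aws-efs-cloud-credentials -n openshift-cluster-csi-drivers
aws efs describe-file-systems --region "$AWS_REGION" \
  --query "FileSystems[?Name=='${CLUSTER_NAME}-efs'].FileSystemId"
aws ec2 describe-security-groups --region "$AWS_REGION" \
  --filters "Name=group-name,Values=${CLUSTER_NAME}-efs-sg" \
  --query 'SecurityGroups[].GroupId'
aws iam get-role --role-name "${CLUSTER_NAME}-efs-csi-role"
```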

Expected result:

  • no efs-demo project
  • no efs-sc StorageClass
  • no EFS PV
  • no efs.csi.aws.com ClusterCSIDriver
  • no EFS Operator pods, Subscription, or CSV
  • no aws-efs-cloud-credentials secret
  • no test EFS file system
  • no test EFS security group
  • no test IAM role or policy