Using containers in Bitrise Workflows
Container support enables the use of any Docker container image as an execution environment for a group of Steps within a Workflow. It also enables running background services: these can be used, for example, to run advanced testing scenarios.
Beta feature
This feature is in beta and we do not recommend using it with production applications. Register here for the beta.
Using containers comes with several potential benefits:
- Full control over your build environment.
- No need to install dependencies during the build itself. This reduces build times and complexity.
- You can test and build the app in the same environment that you use for local development.
Linux only
This is a Linux-only feature. macOS-based environments are not supported.
Step execution containers
Docker containers can be defined for any Bitrise app in the `bitrise.yml` configuration file. You define the container at the top level of the configuration file and then refer to it in a `with` group in the Workflow configuration.
To do so, you need to add a `containers` property to your `bitrise.yml` file. Under this property, you need to define:

- The ID of the container. It will be used to reference this container.
- The name of the Docker image you want to use.
- The version of the image.
```yaml
containers:
  node-21:
    image: node:21.6
  node-18:
    image: node:18.19
```
Once the containers are defined, you can refer to them in a `with` group within the Workflow. Steps within the same `with` group will run in the same container. You can define multiple `with` groups within the same Workflow.
```yaml
containers:
  node_22:
    image: node:22
  python:
    image: python:3.12
workflows:
  ci:
    steps:
    - git-clone: {}
    - with:
        container: python
        steps:
        - script:
            title: Validate our changes using a custom python script
            inputs:
            - content: python3 validate.py
    - with:
        container: node_22
        steps:
        - script:
            title: NPM install
            inputs:
            - content: npm ci
```
Getting images
You can use any public Docker image from Docker Hub.
For detailed configuration options, check out Container API reference.
This example shows a simple use of the `container` property to enable Steps or groups of Steps to use different execution environments. We test our application with two different versions of Node.js, using the public `node` images from Docker Hub. We also run a custom Python script to validate our changes before testing them.
```yaml
containers:
  node_22:
    image: node:22
  node_20:
    image: node:20
  python:
    image: python:3.12
workflows:
  ci:
    steps:
    - git-clone@8: {}
    - with:
        container: python
        steps:
        - script:
            title: Validate our changes using a custom python script
            inputs:
            - content: python3 validate.py
    - with:
        container: node_22
        steps:
        - script:
            title: NPM install
            inputs:
            - content: npm ci
        - script:
            title: NPM test
            inputs:
            - content: npm test
    - with:
        container: node_20
        steps:
        - script:
            title: NPM install
            inputs:
            - content: npm ci
        - script:
            title: NPM test
            inputs:
            - content: npm test
```
When the `ci` Workflow is running:

- The `git-clone` Step will run in the default Bitrise environment.
- The first `script` Step is wrapped in a `with` group and it has a container image defined, so a new container is created from the `python:3.12` image. The host shares the necessary working directories with the container, so the cloned repository is available inside the container.
- The Step that executes the Python script runs inside this newly created Python container.
- The `ci` Workflow has two more `with` groups; these run after each other, both configured with a different Node container image. Each `with` group can be configured with different images which are not shared across the groups, but they all use the same volume mounts in case there is a need for file sharing.
- Both groups install the npm packages and run the tests in their own environment using different Node.js versions.
Use debian/ubuntu images
Our Script Steps use bash and therefore they might fail with `alpine`-based images, as these usually do not come with bash installed. We recommend using `debian`/`ubuntu`-based images.
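For example, a `containers` definition could pin a Debian-based tag instead of an Alpine one (the image tags shown here are illustrative, not a recommendation of specific versions):

```yaml
containers:
  # Debian ("bookworm") variant: ships with bash, so Script Steps work
  node-debian:
    image: node:20-bookworm
  # An alpine variant such as node:20-alpine would likely fail in
  # Script Steps because bash is not installed by default
```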
File sharing across containers
To enable file sharing across different containers and the host Bitrise environment, the following folders are shared each time you run a Docker container on Bitrise:
- `/bitrise`
- `/root/.bitrise:/root/.bitrise/`
- `tmp:/tmp`
By default, Bitrise will use `/bitrise/src` as its working directory, and everything created in either of these folders will be available across all Step execution containers.
Step execution containers only
This only applies to Step execution containers. Volumes and sharing files with service containers are not supported.
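As a sketch of how this works (the workflow name and file path are illustrative), a file written under `/bitrise/src` in one `with` group is visible in the next one, because both containers mount the same shared volume:

```yaml
containers:
  python:
    image: python:3.12
  node_20:
    image: node:20
workflows:
  share-files:
    steps:
    - with:
        container: python
        steps:
        - script:
            title: Write a file to the shared working directory
            inputs:
            - content: echo "hello" > /bitrise/src/artifact.txt
    - with:
        container: node_20
        steps:
        - script:
            title: Read the file written by the previous container
            inputs:
            - content: cat /bitrise/src/artifact.txt
```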
Service containers
One or more services can be configured for a group of Steps, allowing you to run Docker containers as services for advanced integration testing. These are tied to the lifecycle of a `with` group and run alongside it in the background. Once all Steps within the group finish, the services are cleaned up.
To do so, you need to add the `services` property to the top level of the `bitrise.yml` file of your app. You need to define:

- The ID of the service, which is used to refer to the service in a `with` group.
- The image name of the service.
- The image version of the service.
You can then refer to the ID of the service in a Workflow's `with` group. The Steps in the group will have access to the referred service.
```yaml
containers:
  node-21:
    image: node:21.6
services:
  postgres:
    image: postgres:16
    envs:
    - POSTGRES_USER: postgres
    - POSTGRES_DB: bitrise
    # You can reference Bitrise secrets here, e.g: $POSTGRES_PASSWORD
    - POSTGRES_PASSWORD: password
    ports:
    - 5432:5432
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
  redis:
    image: redis:7
    ports:
    - 6379:6379
    options: >-
      --health-cmd "redis-cli ping"
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
workflows:
  e2e-tests:
    envs:
    - POSTGRES_DSN: "postgres://postgres:password@postgres:5432/bitrise"
    - REDIS_DSN: "redis://redis:6379/1"
    steps:
    - with:
        container: node-21
        services:
        - postgres
        - redis
        steps:
        - script:
            title: e2e tests
            inputs:
            - content: npm run e2e-tests
```
The example configuration defines a Workflow called `e2e-tests`. This Workflow's `with` group uses service containers to enable a more advanced integration test scenario using `postgres` and `redis`. It also utilizes a container to run the tests in a Node environment.
There are two services defined: `postgres` and `redis`. Each service is configured with health checks using the `options` parameter. The CLI respects those options and only starts executing the Steps once every service reports healthy.
Each service is exposed on a port, but as we are utilizing containers, they will be accessible using their names: `postgres:5432` and `redis:6379`, respectively.
Note
If we were to omit the container configuration (not using a Step execution container), we would be able to access the services on the following addresses:

- postgres: `localhost:5432`
- redis: `localhost:6379`
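A minimal sketch of that case (the workflow name and the connectivity check are illustrative assumptions): with no `container` set, the Step runs in the host environment and reaches the service through the mapped port on `localhost`:

```yaml
services:
  redis:
    image: redis:7
    ports:
    - 6379:6379
workflows:
  host-tests:
    steps:
    - with:
        services:
        - redis
        steps:
        - script:
            title: Check the redis service from the host environment
            inputs:
            # No Step execution container: the script runs on the host,
            # so the service is reached via the published port
            - content: nc -z localhost 6379
```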
Once the services are ready, the Step will execute the `npm run e2e-tests` command, which can utilize the environment variables `POSTGRES_DSN` and `REDIS_DSN` defined by the Workflow to connect to the services.
For detailed configuration options, check out Container API reference.
Network access for service containers
Service containers are all joined to the same Docker network, called `bitrise`. This ensures that all of them are accessible from any other service container and Step execution container.
Running your own background workers
You can run your own background workers by executing the Docker commands yourself, or by using something like `docker-compose`, but make sure you use the same network.
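As a sketch (the workflow name, container name, and worker image are illustrative), a Script Step could start an extra worker attached to the shared network:

```yaml
workflows:
  with-worker:
    steps:
    - script:
        title: Start a custom background worker
        inputs:
        # --network bitrise joins the worker to the same Docker network
        # as the service containers and Step execution containers, so
        # they can reach it by its container name (my-worker)
        - content: docker run --detach --network bitrise --name my-worker redis:7
```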
Debugging containers
Once the build has finished, the logs will be converted into a structured view. This removes all logs that were created between Steps. As the container feature works on the Workflow level, its logs will be shown just before the first Step of the respective Workflow.
To see the container logs, make sure to download the full logs from the build page: Downloading a build log.