Using containers in Bitrise Workflows

Overview

Container support enables the use of any Docker container image as an execution environment on a Workflow level. It also enables running background services during the Workflow: this can be used, for example, to run advanced testing scenarios.

Beta feature

This feature is in beta and we do not recommend using it with production applications. Register here for the beta.

Using containers comes with several potential benefits:

  • Full control over your build environment.

  • No need to install dependencies during the build itself. This reduces build times and complexity.

  • You can test and build the app in the same environment that you use for local development.

Linux only

This is a Linux-only feature; macOS-based environments are not supported.

Workflow containers

A Docker container can be specified for any Workflow. In this case, all Steps of the Workflow will run inside the container.

To set up a container, add a container property to your Workflow and, under this property, define the Docker image you want to use by referring to the image name and version:

Getting images

You can use any public Docker image from Docker Hub.

workflows:
  test-node:
    container:
      image: node:21.6

For detailed configuration options, check out the Container API reference.

Example 1: Testing an app with Node.js

This example shows a simple use of the container property to enable Workflows to use different execution environments.

We test our application with two different Node.js versions, using the public node images from Docker Hub. We also run a custom Python script to validate our changes before testing them.

Workflow-specific configuration

Container configuration is tied to the Workflow. Chaining Workflows together using the before_run and after_run properties will not affect the container configuration of any Workflow in the chain. Each Workflow can define its own container configuration, which is respected no matter how the Workflow was triggered.

trigger_map:
- push_branch: "master"
  workflow: ci
- pull_request_target_branch: "master"
  workflow: ci

workflows:
  _clone:
    steps:
      - git-clone@8: {}

  test_node_21_6:
    container:
      image: node:21.6
    steps:
    - script:
        title: NPM install
        inputs:
        - content: npm ci
    - script:
        title: NPM test
        inputs:
        - content: npm test

  test_node_18_19:
    container:
      image: node:18.19
    steps:
    - script:
        title: NPM install
        inputs:
        - content: npm ci
    - script:
        title: NPM test
        inputs:
        - content: npm test

  ci:
    container:
      image: python:3.13
    before_run:
    - _clone
    steps:
    - script:
        title: Validate our changes using a custom python script
        inputs:
        - content: python3 validate.py
    after_run:
    - test_node_21_6
    - test_node_18_19

In this example, the ci Workflow is triggered on push or pull request events against the master branch. When a build is triggered:

  • The Workflow called _clone will be executed first because of the before_run property of the ci Workflow.

  • The _clone Workflow is not configured with a container, so it will run on the default Bitrise environment.

  • ci is configured with a container image, so a new container is created from the python:3.13 image. The host shares the necessary working directories with the container, so the cloned repository is available inside the container even though it was cloned in another Workflow.

  • The Step that executes the Python script will be executed inside this newly created Python container.

  • The ci Workflow has two Workflows defined in the after_run property:

    • test_node_21_6

    • test_node_18_19

  • These Workflows run one after the other, each configured with a different node container image. Each Workflow can be configured with a different image, and images are not shared across Workflows, but they all use the same volume mounts in case files need to be shared.

  • Both Workflows install the npm packages and run the tests respectively in their own environment using different Node.js versions.

Use Debian/Ubuntu images

Our Script Steps use bash, so they might fail with Alpine-based images, which usually do not ship with bash installed. We recommend using Debian/Ubuntu-based images.
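
To pick a Debian-based variant, reference the corresponding image tag in the container configuration. A minimal sketch (the exact tag is illustrative; check the tags published for your image):

workflows:
  test-node:
    container:
      # Debian-based variant that ships with bash, which Script Steps rely on
      image: node:21.6-bookworm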


File sharing across containers

To enable file sharing across different containers and the host Bitrise environment, the following folders are shared each time you run a Docker container on Bitrise:

  • /bitrise

  • /root/.bitrise

  • /tmp

By default, Bitrise uses /bitrise/src as its working directory, and everything created in any of these folders will be available across all Workflow containers.

Workflow containers only

This only applies to Workflow containers. Volumes and sharing files with service containers are not supported.
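
As an illustration of the shared working directory, here is a minimal sketch (Workflow, Step, and file names are illustrative) of two chained Workflows that pass a file through /bitrise/src even though they use different container images:

workflows:
  build:
    container:
      image: node:21.6
    steps:
    - script:
        title: Write a file to the shared working directory
        inputs:
        - content: echo "build output" > /bitrise/src/build-info.txt
    after_run:
    - verify

  verify:
    container:
      image: python:3.13
    steps:
    - script:
        title: Read the file written by the previous Workflow
        inputs:
        - content: cat /bitrise/src/build-info.txt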

Service containers

One or more services can be configured at the Workflow level, allowing you to run Docker containers as services for advanced integration testing. These services are tied to the Workflow lifecycle and run alongside it in the background. Once the Workflow finishes, they are cleaned up.

To do so, you need to add the services property to your Workflow in the bitrise.yml file of your app, and define the services you wish to use under it.

Chaining Workflows

Chaining Workflows with before_run or after_run has no effect on the service lifecycle. The services are only created and kept alive for the duration of the Workflow they were defined on. Any Workflow chained to it will respect its own configuration only.

For detailed configuration options, check out the Container API reference.

Example 1: Service containers for advanced integration tests
workflows:  
  e2e-tests:
    container:
      image: node:21.6
    services:
      postgres:
        image: postgres:16
        envs:
        - POSTGRES_USER: postgres
        - POSTGRES_DB: bitrise
        # You can reference Bitrise secrets here, e.g: $POSTGRES_PASSWORD
        - POSTGRES_PASSWORD: password
        ports:
        - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis:7
        ports:
        - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    envs:
    - POSTGRES_DSN: "postgres://postgres:password@postgres:5432/bitrise"
    - REDIS_DSN: "redis://redis:6379/1"
    steps:
    - script:
        title: e2e tests
        inputs:
        - content: |
            npm run e2e-tests

The example configuration defines a Workflow called e2e-tests.

This Workflow uses service containers to enable a more advanced integration test scenario using postgres and redis. It also utilizes the Workflow container to run the tests in a Node environment.

There are two services defined: postgres and redis. Each service is configured to have healthchecks using the options parameter. The CLI will respect those options and only start the execution of the Workflow’s Steps once every service reports healthy.

Each service is exposed on a port, but because we are using a Workflow container, they are accessible by their names, at postgres:5432 and redis:6379, respectively.

Note

If we were to omit the container configuration (that is, not use a Workflow container), the services would be accessible at the following addresses:

  • postgres: localhost:5432

  • redis: localhost:6379

Once the services are ready, the Workflow executes the npm run e2e-tests command, which can use the POSTGRES_DSN and REDIS_DSN environment variables defined by the Workflow to connect to the services.
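
To illustrate the name-based access, a Script Step like the following could be added before the tests to verify connectivity from inside the Workflow container. This Step is not part of the example above; it is a sketch that relies on bash's built-in /dev/tcp support, available in Debian-based images such as node:21.6:

    - script:
        title: Check service connectivity (illustrative)
        inputs:
        - content: |
            # Service containers are reachable by their service names
            (echo > /dev/tcp/postgres/5432) && echo "postgres is reachable"
            (echo > /dev/tcp/redis/6379) && echo "redis is reachable"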


Network access for service containers

Service containers are all joined to the same Docker network, called bitrise. This ensures that all of them are accessible from any other service container as well as from the Workflow container.

Running your own background workers

You can run your own background workers by executing the Docker commands yourself or by using a tool like docker-compose, but make sure you use the same network.
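
A minimal sketch of starting such a worker from a Script Step, assuming the Docker CLI is available in the build environment (the image and container names are illustrative):

    - script:
        title: Start a custom background worker (illustrative)
        inputs:
        - content: |
            # Join the same "bitrise" network so the worker can reach the
            # service containers and be reached by the Workflow container
            docker run -d --rm --network bitrise --name my-worker nginx:1.25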

Debugging containers

Once the build has finished, the logs will be converted into a structured view. This removes all logs that were created between Steps. As the container feature works on the Workflow level, its logs will be shown just before the first Step of the respective Workflow.

To see the container logs, make sure to download the full logs from the build page: see Downloading build logs.