How to Configure Your ACC Application with akinon.json and Procfile?

To run your application on ACC, it must first be containerized. The approach for this will vary based on the programming language and framework you're using—searching for "dockerize $language $framework" is a good way to discover the recommended practices.

ACC packages your application into a Docker container and deploys it on a Kubernetes cluster. To do this, it needs specific instructions on how to build and run your app.

Instead of using a traditional Dockerfile, ACC relies on an app manifest composed of two files: akinon.json and a Procfile.

akinon.json​

This file outlines the essential details about your application—what it is, how it should be built and executed, and any dependencies it requires to function correctly (such as a database or message broker). It must be placed at the root of your application's repository.

Here’s a minimal example of an akinon.json file:

{
    "name": "mcapp-9000",
    "description": "My cool app",
    "scripts": {
        "build": "build.sh"
    },
    "runtime": "python:3.10-slim",
    "formation": {
        "web": {
            "healthcheck": "/healthz"
        }
    },
    "...": "..."
}

Name and description are self-explanatory: they identify your app in the UI.

Scripts are used to build & run your app in different stages of its lifecycle. Since they're executed inside the container, they can use any shell available in the container (sh, bash, xonsh, etc.).

We'll explore other fields and their uses in the following sections.

Converting a Dockerfile into a build script​

ACC abstracts the Dockerfile into a script that runs while the Docker container is built, effectively combining all RUN statements into one. This prevents accidentally bloated images, which often result from creating too many layers by mistake.

It's best to start from a Dockerfile and later convert it to an app manifest that ACC can understand.

Take this minimal (though not optimized) Dockerfile for a Python application served by Gunicorn as an example.

# Dockerfile
FROM python:3.10-slim

RUN apt-get update && apt-get install -y python3-dev libpq-dev libjpeg-dev g++
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["gunicorn", "app:app", "-b", "0.0.0.0:8000"]
EXPOSE 8000

We have a number of RUN commands in the Dockerfile, plus a CMD statement that starts the process.

To create the build script, we combine all RUN statements. The files in the repo are copied into the Docker container, and the script runs at the root of the repo.

# build.sh
#!/bin/bash
set -euo pipefail

apt-get update && apt-get install -y python3-dev libpq-dev libjpeg-dev g++
pip install -r requirements.txt

# remember to clean up unnecessary files to reduce container size.
# for example, g++ and python3-dev are only needed at build time, so they
# can be uninstalled. this reduces the container size significantly
# and speeds up builds & deployments.

apt-get remove -y python3-dev g++

The build.sh file must be executable. On Unix-based systems, you can make it executable by running chmod +x build.sh. On Windows, where chmod is unavailable, you can achieve the same result with the git update-index --chmod=+x build.sh Git command.

Then this is specified in akinon.json under $.scripts.build, and the base image under $.runtime:

{
    "...": "...",
    "scripts": {
        "build": "build.sh"  // executed at the repo root
    },
    "runtime": "python:3.10-slim" // used as the base image for the Docker container
}

This also means you can put your scripts under a subdirectory and point to it in $.scripts.build:

{
    "...": "...",
    "scripts": {
        "build": "./scripts/build.sh"
    }
}

Release Script​

The release script is an optional script that runs just before your app is deployed. It is typically used to run database migrations.

Keep in mind that this script can and likely will run multiple times, so it must be idempotent: it should expect that some changes may already have been made by previous runs.

{
    "...": "...",
    "scripts": {
        "build": "build.sh",
        "release": "release.sh"
    }
}

Its contents could look like this:

# release.sh
#!/bin/bash
set -euo pipefail

# only run migrations if the database is not already migrated.
# this check is usually performed by the ORM you're using. 

is_migrated() {
  # ...
  if [ "$migrated" -eq 0 ]; then
    return 0 # yes
  else
    return 1 # no
  fi
}

is_migrated || migrate

The release.sh file must be executable. You can make this file executable by running the chmod +x release.sh command.

Formations and Procfile​

The formation defines how the app is deployed on the Kubernetes cluster: how many replicas it starts with and how many instances it can scale up to.

{
    "...": "...",
    "formation": {
        "web": {
            "min": 2,  // app is deployed with 2 replicas
            "max": "auto", // and scales up as needed
            "healthcheck": "/healthz"
        },
        "beat": {
            "min": 1, // only a single instance is deployed
            "max": 1
        },
        "worker": {
            "min": 1, // only a single instance is deployed
            "max": "auto" // but can scale up as needed
        }
    }
}

The keys of the formation object are the names of the processes defined in the Procfile, which ACC expects to find at the repo root.

The Procfile lists the processes that can be run inside the container.

# format: <process_name>: <command>
web: gunicorn app:app -b 0.0.0.0:8008
worker: python worker.py

Each process's command is used as the CMD statement of its container.

For example, a Django app is usually served by Gunicorn, but it also needs a worker process to run background tasks.

So we define two processes, each of which will be deployed as a separate container. This means they cannot share in-memory state with each other; they must go through a shared database or a broker like Redis.

Healthcheck​

If the app is expected to be accessed from the internet, it must have a web process that listens on all interfaces (0.0.0.0) on port 8008, and it must have a healthcheck endpoint.

A healthcheck endpoint is a path that responds to HTTP GET requests to confirm that the app is ready to receive traffic.

It is equivalent to running the following command inside the container:

curl -XGET http://localhost:$PORT/healthz

After deployment, once the app starts returning HTTP 200 responses from this endpoint, it is assumed to be healthy and ready to serve traffic.

This means that if the app is not ready to serve traffic (for example, because it cannot reach the database or other upstream services and APIs it depends on), it should not return HTTP 200 from this endpoint. If it does, traffic will be routed to the app, and requests will most likely fail with HTTP 500 responses.

{
    "...": "...",
    "formation": {
        "web": {
            "...": "...",
            "healthcheck": "/healthz"
        },
        "...": "..."
    }
}
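As an illustration, a healthcheck handler that reports ready only when its dependencies are reachable might look like the sketch below. The check_dependencies callable is hypothetical; replace it with a cheap real check, such as a SELECT 1 against the database.

```python
def make_healthz_app(check_dependencies):
    """Build a minimal WSGI app for /healthz.

    check_dependencies is a hypothetical callable that returns True only
    when the database and other upstream services are reachable.
    """
    def app(environ, start_response):
        if environ.get("PATH_INFO") == "/healthz" and check_dependencies():
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"ok"]
        # not ready: return 503 so no traffic is routed to this replica
        start_response("503 Service Unavailable", [("Content-Type", "text/plain")])
        return [b"not ready"]
    return app
```

Any WSGI server, including Gunicorn, can serve such an app alongside your regular routes.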

It is also possible to use health checks for processes other than web. In this case, the health check path will correspond to a file rather than an endpoint.

In other words, if the healthcheck file exists, the deployment is considered healthy; if the file is absent, it is considered unhealthy.

{
    "...": "...",
    "formation": {
        "worker": {
            "...": "...",
            "healthcheck": "/tmp/healthz"
        },
        "beat": {
            "...": "...",
            "healthcheck": "/tmp/healthz"
        },
        "...": "..."
    }
}

The following example demonstrates the process of creating a health check file with Celery:

from pathlib import Path

from celery.signals import beat_init, worker_ready, worker_shutdown

READINESS_FILE = Path('/tmp/healthz')

@worker_ready.connect
def on_worker_ready(**_):
    READINESS_FILE.touch()


@worker_shutdown.connect
def on_worker_shutdown(**_):
    READINESS_FILE.unlink(missing_ok=True)


@beat_init.connect
def beat_ready(**_):
    READINESS_FILE.touch()

Addons

With all this information, the app is now ready for deployment. However, most applications require a database to store state, which is provided through addons.

The addons field is an array of objects, each of which defines an addon (with optional configuration).

{
    "...": "...",
    "addons": [
        {
            "plan": "postgresql",
            // "options": {
            //     "instance_type": "db.r5.large",
            //     "instance_count": 1
            // }
        }
    ]
}

Each addon injects a number of environment variables into the container, and the app must read these variables to configure itself.

You can define the same type of addon multiple times, but each instance must have a different role, specified in the as field.

{
    "...": "...",
    "addons": [
        {
            "plan": "redis",
            "as": "cache"
        },
        {
            "plan": "redis",
            "as": "broker"
        }
    ]
}

This gives the injected environment variables different prefixes. In this case, the app gets two separate Redis instances, whose connection details are exposed through the CACHE_* and BROKER_* environment variables.
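As a sketch, the app might read these prefixed variables like this (the env parameter exists only to make the function easy to test; it defaults to the real environment):

```python
import os

def redis_dsn(prefix, env=os.environ):
    """Build a redis:// DSN from <PREFIX>_HOST, <PREFIX>_PORT and
    <PREFIX>_DATABASE_INDEX, the variables a Redis addon injects."""
    host = env[f"{prefix}_HOST"]
    port = env[f"{prefix}_PORT"]
    index = env.get(f"{prefix}_DATABASE_INDEX", "0")  # assume 0 when absent
    return f"redis://{host}:{port}/{index}"

# e.g. redis_dsn("CACHE") for the cache, redis_dsn("BROKER") for Celery's broker
```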

PostgreSQL Addon​

This addon provides a PostgreSQL database. It can be defined as follows:

{
    "plan": "postgresql"
    // "as": "db", // optional, set to "db" by default 
    // "options": {
    //   "instance_type": "db.r5.large",
    //   "instance_count": 1
    // }
}

This will pass the following environment variables to the container:

  • DB_HOST: the hostname of the database

  • DB_PORT: the port to connect to the host

  • DB_NAME: the name of the database

  • DB_USER: the username to connect to the database

  • DB_PASSWORD: the password of the user
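For illustration, these variables can be assembled into a typical PostgreSQL connection URL. This is a sketch, not an ACC API; URL-encode the password if it may contain special characters:

```python
import os

def postgres_dsn(env=os.environ):
    """Assemble a postgresql:// connection URL from the DB_* variables."""
    # note: apply urllib.parse.quote to the password if it can contain
    # characters such as '@' or '/'
    return (
        f"postgresql://{env['DB_USER']}:{env['DB_PASSWORD']}"
        f"@{env['DB_HOST']}:{env['DB_PORT']}/{env['DB_NAME']}"
    )
```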

Redis Addon​

This addon provides a Redis instance to the app. It can be defined as follows:

{
    "plan": "redis" 
    // "as": "cache", // optional, set as cache by default
    // "options": {
    //    "instance_type": "cache.r4.large",
    //    "instance_count": 1
    // }
}

This will pass the following environment variables to the container:

  • CACHE_HOST: the hostname of the Redis instance

  • CACHE_PORT: the port to connect to the host

  • CACHE_DATABASE_INDEX: the index of the database to use

Combining these, you'll need to prepare a Redis DSN as follows:

redis://$CACHE_HOST:$CACHE_PORT/$CACHE_DATABASE_INDEX

Sentry Addon​

This addon provides a Sentry DSN to send error logs to. It can be defined as follows:

{
    "plan": "sentry"
}

This will pass the following environment variables to the container:

  • SENTRY_DSN: the DSN of the Sentry instance

Mail Addon​

This addon provides SMTP details to allow the app to send emails. It's included with every app by default, so it doesn't need to be defined in akinon.json.

This will pass the following environment variables to the container:

  • EMAIL_HOST: the hostname of the SMTP server

  • EMAIL_PORT: the port to connect to the host

  • EMAIL_HOST_USER: the username to connect to the SMTP server

  • EMAIL_HOST_PASSWORD: the password of the user

  • EMAIL_USE_TLS: whether to use TLS or not
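As a sketch, these can be mapped onto Django-style settings. Note that environment variables arrive as strings, so the port and the TLS flag need parsing; the accepted spellings for EMAIL_USE_TLS below are an assumption:

```python
import os

def email_settings(env=os.environ):
    """Turn the EMAIL_* variables into typed settings values."""
    return {
        "EMAIL_HOST": env["EMAIL_HOST"],
        "EMAIL_PORT": int(env["EMAIL_PORT"]),  # env vars are always strings
        "EMAIL_HOST_USER": env["EMAIL_HOST_USER"],
        "EMAIL_HOST_PASSWORD": env["EMAIL_HOST_PASSWORD"],
        # assumed truthy spellings; check what your platform actually sends
        "EMAIL_USE_TLS": env.get("EMAIL_USE_TLS", "").lower() in ("1", "true", "yes"),
    }
```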

CDN Addon​

This addon provides a CDN instance to the app. It can be defined as follows:

{
    "plan": "cdn",
    // "scope": "project" // optional
}

This will pass the following environment variables to the container:

  • CDN_DOMAIN: the domain of the CDN server

  • S3_BUCKET_NAME: the name of the AWS S3 bucket

  • S3_REGION_NAME: the AWS S3 region name

  • AWS_ACCESS_KEY_ID: the access key id of the AWS account

  • S3_SIGNATURE_VERSION: the AWS S3 signature version

  • AWS_SECRET_ACCESS_KEY: the secret access key of the AWS account

Static CDN Addon​

This addon provides a Static CDN instance to the app. It can be defined as follows:

{
    "plan": "static_cdn"
}

This will pass the following environment variables to the container:

  • BASE_STATIC_URL: the URL of the Static CDN server

Elasticsearch Addon​

This addon provides an Elasticsearch instance to the app. It can be defined as follows:

{
    "plan": "elasticsearch",
    // "options": {
    //    "version": "5.5",
    //    "instance_type": "t2.medium.elasticsearch",
    //    "instance_count": 1
    // }
}

This will pass the following environment variables to the container:

  • ES_HOST: the hostname of the Elasticsearch server

  • ES_PORT: the port to connect to the host

Adding Required Environment Variables in akinon.json​

When developing and deploying an extension in ACC, there might be cases where specific environment variables must be set before the extension can be added to a project. For example, sensitive configurations like SECRET_KEY or critical settings like API_URL can be defined as prerequisites for installation.

This requirement can be enforced by defining the env field in the akinon.json file. The example below includes two required environment variables:

  • SECRET_KEY: A sensitive key with specific length constraints.

  • API_URL: The base URL needed for API communication.

{
    "name": "example-extension",
    "description": "An example ACC extension",
    "runtime": "python:3.10-slim",
    "scripts": {
        "build": "build.sh"
    },
    "env": {
        "SECRET_KEY": {
            "type": "text",
            "required": true,
            "minlength": 12,
            "maxlength": 128,
            "description": "Secret Key"
        },
        "API_URL": {
            "type": "text",
            "required": true,
            "description": "Base URL for the API"
        }
    }
}

When a user attempts to deploy this extension, ACC validates the env section in the akinon.json file. If the required environment variables (SECRET_KEY or API_URL) are not provided or do not meet the specified criteria (e.g., type or length), the deployment process will be interrupted, prompting the user to input the required information.
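To make these checks concrete, here is a hypothetical reimplementation of that validation in Python; ACC's actual logic may differ:

```python
def validate_env(spec, values):
    """Check user-supplied values against an akinon.json $.env specification.

    spec maps variable names to rules ("required", "minlength", "maxlength");
    values maps variable names to the user's input. Returns a list of errors,
    empty when everything passes.
    """
    errors = []
    for name, rules in spec.items():
        value = values.get(name)
        if value is None or value == "":
            if rules.get("required"):
                errors.append(f"{name} is required")
            continue
        if "minlength" in rules and len(value) < rules["minlength"]:
            errors.append(f"{name} must be at least {rules['minlength']} characters")
        if "maxlength" in rules and len(value) > rules["maxlength"]:
            errors.append(f"{name} must be at most {rules['maxlength']} characters")
    return errors
```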
