Help Center

Technical overview, site tour, tutorials, videos, roadmap
What is D2C?
D2C is a platform for configuring, deploying, scaling, and managing apps, APIs, or mobile backends in the cloud or on your own servers.
What happens behind the scenes when you deploy applications with D2C
Applications can't run without a server on the Internet. D2C can automatically provide servers for you. Just provide an access token for your preferred cloud provider and D2C will create/destroy/resize servers as you need them. Keep in mind that the cloud provider will charge you for resource usage based on your payment plan. D2C doesn't provide its own computing resources – it automates the provisioning process at cloud providers.

If you want to deploy applications on your own in-house servers, or at cloud providers that D2C doesn't currently support, you can. We provide a script that checks whether your host meets the requirements to be added to the D2C dashboard and become part of your project. The basic requirements are: Ubuntu 14.04/16.04 or Debian 7/8, SSH access from the Internet, a user with sudo privileges, and several TCP/UDP ports accessible from the Internet for overlay network configuration.
Execution environment
D2C runs your services/applications inside containers, using Docker as the containerization platform. Each app is a separate container: web app, database, load balancer, etc. Docker is installed and configured automatically on the hosts managed by D2C. When you deploy your application, all the necessary files are delivered to the host; container images are built locally on the host and then run by the Docker daemon. Because D2C configures the environment automatically, we don't recommend adding your current development boxes as managed hosts, to prevent configuration conflicts.
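As a rough illustration of what happens on the host, the image build resembles an ordinary Dockerfile build. This is a hedged sketch, not the actual files D2C generates:

```dockerfile
# Hypothetical sketch of the kind of image built locally on a managed host.
FROM node:6-alpine           # the base OS/runtime image is downloaded first
RUN apk add --no-cache curl  # packages are installed and baked into the image
WORKDIR /app                 # source code and data are mounted in separately
CMD ["node", "index.js"]     # the command used to start the application
```

The key point is that the build happens on the managed host itself, so no pre-built image registry is required on your side.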
D2C lets you span your project infrastructure across multiple hosts and even different cloud providers. We use a secure, private overlay network for this purpose. Currently we use the Weave Net solution from Weaveworks for Docker, but we are considering moving to the new Docker Swarm mode once it matures. The network is decentralized and doesn't require any dedicated hosts or configuration storage to operate. Weave Net is installed as a Docker plugin (it creates several containers of its own) and forms an encrypted mesh network between your hosts. For reliable operation, it needs TCP/UDP ports 6783 and 6784 to be accessible in both directions.

By default, application containers are started inside this private network and have dynamically assigned local IP addresses. Apps can reference each other by service name. It doesn't matter which host an app is running on: all private-network communication is transparent to applications.
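Service-name discovery works the way it does in any Docker overlay network. A hypothetical two-service layout, shown in Compose terms for illustration only (D2C generates its own configuration, and the names `my-node` and `mongo` are examples):

```yaml
services:
  mongo:
    image: mongo:3.4
  my-node:
    image: my-node-app
    environment:
      # The app reaches the database by service name,
      # regardless of which host the database runs on.
      MONGO_URL: "mongodb://mongo:27017/mydb"
```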

For applications that need to be reachable from the Internet, port publishing through the host server is enabled. D2C creates public domain names for such services automatically. For example, if you publish the my-node app on port 3000, you can access it at an automatically generated public domain.
Advanced users can run containers in the host network context to gain more flexibility.
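In Compose terms, port publishing and the host-network alternative look roughly like this (a sketch; D2C manages this configuration for you):

```yaml
services:
  my-node:
    image: my-node-app
    ports:
      - "3000:3000"   # publish the app's port through the host server
    # Advanced alternative: share the host's network stack directly
    # instead of the overlay network (cannot be combined with `ports`):
    # network_mode: host
```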
Persistent data
Containerized applications should be lightweight, disposable, and easily replaceable. For this reason, D2C separates the application itself from its data: Docker volumes are used to store persistent data, and the data is stored locally on the hosts. If you provision your hosts with D2C at Amazon EC2, it creates a separate EBS partition for container data, which you can later enlarge or replace from your dashboard.
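Conceptually, this is the standard Docker named-volume pattern (a sketch; D2C creates and mounts the volumes for you):

```yaml
services:
  mongo:
    image: mongo:3.4
    volumes:
      - mongo-data:/data/db   # persistent data lives outside the container
volumes:
  mongo-data: {}              # survives container rebuilds and replacements
```

Because the container and its volume are separate objects, the container can be rebuilt or replaced without touching the data.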

If you need to move a service/app to another host, you can do so with our container migration feature; it moves all your persistent data to the other host as well.
Deployment process
D2C uses Ansible playbooks to deploy applications. Settings from the service configuration pages are passed as parameters when the playbooks are executed. We maintain a set of playbooks for environment configuration and for the different application types. You choose the application type, provide some essential parameters, supply the source code and/or deployment scripts, and start the deployment process. The process is logged task by task, and you can review it in your dashboard. For further details about the deployment process, see the "Deployment: In depth" section.
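The idea is comparable to a plain Ansible playbook parameterized with variables. This is a hedged, heavily simplified sketch; D2C's actual playbooks are internal, and the variable names here (`app_name`, `app_port`) merely stand in for values taken from the service configuration page:

```yaml
- hosts: managed_hosts
  become: true
  vars:
    app_name: my-node
    app_port: 3000
  tasks:
    - name: Deliver application files to the host
      copy:
        src: "build/{{ app_name }}/"
        dest: "/opt/{{ app_name }}/"
    - name: Build the container image locally on the host
      command: docker build -t "{{ app_name }}" "/opt/{{ app_name }}"
    - name: Run the container
      command: docker run -d --name "{{ app_name }}" -p "{{ app_port }}:{{ app_port }}" "{{ app_name }}"
```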
Application logs
D2C collects containerized applications' stdout into its own database. You can browse the logs at any time in your dashboard. Keep in mind that your application should print logs to standard output so that the Docker daemon can capture them and forward them to our log facility.
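For instance, instead of writing to a log file, an application can emit structured lines on stdout. A minimal sketch in Python (the field names are illustrative):

```python
import json
import time


def log(level, message):
    """Print one structured log line to stdout.

    Docker captures a container's stdout/stderr streams, so anything
    printed here reaches the log facility; flush avoids buffering delays.
    """
    record = {"ts": time.time(), "level": level, "msg": message}
    print(json.dumps(record), flush=True)


if __name__ == "__main__":
    log("info", "server started")
    log("error", "connection to database lost")
```

Writing to stdout rather than to files inside the container also keeps the container disposable, in line with the persistent-data model above.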
Monitoring
You can inspect resource utilization graphs in your dashboard. This data is collected by Telegraf daemons running on the managed hosts. You can monitor CPU/memory/disk usage per host, or per individual application (container).
Deployment: In depth
D2C uses three deployment phases: building, deploying, and running. D2C may execute all three in sequence, or only the one that is needed. For example, you may want to rebuild a base container with an upgraded Node release and run it with the same source code and data, or update the source code and reload your application to pick up the changes.

Build
This phase builds the container image for your application. It includes downloading the base Docker OS image for the application, then installing and configuring the necessary packages. There is no access to the application sources in this step. All modifications made in this step are stored inside the container image; no data is saved to an external volume. Treat this step like preparing a server to run your application. Use installCommands to run commands in this step.

Deploy
This phase prepares your application to run. During this step you have access to the application source code and the data volume, but no changes are saved to the container image. Use this step, for example, to compile CSS, minify JS, install local code dependencies, perform the initial database population, etc.

The deployment process runs in a temporary container with its own copy of the source code but the same data volume. Afterwards, the source code volume from this temporary container is placed into the main container. This allows near-zero-downtime deployment: a new version of the code is prepared while the previous one is still running; at the same time, if you need to migrate the current database to a new version, you have access to the live data volume. Depending on your preferred deployment process, you may want to stop the current application first (in the case of a DB migration), or keep it running and simply swap the source code once preparation in the temporary container is complete. Use deployCommands to run commands in this step.

After the deployment process completes, the temporary container is removed. In this step you may therefore change only the source code or the data volume; any changes made elsewhere (e.g., installing npm packages globally) will be lost. Use the build step for container-wide changes.

Run
In this step, the app container is started with the source code and data volumes mounted. Networking is configured, ports are published, and DNS is set up. The container image created during the build step is used to spin up the container. Use startCommand to start your application.

You usually need to modify startCommand only for custom applications (Node, Python, etc.). Standard applications provided by D2C, such as database engines and web servers, use predefined commands.
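Putting the three phases together, a service configuration conceptually boils down to something like this. The parameter names installCommands, deployCommands, and startCommand are described above; the values are hypothetical examples, and the exact format of D2C's configuration pages differs:

```yaml
installCommands:        # build phase: baked into the image, no sources yet
  - apt-get update
  - apt-get install -y imagemagick
deployCommands:         # deploy phase: sources and data volume available
  - npm install
  - npm run build-css
startCommand: node index.js   # run phase: starts the application container
```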

Interacting with your application
You can execute a command inside a running application container. For example, you may want to compress your data when your application doesn't support this itself but a package inside the container does. Use execCommand to achieve this.
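For the compression case above, a hypothetical execCommand might be a single shell invocation executed inside the container (the paths here are purely illustrative):

```yaml
execCommand: tar -czf /data/archive.tar.gz /data/exports
```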

Video guide
The video walks through the following case: starting a project on one host with Node.js, MongoDB, and NGINX, then migrating Node.js and MongoDB to another provider and scaling them across three hosts.
Our roadmap
Please write to us if you would like certain features sooner than our plan provides.
Q1 - Q2 2017
Database import (done)
One-click Master-slave Replication (done)
One-click PostgreSQL Replication (done)
Deploying own docker images (done)
Database management for: MySQL, MongoDB, PostgreSQL
Notifications by email
Providers: Azure
Q3 - Q4 2017
Continuous Delivery
Backups for any service
Notifications in Slack
Advanced health check
Providers: Google
One-click Crate multi-node
QueueManager: RabbitMQ
Database: RethinkDB
Apps on: Java, Scala, Groovy, Erlang
Q1 - Q2 2018
Reverse Proxy: Varnish
Search: Solr
Apps on: Lua, Kotlin, Haskell, Clojure
Online Code Editor
Enterprise Edition
Feel free to contact us
Your opinion is very important to us.