Improving development performance using Docker

In this post I will share our experience using Docker to improve performance in our development process on the Symphonica Service Activator project.

When I started working on the Symphonica project, I noticed that the whole development team was sharing the same database instance. This is a pattern I’ve seen in most development teams throughout the company, and it leads to several synchronization problems among team members.

The following are examples of situations that arise when multiple teams share resources:

  • Having to wait for resources to become available
  • Having to configure the same resources multiple times
  • Having to debug errors or system malfunctions
  • Having to ask for new resources

Most of these problems fall into one of two categories:

  • Team synchronization
  • Limited resources

Team synchronization

In this particular project we have many different teams. One of them is the development team, which is divided into three subteams: the C++ team, the PHP team, and the Java team. Aside from the development team, we also have a testing and automation team and a configuration team. All of these teams work against the same database instance.

Every team works at its own pace, developing and testing features at different times, so sharing one database instance is a big issue. If one team has to change the database schema, the change needs to be backward compatible; otherwise another team’s components or the testing environment could break. This forces every team to evolve in a synchronized manner, which requires a huge effort and is very time consuming. It gets even worse when you consider that requirements generally aren’t clear at the earlier stages of development, so each new feature implies many changes, rollbacks, and improvements until the final solution is reached. Even taking all these considerations into account and being extra careful, errors can still occur and lead to a system malfunction.

Limited resources

Many of the problems described above could be avoided by setting up a new virtual machine for every team. But even with this approach, the same problems could arise between two developers on the same team.

Now consider that you have to support several versions of the same product, or that you want to reproduce production environments in your own lab. Having one virtual machine for every scenario is impossible, or at least would require far more resources than we have. Asking the IT department to create several VM instances isn’t an option, and neither is having every developer set up their own instances.

The light at the end of the tunnel

The best scenario we could imagine is every developer managing their own isolated environment: create a database instance locally, make all the required modifications, test the features being developed, commit the final version of the features, and finally destroy the database instance and start over. If all of this could be done in a lightweight, fast way, it would be an amazing improvement that would save us a lot of time and help us avoid several errors.

The solution we’ve found to most of the aforementioned problems is Docker.

Docker allows us to build images containing our database schema and distribute them, so every developer is able to run their own database instance. Since these images run as Linux containers, they require fewer resources (memory, CPU, and storage) than virtual machines, and developers should be able to run them even on their notebooks.

Implementing the new workflow

To implement this solution, we need to package the database in what the Docker community calls an image and store these images in a repository, a Docker registry, so that all developers can access the different database versions and run database instances as Linux containers.
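As a rough sketch of that publishing step (the image name and registry hostname below are hypothetical, and the Dockerfile that installs the schema is project-specific and not shown), building and pushing a database image might look like:

```shell
# Build a database image from the project's Dockerfile in the current directory
docker build -t symphonica-db:1.0 .

# Tag the image for the internal registry (hostname is a placeholder)
docker tag symphonica-db:1.0 registry.example.com/symphonica-db:1.0

# Push it so every developer can pull the exact same version
docker push registry.example.com/symphonica-db:1.0
```

Versioned tags like `1.0` are what let us keep images around for every supported product version instead of maintaining long-lived VMs.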

From the developer’s point of view, we just need to start a new container using a command like the following:

docker run -d --rm -p 1521:1521 <database-image>

This command will download the database image (if it isn’t already present) and create a new container, so we can work without worrying about affecting anyone else’s progress. Once we are finished working on our feature, the database instance can simply be deleted.
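The full lifecycle of a disposable per-developer instance might then look like this (container and image names are hypothetical examples):

```shell
# Start a throwaway database container in the background;
# --rm removes the container automatically once it stops
docker run -d --rm --name my-feature-db -p 1521:1521 \
    registry.example.com/symphonica-db:1.0

# ...develop and test against localhost:1521...

# When the feature is done, stop the container; --rm cleans it up,
# and a fresh instance can be started from the same image at any time
docker stop my-feature-db
```

Because every run starts from the same image, a broken schema is never a problem: throw the container away and start a clean one in seconds.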


With the proposed approach, each of us can manage our own database schemas and our own datasets. We no longer have to worry about whether the database schema we need is available, whether somebody will make a change that breaks the environment we are working in, or whether we will break somebody else’s environment. Creating a new database schema from scratch takes just a few seconds, so we can keep our focus on developing new features. All these changes help us boost our software development performance and quality by avoiding context switching and idle time.

About the author

My name is Julián Lastiri. I am a project manager and development manager on the Symphonica and SAMOD teams.
