Docker for local development

With a move to multiple services, or even with a monolith that has several services as dependencies (RabbitMQ, Redis and PostgreSQL for example), the number of services to keep running in a development environment tends to make things complicated.

Multiple issues tend to appear as both the number of those services and the number of team members grow:

  • different people have different versions of those services
  • different people have different ways to start them
  • onboarding a new team member starts to become a long list of patches and conditions

One solution is to rely on Docker and docker-compose for each bounded context (service). This article will cover this approach in the case of two Ruby applications: one being the main application, the other a supporting service that provides information critical to the first application's work.
We will see how to work with docker and docker-compose for a single service, but also for a fleet of multiple services.

Docker is a containerisation solution that allows you to build images and run them as "containers" on your machine, with a low overhead compared to virtual machine solutions. Public images for many services are available from the Docker Hub and other public registries.

Docker-compose is a command built on top of Docker that allows you to manage a fleet of containers based on a configuration file. That file contains a list of services and the options to start them, including either an image name or the path to a Dockerfile from which the container image can be built.

A simple Rails application

We will first get ourselves ready with a Ruby on Rails application. It will rely on PostgreSQL for the main database and on Redis for background jobs.

We assume you already have the rails gem installed on your system, preferably through a version manager such as asdf.

$> rails new backend -T -d postgresql
# -T : skip the setup of a testing framework
# -d : set the database adapter (postgresql here)

Once that is done we add RSpec, by adding the following line to the Gemfile inside its group :development, :test block:

    gem 'rspec-rails'
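
The block ends up looking like this (any other gems rails new generated in that group stay as they are):

group :development, :test do
  # Gems generated by `rails new` stay here as-is.
  gem 'rspec-rails'
end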

This can be followed by

$> rails generate rspec:install

That command will create a spec folder and add the spec helpers we need.
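
Its output should look roughly like this:

      create  .rspec
      create  spec
      create  spec/spec_helper.rb
      create  spec/rails_helper.rb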

docker-compose for postgresql

We now want to configure the database instance for our local environment. It's time to write our first docker-compose.yml file.

version: "3.7"
services:
  postgres:
    image: postgres:13.2
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
    volumes:
      - data-postgres:/var/lib/postgresql/data
volumes:
  data-postgres:
    driver: local

We have a few important points here, first within the postgres section:

  • this section defines the postgres service; it could be named database or flux-capacitor, the name doesn't matter
  • image: the name and tag of the image we want to use
  • ports: the mapping of ports between the host and the container. The two sides of the ':' don't have to match; here the right side is the port postgresql listens on by default inside the container, and the left side is the one exposed on the host.
  • environment: the list of environment variables that will be defined in the container when it is started
  • volumes: a list of data volumes that will be mounted within the container when it starts. These volumes are not destroyed between runs of the container.

And then we have a volumes section at the end of the file, defining how the volume used by the postgres service is handled. Here we don't define a specific path; Docker will figure one out on its own.
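
Once the stack is up (see the next step), we can ask Docker where that volume actually lives; docker volume inspect prints a Mountpoint field with the host path. The volume name is prefixed with the folder name, as the output of the next command will show:

$> docker volume inspect backend_data-postgres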

We can then start the container and check that it's running:

$> docker compose up -d
Creating network "backend_default" with the default driver
Creating volume "backend_data-postgres" with local driver
Pulling postgres (postgres:13.2)...
13.2: Pulling from library/postgres
fcad0c936ea5: Pull complete
...
c7c8064b7a1a: Pull complete
Digest: sha256:0eee5caa50478ef50b89062903a5b901eb818dfd577d2be6800a4735af75e53f
Status: Downloaded newer image for postgres:13.2
$> docker compose ps
NAME                 SERVICE             STATUS              PORTS
backend_postgres_1   postgres            running             0.0.0.0:5432->5432/tcp, :::5432->5432/tcp

We can see the container is started; its name is prefixed with "backend", which is the name of the current folder. The "PORTS" column tells us which ports are exposed on the container: port 5432 of the host machine is mapped to port 5432 of the container.
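
To double check the database is reachable, we can open a psql session through the container itself, which avoids needing a postgresql client on the host:

$> docker compose exec postgres psql -U postgres -c 'SELECT version();'

This should print the PostgreSQL 13.2 version string.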

We need to adjust the database configuration in the rails application:

# config/database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  # For details on connection pooling, see Rails configuration guide
  # https://guides.rubyonrails.org/configuring.html#database-pooling
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: 0.0.0.0
  port: 5432
  username: postgres
  password: postgres

No need to touch the development or test sections; we can update the default one. The production settings should be given through an environment variable, so we don't have to worry about that part here.
All those values (host, port, username, password) come from the docker-compose.yml file and from what we know of our workstation.
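
For example, the production section can read the whole connection string from the environment. This is a sketch assuming a conventional DATABASE_URL variable:

production:
  <<: *default
  url: <%= ENV["DATABASE_URL"] %>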

database use

We can now use that database and proceed with ensuring the application is properly linked up by creating the development and test databases.

$> bundle exec rails db:create
Created database 'backend_development'
Created database 'backend_test'
$>

That looks good.

a first conclusion

This is basically how those service dependencies can be handled. We can expand on this with more services such as Redis or Elasticsearch, or anything else that has a docker image or a way to build one.

Now, when a team member sets up a new computer, there is no need to say "oh, you need to install PostgreSQL 13.2" or to keep such a section in your onboarding documentation or script.
The team member can clone the repository and, within that clone, run docker-compose up -d. This will download and start the right version of PostgreSQL with the right configuration.

And it will be the same with any other service we want to have our application depend on.

Beyond the simple Rails app

Scaffolding a model and controller

Let's now say that we have a model and a controller in our application.

We can add a simple Book model and the controller to handle it with the scaffold generator:

$> bundle exec rails g scaffold Book title:string author:string description:string

This is rather lazy but will allow us to focus on what we actually want to do: working with services.

At this step, as we want to start the rails app through a web server, we will need to run rails webpacker:install.

We can start the application with rails s, and we can use our web browser to open up http://localhost:3000/books.

We can play around: create and delete books, see their list, and so on.

Background jobs

We want to make things a bit more complex now by using background jobs. One of the popular ways to do this within Rails applications is to rely on Sidekiq.

We can add the sidekiq gem to the Gemfile :

gem 'sidekiq'

And run bundle install to install the gem and its dependencies.

We now require an instance of Redis to run locally. Let's update our docker-compose.yml file for this.

  redis:
    image: redis:6.2.5
    ports:
      - "6379:6379"

This entry is pretty small, but that's enough.

As we are matching the default configuration (port and so on), Sidekiq should connect to it without issue; no need to worry about configuration for this example.
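
If the Redis location ever differs from that default, a small initializer can make it explicit. This is optional here, and the REDIS_URL variable name is just a common convention, not something the app requires:

# config/initializers/sidekiq.rb
# Optional: Sidekiq already defaults to redis://localhost:6379.
Sidekiq.configure_server do |config|
  config.redis = { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0') }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0') }
end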

We can ensure our app has a sidekiq web interface by following https://github.com/mperham/sidekiq/wiki/Monitoring#web-ui .
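
Following that guide, the routes file ends up looking roughly like this (resources :books comes from the scaffold):

# config/routes.rb
require 'sidekiq/web'

Rails.application.routes.draw do
  mount Sidekiq::Web => '/sidekiq'
  resources :books
end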

Open up two other terminals and go to the application folder to run rails s in one and sidekiq in the other. The first one starts the application (skip it if you already have it running in another terminal, but you will need to restart it). The second one starts a Sidekiq worker.

Now if you go to http://localhost:3000/sidekiq/ you will see the dashboard of Sidekiq jobs. The http://localhost:3000/sidekiq/busy page lists the workers and jobs currently running. If you stop the worker with Ctrl-C in its terminal and reload that page, you will see it disappear.

Let's now add a simple worker and trigger it when a book is added.

We add a file in app/workers/notify_worker.rb :

class NotifyWorker
  include Sidekiq::Worker

  def perform(book_name, author_name)
    Rails.logger.info "#{book_name} by #{author_name} was added."
  end
end

This is pretty useless but we still want it done.

Let's add the trigger for that job in the controller's create action, upon success.

  def create
    @book = Book.new(book_params)

    respond_to do |format|
      if @book.save
        NotifyWorker.perform_async(book_params[:title], book_params[:author])
        format.html { redirect_to @book, notice: "Book was successfully created." }
        format.json { render :show, status: :created, location: @book }
      else
        format.html { render :new, status: :unprocessable_entity }
        format.json { render json: @book.errors, status: :unprocessable_entity }
      end
    end
  end

Note the line that has been added:

NotifyWorker.perform_async(book_params[:title], book_params[:author])

This will queue the job and the worker will pick it up.
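
We can also queue a job by hand from a rails console to check the whole loop; the arguments here are arbitrary examples:

$> rails console
> NotifyWorker.perform_async('Dune', 'Frank Herbert')

The Sidekiq process should pick the job up, and the "Dune by Frank Herbert was added." line should appear in its output (or in log/development.log, depending on how the logger is set up).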

Conclusion

This was a bit of extra fluff, but it lets us see a few points that are quite interesting here.

First, we see how to add yet another service dependency to our fleet in the docker-compose.yml file. Then, we see that, just like the rails app, we start the worker by hand.

The reason we don't use the docker-compose file to start those two processes is that they depend directly on the code we are working on and bound to modify. Thus, it's more practical to start and run them by hand. We will now see how that's not practical for developers working on another service that relies on this one.

A second Ruby app

We now want to see how to work with another Ruby application, and from the point of view of a separate team.

This service will provide a book information database, allowing users to find a book's title, author name and description from an ISBN identifier.

We create a simple Sinatra app for this case.

A crude application

We mainly want to mock such a service, as ISBN databases might be a bit ... big.

So we use a very simple Sinatra application:

# app.rb
require 'sinatra'
require 'json'

ISBNS = {
  '2-7654-1005-4': {
    title: 'Lucy',
    author: 'Renard',
    description: 'A book about stars.'
  },
  '2-7754-1105-4': {
    title: 'Mountains',
    author: 'John Smith',
    description: 'A book about plains.'
  },
}

get '/' do
  'Hello world!'
end

get '/isbn/:isbn' do
  isbn = params[:isbn].to_sym

  if ISBNS.keys.include?(isbn)
    response = ISBNS[isbn].to_json
    status 200
    body response
  else
    response = { error: 'Not found' }.to_json
    status 404
    body response
  end
end

The rest of the code is in the example repository.

This will allow us to make a simple request to the service to get the details of a book based on its isbn identifier.

We can test it by visiting http://0.0.0.0:3001/isbn/2-7654-1005-4 in a web browser.
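
Or from a terminal with curl; given the mocked data above, the response is deterministic:

$> curl http://0.0.0.0:3001/isbn/2-7654-1005-4
{"title":"Lucy","author":"Renard","description":"A book about stars."}

An unknown ISBN returns a 404 with {"error":"Not found"} as the body.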

What about Docker here?

This application doesn't require a database, as we are using a constant with very limited, mocked data. But we do want to be able to run this service easily and send requests to it when we run the main backend.

We could simply start the application using the bin/http script provided. But that would require the team working on the backend service to know that. It's best if that dependency is started through docker-compose too.
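
For reference, a bin/http script along these lines would do the job. This is a hypothetical sketch; the real script is in the example repository:

#!/bin/sh
# Start the classic Sinatra app, bound on all interfaces.
# PORT comes from the environment (docker-compose will set it to 3001).
bundle exec ruby app.rb -o 0.0.0.0 -p "${PORT:-3001}"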

So we need a Dockerfile to define how to build and start this service.

FROM ruby:3.0.2

RUN mkdir /var/app

WORKDIR /var/app

COPY . .

RUN bundle install

CMD ["sh", "bin/http"]

We can then build the image to try it out :

$> cd isbn-search
$> docker build -t isbn-search-test .
...
$> docker run -p "3001:3001" -ti isbn-search-test

We can then visit http://0.0.0.0:3001/isbn/2-7654-1005-4 again and see it works.

And how does that help us?

Let's go back to the backend now.

We will modify the form and controller so that we only enter an ISBN and we get all the book details from our isbn-search service.

To keep things clean we will write a small API client library in lib/.

We add the excon gem to our Gemfile and install it. Then we add a line to our application.rb file, in the config section, to ensure the content of the lib folder is loaded.

config.autoload_paths += %W(#{config.root}/lib)

And then we can create the client file in lib/internal/isbn_search/client.rb:

module Internal
  module IsbnSearch
    class Client
      def initialize
        @port = ENV.fetch('ISBN_SEARCH_PORT', 3001)
        @host = ENV.fetch('ISBN_SEARCH_HOST', '0.0.0.0')
        # No trailing slash here: url_for already adds one between path and isbn.
        @path = 'isbn'
      end

      # Returns the parsed book attributes, or nil when the ISBN is unknown.
      def get(isbn)
        response = Excon.get(url_for(isbn))
        return nil unless response.status == 200

        # symbolize_names so callers can use book_info[:title] and friends.
        JSON.parse(response.body, symbolize_names: true)
      end

      private

      def url_for(isbn)
        "http://#{@host}:#{@port}/#{@path}/#{isbn}"
      end
    end
  end
end

This is a very simple HTTP API client. If we get a 200 response we parse the body and return it, with symbolized keys so callers can use book_info[:title] and friends. Otherwise we return nil.
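
Assuming the isbn-search service is running on port 3001, using the client from a rails console looks like this:

$> rails console
> Internal::IsbnSearch::Client.new.get('2-7654-1005-4')
=> {:title=>"Lucy", :author=>"Renard", :description=>"A book about stars."}
> Internal::IsbnSearch::Client.new.get('0-0000-0000-0')
=> nil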

We can now use this within the controller to get the details of the book, and we can update the form.

<%= form_with(model: book) do |form| %>
  <% if book.errors.any? %>
    <div id="error_explanation">
      <h2><%= pluralize(book.errors.count, "error") %> prohibited this book from being saved:</h2>

      <ul>
        <% book.errors.each do |error| %>
          <li><%= error.full_message %></li>
        <% end %>
      </ul>
    </div>
  <% end %>

  <div class="field">
    <%= form.label :isbn %>
    <%= form.text_field :isbn %>
  </div>

  <div class="actions">
    <%= form.submit %>
  </div>
<% end %>

And the controller needs to be tailored as well. One detail the scaffold didn't give us: the Book model needs to respond to isbn for the form above to render, and a simple attr_accessor :isbn is enough since we never persist that value.

class BooksController < ApplicationController
  before_action :set_book, only: %i[ show edit update destroy ]

  # GET /books or /books.json
  def index
    @books = Book.all
  end

  # GET /books/1 or /books/1.json
  def show
  end

  # GET /books/new
  def new
    @book = Book.new
  end

  # GET /books/1/edit
  def edit
  end

  # POST /books or /books.json
  def create
    @book = Book.new(book_info)

    respond_to do |format|
      if @book.save
        NotifyWorker.perform_async(book_info[:title], book_info[:author])
        format.html { redirect_to @book, notice: "Book was successfully created." }
        format.json { render :show, status: :created, location: @book }
      else
        format.html { render :new, status: :unprocessable_entity }
        format.json { render json: @book.errors, status: :unprocessable_entity }
      end
    end
  end

  # PATCH/PUT /books/1 or /books/1.json
  def update
    respond_to do |format|
      if @book.update(book_info)
        format.html { redirect_to @book, notice: "Book was successfully updated." }
        format.json { render :show, status: :ok, location: @book }
      else
        format.html { render :edit, status: :unprocessable_entity }
        format.json { render json: @book.errors, status: :unprocessable_entity }
      end
    end
  end

  # DELETE /books/1 or /books/1.json
  def destroy
    @book.destroy
    respond_to do |format|
      format.html { redirect_to books_url, notice: "Book was successfully destroyed." }
      format.json { head :no_content }
    end
  end

  private
  # Use callbacks to share common setup or constraints between actions.
  def set_book
    @book = Book.find(params[:id])
  end

  def isbn_params
    params.require(:book).permit(:isbn)
  end

  def book_info
    # Memoized so a single request only calls the IsbnSearch service once.
    @book_info ||= Internal::IsbnSearch::Client.new.get(isbn_params[:isbn])
  end
end

We have mostly changed the create and update actions, replaced the book_params method with an isbn_params one, and added a book_info method that calls the IsbnSearch service.

Wiring it up with docker-compose

Now that our backend service relies on the IsbnSearch one, we need a way to start that service easily.
As we have a Dockerfile for the IsbnSearch service, we can add the following section to the backend's docker-compose.yml file (the build path assumes both repositories are cloned side by side):

  isbn_search:
    build: ../isbn-search/
    environment:
      - PORT=3001
    ports:
      - "3001:3001"

And then build and start it :

$> docker-compose up -d
Docker Compose is now in the Docker CLI, try `docker compose up`

Building isbn_search
[+] Building 2.1s (10/10) FINISHED                                                                                                                          
 => [internal] load build definition from Dockerfile  
 ...
backend_redis_1 is up-to-date
backend_postgres_1 is up-to-date
Creating backend_isbn_search_1 ... done
$>

Now we can try it out: head to http://0.0.0.0:3000/books, click on "New Book", type in "2-7654-1005-4", click on "Create Book", and the book will be created with the details fetched from the isbn-search service.

Conclusion

We now have a setup that is much closer to a real day-to-day one. We have one backend service relying on multiple services, both for storage and for additional needs. All those dependencies can be built and started repeatedly through one simple command.

Epilogue

A similar approach could be used to allow the developers working on the isbn_search service to start the backend as a docker container, including its storage dependencies, so that they can work and test their changes against a running instance of it.

If you are interested in knowing how to do that, we can organise a pairing session or a workshop.
Contact us directly at contact@imfiny.com for details and prices.
