I have been working on an Angular 4 (since upgraded to 6) application backed by a Spring Boot REST API. Repositories:
1- api: https://github.com/blabadi/nutracker-api
2- angular: https://github.com/blabadi/nutritionTracker
This post is about adding Docker to these apps to make them easier to deploy and share.
Disclaimer: this is not by any means a best-practices article; it was more of a hands-on way to learn about the Docker workflow and the challenges one may face.
Part 1: Dockerizing the backend project (nutracker-api)
I was testing this process on an Amazon micro instance, and these are the steps I followed:
- install Docker on the AWS instance:
1- add a Dockerfile (pointing to a pre-built jar for now). Reference: https://www.callicoder.com/spring-boot-docker-example/
This Dockerfile is simple:
- gets a JDK image
- copies our pre-built jar from the build output directory
- starts the jar (which is a Spring Boot jar that starts an embedded web server)
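The steps above can be sketched as a minimal Dockerfile (the base image tag and the jar path are assumptions based on a default Gradle build, not the exact ones from the repo):

```dockerfile
# Base image with a JDK
FROM openjdk:8-jdk-alpine

# Copy the pre-built Spring Boot jar from the gradle build output
COPY build/libs/nutracker-api.jar app.jar

# Start the jar; Spring Boot brings up its embedded web server
ENTRYPOINT ["java", "-jar", "/app.jar"]
```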
To understand the details of the Dockerfile directives, see the reference above.
2- install docker-compose. Reference: https://magescale.com/installing-docker-docker-compose-on-aws-linux-ami/
Docker Compose helps build and run multiple containers with one command.
In my case I needed to run my Spring Boot jar and have it connect to a MongoDB instance (also hosted in a Docker container).
3- add a docker-compose file to include MongoDB. Reference: https://medium.com/statuscode/dockerising-a-node-js-and-mongodb-app-d22047e2806f
The compose file does the following, in order:
1- prepares and runs a container for mongo by using the mongo image
2- mounts a host path to the container path to persist the db data
3- builds a container from the Dockerfile located in the same directory
4- exposes the api on container port 8080
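A compose file covering those four steps could look like this (service names, the host data path, and the `MONGO_HOST` variable name are assumptions for illustration, not the exact ones from the repo):

```yaml
version: '3'
services:
  mongo:
    image: mongo              # 1- run a container from the mongo image
    volumes:
      - ./data/db:/data/db    # 2- mount a host path to persist the db data

  api:
    build: .                  # 3- build from the Dockerfile in this directory
    ports:
      - "8080:8080"           # 4- expose the api on port 8080
    environment:
      - MONGO_HOST=mongo      # pass the mongo container name to the api
```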
Note that we pass the mongo container name as an environment variable to the api container, which uses that variable to configure the Mongo client (MongoConfig.java).
You may find people adding a 'links' key to the docker-compose file (or when they run the container), but that is an older way to do it. In my example I'm utilizing the network Docker creates by default when you run a compose file; on that network you can connect to containers by their name out of the box.
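On the API side, picking up that container name boils down to reading the environment variable and building a Mongo connection string from it. A plain-Java sketch (the `MONGO_HOST` name and the fallback are assumptions; the real lookup lives in MongoConfig.java in the repo):

```java
public class MongoUri {
    // Build the connection string from the env var value,
    // falling back to localhost for non-docker runs.
    static String buildUri(String envHost) {
        String host = (envHost == null || envHost.isEmpty()) ? "localhost" : envHost;
        return "mongodb://" + host + ":27017";
    }

    public static void main(String[] args) {
        // Inside the compose network, MONGO_HOST resolves to the mongo container
        System.out.println(buildUri(System.getenv("MONGO_HOST")));
    }
}
```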
So in summary, for the backend project I did:
1- create a Dockerfile to run the Spring Boot jar based on a JDK image
2- create a docker-compose file to spin up a mongo container and build the image from the Dockerfile in step 1
3- run the commands in the following sequence:
a) build the jar (the -x skips the tests): ./gradlew build -x test
b) docker-compose build: builds the images
c) docker-compose up: runs the containers
The three steps above are what you do whenever you change your code and want to re/deploy it somewhere.
It seems straightforward, but I faced a few issues before boiling the process down to these steps and learning from my mistakes.
Notes/hints related to this part:
- to stop all running docker containers:
docker stop $(docker ps -a -q)
- docker-compose up:
does not rebuild the images; if you change your code or the Dockerfile, running up alone won't pick up the new jar automatically
- docker-compose build:
rebuilds our java app (nutracker-api) image
- to access the api from a browser/REST client remotely I had to use:
ports: with a host mapping. Exposing the container port alone was not enough to publicly access the docker containers, but it can be enough for local communication between containers on the same docker network.
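The difference can be illustrated with a small compose fragment (service name is an assumption):

```yaml
services:
  api:
    expose:
      - "8080"        # container-to-container only, on the compose network
    ports:
      - "8080:8080"   # host:container mapping, reachable from outside the host
```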
- There are Gradle and Maven plugins that automate the docker image build and push to Docker Hub as part of your build steps, which is a more practical solution and saves you from having to build the code on the host machine as I had to do. I just opted out of doing that to keep things simple.
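As one example of such a plugin, Google's Jib can build and push the image straight from the Gradle build (this is my suggestion, not what the repo uses; the image name is a placeholder):

```groovy
// build.gradle sketch using the Jib plugin; no local docker daemon
// or pre-built jar on the host is needed.
plugins {
    id 'com.google.cloud.tools.jib' version '3.4.0'
}

jib {
    to {
        // push target on Docker Hub; replace with your own repository
        image = 'docker.io/<your-user>/nutracker-api'
    }
}
```

With that in place, `./gradlew jib` builds the image and pushes it in one step.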
Part 2: Dockerizing the Angular app will follow in the next blog post.