Run PostgreSQL in Docker

To run PostgreSQL on Docker, first create a directory to keep the data persistent
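The original command is not preserved here; a likely equivalent, using a hypothetical data directory:

    mkdir -p ~/postgres-data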

Run the docker container
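A sketch using the official postgres image; the container name, credentials and host port are assumptions:

    docker run -d \
      --name postgres \
      -e POSTGRES_DB=mydb \
      -e POSTGRES_USER=myuser \
      -e POSTGRES_PASSWORD=secret \
      -v ~/postgres-data:/var/lib/postgresql/data \
      -p 5432:5432 \
      postgres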

In the above command, change the values of POSTGRES_DB, POSTGRES_USER and POSTGRES_PASSWORD as needed.

Connect to PostgreSQL server

To connect to the PostgreSQL server, run
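Assuming the container is named postgres as in the sketch above:

    docker exec -it postgres bash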

Now you are inside the PostgreSQL container. To log in, run
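Again assuming the database and user from the sketch above:

    psql -h 127.0.0.1 -U myuser mydb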

It will ask for the password. Once you enter it, you will be at the PostgreSQL command line.


Docker Compose: start container on boot

I have a docker container that I need to start on server boot.

The docker-compose.yml file I used to create this docker container is
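The original file is not preserved here; a minimal stand-in with a hypothetical nginx service to illustrate the change:

    version: '2'
    services:
      web:
        image: nginx
        ports:
          - "8080:80"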

With the above docker-compose.yml file, I have to start the docker container manually after every server reboot.

To make it auto-start, add the line
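That is the Compose restart policy, placed under the service definition:

    restart: always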

Here is the modified docker-compose.yml file.
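Continuing the stand-in example from above:

    version: '2'
    services:
      web:
        image: nginx
        restart: always
        ports:
          - "8080:80"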

You need to recreate the docker container based on this docker-compose.yml file.

Change to the folder where your docker-compose.yml file is located.

Run
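The exact command is not preserved; recreating the container would look like:

    docker-compose up -d --force-recreate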

docker-compose: error while loading shared libraries

When running docker-compose on CentOS 7, I get the following error
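The message in the original post is not preserved; on CentOS 7 this failure typically reads:

    docker-compose: error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted

It happens because docker-compose is a PyInstaller bundle that unpacks shared libraries into TMPDIR (normally /tmp) and executes them, which fails when /tmp is mounted noexec.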

To fix the error, do the following
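Assuming docker-compose is installed at /usr/local/bin/docker-compose, first move the real binary aside so a wrapper script can take its place:

    mv /usr/local/bin/docker-compose /usr/local/bin/docker-compose-bin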

Now create a new file
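The wrapper goes in place of the binary just moved (the path is an assumption):

    /usr/local/bin/docker-compose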

Add the following content
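A minimal wrapper that redirects TMPDIR to an exec-capable location before launching the real binary:

    #!/bin/bash
    # docker-compose unpacks itself into TMPDIR at startup; point it
    # at a directory on a filesystem mounted without noexec.
    export TMPDIR=/opt/tmp
    exec /usr/local/bin/docker-compose-bin "$@"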

Make it executable
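Using the assumed wrapper path from above:

    chmod +x /usr/local/bin/docker-compose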

Create temp folder
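This must match the TMPDIR set in the wrapper:

    mkdir -p /opt/tmp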

Now docker-compose will work.


Create Python Flask Docker Container

Create a folder and change into it
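The folder name is an assumption:

    mkdir flask-app
    cd flask-app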

Create a file named Dockerfile

Add the following content
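The original Dockerfile is not preserved; a minimal sketch that matches the files created in this post:

    FROM python:3-alpine
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]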

Create a file requirements.txt and add “Flask” to it.

Now let's create our Python Flask application

Add the following content
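A minimal app, assuming the file is named app.py to match the Dockerfile sketch above:

    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def index():
        return 'Hello from Flask in Docker!'

    if __name__ == '__main__':
        # Bind to 0.0.0.0 so the app is reachable from outside the container
        app.run(host='0.0.0.0', port=5000)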

To test the Flask application locally, install Flask using pip
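    pip install Flask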

Now run the application using
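Assuming the filename from the sketch above:

    python app.py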

Now you will be able to see the web application at http://127.0.0.1:5000 (the port used in the app.py sketch above).

Press CTRL+C to stop the application.

To build the Docker image, run
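The tag name here is an assumption:

    docker build -t flask-app .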

The -t option specifies the image tag.

If all worked properly, you will see a “Successfully built” message at the end of the build output.

You can see the image listed by running
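    docker images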

Now your docker image is ready. To start a container from this image, run
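Mapping host port 8080 to the Flask port from the sketch above (the name and ports are assumptions):

    docker run -d --name flask-app -p 8080:5000 flask-app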

You can access your Python application running inside the docker container at http://SERVER_IP:8080 (the host port mapped in the run command above).

To stop the application, run
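Assuming the container name used above:

    docker stop flask-app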

To start it, run
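    docker start flask-app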

Docker Nginx Proxy

Docker Nginx Proxy allows you to run multiple docker containers on the same server behind an nginx reverse proxy. This is done using

https://github.com/jwilder/nginx-proxy

To do this, you need a server with ports 80 and 443 unused.

To set up the nginx proxy, run the following
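The original setup script is not preserved; a sketch using the nginx-proxy image together with its Let's Encrypt companion (the network, container and volume names are assumptions):

    docker network create webproxy

    docker run -d --name nginx-proxy --network webproxy \
      -p 80:80 -p 443:443 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      -v nginx-certs:/etc/nginx/certs \
      -v nginx-vhost:/etc/nginx/vhost.d \
      -v nginx-html:/usr/share/nginx/html \
      jwilder/nginx-proxy

    docker run -d --name nginx-proxy-le --network webproxy \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v nginx-certs:/etc/nginx/certs \
      -v nginx-vhost:/etc/nginx/vhost.d \
      -v nginx-html:/usr/share/nginx/html \
      -e NGINX_PROXY_CONTAINER=nginx-proxy \
      jrcs/letsencrypt-nginx-proxy-companion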

This will start the nginx proxy. You can modify the .env file if you want.

Starting a Docker Web App Behind Proxy

To start a web app, all you need is to start the docker container on the same network as the nginx proxy. By default the network is “webproxy”.

Here is an example command to start a web server.
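A sketch using an Apache container; the image and email address are assumptions, while VIRTUAL_HOST and the LETSENCRYPT_* variables are the environment variables nginx-proxy and its companion watch:

    docker run -d --name test-web --network webproxy \
      -e VIRTUAL_HOST=test.serverok.in \
      -e LETSENCRYPT_HOST=test.serverok.in \
      -e LETSENCRYPT_EMAIL=admin@serverok.in \
      httpd:alpine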

This will start a test web server. You need to point the specified domain to this server's IP; only then can the nginx proxy get a Let's Encrypt SSL certificate installed.

Replace test.serverok.in with your actual domain.

If you don’t want Let's Encrypt SSL installed, you can remove the following 2 options
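From the example command above:

    -e LETSENCRYPT_HOST=test.serverok.in
    -e LETSENCRYPT_EMAIL=admin@serverok.in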


Deploy Docker Image using Elastic Beanstalk

First, create a file docker-eb-run.json with the following content
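The original file is not preserved; a minimal single-container Dockerrun (version 1) sketch for the bitnami/tomcat image, which listens on port 8080:

    {
      "AWSEBDockerrunVersion": "1",
      "Image": {
        "Name": "bitnami/tomcat",
        "Update": "true"
      },
      "Ports": [
        {
          "ContainerPort": "8080"
        }
      ]
    }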

Here I used the docker image bitnami/tomcat; you can use any image.

Log in to the AWS Console and go to the AWS Elastic Beanstalk page. Click Get Started.

On the next page, it asks for application details.

For Application code, select “Upload your code”, click the Upload button, and select the “docker-eb-run.json” file you created.

Click the “Create application” button. AWS will start deploying your docker container in Elastic Beanstalk; it will take a few minutes to complete.

Once deployment is complete, you get a URL of the form ENV-NAME.REGION.elasticbeanstalk.com.


Docker: Delete all images

Before you can delete a docker image, you need to delete any containers based on that image. To see how to delete all docker containers, see

Delete all docker containers

To list all available docker images, run
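    docker images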

To display only the image IDs, run it with the -q option
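    docker images -q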

Now let's pass the result to the docker rmi command using xargs
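    docker images -q | xargs docker rmi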

This will delete all images. You can also use
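Shell command substitution achieves the same result:

    docker rmi $(docker images -q)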


Red Hat acquires CoreOS for $250 million

CoreOS is a container-optimized Linux operating system designed for running Docker and Kubernetes workloads.

On January 30, 2018, Red Hat, Inc. announced that it is acquiring CoreOS for $250 million.

Founded in 2013, CoreOS was created with a goal of building and delivering infrastructure for organizations of all sizes that mirrored that of large-scale software companies, automatically updating and patching servers and helping to solve pain points like downtime, security and resilience. Since its early work to popularize lightweight Linux operating systems optimized for containers, CoreOS has become well-regarded as a leader behind award-winning technologies that are enabling the broad adoption of scalable and resilient containerized applications.

CoreOS is the creator of CoreOS Tectonic, an enterprise-ready Kubernetes platform that provides automated operations, enables portability across private and public cloud providers, and is based on open source software. It also offers CoreOS Quay, an enterprise-ready container registry. CoreOS is also well-known for helping to drive many of the open source innovations that are at the heart of containerized applications, including Kubernetes, where it is a leading contributor; Container Linux, a lightweight Linux distribution created and maintained by CoreOS that automates software updates and is streamlined for running containers; etcd, the distributed data store for Kubernetes; and rkt, an application container engine, donated to the Cloud Native Computing Foundation (CNCF), that helped drive the current Open Container Initiative (OCI) standard.

https://coreos.com/blog/coreos-agrees-to-join-red-hat/
