Commit 676d45b2 authored by Paddy 🐧

First Draft Docker.md 2020

parent 17530994
---
before_script:
  - apt-get update -qq && apt-get install -y -qq git

pages:
  stage: deploy
  script:
    - git clone --recurse-submodules https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.poul.org/corsi/templates/revealjs-poul.git public
    - rm -rf public/.git
    - rm -rf public/deploy.py
    - mv -f *.md public/slides/content.md
    - mv -f *.html public/slides/
  artifacts:
    paths:
      - public
  only:
    - reveal_eng
# What we are going to talk about:
- Using Docker to manage containers
- Where containers come from
- Our first container
- Manage multiple containers as a single service
- Extras
---
# Before we start to talk about HOW to use containers...
----
# Let's talk about WHAT a container IS (reeeeally briefly)
---
# Container: What it is and what it is not
![MagicMeme](https://giphy.com/embed/12NUbkX6p4xOO4)
----
### What it is and what it is not
Containers are not some kind of sorcery: they are built on specific features of the Linux kernel:
- Namespaces
- Cgroups
- seccomp-bpf
----
### Namespaces really TL;DR
Namespaces are a mechanism of the kernel to isolate a process so that it can't access other processes on the system (unless they share the same namespaces).
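You don't even need Docker to play with namespaces; a quick sketch with `unshare` from util-linux:
```bash
# Run ps in new PID and mount namespaces (with a private /proc):
# it sees itself as PID 1 and nothing else
$ sudo unshare --fork --pid --mount-proc ps aux
```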
----
### Cgroups really TL;DR
Cgroups are a mechanism of the kernel to limit which resources (CPU, memory, I/O, ...) a process can access.
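A minimal sketch, assuming the older cgroup v1 hierarchy mounted under `/sys/fs/cgroup`: a group is just a directory, and its limits are files inside it:
```bash
# Create a new cgroup in the memory controller
$ sudo mkdir /sys/fs/cgroup/memory/demo
# Cap it at 256 MiB of RAM
$ echo $((256 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# Move the current shell into the group; every child inherits the limit
$ echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs
```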
----
### Seccomp-bpf really TL;DR
Seccomp-bpf is a mechanism of the kernel to limit which syscalls a process can make.
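Docker applies a default seccomp profile to every container; you can also supply your own (the profile path here is hypothetical):
```bash
# Run a container with a custom seccomp profile
$ docker run --security-opt seccomp=/path/to/profile.json httpd
# Or drop the filter entirely (please don't)
$ docker run --security-opt seccomp=unconfined httpd
```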
----
### Are containers VMs?
Nope, absolutely nope.
- Containers can't emulate another architecture and can't run another OS (e.g. no Windows).
- Containers are stateless: everything done inside a container will not be saved.
- The host OS and the container share the very same kernel, while VMs do not.

Containers are a way to run an application safely and in isolation without stressing the host system too much.
---
# Now, let's start to use some tools!
---
# Download, manage and run your containers with Docker
![Docker](https://upload.wikimedia.org/wikipedia/commons/4/4e/Docker_%28container_engine%29_logo.svg)
---
## Docker
Docker is a container engine that lets us manage containers in an easy way.
----
## Docker Installation
To install Docker, you can simply run:
**On Ubuntu:**
```bash
$ sudo apt-get install docker.io
$ sudo systemctl enable --now docker
```
----
## Docker: Some basic commands
```bash
# Run a container with httpd, mapping host port 8080 to container port 80
$ docker run -d -p 8080:80 httpd
```
```bash
# Show currently running containers
$ docker ps
```
```bash
# Start or stop a specific container
$ docker start container_name
$ docker stop container_name
```
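Two more commands that come in handy when something misbehaves (the container name is just a placeholder):
```bash
# Follow the logs of a running container
$ docker logs -f container_name
```
```bash
# Open a shell inside a running container
$ docker exec -it container_name /bin/sh
```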
----
## Docker: Volume
Ok, nice, but containers are stateless. How can we save some data?
----
## Docker: Volume
If we want to make data persistent, we need some place to store it, and that's where volumes come into play.
Docker can create and manage volumes where our applications can store (or read) data.
```bash
# Run a container bind-mounting a local directory (the path must be absolute)
$ docker run -d -p 8080:80 -v "$(pwd)/local/dir":/usr/local/apache2/conf httpd
```
```bash
# Create a volume and mount it
$ docker volume create myVol
$ docker run -d -p 8080:80 -v myVol:/usr/local/apache2/conf httpd
```
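To see what Docker knows about our volumes:
```bash
# List all volumes managed by Docker
$ docker volume ls
# Show the details (host mountpoint, driver, ...) of the volume created above
$ docker volume inspect myVol
```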
---
# Demo!
---
# These containers are tasty! Where can I get some more?
----
## Where do containers come from?
When we run `docker run [options] httpd` we are asking Docker to go and get the `httpd` image. From where?
From a registry!
----
## Where do containers come from? Registry!
A registry is (more or less) a repository of prebuilt container images.
There are several registries we can use to pull or push containers: [Docker Hub](https://hub.docker.com), [Quay](https://quay.io), [Red Hat Catalog](https://catalog.redhat.com/), etc.
----
## Where do containers come from? Registry!
How can we do it?
```bash
$ docker search keyword           # Search for a keyword, e.g. httpd
$ docker pull image_name          # Download only the image and its layers
$ docker run [options] image_name # Run the image; if it isn't downloaded yet, Docker pulls it first
```
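Pushing works the other way around: we tag the image with our registry account as prefix and upload it (the names below are placeholders):
```bash
$ docker tag my_image my_user/my_image:1.0 # Give the image a name we can push
$ docker login                             # Authenticate against the registry
$ docker push my_user/my_image:1.0         # Upload it
```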
---
# Container DIY: How to make a container at home
---
## Container DIY
Ok, now we know how to run a container.
How can we make one?
----
## Container DIY
To make a container we need to define, in a file called a *Dockerfile*, all the steps needed to build our own image with our target application (webserver, application server, etc.).
Docker defines a DSL (Domain Specific Language) to declare each of these steps.
**N.B.** A best practice is to keep the configuration separate from the application, so we can reuse the same container multiple times.
----
## Container DIY: An example
```Dockerfile
FROM alpine
RUN echo "Hello World!"
```
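A slightly more realistic sketch, serving a static site with the official httpd image (the `public-html` directory is hypothetical):
```Dockerfile
# Start from the official Apache httpd image
FROM httpd:2.4
# Copy our static site into the web root
COPY ./public-html/ /usr/local/apache2/htdocs/
# Document the port the server listens on
EXPOSE 80
```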
----
## Container DIY
And to build the image we have to run:
```bash
$ docker build -f Dockerfile -t tag_name .  # note the trailing '.': the build context
```
----
## Container DIY
Ok, nice: we can specify which packages we want and which commands to run, and Docker will do that for us.
But what if we want to make a container with a program that we wrote or that we must build from source?
We would have to install all the dependencies, or compile it and copy it inside the image.
Is that clean and sane to do?
----
# NO
----
## Container DIY
It's not clean because we would have to install some packages only to compile our real application.
Doing that, we only get bigger containers (with a higher chance of being compromised).
----
## Container DIY
A solution to this problem is multi-stage builds.
Multi-stage builds are (obviously) multiple builds in multiple containers: we first build our program in one container, then deploy the resulting artifact into a second, clean one.
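A minimal sketch of a multi-stage Dockerfile, assuming a hypothetical `hello.go` we want to ship without the whole Go toolchain:
```Dockerfile
# Stage 1: build container with the full Go toolchain
FROM golang:1.15 AS builder
WORKDIR /src
COPY hello.go .
# Build a static binary so it also runs on musl-based alpine
RUN CGO_ENABLED=0 go build -o /hello hello.go

# Stage 2: deploy container; only the compiled binary is copied over
FROM alpine
COPY --from=builder /hello /usr/local/bin/hello
CMD ["hello"]
```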
---
# Manage multiple containers
----
## Manage multiple containers
Ok, now we want to run a service, something like a website with WordPress. What do we need?
- A database to store our data
- A webserver that exposes our WordPress

Do we have to run each of them by hand, and start and stop each of them every single time?
----
# NO
----
## Manage multiple containers
We can use docker-compose.
docker-compose reads a file (`docker-compose.yml`) that contains the definition of all our containers, with their variables and volumes.
----
## Manage multiple containers: An example
```yaml
version: '3.3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}
```
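With that file saved as `docker-compose.yml`, the whole stack is managed with a couple of commands:
```bash
# Start every service defined in the file, in the background
$ docker-compose up -d
# Stop and remove the containers (named volumes are kept)
$ docker-compose down
```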
---
# Demo!
---
# Links
---
# Extras
---
### Manage container on production scale
Ok, this is really nice: now we can manage containers on a single host.
What if we want to manage containers across multiple hosts, in a production-grade environment?
----
### Manage containers on production scale
We can use:
- [OpenShift](https://www.openshift.com/) or [OKD](https://www.okd.io/)
- [Kubernetes](https://kubernetes.io/)
- [Rancher](https://rancher.com/docs/rancher/v2.x/en/) or [K3s](https://rancher.com/docs/k3s/latest/en/)
- Docker Swarm (no, please no, for the love of the gods, please no)
---
## Using container with SELinux
If you are using SELinux on your system (e.g. CentOS, Fedora, RHEL), you could have a hard time finding the right policy to apply to your process (e.g. when mounting a volume).
SELinux will block the process from accessing a file (unless you disable SELinux or force a relabel of the volume, but if you do that you should burn in hell).
You can get help with that using a tool called [Udica](https://github.com/containers/udica) [uɟit͡sa]: this tool will help you create custom security profiles that will be used by SELinux.
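A rough sketch of the Udica workflow, based on its README (the container and policy names are placeholders; udica itself prints the exact `semodule` command to run):
```bash
# Dump the running container's configuration and generate a policy from it
$ podman inspect my_container > container.json
$ udica -j container.json my_container
# Load the generated policy module (plus the templates udica tells you to include)
$ sudo semodule -i my_container.cil /usr/share/udica/templates/base_container.cil
# Start the container with the generated SELinux type
$ podman run --security-opt label=type:my_container.process ...
```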
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Start webserver"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# cd ../Slides; python -m http.server 8000"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Open necessary files"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Up1-Docker:\n",
        "\n",
        "- Dockerfile\n",
        "- genconfig.sh\n",
        "- entrypoint.sh\n",
        "- config.js.template\n",
        "- server.js.template\n",
        "\n",
        "Mattermost-docker-compose:\n",
        "- docker-compose.yml"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "collapsed": false
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": []
        }
      ],
      "source": [
        "cd ../Repos; \n",
        "subl3 -n Mattermost-docker-compose/docker-compose.yml Up1-docker/*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Cleanup before the demos"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Stop all containers"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "if [ -n \"$(docker ps --filter status=running -q)\" ]; then \n",
        "echo \"Stopping:\";\n",
        "docker ps --filter status=running --format \"{{.Names}} ({{.Image}})\"; \n",
        "docker stop $(docker ps --filter status=running -q) 1>/dev/null;\n",
        "fi;"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Delete all stopped containers"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "if [ -n \"$(docker ps -qf status=exited)\" ]; then \n",
        "echo \"Removing:\";\n",
        "docker ps --filter status=exited --format \"{{.Names}} ({{.Image}})\"; \n",
        "docker rm $(docker ps -qf \"status=exited\") 1>/dev/null; \n",
        "fi;"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Delete images"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "docker rmi alpine"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": true
      },
      "outputs": [],
      "source": [
        "docker rmi up1"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": true
      },
      "outputs": [],
      "source": [
        "docker rmi fcremo/up1"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Delete custom networks"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "docker network rm mattermost-net"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Bash",
      "language": "bash",
      "name": "bash"
    },
    "language_info": {
      "codemirror_mode": "shell",
      "file_extension": ".sh",
      "mimetype": "text/x-sh",
      "name": "bash"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
# Corsi Linux Avanzati 2016 - Docker
If you want to view the slides go to https://slides.poul.org or run a webserver on ./Slides
In ./Repos you'll find (some of) the code/files used in the presentation.
In ./Notebooks you'll find an IPython notebook that I used during the talk to demo some cool stuff.
version: '2'

services:
  postgres:
    restart: always
    image: postgres
    environment:
      - POSTGRES_USER=mattermost
      - POSTGRES_PASSWORD=password
    volumes:
      - /tmp/postgres-compose:/var/lib/postgresql

  mattermost:
    restart: always
    image: jasl8r/mattermost:2.2.0
    links:
      - postgres:postgres
    depends_on:
      - postgres
    ports:
      - "8080:80"
    environment:
      - MATTERMOST_SECRET_KEY=long-and-random-alphanumeric-string
      - MATTERMOST_LINK_SALT=long-and-random-alphanumeric-string
      - MATTERMOST_RESET_SALT=long-and-random-alphanumeric-string
      - MATTERMOST_INVITE_SALT=long-and-random-alphanumeric-string
    volumes:
      - /tmp/mattermost:/opt/mattermost/data
Up1-docker @ 9bd576e6
Subproject commit 9bd576e65d519893e57cf915705afd60eb6b7aac