Containers are not some kind of sorcery; they rely on specific features of the Linux kernel:
- Namespaces
- Cgroups
- seccomp-bpf
----
### Namespaces really TL;DR
Namespaces are a mechanism of the kernel to isolate a process so that it can't see or access other processes on the system (unless they share the same namespaces).
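You can play with namespaces directly from a shell. A minimal sketch using `unshare` (part of util-linux, needs root):
```bash
# Start a shell in new PID and mount namespaces:
# inside it, `ps` only sees the processes of the new namespace.
$ sudo unshare --pid --fork --mount-proc bash
$ ps aux   # run this inside the new shell: only bash and ps show up
```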
----
### Cgroups really TL;DR
Cgroups are a mechanism of the kernel to limit the resources (CPU, memory, I/O, ...) a process can use.
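For a taste of it, systemd (which manages cgroups for us) can cap the memory of a single command. A quick sketch (`my_command` is just a placeholder; `MemoryMax` is the cgroup-v2 property):
```bash
# Run a command in a transient cgroup capped at 100 MB of RAM:
# if it tries to use more, the kernel kills it (OOM).
$ sudo systemd-run --scope -p MemoryMax=100M my_command
```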
----
### Seccomp-bpf really TL;DR
Seccomp-bpf is a mechanism of the kernel to limit which syscalls a process can make.
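Docker (which we'll meet in a moment) already applies a default seccomp profile to every container and lets us pass our own. A sketch, where `my-profile.json` is just a placeholder file name:
```bash
# Run a container with a custom seccomp profile
$ docker run --security-opt seccomp=./my-profile.json -it alpine sh

# Or, only for debugging, with seccomp disabled
$ docker run --security-opt seccomp=unconfined -it alpine sh
```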
----
### Are containers VMs?
Nope, absolutely nope.
- Containers can't emulate another architecture and can't run another OS (e.g. no Windows).
- Containers are stateless: anything written inside a container will not be saved.
- The host OS and the container share the very same kernel, while VMs do not (see the quick check below).
Containers are a way to run an application safely and in isolation without putting too much load on the host system.
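A quick check of the shared kernel (again using Docker, which we'll install in a moment):
```bash
# The kernel release seen inside the container is the host's one
$ uname -r                          # on the host
$ docker run --rm alpine uname -r   # in a container: same output
```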
---
# Now, let's start to use some tools!
---
# Download, manage and run your containers with Docker
Docker is a container engine that lets us manage containers in an easy way.
----
## Docker Installation
To install docker, you can simply run:
**On Ubuntu:**
```bash
$ sudo apt-get install docker.io
$ sudo systemctl enable --now docker
```
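Then we can check that everything works (`hello-world` is a tiny test image on Docker Hub):
```bash
# Should pull a tiny test image and print a greeting
$ sudo docker run hello-world

# Optional: add your user to the docker group to avoid sudo (re-login needed)
$ sudo usermod -aG docker "$USER"
```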
----
## Docker: Some basic commands
```bash
# Run an httpd container, publishing container port 80 on host port 8080
$ docker run -d -p 8080:80 httpd
```
```bash
# Show currently running containers
$ docker ps
```
```bash
# Stop or start a specific container
$ docker stop container_name
$ docker start container_name
```
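A few more commands that come in handy (`container_name` is whatever `docker ps` shows):
```bash
# Show the logs of a container
$ docker logs container_name

# Open a shell inside a running container
$ docker exec -it container_name /bin/sh

# List also stopped containers and remove one
$ docker ps -a
$ docker rm container_name
```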
----
## Docker: Volume
Ok, nice, but containers are stateless. How can we save some data?
----
## Docker: Volume
If we want to make data persistent, we need somewhere to store it, and that's where volumes come into play.
Docker can create and manage volumes where our application can store (or read) its data.
```bash
# Run a container bind-mounting a local directory
$ docker run -d -p 8080:80 -v "$(pwd)/local/dir":/usr/local/apache2/conf httpd
```
```bash
# Create a volume and mount it
$ docker volume create myVol
$ docker run -d -p 8080:80 -v myVol:/usr/local/apache2/conf httpd
```
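We can also list our volumes and see where Docker actually keeps the data on the host:
```bash
# List all volumes and inspect the one we created
$ docker volume ls
$ docker volume inspect myVol   # shows the mountpoint on the host
```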
---
# Demo!
---
# These containers are tasty! Where can I get some more?
----
## Where do containers come from?
When we run `docker run [options] httpd` we are asking Docker to go and get the `httpd` image. Where from?
In a registry!
----
## Where do containers come from? Registry!
A registry is (more or less) a repository of prebuilt container images.
There are several registries that we can use to pull or push images: [Docker Hub](https://hub.docker.com), [Quay](https://quay.io), [Red Hat Catalog](https://catalog.redhat.com/), etc.
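Pushing works the other way around: we tag a local image with the repository name and push it (here `myuser/myimage` is just a placeholder, and you need an account on the registry):
```bash
# Log in, tag a local image and push it to Docker Hub
$ docker login
$ docker tag myimage myuser/myimage:1.0
$ docker push myuser/myimage:1.0
```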
----
## Where do containers come from? Registry!
How can we do that?
```bash
$ docker search keyword           # Search for a keyword, e.g. httpd
$ docker pull image_name          # Download only the image and its layers
$ docker run [options] image_name # Run the image; if we haven't downloaded it yet, Docker pulls it first
```
---
# Container DIY: How to make a container at home
---
## Container DIY
Ok, now we know how to run a container.
How can we make one?
----
## Containers DIY
To make a container we need to define, in a file called a *Dockerfile*, all the steps needed to build our own image with our target application (web server, application server, etc.).
Docker defines a DSL (Domain Specific Language) to declare each of these steps.
**N.B.** A best practice is to keep the configuration separate from the application, so we can reuse the same container multiple times.
----
## Container DIY: An example
```Dockerfile
FROM alpine
RUN echo "Hello World!"
```
----
## Container DIY
And to build a container we have to run:
```bash
$ docker build -f Dockerfile -t tag_name .
```
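After the build we can check that the image is there and run it (using whatever tag we chose, e.g. `tag_name`):
```bash
# List local images and run the one we just built
$ docker images
$ docker run --rm tag_name
```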
----
## Container DIY
Ok, nice, we can specify which packages we want and which commands to run, and Docker will do that for us.
But what if we want to make a container with a program that we wrote or that we must build ourselves?
We have to install all the dependencies, compile it, and copy it inside the image.
Is that a clean and sane thing to do?
----
# NO
----
## Container DIY
It's not clean because we have to install some packages only to compile our real application.
Doing that we only get a bigger container (with a higher chance of being compromised).
----
## Container DIY
A solution to this problem is multi-stage builds.
A multi-stage build is (obviously) made of multiple builds of multiple containers: we build our program in a first container, then copy only the result into a second, smaller container that we actually deploy.
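A minimal sketch of a multi-stage build, assuming a Go program in `main.go` (language, image tags and file names are just for the example): the first stage has the whole toolchain, the final image only gets the compiled binary.
```bash
# Write a two-stage Dockerfile...
cat > Dockerfile <<'EOF'
# Stage 1: build the program with the full toolchain
FROM golang:1.21 AS builder
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

# Stage 2: copy only the binary into a tiny runtime image
FROM alpine
COPY --from=builder /app /app
CMD ["/app"]
EOF

# ...and build it: the final image contains no compiler at all
docker build -t myapp .
```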
---
# Manage multiple containers
----
## Manage multiple containers
Ok, now we want to run a service, something like a website with WordPress. What do we need?
- A database where to store our data
- A web server that exposes our WordPress
Do we have to run each of them by hand? And start and stop each of them every single time?
----
# NO
----
## Manage multiple containers
We can use docker-compose.
docker-compose reads a file in which we define all of our containers, together with their variables and volumes.
----
## Manage multiple containers: An example
```yaml
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
```
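With that file saved as `docker-compose.yml`, the whole stack goes up and down with a couple of commands:
```bash
# Start everything (db + wordpress) in the background
$ docker-compose up -d

# Check the state of the services
$ docker-compose ps

# Stop and remove the containers (named volumes are kept)
$ docker-compose down
```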
---
# Demo!
---
# Links
---
# Extras
---
### Manage container on production scale
Ok, this is really nice: now we can manage containers on a single host.
What if we want to manage containers across multiple hosts, in a production-grade environment?
----
### Manage containers on production scale
We can use:
- [OpenShift](https://www.openshift.com/) or [OKD](https://www.okd.io/)
- [Kubernetes](https://kubernetes.io/)
- [Rancher](https://rancher.com/docs/rancher/v2.x/en/) or [K3s](https://rancher.com/docs/k3s/latest/en/)
- Docker Swarm (no, please no, for the love of the gods, please no)
---
## Using container with SELinux
If you are using SELinux on your system (e.g. CentOS, Fedora, RHEL), you may have a hard time finding the right policy to apply to your process (e.g. when mounting a volume).
SELinux will block the process from accessing the files (unless you disable SELinux or force a relabel of the volume, but if you do that you should burn in hell).
You can get help with that using a tool called [Udica](https://github.com/containers/udica) [uɟit͡sa]: it helps you create custom security profiles that SELinux will then enforce.
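A rough sketch of the Udica workflow (Udica itself prints the exact `semodule` command and template files to use, and `my_container` is an arbitrary policy name, so double-check the project README):
```bash
# Dump the container configuration and generate a custom SELinux policy from it
$ docker inspect container_id > container.json
$ sudo udica -j container.json my_container

# Load the generated policy (Udica tells you which template files to include)
$ sudo semodule -i my_container.cil /usr/share/udica/templates/base_container.cil

# Re-run the container with the new SELinux type
$ docker run --security-opt label=type:my_container.process image_name
```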