When we run `docker run [options] httpd` we are asking Docker to go and get the container `httpd`.
Where?
In a registry!
...
...
## Where do the containers come from? Registry!
A registry is (more or less) a repository of prebuilt containers.
There are several registries that can be used to pull or push containers: [Docker Hub](https://hub.docker.com), [Quay](https://quay.io), [Red Hat Catalog](https://catalog.redhat.com/), [Google Container Registry](https://gcr.io), etc.
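An image reference can be prefixed with the hostname of the registry to pull from; with no prefix, Docker Hub is assumed. As a sketch (the second image path is only a placeholder, not a real image):

```shell
# Pull from the default registry (Docker Hub):
docker pull httpd

# Pull from a specific registry by prefixing its hostname
# (user and image names below are placeholders):
docker pull quay.io/someuser/someimage
```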
----
## Where do the containers come from? Registry!
How can we do it?
```bash
$ docker search keyword # We search for a keyword, e.g. httpd
...
...
To make a container we need to define, in a file called *Dockerfile*, all the steps needed to build it.
Docker defines a DSL (Domain-Specific Language) to declare each of these steps.
**N.B.** A best practice is to keep the configuration separate from the application, so that the containers can be reused multiple times.
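For example (a minimal sketch; the variable name `GREETING` is illustrative), configuration can be injected through environment variables so that the same image works in different environments:

```dockerfile
FROM alpine:3.19
# Default configuration; can be overridden at run time
ENV GREETING="hello"
CMD ["sh", "-c", "echo $GREETING"]
```

The same image can then be reused with a different configuration, e.g. `docker run -e GREETING=ciao myimage`.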
----
...
...
## Containers DIY
Once our Dockerfile is created, we can build a container:
Ok, nice: we can specify which packages we want and which command we want to run, and Docker will do that for us.
----
...
...
## Containers DIY
It's not clean because we have to install some packages just to compile our real application.
Doing that we only get bigger containers (with a higher chance of being compromised).
----
...
...
A solution to this problem is multi-stage builds.
Multi-stage builds are (obviously) multiple builds of multiple containers, where we first build (first container) and then deploy (second container) our programs.
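As a sketch (a hypothetical Go program; image tags and paths are illustrative), a multi-stage Dockerfile looks like:

```dockerfile
# First container: build stage, with the full toolchain
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /app .

# Second container: deploy stage, only the compiled binary
FROM alpine:3.19
COPY --from=builder /app /usr/local/bin/app
ENTRYPOINT ["app"]
```

The final image contains only the second stage, so the compilers and build dependencies never reach production.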
---
...
...
## Manage multiple containers
Ok, now we want to run a service, something like a website with WordPress. So what is needed?
- A database in which to store our data
- A webserver that exposes our WordPress
...
...
We can use docker-compose.
Docker-compose will read a file in which we have all the definitions of our containers, with their variables and volumes.
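Assuming the definitions live in the default `docker-compose.yml`, a typical session looks like:

```shell
# Create and start all the defined containers in the background:
docker-compose up -d

# Show the state of the services:
docker-compose ps

# Stop and remove the containers (and the default network):
docker-compose down
```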
----
## Manage multiple containers: An example from yesterday's talk
```yaml
...
...
Ok, nice, but what if:
- We want to export or import a container?
- We want to make some kind of restriction on the container?
- We want to reboot our server?
----
## Some fancy stuff
### Export/import containers
Sometimes the need arises to export and import containers (e.g. to move containers between computers without any network connection).
----
## Some fancy stuff
### Export/import containers
To do this we can:
```bash
# Export to an archive
...
...
### Do I really need containers?
When do I **need** a container?
- When I want to develop an application that will be managed by a Container Orchestrator
...
...
When do containers not solve my problems?
- **Security** Containers don't protect you from bugs in the kernel (shared between host and container) or in the application
- **Privacy** Whether your data lives inside or outside a container, it is only as safe as the application that handles it
---
### Manage containers at production scale
Ok, this is really nice, now we can manage containers on a single host.
...
...
## Using container with SELinux
If you are using SELinux on your system (e.g. CentOS, Fedora, RHEL), you could have a hard time finding the right policy to apply to your process (e.g. when mounting a volume).
SELinux will prevent the process from accessing a file (unless you disable SELinux or force a relabel on the volume, but if you do that you should burn in hell).