
Docker on Red Hat with firewall


Red Hat provides its own package for Docker that has some idiosyncrasies. One of them is the dockerroot group problem.

Installation from RPMs is relatively straightforward: see Installation of Docker on RHEL 7 using RPMs from extra repository

Problems arise if your server is behind a firewall and pulling Docker images does not work, but this is a solvable problem. See Download Docker images without the pull command when you are behind proxy
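
If the server reaches the outside world only through an HTTP proxy, a common way to let the Docker daemon pull images is a systemd drop-in for the docker service (a sketch; the proxy host, port, and NO_PROXY list are placeholders to adjust for your site):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,.example.com"

# then reload systemd and restart the daemon
systemctl daemon-reload
systemctl restart docker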



Old News ;-)

[Jun 11, 2020] How to use the --privileged flag with container engines - Enable Sysadmin

Jun 11, 2020 | www.redhat.com

How to use the --privileged flag with container engines
Let's take a deep dive into what the --privileged flag does for container engines such as Podman, Docker, and Buildah.

Posted: June 8, 2020 | by Dan Walsh (Red Hat)


Many users get confused about the --privileged flag. Users often equate this flag to unconfined or full root access to the host system. In this blog, I discuss what the --privileged flag does with container engines such as Podman, Docker, and Buildah.

What does the --privileged flag cause container engines to do?

What privileges does it give to the container processes?

Executing container engines with the --privileged flag tells the engine to launch the container process without any further "security" lockdown.

Note: Running container engines in rootless mode does not mean they run with more privilege than the user executing the command; Linux still blocks containers from any additional access. Your processes still run as the user that launched them on the host. So, for example, --privileged does not suddenly allow the container process to bind to a port below 1024. The kernel does not allow non-root users to bind to these ports, so users launching container processes are not allowed that access either.
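
A quick illustration of that port restriction (the exact error text varies by Podman version; net.ipv4.ip_unprivileged_port_start is the kernel setting that actually controls the limit):

$ podman run --rm -p 80:80 nginx        # as a non-root user
Error: ... cannot expose privileged port 80 ...
$ sysctl net.ipv4.ip_unprivileged_port_start
net.ipv4.ip_unprivileged_port_start = 1024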

The bottom line is that the --privileged flag simply tells the container engine to skip its usual security lockdown; it does not add any privilege over what the processes launching the containers already have. Tools like Podman and Buildah do NOT give any additional access beyond the processes launched by the user.

To understand the --privileged flag, you need to understand the security enabled by container engines, and what is disabled.

Read-only kernel file systems

Kernel file systems provide a mechanism for a process to alter the way the kernel runs. They also provide information to processes on the system. By default, we don't want container processes to modify the kernel, so we mount kernel file systems as read-only within the container. The read-only mounts prevent privileged processes and processes with capabilities in the user namespace from writing to the kernel file systems.

$ podman run fedora mount  | grep '(ro'
sysfs on /sys type sysfs (ro,nosuid,nodev,noexec,relatime,seclabel)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",mode=755,uid=3267,gid=3267)
cgroup on /sys/fs/cgroup/systemd type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,xattr,name=systemd)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/pids type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
cgroup on /sys/fs/cgroup/perf_event type cgroup (ro,nosuid,nodev,noexec,relatime,seclabel,perf_event)
proc on /proc/asound type proc (ro,relatime)
proc on /proc/bus type proc (ro,relatime)
proc on /proc/fs type proc (ro,relatime)
proc on /proc/irq type proc (ro,relatime)
proc on /proc/sys type proc (ro,relatime)
proc on /proc/sysrq-trigger type proc (ro,relatime)
tmpfs on /proc/acpi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",uid=3267,gid=3267)
tmpfs on /proc/scsi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",uid=3267,gid=3267)
tmpfs on /sys/firmware type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",uid=3267,gid=3267)
tmpfs on /sys/fs/selinux type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c268,c852",uid=3267,gid=3267)

Whereas when I run with --privileged, I get:

$ podman run --privileged fedora mount  | grep '(ro'
$

None of the kernel file systems are mounted read-only in --privileged mode. This is usually needed so that processes inside of the container can actually modify the kernel through the kernel file system.

Masking over kernel file systems

The /proc file system is namespace-aware, and certain writes can be allowed, so we don't mount it read-only. However, specific directories in the /proc file system need to be protected from writing, and in some instances, from reading. In these cases, the container engines mount tmpfs file systems over potentially dangerous directories, preventing processes inside of the container from using them.

$ podman run fedora mount  | grep /proc.*tmpfs
tmpfs on /proc/acpi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c255,c491",uid=3267,gid=3267)
devtmpfs on /proc/kcore type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755)
devtmpfs on /proc/keys type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755)
devtmpfs on /proc/latency_stats type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755)
devtmpfs on /proc/timer_list type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755)
devtmpfs on /proc/sched_debug type devtmpfs (rw,nosuid,seclabel,size=7995040k,nr_inodes=1998760,mode=755)
tmpfs on /proc/scsi type tmpfs (ro,relatime,context="system_u:object_r:container_file_t:s0:c255,c491",uid=3267,gid=3267)

With --privileged , the mount points are not masked over:

$ podman run --privileged fedora mount | grep /proc.*tmpfs
$

Linux capabilities

Linux capabilities are a mechanism for limiting the power of root. The Linux kernel splits the privileges of root (superuser) into a series of distinct units, called capabilities. In the case of rootless containers, container engines still use user namespace capabilities. These capabilities limit the power of root within the user namespace. By default, container engines launch containers with a limited set of capabilities enabled to control what goes on inside of the container.

$ podman run -d fedora sleep 100
8b1facf07f11486e6379d14432f7c7f89da262d2aba8b55ff52af8570d0a17a9
$ podman top -l capeff
EFFECTIVE CAPS
AUDIT_WRITE,CHOWN,DAC_OVERRIDE,FOWNER,FSETID,KILL,MKNOD,NET_BIND_SERVICE,NET_RAW,SETFCAP,SETGID,SETPCAP,SETUID,SYS_CHROOT

When you launch a container with --privileged mode, the container launches with the full list of capabilities.

$ podman run --privileged -d fedora sleep 100
d571acd1ccda2e6eb31602bf509e21d632cca3d8d524781b0a0123fef17e99f4
$ podman top -l capeff
EFFECTIVE CAPS
full

Note: In rootless containers, the container processes get full namespace capabilities. These are not the same as full root capabilities. These are NOT real capabilities, but only capabilities over the user namespace. For example, a process with CAP_SETUID is allowed to change its UID to all UIDs mapped into the user namespace, but is not allowed to change the UID to any UID not mapped into the user namespace. When running a rootful container without using user namespace, a process with CAP_SETUID IS allowed to change its UID to any UID on the system.

You can manipulate the capabilities available to a container without running in --privileged mode by using the --cap-add and --cap-drop flags. For example, if you want to run the container with all capabilities, you could execute:

$ podman run --cap-add=all -d fedora sleep 100
9d167c4c0980e70623598dd718b685c0aead6d32c4bb2da35f50f8a58cbc66ea
$ podman top -l capeff
EFFECTIVE CAPS
full

Using --cap-drop=all --cap-add setuid would run a container only with the setuid capability.

$ podman run --cap-drop=all --cap-add=setuid -d fedora sleep 100
d7f9954649024e20604ae995c9a05b1efcd7194b3e019f3495a24bfe4779c6aa
$ podman top -l capeff
EFFECTIVE CAPS
SETUID

Here is a link to a talk I gave at Devcon.cz on ways to increase the security in containers. The talk covers a lot of these security features and how to make them better.

Syscall filtering - SECCOMP

Container engines control the syscall tables available to processes inside of the container. This limits the attack surface of the Linux kernel by preventing container processes from executing syscalls inside of the container. If a syscall could cause a kernel exploit and allow a container to break out, then if the syscall is not available to the container processes, you prevent the break out. By default, container engines drop many syscalls. We recently wrote a blog on how to drop many more.

$ podman run -d fedora sleep 100
7ba4decb298a0e38fe0140b8bf039a662f4cd0fd666cd7a7f95d1bc12fdddecc
$ podman top -l seccomp
SECCOMP
filter

If you execute the --privileged flag, then the container engines do not use the SECCOMP syscall filters:

$ podman run --privileged -d fedora sleep 100
1469d3629d787e11100e3e9d011c97ff0249df1092b24af874f4e1be167f3852
$ podman top -l seccomp
SECCOMP
disabled

You can also turn off syscall filtering by using the --security-opt seccomp=unconfined option without using the full --privileged flag.

$ podman run --security-opt seccomp=unconfined -d fedora sleep 100
c18858a963d2e80e25ed1d118a6e48072047d69fc6efec23b26362408a8a71d3
$ podman top -l seccomp
SECCOMP
disabled
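
Between the built-in default filter and unconfined there is also a middle ground: both Podman and Docker accept a custom profile via --security-opt seccomp=/path/to/profile.json. The profile below is only a toy sketch that allows every syscall (effectively unconfined); a real profile would set a restrictive defaultAction and whitelist specific syscalls:

$ cat myprofile.json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": []
}
$ podman run --security-opt seccomp=myprofile.json -d fedora sleep 100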

SELinux

SELinux is a labeling system. Every process and every file system object has a label. SELinux policies define rules about what a process label is allowed to do with all of the other labels on the system. I feel SELinux is the best tool for controlling file system break outs of containers. Container engines launch container processes with a single confined SELinux label, usually container_t, and then set the content inside of the container to be labeled container_file_t. The SELinux policy rules basically say that the container_t processes can only read/write/execute files labeled container_file_t. If a container process escapes the container and attempts to write to content on the host, the Linux kernel denies access and only allows the container process to write to content labeled container_file_t.

$ podman run -d fedora sleep 100
d4194babf6b877c7100e79de92cd6717166f7302113018686cea650ea40bd7cb
$ podman top -l label
LABEL
system_u:system_r:container_t:s0:c647,c780

When you run with the --privileged flag, SELinux labels are disabled, and the container runs with the label that the container engine was executed with. This label is usually unconfined and has full access to the labels that the container engine does. In rootless mode, the container runs with container_runtime_t . In root mode, it runs with spc_t . The bottom line on both of these labels is that there is no additional confinement on the container process than what was on the container engine process.

$ podman run --privileged -d fedora sleep 100
23770ed2fef88b6a674af733a7a80b0d29bfa6a6db2888edf810eaa55ee2d93e
$ podman top -l label
LABEL
unconfined_u:system_r:container_runtime_t:s0

Like the other security mechanisms, SELinux confinement can also be disabled directly without requiring full --privileged mode.

$ podman run --security-opt label=disable -d fedora sleep 100
08d6170f71313bc98293c77686e41cebc3041e82eea189bd8c74d5b60290102f
$ podman top -l label
LABEL
unconfined_u:system_r:container_runtime_t:s0

Namespaces

What sometimes surprises users is that namespaces are NOT affected by the --privileged flag. This means that the container processes are still living in the virtualization world of containers. Even though they don't have the security constraints enabled, they do not see all of the processes on the system or the host network, for example. Users can disable individual namespaces by using the --pid=host, --net=host, --user=host, --ipc=host, and --uts=host container engine flags. Years ago, I defined these containers as super privileged containers.

$ podman top -l | wc -l
2

As you can see, by default, top shows only one process running in the container, along with the header.

$ podman run --pid=host -d fedora sleep 100
a90f2ccc335343a649dfdd777e252319a16a786a801da2462d2a4dbe0d8f55ad
$ podman top -l | wc -l
421

When I run the container with --pid=host , the container engine does not use the PID namespace, and the container processes see all of the processes on the host as well as the processes inside of the container.

Similarly, --net=host disables the network namespace, allowing the container processes to use the host network.

User namespace

The container engines' user namespace is not affected by the --privileged flag. Container engines do NOT use the user namespace by default. However, rootless containers always use it to mount file systems and to use more than a single UID. In the rootless case, the user namespace cannot be disabled; it is required to run rootless containers. User namespaces prevent certain privileges and add considerable security.

Recent versions of Podman use containers.conf , which allows you to change the engine's default behavior when it comes to namespaces. If you wanted all of your containers to not use a network namespace by default, you could set this in containers.conf .
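
For example, a minimal containers.conf entry that makes containers share the host network by default might look like the following sketch (see containers.conf(5) for the full set of options and search paths):

# /etc/containers/containers.conf (system-wide) or ~/.config/containers/containers.conf (per-user)
[containers]
netns = "host"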

Conclusion

As a security engineer, I actually do not like users running with the --privileged mode. I wish they would figure out what privileges their container requires and run with as much security as possible, or better yet, they would redesign their application to run without requiring as many privileges. It's kind of like using setenforce 0 in the SELinux world, and you know how much I love that. But the bottom line is, we need users of container engines to understand what happens when they use the --privileged flag, and why sometimes they need to disable additional features to make their container execute successfully.

The open-source community is working on tools, in addition to the container engines themselves, to make this easier.


[Dec 01, 2019] Docker Run Command

Dec 01, 2019 | linuxize.com

The docker run command takes the following form:

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

The name of the image from which the container should be created is the only required argument for the docker run command. If the image is not present on the local system, it is pulled from the registry.

If no command is specified, the command specified in the Dockerfile's CMD or ENTRYPOINT instructions is executed when running the container.

Starting from version 1.13, the Docker CLI has been restructured, and all commands have been grouped under the object they interact with.

Since the run command interacts with containers, now it is a subcommand of docker container . The syntax of the new command is as follows:

docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]

The old, pre-1.13 syntax is still supported. Under the hood, the docker run command is an alias for docker container run. Users are encouraged to use the new command syntax.

A list of all docker container run options can be found on the Docker documentation page.

Run the Container in the Foreground

By default, when no option is provided to the docker run command, the root process is started in the foreground. This means that the standard input, output, and error from the root process are attached to the terminal session.

docker container run nginx

The output of the nginx process will be displayed on your terminal. Since there are no connections to the web server, the terminal is empty.

To stop the container, terminate the running Nginx process by pressing CTRL+C .

Run the Container in Detached Mode

To keep the container running when you exit the terminal session, start it in detached mode. This is similar to running a Linux process in the background.

Use the -d option to start a detached container:

docker container run -d nginx
050e72d8567a3ec1e66370350b0069ab5219614f9701f63fcf02e8c8689f04fa

The detached container will stop when the root process is terminated.

You can list the running containers using the docker container ls command.

To attach your terminal to the detached container root process, use the docker container attach command.
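
A minimal sketch, using the container name assigned later in this article; note that docker attach proxies signals by default, so pressing CTRL+C while attached stops the container:

docker container run -d --name my_nginx nginx
docker container attach my_nginx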

Remove the Container After Exit

By default, when the container exits, its file system persists on the host system.

The --rm option tells docker run to automatically remove the container when it exits:

docker container run --rm nginx

The Nginx image may not be the best example for cleaning up the container's file system after the container exits. This option is usually used with foreground containers that perform short-term tasks such as tests or database backups.

Set the Container Name

In Docker, each container is identified by its UUID and name. By default, if not explicitly set, the container's name is automatically generated by the Docker daemon.

Use the --name option to assign a custom name to the container:

docker container run -d --name my_nginx nginx

The container name must be unique. If you try to start another container with the same name, you'll get an error similar to this:

docker: Error response from daemon: Conflict. The container name "/my_nginx" is already in use by container "9...c". You have to remove (or rename) that container to be able to reuse that name.

Run docker container ls -a to list all containers, and see their names:

docker container ls
CONTAINER ID  IMAGE  COMMAND                 CREATED         STATUS         PORTS   NAMES
9d695c1f5ef4  nginx  "nginx -g 'daemon of "  36 seconds ago  Up 35 seconds  80/tcp  my_nginx

Meaningful names are useful to reference the container within a Docker network or when running docker CLI commands.

Publishing Container Ports

By default, if no ports are published, the process running in the container is accessible only from inside the container.

Publishing ports means mapping container ports to the host machine ports so that the ports are available to services outside of Docker.

To publish a port, use the -p option as follows:

-p host_ip:host_port:container_port/protocol

To map the TCP port 80 (nginx) in the container to port 8080 on the host localhost interface, you would run:

docker container run --name web_server -d -p 8080:80 nginx

You can verify that the port is published by opening http://localhost:8080 in your browser or running the following curl command on the Docker host:

curl -I http://localhost:8080

The output will look something like this:

HTTP/1.1 200 OK
Server: nginx/1.17.6
Date: Tue, 26 Nov 2019 22:55:59 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT
Connection: keep-alive
ETag: "5dd3e500-264"
Accept-Ranges: bytes
Sharing Data (Mounting Volumes)

When a container is removed, all data generated by the container is removed along with it. Docker Volumes are the preferred way to make the data persist and share it across multiple containers.

To create and manage volumes, use the -v option as follows:

-v host_src:container_dest:options

To explain how this works, let's create a directory on the host and put an index.html file in it:

mkdir public_html
echo "Testing Docker Volumes" > public_html/index.html

Next, mount the public_html directory into /usr/share/nginx/html in the container:

docker run --name web_server -d -p 8080:80 -v $(pwd)/public_html:/usr/share/nginx/html nginx

Instead of specifying the absolute path to the public_html directory, we're using the $(pwd) command, which prints the current working directory .

Now, if you type http://localhost:8080 in your browser, you should see the contents of the index.html file. You can also use curl :

curl http://localhost:8080
Testing Docker Volumes
Run the Container Interactively

When dealing with interactive processes like bash, use the -i and -t options to start the container.

The -it options tell Docker to keep the standard input attached to the terminal and to allocate a pseudo-TTY:

docker container run -it nginx /bin/bash

The container's Bash shell will be attached to the terminal, and the command prompt will change:

root@1da70f1937f5:/#

Now, you can interact with the container's shell and run any command inside of it.

In this example, we provided a command ( /bin/bash ) as an argument to the docker run command that was executed instead of the one specified in the Dockerfile.

Conclusion

Docker is the standard for packaging and deploying applications and an essential component of CI/CD, automation, and DevOps.

The docker container run command is used to create and run Docker containers.

If you have any questions, please leave a comment below.

[Oct 08, 2019] SithLordAJ

Oct 08, 2019 | www.reddit.com

6 days ago

Calling the uneducated people out on what they see as facts can be rewarding.

I wouldn't call them all uneducated. I think what they are is basically brainwashed. They constantly hear from the sales teams of vendors like Microsoft pitching them the idea of moving everything to Azure.

They do not hear the cons. They do not hear from the folks who know their environment and would know if something is a good fit. At least, they don't hear it enough, and they never see it first hand.

Now, I do think this is their fault... they need to seek out that info more, weigh things critically, and listen to what's going on with their teams more. Isolation from the team is their own doing.

After long enough standing on the edge and only hearing "jump!!", something stupid happens.

AquaeyesTardis 18 points · 6 days ago

Apart from performance, what would be some of the downsides of containers?

ztherion Programmer/Infrastructure/Linux 51 points · 6 days ago

There's little downside to containers by themselves. They're just a method of sandboxing processes and packaging a filesystem as a distributable image. From a performance perspective the impact is near negligible (unless you're doing some truly intensive disk I/O).

What can be problematic is taking a process that was designed to run on exactly n dedicated servers and converting it to a modern 2-to-n autoscaling deployment that shares hosting with other apps on a platform like Kubernetes. It's a significant challenge that requires a lot of expertise and maintenance, so there needs to be a clear business advantage to justify hiring at least one additional full-time engineer to deal with it.

AirFell85 11 points · 6 days ago

ELI5:

More logistical layers require more engineers to support.


justabofh 33 points · 6 days ago

Containers are great for stateless stuff. So your webservers/application servers can be shoved into containers. Think of containers as being the modern version of statically linked binaries or fat applications. Static binaries have the problem that any security vulnerability requires a full rebuild of the application, and that problem is escalated in containers (where you might not even know that a broken library exists)

If you are using the typical business application, you need one or more storage components for data which needs to be available, possibly changed and access controlled.

Containers are a bad fit for stateful databases, or any stateful component, really.

Containers also enable microservices, which are great ideas at a certain organisation size (if you aren't sure you need microservices, just use a simple monolithic architecture). The problem with microservices is that you replace complexity in your code with complexity in the communications between the various components, and that is harder to see and debug.

Untgradd 6 points · 6 days ago

Containers are fine for stateful services -- you can manage persistence at the storage layer the same way you would have to manage it if you were running the process directly on the host.


malikto44 5 points · 6 days ago

Backing up containers can be a pain, so you don't want to use them for valuable data unless the data is stored elsewhere, like a database or even a file server.

For spinning up stateless applications to take workload behind a load balancer, containers are excellent.


malikto44 3 points · 6 days ago

The problem is that there is an overwhelming din from vendors. Everybody and their brother, sister, mother, uncle, cousin, dog, cat, and gerbil is trying to sell you some pay-by-the-month cloud "solution".

The reason is that the cloud forces people into monthly payments, which is a guaranteed income for companies, but costs a lot more in the long run, and if something happens and one can't make the payments, business is halted, ensuring that bankruptcies hit hard and fast. Even with the mainframe, a company could limp along without support for a few quarters until they could get enough cash flow.

If we have a serious economic downturn, the fact that businesses will be completely shuttered if they can't afford their AWS bill just means fewer companies can limp along when the economy is bad, which will intensify a downturn.


wildcarde815 Jack of All Trades 12 points · 6 days ago

Also if you can't work without cloud access you better have a second link.

pottertown 10 points · 6 days ago

Our company viewed the move to Azure less as a cost savings measure and more of a move towards agility and "right now" sizing of our infrastructure.

Your point is very accurate; as an example, our location is wholly incapable of moving much to the cloud due to half of us being connected via satellite network and the other half being bent over a barrel by the only ISP in town.

_The_Judge 27 points · 6 days ago

I'm sorry, but I find management these days around tech wholly inadequate. The idea that you can get an MBA and manage shit you have no idea about is absurd and just wastes everyone else's time when they constantly have to ELI5 so the manager can do their job effectively.

laserdicks 57 points · 6 days ago

Calling the uneducated people out on what they see as facts can be rewarding

Aaand political suicide in a corporate environment. Instead I use the following:

"I love this idea! We've actually been looking into a similar solution however we weren't able to overcome some insurmountable cost sinkholes (remember: nothing is impossible; just expensive). Will this idea require an increase in internet speed to account for the traffic going to the azure cloud?" level 3

lokko12 71 points · 6 days ago

Will this idea require an increase in internet speed to account for the traffic going to the azure cloud?

No.

...then people rant on /r/sysadmin about stupid investments and say "but I told them".

HORACE-ENGDAHL Jack of All Trades 61 points · 6 days ago

This exactly, you can't compromise your own argument to the extent that it's easy to shoot down in the name of giving your managers a way of saving face, and if you deliver the facts after that sugar coating it will look even worse, as it will be interpreted as you setting them up to look like idiots. Being frank, objective and non-blaming is always the best route.

linuxdragons 13 points · 6 days ago

Yeah, this is a terrible example. If I were his manager I would be starting the paperwork trail after that meeting.


messburg 61 points · 6 days ago

I think it's quite an American thing to do it so enthusiastically; to hurt no one, but the result is so condescending. It must be annoying to walk on eggshells to survive a day in the office.

And this is not a rant against soft skills in IT, at all.

vagrantprodigy07 13 points · 6 days ago

It is definitely annoying.

widowhanzo 27 points · 6 days ago
· edited 6 days ago

We work with Americans and they're always so positive, it's kinda annoying. They enthusiastically say "This is very interesting" when in reality it sucks and they know it.

Another less professional example: one of my (non-American) co-workers always wants to go out for coffee (while we have free and better coffee in the office), and the American coworker is always nice like "I'll go with you but I'm not having any" and I just straight up reply "No. I'm not paying for shitty coffee, I'll make a brew in the office". And that's that. Sometimes telling it as it is makes the whole conversation much shorter :D

superkp 42 points · 6 days ago

Maybe the american would appreciate the break and a chance to spend some time with a coworker away from screens, but also doesn't want the shit coffee?

Sounds quite pleasant, honestly.

egamma Sysadmin 39 points · 6 days ago

Yes. "Go out for coffee" is code for "leave the office so we can complain about management/company/customer/Karen". Some conversations shouldn't happen inside the building. level 7

auru21 5 points · 6 days ago

And complain about that jerk who never joins them


Adobe_Flesh 6 points · 6 days ago

Inferior American - please compute this - the sole intention was consumption of coffee. Therefore due to existence of coffee in office, trip to coffee shop is not sane. Resistance to my reasoning is futile.


ITaggie Tier II Support/Linux Admin 10 points · 6 days ago

I mean, I've taken breaks just to get away from the screens for a little while. They might just like being around you.

[Aug 22, 2019] How to Copy-Move a Docker Container to Another Host - Make Tech Easier

Aug 22, 2019 | www.maketecheasier.com

Save Container Image from Source Host

It's not required to stop the container first, but it's highly recommended that you do so. You will take a snapshot of the data in your Docker instance. If it's running while you do this, there's a small chance some files might end up being incomplete in your snapshot. Imagine someone uploading a 500MB file. When 250MB has been uploaded, you issue the docker commit command. The upload then continues, but when you restore this Docker image on another host, only 250MB out of the 500MB might be available.

So, if you can, first stop the instance.

docker stop NAME_OF_INSTANCE

A Docker container is built out of a generic, initial image. Over time, you add your own changes to this base image. Processes running inside the container might also save their own data or make other changes. To preserve all of this, you can commit this new state to a new image.

Note that if the instance is currently running, this action will pause it while its contents are saved. If you added a lot of data to your container, this operation will take a longer time to complete. If this is a problem, you can avoid this pause by entering docker commit -p=false NAME_OF_INSTANCE mycontainerimage instead of the next command. However, don't do this unless absolutely necessary. The odds of creating an image with inconsistent/incomplete data increase in this case.


In this tutorial, a generic name has been chosen for the resulting image, mycontainerimage . You can change this name if you want to. If you do so, remember to replace it in all subsequent commands where you encounter it.

docker commit NAME_OF_INSTANCE mycontainerimage

Now, save this image to a file and compress it.

docker save mycontainerimage | gzip > mycontainerimage.tar.gz

Next, use your preferred file transfer method and copy mycontainerimage.tar.gz to the host where you want to migrate your container.
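
For example, with scp (the user, host, and destination path below are placeholders):

scp mycontainerimage.tar.gz user@destination-host:/tmp/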

Load Container Image on Destination Host

After you log in to the host where you transferred the image, import it to Docker.

gunzip -c mycontainerimage.tar.gz | docker load

Since you never initialized this container here, you cannot start it with docker start yet. Instead, issue the same command you used in the past, when you first ran this Docker instance. The only difference now is that you will use "mycontainerimage" at the end instead of whatever image you used in the past.

The next command is just an example; don't copy and paste this unless it applies to you. (No special parameters were required when you ran the image for the first time)

docker run -d --name=PICK_NAME_FOR_CONTAINER mycontainerimage

As contrast, the following is an example of a command where parameter --publish was required to forward port 80 on the host machine to port 80 on the container:

docker run -d --name=http-server --publish 80:80 mycontainerimage

Afterwards, you can stop and start this container normally, with docker stop and docker start commands.

Transfer Image without Creating a File

Sometimes you may want to skip creating a mycontainerimage.tar.gz file. Maybe you don't have enough disk space since the container has a lot of data in it. You can save, compress, transfer, uncompress and load the image on the destination host in one command. After running the docker commit command discussed in the first section, you can use this:

docker save mycontainerimage | gzip | ssh user@destination-host 'gunzip | docker load'

It should work from Windows, too, since it now has a built-in SSH client (PuTTY not necessary anymore).


Afterwards, continue with the docker run command that applies to your situation.

Conclusion

docker save and docker load are great as an ad hoc solution for moving containers around occasionally. But remember, if you do this often, you might want to set up your own private repository instead.
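
As a rough sketch of that last suggestion (commands are illustrative; a production registry also needs TLS and authentication, and pulling over plain HTTP from another host requires marking the registry as insecure in the daemon configuration):

docker run -d -p 5000:5000 --name registry registry:2
docker tag mycontainerimage localhost:5000/mycontainerimage
docker push localhost:5000/mycontainerimage
# on the destination host (replace REGISTRY_HOST with the registry machine's name or IP):
docker pull REGISTRY_HOST:5000/mycontainerimage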

[Jun 14, 2019] Learn to create Dockerfile with Dockerfile example - LinuxTechLab

Jun 14, 2019 | linuxtechlab.com

Learn to create Dockerfile with Dockerfile example

by Shusain · Published October 30, 2018 · Updated October 30, 2018

We have earlier discussed how to create a docker container & also learned some important commands for managing containers. In this tutorial, we will learn about the Dockerfile and all its parameters/commands, with Dockerfile examples.

A Dockerfile is a text file that contains a list of commands that are used to build a docker image automatically. Basically, a Dockerfile acts as a set of instructions needed to build a docker image.

( Recommended Read: Complete guide for creating Vagrant boxes with VirtualBox )

Dockerfile Example

Mentioned below is a Dockerfile example that we have already created, for CentOS with a web server (Apache) installed on it.

FROM centos:7
MAINTAINER linuxtechlab
LABEL Remarks="This is a dockerfile example for Centos system"

RUN yum -y update && \
    yum -y install httpd && \
    yum clean all

COPY data/httpd.conf /etc/httpd/conf/httpd.conf
ADD data/html.tar.gz /var/www/html/

EXPOSE 80

ENV HOME /root
WORKDIR /root

ENTRYPOINT ["ping"]
CMD ["google.com"]

Parameters

We will now discuss all the parameters mentioned here one by one, so that we have an understanding of what they actually mean.

FROM centos:7

FROM tells which base image you would like to use for creating your docker image. Since we are using CentOS 7, it's mentioned there. We can use other OS images like centos:6, ubuntu:16.04, etc.

MAINTAINER linuxtechlab

LABEL Remarks="This is a dockerfile example for Centos system"

Both of these fields, MAINTAINER & LABEL Remarks, are called labels. They are used to pass information like the maintainer of the docker image, the version number, the purpose, or some other remarks. We can add a number of labels, but it's recommended to avoid unnecessary ones.

RUN yum -y update && \
    yum -y install httpd && \
    yum clean all

The RUN command is responsible for installing packages or otherwise changing the docker image as we see fit. Here we have asked RUN to update our system & then install Apache on it. We can also ask it to create a directory or to install some other packages.

COPY data/httpd.conf /etc/httpd/conf/httpd.conf
ADD data/html.tar.gz /var/www/html/

COPY & ADD command almost server the same purpose i.e. they are used to copy the files to docker image with one difference. Here we have used the COPY command to copy httpd.conf from data directory to the default location of httpd.conf on docker image.

And we then used the ADD command to copy a tar.gz archive to Apache's document directory to serve content on the webserver. But you might have noticed that we didn't extract it, & that's the one difference between ADD & COPY: the ADD command will automatically extract the archive at the destination folder. Also, we could have used ADD in place of COPY, "ADD data/httpd.conf /etc/httpd/conf/httpd.conf".

EXPOSE 80

The EXPOSE command will open the mentioned port on the docker image to allow access from the outside world. We could also use EXPOSE 80/tcp or EXPOSE 53/udp.

ENV HOME /root

The ENV command sets up environment variables; here we have used it to set HOME to /root. The syntax for using ENV is

ENV key value

Some examples of ENV usage are,

ENV user admin, ENV database=testdb, ENV PHPVERSION 7, etc.
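
Values set with ENV are baked into the image, but they can also be overridden per container at run time with the -e flag; a quick sketch using the stock centos:7 image:

docker run --rm -e PHPVERSION=7 centos:7 env | grep PHPVERSION
PHPVERSION=7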

WORKDIR /root

With WORKDIR, we can set the working directory for the docker image. Here it has been set to /root.

ENTRYPOINT ["ping"]

CMD ["google.com"]

ENTRYPOINT & CMD are both used to define executable that should run once docker is up. On ENTRYPOINT, we define an executable & with CMD, we define additional parameters that are required for ENTRYPOINT. Like here, we have used ping with ENTRYPOINT but it requires additional parameter, which we provided with CMD. These both commands are used in conjunction with each other.

We can also use CMD alone, with something like CMD ["bash"].
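
To see the interplay at run time, assuming the image was built as centos-web-example (the illustrative name used earlier), any arguments after the image name replace CMD but not ENTRYPOINT:

docker run centos-web-example              # runs: ping google.com
docker run centos-web-example 8.8.8.8      # runs: ping 8.8.8.8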

Note: Not all these parameters are required when creating a Dockerfile; you can use only the ones you need.

Apart from the commands discussed above, there are some other commands as well that can be used in the Dockerfile & that are mentioned below,

USER

With USER, we can define the user to be used to execute a command like USER dan. We can specify USER with RUN, CMD or with ENTRYPOINT as well.

ONBUILD

The ONBUILD command lets you add a trigger that will be executed at a later time, when the current image is being used as a base image for another. For example, we have added our own content for the website using the Dockerfile, but we might not want it to be used for other docker images. So we will add:

ONBUILD RUN rm -rf /var/www/html/*

This will remove the contents when the image is being re-purposed.
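
To make the trigger concrete: the ONBUILD instruction sits dormant in the image that declares it and fires only when another Dockerfile uses that image as its base. A sketch (image names are illustrative):

# Dockerfile of the base image -- contains the dormant trigger
ONBUILD RUN rm -rf /var/www/html/*

# Dockerfile of a downstream image -- the trigger runs during THIS build, right after FROM
FROM my-base-image
COPY newsite/ /var/www/html/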

So these were all the commands that we can use in our Dockerfiles. Mentioned below are Dockerfile examples for Ubuntu & Fedora, for reference.

Ubuntu Dockerfile

# Get the base image
FROM ubuntu:16.04

# Install all packages
RUN \
    apt-get update && \
    apt-get -y upgrade && \
    apt-get install -y apache2

# Adding some content for the Apache server
RUN echo "This is a test docker" > /var/www/html/index.html

# Copying settings file & adding some content to be served by Apache
COPY data/httpd.conf /etc/apache2/httpd.conf

# Defining a command to be run after the docker is up
ENTRYPOINT ["elinks"]
CMD ["localhost"]

Fedora Dockerfile

FROM docker.io/fedora
MAINTAINER linuxtechlab
LABEL Remarks="This is a dockerfile example for Fedora system"

# Updating dependencies, installing Apache and cleaning dnf caches to reduce container size
RUN dnf -y update && \
    dnf -y install httpd && \
    dnf clean all && \
    mkdir /data

# Copying apache configuration file & adding some content to be served by apache
COPY data/httpd.conf /etc/httpd/conf/httpd.conf
ADD data/html.tar.gz /var/www/html/

# Adding a script & granting it execute permissions
ADD data/script.sh /data
RUN chmod +x /data/script.sh

# Open http port for apache
EXPOSE 80

# Set environment variables.
ENV HOME /root

# Defining a command to be run after the docker is up
CMD ["/data/script.sh"]

Now that we know how to create a Dockerfile, we will use this newly learned skill in our next tutorial to create a docker image & then upload it to DockerHub, the official Docker public image registry.

If you think we might have missed something or have some query regarding this tutorial, please let us know using the comment box below.


[Feb 11, 2019] Solving Docker permission denied while trying to connect to the Docker daemon socket

Highly recommended!
Notable quotes:
"... adding the current user to the docker group ..."
Jan 26, 2019 | techoverflow.net

Solution:

The error message tells you that your current user can't access the docker engine, because you're lacking permissions to access the unix socket to communicate with the engine.

As a temporary solution, you can use sudo to run the failed command as root.
However, it is recommended to fix the issue by adding the current user to the docker group:

Run this command in your favourite shell and then completely log out of your account and log back in (if in doubt, reboot!):

sudo usermod -a -G docker $USER

After doing that, you should be able to run the command without any issues. Run docker run hello-world as a normal user in order to check if it works. Reboot if the issue still persists.

Logging out and logging back in is required because the group change will not have an effect unless your session is closed.
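
After logging back in, a quick sanity check (a sketch; the docker group should appear in the output before docker commands work without sudo):

id -nG | grep -w docker
docker run hello-world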

Control Docker Service

Now that you have Docker installed onto your machine, start the Docker service in case it is not started automatically after the installation:

# systemctl start docker

# systemctl enable docker

Once the service is started, verify your installation by running the following command.

# docker run -it centos echo Hello-World

Let's see what happens when we run the docker run command. Docker starts a container from the centos base image. Since we are running this centos container for the first time, the output will look like below.

Unable to find image 'centos:latest' locally
Trying to pull repository docker.io/centos ...
0114405f9ff1: Download complete
511136ea3c5a: Download complete
b6718650e87e: Download complete
3d3c8202a574: Download complete
Status: Downloaded newer image for docker.io/centos:latest
Hello-World

Docker looks for the centos image locally; since it is not found, it starts downloading the centos image from the Docker registry. Once the image has been downloaded, it starts the container and echoes "Hello-World" to the console, which you can see at the end of the output.

[Feb 11, 2019] Getting started with Docker by Dockerizing this Blog by Benjamin Cane

Notable quotes:
"... If we wanted to start with a blank slate we could use the Ubuntu Docker image by specifying ..."
Feb 11, 2019 | bencane.com

Containers and Virtual Machines are often seen as conflicting technology, however, this is often a misunderstanding.

Virtual Machines are a way to take a physical server and provide a fully functional operating environment that shares those physical resources with other virtual machines. A Container is generally used to isolate a running process within a single host to ensure that the isolated processes cannot interact with other processes within that same system. In fact containers are closer to BSD Jails and chroot 'ed processes than full virtual machines.

What Docker provides on top of containers

Docker itself is not a container runtime environment; in fact Docker is actually container technology agnostic with efforts planned for Docker to support Solaris Zones and BSD Jails . What Docker provides is a method of managing, packaging, and deploying containers. While these types of functions may exist to some degree for virtual machines they traditionally have not existed for most container solutions and the ones that existed, were not as easy to use or fully featured as Docker.

Now that we know what Docker is, let's start learning how Docker works by first installing Docker and deploying a public pre-built container.

Starting with Installation

As Docker is not installed by default, step 1 will be to install the Docker package; since our example system is running Ubuntu 14.04 we will do this using the Apt package manager.

# apt-get install docker.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  aufs-tools cgroup-lite git git-man liberror-perl
Suggested packages:
  btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki git-svn
The following NEW packages will be installed:
  aufs-tools cgroup-lite docker.io git git-man liberror-perl
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 7,553 kB of archives.
After this operation, 46.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

To check if any containers are running we can execute the docker command using the ps option.

# docker ps
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES

The ps function of the docker command works similar to the Linux ps command. It will show available Docker containers and their current status. Since we have not started any Docker containers yet, the command shows no running containers.

Deploying a pre-built nginx Docker container

One of my favorite features of Docker is the ability to deploy a pre-built container in the same way you would deploy a package with yum or apt-get . To explain this better let's deploy a pre-built container running the nginx web server. We can do this by executing the docker command again, however, this time with the run option.

# docker run -d nginx
Unable to find image 'nginx' locally
Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete

The run function of the docker command tells Docker to find a specified Docker image and start a container running that image. By default, Docker containers run in the foreground, meaning when you execute docker run your shell will be bound to the container's console and the process running within the container. In order to launch this Docker container in the background I included the -d (detach) flag.

By executing docker ps again we can see the nginx container running.

# docker ps
CONTAINER ID    IMAGE         COMMAND               CREATED        STATUS        PORTS            NAMES
f6d31ab01fc9    nginx:latest  nginx -g 'daemon off  4 seconds ago  Up 3 seconds  443/tcp, 80/tcp  desperate_lalande

In the above output we can see the running container desperate_lalande and that this container has been built from the nginx:latest image.

Docker Images

Images are one of Docker's key features and is similar to a virtual machine image. Like virtual machine images, a Docker image is a container that has been saved and packaged. Docker however, doesn't just stop with the ability to create images. Docker also includes the ability to distribute those images via Docker repositories which are a similar concept to package repositories. This is what gives Docker the ability to deploy an image like you would deploy a package with yum . To get a better understanding of how this works let's look back at the output of the docker run execution.

# docker run -d nginx
Unable to find image 'nginx' locally

The first message we see is that docker could not find an image named nginx locally. The reason we see this message is that when we executed docker run we told Docker to startup a container, a container based on an image named nginx . Since Docker is starting a container based on a specified image it needs to first find that image. Before checking any remote repository Docker first checks locally to see if there is a local image with the specified name.

Since this system is brand new there is no Docker image with the name nginx , which means Docker will need to download it from a Docker repository.

Pulling repository nginx
5c82215b03d1: Download complete
e2a4fb18da48: Download complete
58016a5acc80: Download complete
657abfa43d82: Download complete
dcb2fe003d16: Download complete
c79a417d7c6f: Download complete
abb90243122c: Download complete
d6137c9e2964: Download complete
85e566ddc7ef: Download complete
69f100eb42b5: Download complete
cd720b803060: Download complete
7cc81e9a118a: Download complete

This is exactly what the second part of the output is showing us. By default, Docker uses the Docker Hub repository, which is a repository service that Docker (the company) runs.

Like GitHub, Docker Hub is free for public repositories but requires a subscription for private repositories. It is possible however, to deploy your own Docker repository, in fact it is as easy as docker run registry . For this article we will not be deploying a custom registry service.

Stopping and Removing the Container

Before moving on to building a custom Docker container let's first clean up our Docker environment. We will do this by stopping the container from earlier and removing it.

To start a container we executed docker with the run option, in order to stop this same container we simply need to execute the docker with the kill option specifying the container name.

# docker kill desperate_lalande
desperate_lalande

If we execute docker ps again we will see that the container is no longer running.

# docker ps
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES

However, at this point we have only stopped the container; while it may no longer be running it still exists. By default, docker ps will only show running containers, if we add the -a (all) flag it will show all containers running or not.

# docker ps -a
CONTAINER ID    IMAGE         COMMAND               CREATED      STATUS                          PORTS    NAMES
f6d31ab01fc9    5c82215b03d1  nginx -g 'daemon off  4 weeks ago  Exited (-1) About a minute ago           desperate_lalande

In order to fully remove the container we can use the docker command with the rm option.

# docker rm desperate_lalande
desperate_lalande

While this container has been removed; we still have a nginx image available. If we were to re-run docker run -d nginx again the container would be started without having to fetch the nginx image again. This is because Docker already has a saved copy on our local system.

To see a full list of local images we can simply run the docker command with the images option.

# docker images
REPOSITORY    TAG     IMAGE ID      CREATED     VIRTUAL SIZE
nginx         latest  9fab4090484a  5 days ago  132.8 MB
Building our own custom image

At this point we have used a few basic Docker commands to start, stop and remove a common pre-built image. In order to "Dockerize" this blog however, we are going to have to build our own Docker image and that means creating a Dockerfile .

With most virtual machine environments if you wish to create an image of a machine you need to first create a new virtual machine, install the OS, install the application and then finally convert it to a template or image. With Docker however, these steps are automated via a Dockerfile. A Dockerfile is a way of providing build instructions to Docker for the creation of a custom image. In this section we are going to build a custom Dockerfile that can be used to deploy this blog.

Understanding the Application

Before we can jump into creating a Dockerfile we first need to understand what is required to deploy this blog.

The blog itself is actually static HTML pages generated by a custom static site generator that I wrote, named hamerkop. The generator is very simple and more about getting the job done for this blog specifically. All the code and source files for this blog are available via a public GitHub repository. In order to deploy this blog we simply need to grab the contents of the GitHub repository, install Python along with some Python modules and execute the hamerkop application. To serve the generated content we will use nginx; which means we will also need nginx to be installed.

So far this should be a pretty simple Dockerfile, but it will show us quite a bit of the Dockerfile Syntax . To get started we can clone the GitHub repository and creating a Dockerfile with our favorite editor; vi in my case.

# git clone https://github.com/madflojo/blog.git
Cloning into 'blog'...
remote: Counting objects: 622, done.
remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622
Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (242/242), done.
Checking connectivity... done.
# cd blog/
# vi Dockerfile
FROM - Inheriting a Docker image

The first instruction of a Dockerfile is the FROM instruction. This is used to specify an existing Docker image to use as our base image. This basically provides us with a way to inherit another Docker image. In this case we will be starting with the same nginx image we were using before.

If we wanted to start with a blank slate we could use the Ubuntu Docker image by specifying ubuntu:latest .

## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

In addition to the FROM instruction, I also included a MAINTAINER instruction which is used to show the Author of the Dockerfile.

As Docker supports using # as a comment marker, I will be using this syntax quite a bit to explain the sections of this Dockerfile.

Running a test build

Since we inherited the nginx Docker image our current Dockerfile also inherited all the instructions within the Dockerfile used to build that nginx image. What this means is even at this point we are able to build a Docker image from this Dockerfile and run a container from that image. The resulting image will essentially be the same as the nginx image but we will run through a build of this Dockerfile now and a few more times as we go to help explain the Docker build process.

In order to start the build from a Dockerfile we can simply execute the docker command with the build option.

# docker build -t blog /root/blog
Sending build context to Docker daemon 23.6 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <[email protected]>
 ---> Running in c97f36450343
 ---> 60a44f78d194
Removing intermediate container c97f36450343
Successfully built 60a44f78d194

In the above example I used the -t (tag) flag to "tag" the image as "blog". This essentially allows us to name the image, without specifying a tag the image would only be callable via an Image ID that Docker assigns. In this case the Image ID is 60a44f78d194 which we can see from the docker command's build success message.

In addition to the -t flag, I also specified the directory /root/blog . This directory is the "build directory", which is the directory that contains the Dockerfile and any other files necessary to build this container.

Now that we have run through a successful build, let's start customizing this image.

Using RUN to execute apt-get

The static site generator used to generate the HTML pages is written in Python and because of this the first custom task we should perform within this Dockerfile is to install Python . To install the Python package we will use the Apt package manager. This means we will need to specify within the Dockerfile that apt-get update and apt-get install python-dev are executed; we can do this with the RUN instruction.

## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

In the above we are simply using the RUN instruction to tell Docker that when it builds this image it will need to execute the specified apt-get commands. The interesting part of this is that these commands are only executed within the context of this container. What this means is that even though python-dev and python-pip are being installed within the container, they are not being installed for the host itself. Or to put it more simply: within the container the pip command will execute; outside the container, the pip command does not exist.

It is also important to note that the Docker build process does not accept user input during the build. This means that any commands being executed by the RUN instruction must complete without user input. This adds a bit of complexity to the build process as many applications require user input during installation. For our example, none of the commands executed by RUN require user input.

Installing Python modules

With Python installed we now need to install some Python modules. To do this outside of Docker, we would generally use the pip command and reference a file within the blog's Git repository named requirements.txt . In an earlier step we used the git command to "clone" the blog's GitHub repository to the /root/blog directory; this also happens to be the directory in which we created the Dockerfile . This is important, as it means the contents of the Git repository are accessible to Docker during the build process.

When executing a build, Docker will set the context of the build to the specified "build directory". This means that any files within that directory and below can be used during the build process; files outside of that directory (outside of the build context) are inaccessible.

In order to install the required Python modules we will need to copy the requirements.txt file from the build directory into the container. We can do this using the COPY instruction within the Dockerfile .

## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt

Within the Dockerfile we added 3 instructions. The first uses RUN to create a /build/ directory within the container. This directory will be used to copy any application files needed to generate the static HTML pages. The second is the COPY instruction, which copies the requirements.txt file from the "build directory" ( /root/blog ) into the /build directory within the container. The third uses the RUN instruction to execute the pip command, installing all the modules specified within the requirements.txt file.

COPY is an important instruction to understand when building custom images. Without explicitly copying the file within the Dockerfile, this Docker image would not contain the requirements.txt file. With Docker containers everything is isolated; unless a step is explicitly performed within the Dockerfile, a container will not include required dependencies.

Re-running a build

Now that we have a few customization tasks for Docker to perform, let's run another build of the blog image.

# docker build -t blog /root/blog
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <[email protected]>
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Running in bde05cf1e8fe
 ---> f4b66e09fa61
Removing intermediate container bde05cf1e8fe
Step 5 : COPY requirements.txt /build/
 ---> cef11c3fb97c
Removing intermediate container 9aa8ff43f4b0
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Running in c50b15ddd8b1
Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1))
Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2))
<truncated to reduce noise>
Successfully installed jinja2 PyYaml mistune markdown MarkupSafe
Cleaning up...
 ---> abab55c20962
Removing intermediate container c50b15ddd8b1
Successfully built abab55c20962

From the above build output we can see the build was successful, but we can also see another interesting message: ---> Using cache . This message tells us that Docker was able to use its build cache during the build of this image.

Docker build cache

When Docker is building an image, it doesn't just build a single image; it actually builds multiple images throughout the build process. In fact, we can see from the above output that after each "Step" Docker is creating a new image.

Step 5 : COPY requirements.txt /build/
 ---> cef11c3fb97c

The last line of the above snippet is Docker informing us of the creation of a new image; it does this by printing the Image ID: cef11c3fb97c . The useful thing about this approach is that Docker is able to use these images as a cache during subsequent builds of the blog image. This is useful because it allows Docker to speed up the build process for new builds of the same container. If we look at the example above, we can see that rather than installing the python-dev and python-pip packages again, Docker was able to use a cached image. However, since Docker was unable to find a build that executed the mkdir command, each subsequent step was executed.

The Docker build cache is a bit of a gift and a curse; the reason for this is that the decision to use the cache or to rerun the instruction is made within a very narrow scope. For example, if there was a change to the requirements.txt file, Docker would detect this change during the build and start fresh from that point forward. It does this because it can view the contents of the requirements.txt file. The execution of the apt-get commands, however, is another story. If the Apt repository that provides the Python packages were to contain a newer version of the python-pip package, Docker would not be able to detect the change and would simply use the build cache. This means that an older package may be installed. While this may not be a major issue for the python-pip package, it could be a problem if the cached layer contained a package with a known vulnerability.

For this reason it is useful to periodically rebuild the image without using Docker's cache. To do this you can simply specify --no-cache=True when executing a Docker build.
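For instance, a cache-free rebuild of the image above would look something like this (reusing the same tag and build directory from the earlier examples):

# docker build --no-cache=True -t blog /root/blog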

Deploying the rest of the blog

With the Python packages and modules installed this leaves us at the point of copying the required application files and running the hamerkop application. To do this we will simply use more COPY and RUN instructions.

## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt

## Add blog code and required files
COPY static /build/static
COPY templates /build/templates
COPY hamerkop /build/
COPY config.yml /build/
COPY articles /build/articles

## Run Generator
RUN /build/hamerkop -c /build/config.yml

Now that we have the rest of the build instructions, let's run through another build and verify that the image builds successfully.

# docker build -t blog /root/blog/
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <[email protected]>
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Using cache
 ---> f4b66e09fa61
Step 5 : COPY requirements.txt /build/
 ---> Using cache
 ---> cef11c3fb97c
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Using cache
 ---> abab55c20962
Step 7 : COPY static /build/static
 ---> 15cb91531038
Removing intermediate container d478b42b7906
Step 8 : COPY templates /build/templates
 ---> ecded5d1a52e
Removing intermediate container ac2390607e9f
Step 9 : COPY hamerkop /build/
 ---> 59efd1ca1771
Removing intermediate container b5fbf7e817b7
Step 10 : COPY config.yml /build/
 ---> bfa3db6c05b7
Removing intermediate container 1aebef300933
Step 11 : COPY articles /build/articles
 ---> 6b61cc9dde27
Removing intermediate container be78d0eb1213
Step 12 : RUN /build/hamerkop -c /build/config.yml
 ---> Running in fbc0b5e574c5
Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux
Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux
<truncated to reduce noise>
Successfully created file /usr/share/nginx/html//archive.html
Successfully created file /usr/share/nginx/html//sitemap.xml
 ---> 3b25263113e1
Removing intermediate container fbc0b5e574c5
Successfully built 3b25263113e1
Running a custom container

With a successful build we can now start our custom container by running the docker command with the run option, similar to how we started the nginx container earlier.

# docker run -d -p 80:80 --name=blog blog
5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1

Once again the -d (detach) flag was used to tell Docker to run the container in the background. However, there are also two new flags. The first new flag is --name , which is used to give the container a user-specified name. In the earlier example we did not specify a name, and because of that Docker randomly generated one. The second new flag is -p , which allows users to map a port from the host machine to a port within the container.

The base nginx image we used exposes port 80 for the HTTP service. By default, ports bound within a Docker container are not bound on the host system as a whole. In order for external systems to access ports exposed within a container, the ports must be mapped from a host port to a container port using the -p flag. The command above maps port 80 on the host to port 80 within the container. If we wished to map port 8080 on the host to port 80 within the container, we could do so with the syntax -p 8080:80 .
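As a quick sketch, the same run command with that alternative mapping would look like this (reusing the image and container name from above):

# docker run -d -p 8080:80 --name=blog blog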

From the above output it appears that our container was started successfully; we can verify this by executing docker ps .

# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                         NAMES
d264c7ef92bd        blog:latest         nginx -g 'daemon off   3 seconds ago       Up 3 seconds        443/tcp, 0.0.0.0:80->80/tcp   blog
Wrapping up

At this point we now have a running custom Docker container. While we touched on a few Dockerfile instructions within this article, we have yet to discuss all the instructions. For a full list of Dockerfile instructions you can check out Docker's reference page , which explains the instructions very well.

Another good resource is their Dockerfile Best Practices page, which contains quite a few tips for building custom Dockerfiles. Some of these are very useful, such as strategically ordering the commands within the Dockerfile. In the above examples our Dockerfile has the COPY instruction for the articles directory as the last COPY instruction. The reason for this is that the articles directory will change quite often. It's best to put instructions that change often as late as possible within the Dockerfile to maximize the number of steps that can be cached.

In this article we covered how to start a pre-built container and how to build, then deploy a custom container. While there is quite a bit to learn about Docker this article should give you a good idea on how to get started. Of course, as always if you think there is anything that should be added drop it in the comments below.

[Jan 27, 2019] If you are using the Docker package supplied by Red Hat / CentOS, the dockerroot group is automatically added to the system. You will need to edit (or create) /etc/docker/daemon.json to include the following: group : dockerroot

Highly recommended!
Jan 27, 2019 | rancher.com

If you are using the Docker package supplied by Red Hat / CentOS, the package name is docker . You can check the installed package by executing:

rpm -q docker

If you are using the Docker package supplied by Red Hat / CentOS, the dockerroot group is automatically added to the system. You will need to edit (or create) /etc/docker/daemon.json to include the following:

{
    "group": "dockerroot"
}

Restart Docker after editing or creating the file. After restarting Docker, you can check the group permission of the Docker socket ( /var/run/docker.sock ), which should show dockerroot as group:
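Assuming a systemd-based host, the restart and the subsequent permission check would look roughly like this:

systemctl restart docker
ls -l /var/run/docker.sock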

srw-rw----. 1 root dockerroot 0 Jul  4 09:57 /var/run/docker.sock

Add the SSH user you want to use to this group; this can't be the root user.

usermod -aG dockerroot <user_name>

To verify that the user is correctly configured, log out of the node and login with your SSH user, and execute docker ps :

ssh <user_name>@node
$ docker ps
CONTAINER

[Jan 26, 2019] You need to add the user to the dockerroot group and create a daemon.json file to be able to use docker from a regular user account after installing Docker from Red Hat packages by Aslan Brooke

Highly recommended!
It is better to stop docker and start it again after changing daemon.json. Restart does not always work as intended, and the socket can remain incorrectly owned.
Oct 15, 2018 | blog.aslanbrooke.com

Originally from Run Docker Without Sudo – Aslan Brooke's Blog

Update the /etc/docker/daemon.json as follows (this will require root privileges):
{
    "live-restore": true,
    "group": "dockerroot"
}

Add the user (replace <user name> below with the actual account) to the "dockerroot" group using the command below and then restart the docker service.

usermod -aG dockerroot <user name> 
restart docker service
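Following the note above about stopping and starting the daemon rather than restarting it, the sequence on a systemd host would be roughly:

systemctl stop docker
systemctl start docker
ls -l /var/run/docker.sock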

[Jan 26, 2019] How do I download Docker images without using the pull command when you are behind a firewall

Highly recommended!
Jan 26, 2019 | stackoverflow.com

Ephreal , Jun 19, 2016 at 9:29

Is there a way I can download a Docker image/container using, for example, Firefox and not using the built-in docker-pull ?

I am blocked by the company firewall and proxy, and I can't get a hole through it.

My problem is that I cannot use Docker to get images, that is, Docker save/pull and other Docker supplied functions since it is blocked by a firewall.

I cannot get access to Docker Hub. I get an "x509: Certificate signed by unknown authority" error. My company is using zScaler as a man-in-the-middle firewall – Ephreal Jun 19 '16 at 10:38

erikbwork , Apr 25, 2017 at 13:54

Possible duplicate of How to copy docker images from one host to another without via repository?erikbwork Apr 25 '17 at 13:54

vikas027 , Dec 12, 2016 at 11:30

Just an alternative - This is what I did in my organization for couchbase image where I was blocked by a proxy. On my personal laptop (OS X)
~$ docker save couchbase > couchbase.tar
~$ ls -lh couchbase.tar
-rw-------  1 vikas  devops   556M 12 Dec 21:15 couchbase.tar
~$ xz -9 couchbase.tar
~$ ls -lh couchbase.tar.xz
-rw-r--r--  1 vikas  staff   123M 12 Dec 22:17 couchbase.tar.xz

Then, I uploaded the compressed tar ball to Dropbox and downloaded on my work machine. For some reason Dropbox was open :)

On my work laptop (CentOS 7)
$ docker load < couchbase.tar.xz


Ephreal , Dec 15, 2016 at 15:43

Thank you; didn't know you could save an image into a tar ball. I will try this. – Ephreal Dec 15 '16 at 15:43
I just had to deal with this issue myself: downloading an image on a restricted machine that has Internet access but no Docker client, for use on another restricted machine that has the Docker client but no Internet access. I posted my question to the DevOps Stack Exchange site :

With help from the Docker Community I was able to find a resolution to my problem. What follows is my solution.


So it turns out that the Moby Project has a shell script on the Moby GitHub account which can download images from Docker Hub in a format that can be imported into Docker:

The usage syntax for the script is given by the following:

download-frozen-image-v2.sh target_dir image[:tag][@digest] ...

The image can then be imported with tar and docker load :

tar -cC 'target_dir' . | docker load

To verify that the script works as expected, I downloaded an Ubuntu image from Docker Hub and loaded it into Docker:

user@host:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@host:~$ tar -cC 'ubuntu' . | docker load
user@host:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#

In practice I would have to first copy the data from the Internet client (which does not have Docker installed) to the target/destination machine (which does have Docker installed):

user@nodocker:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@nodocker:~$ tar -C 'ubuntu' -cf 'ubuntu.tar' .
user@nodocker:~$ scp ubuntu.tar user@hasdocker:~

and then load and use the image on the target host:

user@hasdocker:~$ docker load -i ubuntu.tar
user@hasdocker:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#

[Jan 26, 2019] How To Install and Use Docker on CentOS 7

Nov 02, 2016 | www.digitalocean.com
Introduction

Docker is an application that makes it simple and easy to run application processes in a container, which are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system. For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components .

There are two methods for installing Docker on CentOS 7. One method involves installing it on an existing installation of the operating system. The other involves spinning up a server with a tool called Docker Machine that auto-installs Docker on it.

In this tutorial, you'll learn how to install and use it on an existing installation of CentOS 7.

Prerequisites

Note: Docker requires a 64-bit version of CentOS 7 as well as a kernel version equal to or greater than 3.10. The default 64-bit CentOS 7 Droplet meets these requirements.

All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by sudo . Initial Setup Guide for CentOS 7 explains how to add users and give them sudo access.

Step 1 -- Installing Docker

The Docker installation package available in the official CentOS 7 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.

But first, let's update the package database:
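The command block here was dropped when the tutorial was copied; on CentOS 7 it is presumably:

sudo yum check-update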

Now run this command. It will add the official Docker repository, download the latest version of Docker, and install it:
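The command itself did not survive the copy either; the tutorial most likely relied on Docker's convenience script (URL as published at the time):

curl -fsSL https://get.docker.com/ | sh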

After installation has completed, start the Docker daemon:
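On a systemd-based CentOS 7 host this step is presumably:

sudo systemctl start docker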

Verify that it's running:
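The verification is, in all likelihood, the usual status call:

sudo systemctl status docker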

The output should be similar to the following, showing that the service is active and running:

Output
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2016-05-01 06:53:52 CDT; 1 weeks 3 days ago
     Docs: https://docs.docker.com
 Main PID: 749 (docker)

Lastly, make sure it starts at every server reboot:
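The enable step, almost certainly:

sudo systemctl enable docker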

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.

Step 2 -- Executing Docker Command Without Sudo (Optional)

By default, running the docker command requires root privileges -- that is, you have to prefix the command with sudo . It can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get an output like this:

Output
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:
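The stripped-out command is, in all probability, the standard usermod invocation with your own login substituted:

sudo usermod -aG docker $(whoami)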

You will need to log out of the Droplet and back in as the same user to enable this change.

If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:

The rest of this article assumes you are running the docker command as a user in the docker user group. If you choose not to, please prepend the commands with sudo .

Step 3 -- Using the Docker Command

With Docker installed and working, now's the time to become familiar with the command line utility. Using docker consists of passing it a chain of options and subcommands followed by arguments. The syntax takes this form:

To view all available subcommands, type:

As of Docker 1.11.1, the complete list of available subcommands includes:

Output
    attach    Attach to a running container
    build     Build an image from a Dockerfile
    commit    Create a new image from a container's changes
    cp        Copy files/folders between a container and the local filesystem
    create    Create a new container
    diff      Inspect changes on a container's filesystem
    events    Get real time events from the server
    exec      Run a command in a running container
    export    Export a container's filesystem as a tar archive
    history   Show the history of an image
    images    List images
    import    Import the contents from a tarball to create a filesystem image
    info      Display system-wide information
    inspect   Return low-level information on a container or image
    kill      Kill a running container
    load      Load an image from a tar archive or STDIN
    login     Log in to a Docker registry
    logout    Log out from a Docker registry
    logs      Fetch the logs of a container
    network   Manage Docker networks
    pause     Pause all processes within a container
    port      List port mappings or a specific mapping for the CONTAINER
    ps        List containers
    pull      Pull an image or a repository from a registry
    push      Push an image or a repository to a registry
    rename    Rename a container
    restart   Restart a container
    rm        Remove one or more containers
    rmi       Remove one or more images
    run       Run a command in a new container
    save      Save one or more images to a tar archive
    search    Search the Docker Hub for images
    start     Start one or more stopped containers
    stats     Display a live stream of container(s) resource usage statistics
    stop      Stop a running container
    tag       Tag an image into a repository
    top       Display the running processes of a container
    unpause   Unpause all processes within a container
    update    Update configuration of one or more containers
    version   Show the Docker version information
    volume    Manage Docker volumes
    wait      Block until a container stops, then print its exit code

To view the switches available to a specific command, type:

To view system-wide information, use:
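The example commands referenced in the last few paragraphs were lost in this copy; they are most likely the following (docker-subcommand stands in for any real subcommand):

docker [option] [command] [arguments]    # general syntax
docker                                   # list all available subcommands
docker docker-subcommand --help          # switches for a specific subcommand
docker info                              # system-wide information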

Step 4 -- Working with Docker Images

Docker containers are run from Docker images. By default, it pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anybody can build and host their Docker images on Docker Hub, so most applications and Linux distributions you'll need to run Docker containers have images that are hosted on Docker Hub.

To check whether you can access and download images from Docker Hub, type:
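The test command here is almost certainly the classic hello-world run:

docker run hello-world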

The output, which should include the following, should indicate that Docker is working correctly:

Output
Hello from Docker.
This message shows that your installation appears to be working correctly.
...

You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the CentOS image, type:
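The search command itself, not reproduced above, would simply be:

docker search centos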

The script will crawl Docker Hub and return a listing of all images whose names match the search string. In this case, the output will be similar to this:

Output
NAME                            DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
centos                          The official build of CentOS.                   2224    [OK]
jdeathe/centos-ssh              CentOS-6 6.7 x86_64 / CentOS-7 7.2.1511 x8...   22                 [OK]
jdeathe/centos-ssh-apache-php   CentOS-6 6.7 x86_64 / Apache / PHP / PHP M...   17                 [OK]
million12/centos-supervisor     Base CentOS-7 with supervisord launcher, h...   11                 [OK]
nimmis/java-centos              This is docker images of CentOS 7 with dif...   10                 [OK]
torusware/speedus-centos        Always updated official CentOS docker imag...   8                  [OK]
nickistre/centos-lamp           LAMP on centos setup                            3                  [OK]
...

In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image that you would like to use, you can download it to your computer using the pull subcommand, like so:

After an image has been downloaded, you may then run a container using the downloaded image with the run subcommand. If an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it:

To see the images that have been downloaded to your computer, type:
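The three commands referenced in the last few paragraphs were dropped in this copy; they are presumably:

docker pull centos
docker run centos
docker images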

The output should look similar to the following:

Output
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              latest              778a53015523        5 weeks ago         196.7 MB
hello-world         latest              94df4f0ce8a4        2 weeks ago         967 B

As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded ( pushed is the technical term) to Docker Hub or other Docker registries.

Step 5 -- Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits, after emitting a test message. Containers, however, can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

As an example, let's run a container using the latest image of CentOS. The combination of the -i and -t switches gives you interactive shell access into the container:
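The run command for an interactive CentOS container, not shown above, is presumably:

docker run -it centos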

Your command prompt should change to reflect the fact that you're now working inside the container and should take this form:

Output [root@59839a1b7de2 /]#

Important: Note the container id in the command prompt. In the above example, it is 59839a1b7de2 .

Now you may run any command inside the container. For example, let's install MariaDB server in the running container. No need to prefix any command with sudo , because you're operating inside the container with root privileges:
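Inside the container the install command would, in all likelihood, be simply:

yum install mariadb-server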

Step 6 -- Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

This section shows you how to save the state of a container as a new Docker image.

After installing MariaDB server inside the CentOS container, you now have a container running off an image, but the container is different from the image you used to create it.

To save the state of the container as a new image, first exit from it:

Then commit the changes to a new Docker image instance using the following command. The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container ID is the one you noted earlier in the tutorial when you started the interactive docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username:

For example:
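The exit and commit commands referenced above were stripped from this copy; based on the surrounding description they were along these lines (the container ID 59839a1b7de2 and the repository name finid/centos-mariadb are taken from the earlier prompt and the image listing below):

exit
docker commit -m "Commit message" -a "Author Name" container-id repository/new_image_name
docker commit -m "added mariadb-server" -a "finid" 59839a1b7de2 finid/centos-mariadb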

Note: When you commit an image, the new image is saved locally, that is, on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so that it may be accessed and used by you and others.

After that operation has completed, listing the Docker images now on your computer should show the new image, as well as the old one that it was derived from:

The output should be of this sort:

Output
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
finid/centos-mariadb   latest              23390430ec73        6 seconds ago       424.6 MB
centos                 latest              778a53015523        5 weeks ago         196.7 MB
hello-world            latest              94df4f0ce8a4        2 weeks ago         967 B

In the above example, centos-mariadb is the new image, which was derived from the existing CentOS image from Docker Hub. The size difference reflects the changes that were made. And in this example, the change was that MariaDB server was installed. So next time you need to run a container using CentOS with MariaDB server pre-installed, you can just use the new image. Images may also be built from what's called a Dockerfile. But that's a very involved process that's well outside the scope of this article. We'll explore that in a future article.

Step 7 -- Listing Docker Containers

After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:
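The listing command, dropped from this copy, is simply:

docker ps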

You will see output similar to the following:

Output
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
f7c79cc556dd        centos              "/bin/bash"         3 hours ago         Up 3 hours                              silly_spence

To view all containers -- active and inactive, pass it the -a switch:

To view the latest container you created, pass it the -l switch:

Stopping a running or active container is as simple as typing:
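The remaining commands in this step, reconstructed from the descriptions above, would be:

docker ps -a
docker ps -l
docker stop container-id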

The container-id can be found in the output from the docker ps command.

Step 8 -- Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

This section shows you how to push a Docker image to Docker Hub.

To create an account on Docker Hub, register at Docker Hub . Afterwards, to push your image, first log into Docker Hub. You'll be prompted to authenticate:

If you specified the correct password, authentication should succeed. Then you may push your own image using:
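The login and push commands referenced here did not survive the copy; judging by the output below, they would look roughly like this (with finid/centos-mariadb as the example image):

docker login -u docker-registry-username
docker push finid/centos-mariadb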

It will take some time to complete, and when completed, the output will be of this sort:

Output
The push refers to a repository [docker.io/finid/centos-mariadb]
670194edfaf5: Pushed
5f70bf18a086: Mounted from library/centos
6a6c96337be1: Mounted from library/centos
...

After pushing an image to a registry, it should be listed on your account's dashboard, like that shown in the image below.

[Dec 20, 2018] The docker.io Debian package is back to life by Arnaud Rebillout

Notable quotes:
"... first time in two years ..."
"... one-year leap forward ..."
"... Debian Go Packaging Team ..."
"... If you're running Debian 8 Jessie, you can install Docker 1.6.2, through backports. This version was released on May 14, 2015. That's 3 years old, but Debian Jessie is fairly old as well. ..."
Apr 07, 2018 | www.collabora.com

Last week, a new version of docker.io, the Docker package provided by Debian, was uploaded to Debian Unstable. Quickly afterwards, the package moved to Debian Testing. This is good news for Debian users, as before that the package was more or less abandoned in "unstable", and the future was uncertain.

The most striking fact about this change: it's the first time in two years that docker.io has migrated to "testing". Another interesting fact is that, version-wise, the package is moving from 1.13.1 from early 2017 to version 18.03 from March 2018: that's a one-year leap forward.

Let me give you a very rough summary of how things came to be. I personally started to work on that early in 2018. I joined the Debian Go Packaging Team and I started to work on the many, many Docker dependencies that needed to be updated in order to update the Docker package itself. I could get some of this work uploaded to Debian, but ultimately I was a bit stuck on how to solve the circular dependencies that plague the Docker package. This is where another Debian Developer, Dmitry Smirnov, jumped in. We discussed the current status and issues, and then he basically did all the job, from updating the package to tackling all the long-time opened bugs.

That was the short story; let me now give you some more details.

The Docker package in Debian

To better understand why this update of the docker.io package is such a good news, let's have quick look at the current Debian offer:

    rmadison -u debian docker.io

If you're running Debian 8 Jessie, you can install Docker 1.6.2, through backports. This version was released on May 14, 2015. That's 3 years old, but Debian Jessie is fairly old as well.

If you're running Debian 9 Stretch (ie. Debian stable), then you have no install candidate. No-thing. The current Debian doesn't provide any package for Docker. That's a bit sad.

What's even more sad is that for quite a while, looking into Debian unstable didn't look promising either. There used to be a package there, but it had bugs that prevented it from migrating to Debian testing. This package was stuck at version 1.13.1 , released on Feb 8, 2017. Looking at the git history, there was not much happening.

As for the reason for this sad state of things, I can only guess. Packaging Docker is tedious work, mainly due to a very big dependency tree. After handling all these dependencies, there are other issues to tackle, some related to Go packaging itself, and others due to Docker's release process and development workflow. In the end, it's quite difficult to find the right approach to package Docker, and it's easy to make mistakes that cost hours of work. I made this kind of mistake. More than once.

So packaging Docker is not for the faint of heart, and maybe it's too much of a burden for one developer alone. There was a docker-maint mailing list that suggested an attempt to coordinate the effort; however, this list was already dead by the time I found it. It looks like the people involved walked away.

Another explanation for the disinterest in the Docker package could be that Docker itself already provides a Debian package on docker.com. One can always fall back to this solution, so why bothering with the extra-work of doing a Debian package proper?

That's what the next part is about!

Docker.io vs Docker-ce

You have two options to install Docker on Debian: you can get the package from docker.com (this package is named docker-ce ), or you can get it from the Debian repositories (this package is named docker.io ). You can rebuild both of these packages from source: for docker-ce you can fetch the source code with git (it includes the packaging files), and for docker.io you can just get the source package with apt , like for every other Debian package.

So what's the difference between these two packages?

No suspense, straight answer: what differs is the build process, and mostly, the way dependencies are handled.

Docker is written in Go, and Golang comes with some tooling that allows applications to keep a local copy of their dependencies in their source tree. In Go-talk, this is called vendoring . Docker makes heavy use of that (like many other Go applications), which means that the code is more or less self-contained. You can build Docker without having to solve external dependencies, as everything needed is already in-tree.

That's how the docker-ce package provided by Docker is built, and that's what makes the packaging files for this package trivial. You can look at these files at https://github.com/docker/docker-ce/tree/master/components/packaging/deb . So everything is in-tree, there's almost no external build dependency, and hence it's real easy for Docker to provide a new package for 'docker-ce' every month.

On the other hand, the docker.io package provided by Debian takes a completely different approach: Docker is built against the libraries that are packaged in Debian, instead of using the local copies that are present in the Docker source tree. So if Docker is using libABC version 1.0, then it has a build dependency on libABC . You can have a look at the current build dependencies at https://salsa.debian.org/docker-team/docker/blob/master/debian/control .

There are more than 100 dependencies there, and that's one reason why the Debian package is quite time-consuming to maintain. To give you a rough estimation, in order to get the current "stable" release of Docker to Debian "unstable", it took up to 40 uploads of related packages to stabilize the dependency tree.

It's quite an effort. And once again, why bother? For this part I'll quote Dmitry as he puts it better than me:

> Debian cares about reusable libraries, and packaging them individually allows to
> build software from tested components, as Golang runs no tests for vendored
> libraries. It is a mind blowing argument given that perhaps there is more code
> in "vendor" than in the source tree.
>
> Private vendoring have all disadvantages of static linking ,
> making it impossible to provide meaningful security support. On top of that, it
> is easy to lose control of vendored tree; it is difficult to track changes in
> vendored dependencies and there is no incentive to upgrade vendored components.

That's about it, whether it matters is up to you and your use-case. But it's definitely something you should know about if you want to make an informed decision on which package you're about to install and use.

To finish with this article, I'd like to give more details on the packaging of docker.io , and what was done to get this new version in Debian.

Under the hood of the docker.io package

Let's have a brief overview of the difficulties we had to tackle while packaging this new version of Docker.

The most outstanding one is circular dependencies. It's especially present in the top-level dependencies of Docker: docker/swarmkit , docker/libnetwork , containerd ... All of these are Docker build dependencies, and all of these depend on Docker to build. Good luck with that ;)

To solve this issue, the new docker.io package leverages MUT (Multiple Upstream Tarball) to have these different components downloaded and built all at once, instead of being packaged separately. In this particular case it definitely makes sense, as we're really talking about different parts of Docker. Even if they live in different git repositories, these components are not standalone libraries, and there's absolutely no good reason to package them separately.

Another issue with Docker is "micro-packaging", ie. wasting time packaging small git repositories that, in the end, are only used by one application (Docker in our case). This issue is quite interesting, really. Let me try to explain.

Golang makes it extremely easy to split a codebase among several git repositories. It's so easy that some projects (Docker in our case) do it extensively, as part of their daily workflow. And in the end, at a first glance you can't really say if a dependency of Docker is really a standalone project (that would require a proper packaging), or only just a part of Docker codebase, that happens to live in a different git repository. In this second case, there's really no reason to package it independently of Docker.

As a packager, if you're not a bit careful, you can easily fall into this trap and start packaging every single dependency without thinking: that's "micro-packaging". It's bad in the sense that it increases the maintenance cost in the long run and doesn't bring any benefit. As I said before, docker.io currently has 100+ dependencies, and probably a few of them fall in this category.

While working on this new version of docker.io , we decided to stop packaging such dependencies. The guideline is that if a dependency has no semantic versioning , and no consumer other than Docker, then it's not a library, it's just a part of Docker codebase.

Even though some tools like dh-make-golang make it very easy to package simple Go packages, it doesn't mean that everything should be packaged. Understanding that, and taking a bit of time to think before packaging, is the key to successful Go packaging!

Last words

I could go on for a while on the technical details, there's a lot to say, but let's not bore you to death, so that's it. I hope by now you understand that:

  1. There's now an up-to-date docker.io package in Debian.
  2. docker.io and docker-ce both give you a Docker binary, but through a very different build process.
  3. Maintaining the 'docker.io' package is not an easy task.

If you care about having a Docker package in Debian, feel free to try it out, and feel free to join the maintenance effort!

Let's finish with a few credits. I've been working on that topic, albeit sparingly, for the last 4 months, thanks to the support of Collabora . As for Dmitry Smirnov, the work he did on the docker.io package represents a three weeks, full-time effort, which was sponsored by Libre Solutions Pty Ltd .

I'd like to thank the Debian Go Packaging Team for their support, and also the reviewers of this article, namely Dmitry Smirnov and Héctor Orón Martínez.

Last but not least, I will attend DebConf18 in Taiwan, where I will give a talk on this topic. There's also a BoF on Go Packaging planned.

See you there!

[Dec 20, 2018] Musings from the Chiefio

Notable quotes:
"... It isn't a full Virtual Machine, so it avoids that overhead and inefficiency, but it does isolate your applications from "update and die" problems, most of the time. "Docker" is a big one. ..."
Dec 20, 2018 | chiefio.wordpress.com

Sidebar on Containers: The basic idea is to isolate a bit of production application from all the rest of the system and make sure it has a consistent environment. So you package up your DNS server with the needed files and systems config and what-all and stick it in a container that runs under a host operating system.

It isn't a full Virtual Machine, so it avoids that overhead and inefficiency, but it does isolate your applications from "update and die" problems, most of the time. "Docker" is a big one.

Lately Red Hat et al. have been pushing for a strongly systemD-dependent Kubernetes instead.

The need to rapidly toss a VM into production and bring up a 'container' application on it drove (IMHO) much of the push to move all sorts of stuff into systemD to make booting very fast (even if it then doesn't work reliably /snarc;)

Much of the commercial world has moved to putting things in Docker or other container systems.

On BSD their equivalent is called "jails" as it keeps each application instance isolated from the system and from other applications. On "my Cray" we used a precursor tech of change root "chroot" to isolate things for security; but I got off that train before it reached the "jails" and "docker" station.

[Dec 16, 2018] What are the benefits using Docker?

Dec 16, 2018 | www.quora.com

The main benefit of Docker is that it automatically solves the problems with versioning and cross-platform deployment, as the images can be easily recombined to form any version and can run in any environment where Docker is installed. "Run anywhere" meme...


James Lee , former Software Engineer at Google (2013-2016), Answered Jul 12

There are many beneifits of Docker. Firstly, I would mention the beneifits of Docker and then let you know about the future of Docker. The content mentioned here is from my recent article on Docker.

Docker Beneifits:

Docker is an open-source project based on Linux containers. It uses the features based on the Linux Kernel. For example, namespaces and control groups create containers. But are containers new? No, Google has been using it for years! They have their own container technology. There are some other Linux container technologies like Solaris Zones, LXC, etc.

These container technologies are already there before Docker came into existence. Then why Docker? What difference did it make? Why is it on the rise? Ok, I will tell you why!

Number 1: Docker offers ease of use

Taking advantage of containers wasn't an easy task with earlier technologies. Docker has made it easy for everyone like developers, system admins, architects, and more. Test portable applications are easy to build. Anyone can package an application from their laptop. He/She can then run it unmodified on any public/private cloud or bare metal. The slogan is, "build once, run anywhere"!

Number 2: Docker offers speed

Being lightweight, containers are fast. They also consume fewer resources. One can easily run a Docker container in seconds. On the other hand, virtual machines usually take longer as they go through the whole process of booting up the complete virtual operating system, every time!

Number 3: The Docker Hub

Docker offers an ecosystem known as the Docker Hub. You can consider it as an app store for Docker images. It contains many public images created by the community. These images are ready to use. You can easily search the images as per your requirements.

Number 4: Docker gives modularity and scalability

It is possible to break down the application functionality into individual containers. Docker gives this freedom! It is easy to link containers together and create your application with Docker. One can easily scale and update components independently in the future.

The Future

A lot of people come and ask me, "Will Docker eat up virtual machines?" I don't think so! Docker is gaining a lot of momentum, but this won't affect virtual machines. The reason is that virtual machines are better under certain circumstances compared to Docker. For example, if there is a requirement to run multiple applications on multiple servers, then virtual machines are a better choice. On the contrary, if there is a requirement to run multiple copies of a single application, Docker is a better choice.

Docker containers could create a problem when it comes to security because containers share the same kernel. The barriers between containers are quite thin. But I do believe that security and management improve with experience and exposure. Docker certainly has a great future! I hope that this Docker tutorial has helped you understand the basics of containers, VMs, and Docker. But Docker in itself is an ocean. It isn't possible to study Docker in just one article. For an in-depth study of Docker, I recommend this Docker course.


David Polstra , Person at ReactiveOps (2016-present), Updated Oct 5, 2017

I work at ReactiveOps where we specialize in DevOps-as-a-Service and Kubernetes Consulting. One of our engineers, EJ Etherington , recently addressed this in a blog post:

"Docker is both a daemon (a process running in the background) and a client command. It's like a virtual machine but it's different in important ways. First, there's less duplication. With each extra VM you run, you duplicate the virtualization of CPU and memory and quickly run out resources when running locally. Docker is great at setting up a local development environment because it easily adds the running process without duplicating the virtualized resource. Second, it's more modular. Docker makes it easy to run multiple versions or instances of the same program without configuration headaches and port collisions. Try that in a VM!

With Docker, developers can focus on writing code without worrying about the system on which their code will run. Applications become truly portable. You can repeatably run your application on any other machine running Docker with confidence. For operations staff, Docker is lightweight, easily allowing the running and management of applications with different requirements side by side in isolated containers. This flexibility can increase resource use per server and may reduce the number of systems needed because of its lower overhead, which in turn reduces cost.

Docker has made Linux containerization technology easy to use.

There are a dozen reasons to use Docker. I'll focus here on three: consistency, speed and isolation. By consistency , I mean that Docker provides a consistent environment for your application from development all the way through production – you run from the same starting point every time. By speed , I mean you can rapidly run a new process on a server. Because the image is preconfigured and installed with the process you want to run, it takes the challenge of running a process out of the equation. By isolation , I mean that by default each Docker container that's running is isolated from the network, the file system and other running processes.

A fourth reason is Docker's layered file system. Starting from a base image, every change you make to a container or image becomes a new layer in the file system. As a result, file system layers are cached, reducing the number of repetitive steps during the Docker build process AND reducing the time it takes to upload and download similar images. It also allows you to save the container state if, for example, you need troubleshoot why a container is failing. The file system layers are like Git, but at the file system level. Each Docker image is a particular combination of layers in the same way that each Git branch is a particular combination of commits."

I hope this was helpful. If you would like to learn more, you can read the entire post: Docker Is a Valuable DevOps Tool - One That's Worth Using

Bill William , M.C.A Software and Applications & Java, SRM University, Kattankulathur (2006), Answered Jan 5, 2018

Docker is the most popular file format for Linux-based container development and deployments. If you're using containers, you're most likely familiar with the container-specific toolset of Docker tools that enable you to create and deploy container images to a cloud-based container hosting environment.

This can work great for brand-new environments, but it can be a challenge to mix container tooling with the systems and tools you need to manage your traditional IT environments. And, if you're deploying your containers locally, you still need to manage the underlying infrastructure and environment.

Portability: let's suppose, in the case of Linux, you have your own customized Nginx container. You can run that Nginx container anywhere, no matter whether it's a cloud, a data center, or even your own laptop, as long as you have a Docker engine running on a Linux OS.

Rollback: you can just run your previous build image and all changes will automatically roll back.

Image Simplicity: Every image has a tree hierarchy and all the child images depend upon their parent image. For example, let's suppose there is a vulnerability in a Docker container: you can easily identify and patch the parent image, and when you rebuild the children, the vulnerability will automatically be removed from the child images as well.

Container Registry: You can store all images at a central location, you can apply ACLs, you can do vulnerability scanning and image signing.

Runtime: Even if you want to run thousands of containers, you can start them all within five seconds.

Isolation: We can run hundreds of processes in one OS and all will be isolated from each other.

Docker Learning hub

[Dec 16, 2018] What are some disadvantages of using Docker - Quora

Dec 16, 2018 | www.quora.com

Ethen , Web Designer (2015-present), Answered Aug 30, 2018

Docker is an open platform for developers, bringing them a large number of open source projects, including open source Docker tools and a management framework, with more than 85,000 Dockerized applications. Docker is today considered to be something more than just an application platform. And the container ecosystem is continuing to grow so fast that, with so many Docker tools being made available on the web, it starts to feel like an overwhelming undertaking just to understand the options available right in front of you.

Disadvantages Of Docker

Containers don't run at bare-metal speeds.

The container ecosystem is fractured.

Persistent data storage is complicated.

Graphical applications don't work well.

Not all applications benefit from containers.

Advantages Of Docker

Swapnil Kulkarni , Engineering Lead at Persistent Systems (2018-present), Answered Nov 9, 2017

From my personal experience, I think people just want to containerize everything without looking at how the architectural considerations change which basically ruins the technology.

e.g. How will someone benefit from creating FAT container images the size of a VM when the basic advantage of Docker is shipping lightweight images?

[Nov 28, 2018] Getting started with Kubernetes 5 misunderstandings, explained by Kevin Casey

Nov 19, 2018 | enterprisersproject.com
Among growing container trends , here's an important one: As containers go, so goes container orchestration. That's because most organizations quickly realize that managing containers in production can get complicated in a hurry. Orchestration solves that problem, and while there are multiple options, Kubernetes has become the de facto leader .

[ Want to help others understand Kubernetes? Check out our related article, How to explain Kubernetes in plain English. ]

Kubernetes' star appeal does lead to some misunderstandings and outright myths, though. We asked a range of IT leaders and container experts to identify the biggest misconceptions about Kubernetes – and the realities behind each of them – to help people who are just getting going with the technology. Here are five important ones to know before you get your hands dirty.

Misunderstanding #1: Kubernetes is only for public cloud

Reality: Kubernetes is commonly referred to as a cloud-native technology, and for good reason. The project, which was first developed by a team at Google , currently calls the Cloud Native Computing Foundation home. ( Red Hat , one of the first companies to work with Google on Kubernetes, has become the second-leading contributor to Kubernetes upstream project.)

"Kubernetes is cloud-native in the sense that it has been designed to take advantage of cloud computing architecture [and] to support scale and resilience for distributed applications," says Raghu Kishore Vempati, principal systems engineer at Aricent .

Just remember that "cloud-native" is not wholly synonymous with "public cloud."

"Kubernetes can run on different platforms, be it a personal laptop, VM, rack of bare-metal servers, public/private cloud environment, et cetera," Vempati says.

Notes Red Hat technology evangelist Gordon Haff , "You can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters. These clusters can span hosts across public, private, and hybrid clouds ."

Misunderstanding #2: Kubernetes is a finished product

Reality: Kubernetes isn't really a product at all, much less a finished one.

"Kubernetes is an open source project, not a product," says Murli Thirumale, co-founder and CEO at Portworx . (Portworx co-founder and VP of product management Eric Han was the first Kubernetes product manager while at Google.)

The Kubernetes ecosystem moves very quickly.

New users should understand a fundamental reality here: The Kubernetes ecosystem moves very quickly. It's even been dubbed the fastest-moving project in open source history.

"Take your eyes off of it for only one moment, and everything changes," Frank Reno, senior technical product manager at Sumo Logic . "It is a fast-paced, highly active community that develops Kubernetes and the related projects. As it changes, it also changes the way you need to look at and develop things. It's all for the better, but still, much to keep up on."

Misunderstanding #3: Kubernetes is simple to run out of the box

"For those new to Kubernetes there's often an 'aha' moment as they realize it's not that easy to do right."

Reality: It may be "easy" to get it up and running on a local machine, but it can quickly get more complicated from there. "For those new to Kubernetes, there's often an 'aha' moment as they realize it's not that easy to do right," says Amir Jerbi, co-founder and CTO at Aqua Security .

Jerbi notes that this is a key reason for the growth of commercial Kubernetes platforms on top of the open source project, as well as managed services and consultancies. "Setting up and managing K8s correctly requires time, knowledge, and skills, and the skill gap should not be underestimated," Jerbi says.

Some organizations are still going to learn that the hard way, drawn in by the considerable potential of Kubernetes and the table-stakes necessity of using a container management or orchestration tool to run containers at scale in a production environment.

"Kubernetes is a very popular and very powerful platform," says Wei Lien Dang, VP of products at StackRox . "Given the DIY mindset that comes along with open source software, users often think they should be working directly in the Kubernetes system itself. But this understanding is misguided."

Dang points to needs such as supporting high availability and resilience. Both, he says, become easier when using abstraction layers on top of the core Kubernetes platform, such as a UX layer to enable various end users to get the most value out of the technology.

"One of the major benefits of open source software is that it can be downloaded and used with no license cost – but very often, making this community software usable in a corporate environment will require a significant investment in technical effort to integrate [or] bundle with other technologies," says Andy Kennedy, managing director at Tier 2 Consulting . "For example, in order to provide a full set of orchestrated services, Kubernetes relies on other services provided by open source projects, such as registry, security, telemetry, networking, and automation."

Complete container application platforms, such as Red Hat OpenShift , eliminate the need to assemble those pieces yourself.

This gets back to the difference between the Kubernetes project and the maturing Kubernetes platforms built on top of that project.

"Do-it-yourself Kubernetes can work with some dedicated resources, but consider a more productized and supported [platform]," says Portworx's Thirumale. "These will help you go to production faster."

Misunderstanding #4: Kubernetes is an all-encompassing framework for building and deploying applications

Reality: "By itself, Kubernetes does not provide any primitives for applications such as databases, middleware, storage, [and so forth]," says Aricent's Vempati.

Developers still need to include the necessary services and components for their respective applications, Vempati notes, yet some people overlook this.

"Kubernetes is a platform for managing containerized workloads and services with independent and composable processes," Vempati says. "How the applications and services are orchestrated on the platform is for the developers to define."

You can't just "lift and shift" a monolithic app into Kubernetes and say, boom, we have a microservices architecture.

In a similar vein, some folks simply misunderstand what Kubernetes does in a more fundamental way. Jared Sikander, CTO at NetEnrich , encounters a key misconception in the marketplace that Kubernetes "provides containerization and microservices ." It does not: it is a tool for deploying and managing containers and containerized microservices. You can't just "lift and shift" a monolithic app into Kubernetes and say, boom, we have a microservices architecture now.

"In reality, you have to refactor your applications into microservices," Sikander says. "Kubernetes provides the platform to deploy and scale your microservices."

[ Want more advice? Read Microservices and containers: 5 pitfalls to avoid . ]

Misunderstanding #5: Kubernetes inherently secures your containers

Reality: Container security is one of the brave new worlds in the broader threat landscape. (That's evident in the growing number of container security firms, such as Aqua, StackRox, and others.)

Kubernetes does have critical capabilities for managing the security of your containers, but keep in mind it is not in and of itself a security platform, per se.

"Kubernetes has a lot of powerful controls built in for network policy enforcement, for example, but accessing them natively in Kubernetes means working in a YAML file," says Dang from StackRox. This also gets back to leveraging the right tools or abstraction layers on top of Kubernetes to make its security-oriented features more consumable.

It's also a matter of rethinking your old security playbook for containers and for hybrid cloud and multi-cloud environments in general.
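To make Dang's point about YAML concrete, here is a minimal sketch of the kind of native network policy object he is referring to, applied with kubectl; the namespace demo and the policy itself are purely illustrative assumptions, not a recommendation from the people quoted above.

$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}          # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                # with no ingress rules listed, all inbound pod traffic is denied
EOF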

[ Read our related article: Container security fundamentals: 5 things to know . ]

"As enterprises increasingly flock to Kubernetes, too many organizations are still making the dangerous mistake of relying on their previously used security measures – which really aren't suited to protecting Kubernetes and containerized environments," says Gary Duan, CTO at NeuVector . "While traditional firewalls and endpoint security are postured to defend against external threats, malicious threats to containers often grow and expand laterally via internal traffic, where more traditional tools have zero visibility."


Security, like other considerations with containers and Kubernetes, is also a very different animal when you're ready to move into production.

In part two of this series, we clear up some of the misconceptions about running Kubernetes in a production environment versus experimenting with it in a test or dev environment. The differences can be significant.

[Nov 15, 2018] Behind the scenes with Linux containers by Seth Kenlon

Nov 12, 2018 | opensource.com

Become a better container troubleshooter by using LXC to understand how they work.

Can you have Linux containers without Docker ? Without OpenShift ? Without Kubernetes ?

Yes, you can. Years before Docker made containers a household term (if you live in a data center, that is), the LXC project developed the concept of running a kind of virtual operating system, sharing the same kernel, but contained within defined groups of processes.

Docker built on LXC, and today there are plenty of platforms that leverage the work of LXC both directly and indirectly. Most of these platforms make creating and maintaining containers sublimely simple, and for large deployments, it makes sense to use such specialized services. However, not everyone's managing a large deployment or has access to big services to learn about containerization. The good news is that you can create, use, and learn containers with nothing more than a PC running Linux and this article. This article will help you understand containers by looking at LXC, how it works, why it works, and how to troubleshoot when something goes wrong.

Sidestepping the simplicity

If you're looking for a quick-start guide to LXC, refer to the excellent Linux Containers website.

Installing LXC

If it's not already installed, you can install LXC with your package manager.

On Fedora or similar, enter:

$ sudo dnf install lxc lxc-templates lxc-doc

On Debian, Ubuntu, and similar, enter:

$ sudo apt install lxc
Creating a network bridge

Most containers assume a network will be available, and most container tools expect the user to be able to create virtual network devices. The most basic unit required for containers is the network bridge, which is more or less the software equivalent of a network switch. A network switch is a little like a smart Y-adapter used to split a headphone jack so two people can hear the same thing with separate headsets, except instead of an audio signal, a network switch bridges network data.

You can create your own software network bridge so your host computer and your container OS can both send and receive different network data over a single network device (either your Ethernet port or your wireless card). This is an important concept that often gets lost once you graduate from manually generating containers, because no matter the size of your deployment, it's highly unlikely you have a dedicated physical network card for each container you run. It's vital to understand that containers talk to virtual network devices, so you know where to start troubleshooting if a container loses its network connection.
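If a container does lose its network connection, the standard iproute2 tools are a reasonable first stop; the following sketch simply lists the virtual devices involved and is not specific to any particular container engine:

$ ip link show type bridge   # software bridges on the host
$ ip link show type veth     # veth endpoints created for containers
$ bridge link show           # which interfaces are attached to which bridge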

To create a network bridge on your machine, you must have the appropriate permissions. For this article, use the sudo command to operate with root privileges. (However, LXC docs provide a configuration to grant users permission to do this without using sudo .)

$ sudo ip link add br0 type bridge

Verify that the imaginary network interface has been created:

$ sudo ip addr show br0
7: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc
noop state DOWN group default qlen 1000
link/ether 26:fa:21:5f:cf:99 brd ff:ff:ff:ff:ff:ff

Since br0 is seen as a network interface, it requires its own IP address. Choose a valid local IP address that doesn't conflict with any existing IP address on your network and assign it to the br0 device:

$ sudo ip addr add 192.168.168.168 dev br0

And finally, ensure that br0 is up and running:

$ sudo ip link set br0 up
Setting the container config

The config file for an LXC container can be as complex as it needs to be to define a container's place in your network and the host system, but for this example the config is simple. Create a file in your favorite text editor and define a name for the container and the network's required settings:

lxc.utsname = opensourcedotcom
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bd
lxc.network.ipv4 = 192.168.168.1/24
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596

Save this file in your home directory as mycontainer.conf .

The lxc.utsname is arbitrary. You can call your container whatever you like; it's the name you'll use when starting and stopping it.

The network type is set to veth , which is a kind of virtual Ethernet patch cable. The idea is that the veth connection goes from the container to the bridge device, which is defined by the lxc.network.link property, set to br0 . The IP address for the container is in the same network as the bridge device but unique to avoid collisions.

With the exception of the veth network type and the up network flag, you invent all the values in the config file. The list of properties is available from man lxc.container.conf . (If it's missing on your system, check your package manager for separate LXC documentation packages.) There are several example config files in /usr/share/doc/lxc/examples , which you should review later.

Launching a container shell

At this point, you're two-thirds of the way to an operable container: you have the network infrastructure, and you've installed the imaginary network cards in an imaginary PC. All you need now is to install an operating system.

However, even at this stage, you can see LXC at work by launching a shell within a container space.

$ sudo lxc-execute --name basic \
--rcfile ~/mycontainer.conf /bin/bash \
--logfile mycontainer.log
#

In this very bare container, look at your network configuration. It should look familiar, yet unique, to you.

# /usr/sbin/ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state [...]
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
[...]
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> [...] qlen 1000
link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2003:db8:1:0:214:1234:fe0b:3596/64 scope global
valid_lft forever preferred_lft forever
[...]

Your container is aware of its fake network infrastructure and of a familiar-yet-unique kernel.

# uname -av
Linux opensourcedotcom 4.18.13-100.fc27.x86_64 #1 SMP Wed Oct 10 18:34:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Use the exit command to leave the container:

# exit
Installing the container operating system

Building out a fully containerized environment is a lot more complex than the networking and config steps, so you can borrow a container template from LXC. If you don't have any templates, look for a separate LXC template package in your software repository.

The default LXC templates are available in /usr/share/lxc/templates .

$ ls -m /usr/share/lxc/templates/
lxc-alpine, lxc-altlinux, lxc-archlinux, lxc-busybox, lxc-centos, lxc-cirros, lxc-debian, lxc-download, lxc-fedora, lxc-gentoo, lxc-openmandriva, lxc-opensuse, lxc-oracle, lxc-plamo, lxc-slackware, lxc-sparclinux, lxc-sshd, lxc-ubuntu, lxc-ubuntu-cloud

Pick your favorite, then create the container. This example uses Slackware.

$ sudo lxc-create --name slackware --template slackware

Watching a template being executed is almost as educational as building one from scratch; it's very verbose, and you can see that lxc-create sets the "root" of the container to /var/lib/lxc/slackware/rootfs and several packages are being downloaded and installed to that directory.

Reading through the template files gives you an even better idea of what's involved: LXC sets up a minimal device tree, common spool files, a file systems table (fstab), init files, and so on. It also prevents some services that make no sense in a container (like udev for hardware detection) from starting. Since the templates cover a wide spectrum of typical Linux configurations, if you intend to design your own, it's wise to base your work on a template closest to what you want to set up; otherwise, you're sure to make errors of omission (if nothing else) that the LXC project has already stumbled over and accounted for.

Once you've installed the minimal operating system environment, you can start your container.

$ sudo lxc-start --name slackware \
--rcfile ~/mycontainer.conf

You have started the container, but you have not attached to it. (Unlike the previous basic example, you're not just running a shell this time, but a containerized operating system.) Attach to it by name.

$ sudo lxc-attach --name slackware
#

Check that the IP address of your environment matches the one in your config file.

# /usr/sbin/ip addr show | grep eth
34: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 [...] qlen 1000
    link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0

Exit the container, and shut it down.

# exit
$ sudo lxc-stop --name slackware

Running real-world containers with LXC

In real life, LXC makes it easy to create and run safe and secure containers. Containers have come a long way since the introduction of LXC in 2008, so use its developers' expertise to your advantage.

While the LXC instructions on linuxcontainers.org make the process simple, this tour of the manual side of things should help you understand what's going on behind the scenes.

[Sep 05, 2018] A sysadmin's guide to containers - Opensource.com

Notable quotes:
"... Linux container internals. Illustration by Scott McCarty. CC BY-SA 4.0 ..."
Sep 05, 2018 | opensource.com

A sysadmin's guide to containers: What you need to know to understand how containers work. 27 Aug 2018 | Daniel J Walsh (Red Hat)


The term "containers" is heavily overused. Also, depending on the context, it can mean different things to different people.

Traditional Linux containers are really just ordinary processes on a Linux system. These groups of processes are isolated from other groups of processes using resource constraints (control groups [cgroups]), Linux security constraints (Unix permissions, capabilities, SELinux, AppArmor, seccomp, etc.), and namespaces (PID, network, mount, etc.).


If you boot a modern Linux system and take a look at any process with cat /proc/PID/cgroup , you see that the process is in a cgroup. If you look at /proc/PID/status , you see capabilities. If you look at /proc/self/attr/current , you see SELinux labels. If you look at /proc/PID/ns , you see the list of namespaces the process is in. So, if you define a container as a process with resource constraints, Linux security constraints, and namespaces, by definition every process on a Linux system is in a container. This is why we often say Linux is containers, containers are Linux . Container runtimes are tools that modify these resource constraints, security, and namespaces and launch the container.
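For instance, you can run these checks against any process; a quick sketch against the current shell (using /proc/self) looks like this:

$ cat /proc/self/cgroup        # control groups the shell belongs to
$ grep ^Cap /proc/self/status  # capability sets (CapInh, CapPrm, CapEff, ...)
$ cat /proc/self/attr/current  # SELinux label of the process
$ ls -l /proc/self/ns          # namespaces (ipc, mnt, net, pid, user, uts, ...)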

Docker introduced the concept of a container image , which is a standard TAR file that combines:

- Rootfs (container root filesystem): a directory on the system that looks like the standard root (/) of the operating system.
- A JSON file (container configuration): specifies how to run the rootfs, for example the command or entrypoint to run, the environment variables to set, and the working directory.

Docker " tar 's up" the rootfs and the JSON file to create the base image . This enables you to install additional content on the rootfs, create a new JSON file, and tar the difference between the original image and the new image with the updated JSON file. This creates a layered image .
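If you want to see this layering for yourself, one rough way (assuming the docker CLI and network access to pull the fedora image) is to export the image and list the contents of the resulting tarball; the exact file names vary by version, so treat this as a sketch:

$ docker pull fedora:latest
$ docker save fedora:latest -o fedora-image.tar
$ tar -tf fedora-image.tar | head    # typically a manifest.json, an image config JSON, and one directory per layer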

The definition of a container image was eventually standardized by the Open Container Initiative (OCI) standards body as the OCI Image Specification .

Tools used to create container images are called container image builders . Sometimes container engines perform this task, but several standalone tools are available that can build container images.
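Buildah is one such standalone builder; as a hedged sketch (assuming Buildah is installed and the base image is reachable), a layered image can be built interactively like this:

$ ctr=$(buildah from registry.fedoraproject.org/fedora:latest)   # working container based on an existing image
$ buildah run "$ctr" -- dnf -y install httpd                     # add content to the working container's rootfs
$ buildah commit "$ctr" my-httpd-image                           # save the difference as a new layer in a new image
$ buildah rm "$ctr"                                              # clean up the working container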

Docker took these container images ( tarballs ) and moved them to a web service from which they could be pulled, developed a protocol to pull them, and called the web service a container registry .

Container engines are programs that can pull container images from container registries and reassemble them onto container storage . Container engines also launch container runtimes (see below).
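As a minimal sketch of that engine workflow (assuming Podman is installed and the registry is reachable from your network):

$ podman pull registry.fedoraproject.org/fedora:latest    # fetch the image from a container registry
$ podman images                                           # the image now sits reassembled in local container storage
$ podman run --rm registry.fedoraproject.org/fedora:latest cat /etc/os-release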


Linux container internals. Illustration by Scott McCarty. CC BY-SA 4.0

Container storage is usually a copy-on-write (COW) layered filesystem. When you pull down a container image from a container registry, you first need to untar the rootfs and place it on disk. If you have multiple layers that make up your image, each layer is downloaded and stored on a different layer on the COW filesystem. The COW filesystem allows each layer to be stored separately, which maximizes sharing for layered images. Container engines often support multiple types of container storage, including overlay , devicemapper , btrfs , aufs , and zfs .
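To see which storage driver and layers are in play on your own system, something like the following works with Podman; field names and paths can differ between versions, so treat it as a sketch:

$ podman info --format '{{.Store.GraphDriverName}}'            # usually "overlay" on recent Fedora/RHEL
$ podman image inspect --format '{{.RootFS.Layers}}' fedora    # digests of the layers that make up the image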


After the container engine downloads the container image to container storage, it needs to create a container runtime configuration. The runtime configuration combines input from the caller/user along with the content of the container image specification. For example, the caller might want to specify modifications to a running container's security, add additional environment variables, or mount volumes to the container.

The layout of the container runtime configuration and the exploded rootfs have also been standardized by the OCI standards body as the OCI Runtime Specification .

Finally, the container engine launches a container runtime that reads the container runtime specification; modifies the Linux cgroups, Linux security constraints, and namespaces; and launches the container command to create the container's PID 1 . At this point, the container engine can relay stdin / stdout back to the caller and control the container (e.g., stop, start, attach).
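To see how the runtime specification and the exploded rootfs fit together without a full engine in the middle, here is a rough sketch using runc, the reference OCI runtime; the directory and container names are made up, and it assumes runc and podman are installed:

$ mkdir -p mycontainer/rootfs
$ podman export $(podman create registry.fedoraproject.org/fedora:latest) | tar -C mycontainer/rootfs -xf -
$ cd mycontainer
$ runc spec             # writes config.json, a default OCI runtime configuration
$ sudo runc run demo1   # runc applies the cgroups, namespaces, and security settings, then launches the container's PID 1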

Note that many new container runtimes are being introduced to use different parts of Linux to isolate containers. People can now run containers using KVM separation (think mini virtual machines) or they can use other hypervisor strategies (like intercepting all system calls from processes in containers). Since we have a standard runtime specification, these tools can all be launched by the same container engines. Even Windows can use the OCI Runtime Specification for launching Windows containers.

At a much higher level are container orchestrators. Container orchestrators are tools used to coordinate the execution of containers on multiple different nodes. Container orchestrators talk to container engines to manage containers. Orchestrators tell the container engines to start containers and wire their networks together. Orchestrators can monitor the containers and launch additional containers as the load increases.

About the author: Daniel J Walsh has worked in the computer security field for almost 30 years. Dan joined Red Hat in August 2001 and has led the RHEL Docker enablement team since August 2013, but has been working on container technology for several years. He led the SELinux project, concentrating on the application space and policy development, helped develop sVirt (Secure Virtualization), and created the SELinux Sandbox, the Xguest user, and the Secure Kiosk. Previously, Dan worked at Netect/Bindview.

[Jul 16, 2017] How to install and setup LXC (Linux Container) on Fedora Linux 26 – nixCraft

Jul 16, 2017 | www.cyberciti.biz
How to install and setup LXC (Linux Container) on Fedora Linux 26

Posted on July 13, 2017 in Categories Fedora Linux, Linux, Linux Containers (LXC); last updated July 13, 2017

How do I install, create and manage LXC (Linux Containers – an operating system-level virtualization) on a Fedora Linux version 26 server?

LXC is an acronym for Linux Containers. It is nothing but an operating system-level virtualization technology for running multiple isolated Linux distros (systems containers) on a single Linux host. This tutorial shows you how to install and manage LXC containers on Fedora Linux server.

Our sample setup

LXC is often described as a lightweight virtualization technology. You can think of LXC as a chroot jail on steroids. There is no guest operating system involved; you can only run Linux distros with LXC. You cannot run MS-Windows or *BSD or any other operating system with LXC, but you can run CentOS, Fedora, Ubuntu, Debian, Gentoo or any other Linux distro. Traditional virtualization such as KVM/Xen/VMware and paravirtualization need a full operating system image for each instance, and with traditional virtualization you can run any operating system.

Installation

Type the following dnf command to install lxc and related packages on Fedora 26:
$ sudo dnf install lxc lxc-templates lxc-extra debootstrap libvirt perl gpg
Sample outputs:

Fig.01: LXC Installation on Fedora 26

Start and enable needed services

First, start the virtualization daemon named libvirtd and the lxc service using the systemctl command:
$ sudo systemctl start libvirtd.service
$ sudo systemctl start lxc.service
$ sudo systemctl enable lxc.service

Sample outputs:

Created symlink /etc/systemd/system/multi-user.target.wants/lxc.service → /usr/lib/systemd/system/lxc.service.

Verify that services are running:
$ sudo systemctl status libvirtd.service
Sample outputs:

● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2017-07-13 07:25:30 UTC; 40s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 3688 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─3688 /usr/sbin/libvirtd
           ├─3760 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           └─3761 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, sockets bound exclusively to interface virbr0
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: reading /etc/resolv.conf
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.11.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.13.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.14.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /etc/hosts - 3 addresses
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: read /var/lib/libvirt/dnsmasq/default.hostsfile

And:
$ sudo systemctl status lxc.service
Sample outputs:

● lxc.service - LXC Container Initialization and Autoboot Code
   Loaded: loaded (/usr/lib/systemd/system/lxc.service; enabled; vendor preset: disabled)
   Active: active (exited) since Thu 2017-07-13 07:25:34 UTC; 1min 3s ago
     Docs: man:lxc-autostart
           man:lxc
 Main PID: 3830 (code=exited, status=0/SUCCESS)
      CPU: 9ms

Jul 13 07:25:34 nixcraft-f26 systemd[1]: Starting LXC Container Initialization and Autoboot Code...
Jul 13 07:25:34 nixcraft-f26 systemd[1]: Started LXC Container Initialization and Autoboot Code.

LXC networking

To view configured networking interface for lxc, run:
$ sudo brctl show
Sample outputs:

bridge name	bridge id		STP enabled	interfaces
virbr0		8000.525400293323	yes		virbr0-nic

You must set the default bridge to virbr0 in the file /etc/lxc/default.conf:
$ sudo vi /etc/lxc/default.conf
Sample config (replace lxcbr0 with virbr0 for lxc.network.link):

lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

Save and close the file. To see DHCP range used by containers, enter:
$ sudo systemctl status libvirtd.service | grep range
Sample outputs:

Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h

To check the current kernel for lxc support, enter:
$ lxc-checkconfig
Sample outputs:

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.11.9-300.fc26.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
FUSE (for use with lxcfs): enabled

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
How can I create a Ubuntu Linux container?

Type the following command to create Ubuntu 16.04 LTS container:
$ sudo lxc-create -t download -n ubuntu-c1 -- -d ubuntu -r xenial -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=xenial, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

To set up the admin password, run:
$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd ubuntu

Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully

Make sure the root account is locked out:
$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd
To start the container, run:
$ sudo lxc-start -n ubuntu-c1
To log in to the container named ubuntu-c1, use the ubuntu user and the password set earlier:
$ lxc-console -n ubuntu-c1
Sample outputs:

Fig.02: Launch a console for the specified container

You can now install packages and configure your server. For example, to enable sshd, run apt-get command / apt command :
ubuntu@ubuntu-c1:~$ sudo apt-get install openssh-server
To exit from lxc-console, type Ctrl+a q to end the console session and return to the host.

How do I create a Debain Linux container?

Type the following command to create Debian 9 ("stretch") container:
$ sudo lxc-create -t download -n debian-c1 -- -d debian -r stretch -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a Debian container (release=stretch, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

To set up the root account password, run:
$ sudo chroot /var/lib/lxc/debian-c1/rootfs/ passwd
Start the container and log in to it for management purposes:
$ sudo lxc-start -n debian-c1
$ lxc-console -n debian-c1

How do I create a CentOS Linux container?

Type the following command to create CentOS 7 container:
$ sudo lxc-create -t download -n centos-c1 -- -d centos -r 7 -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a CentOS container (release=7, arch=amd64, variant=default)

To enable sshd, run: yum install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Set the root account password and start the container:
$ sudo chroot /var/lib/lxc/centos-c1/rootfs/ passwd
$ sudo lxc-start -n centos-c1
$ lxc-console -n centos-c1

How do I create a Fedora Linux container?

Type the following command to create Fedora 25 container:
$ sudo lxc-create -t download -n fedora-c1 -- -d fedora -r 25 -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a Fedora container (release=25, arch=amd64, variant=default)

To enable sshd, run: dnf install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Set the root account password and start the container:
$ sudo chroot /var/lib/lxc/fedora-c1/rootfs/ passwd
$ sudo lxc-start -n fedora-c1
$ lxc-console -n fedora-c1

How do I create a CentOS 6 Linux container and store it in btrfs ?

You need to create or format a hard disk as btrfs and use it:
# mkfs.btrfs /dev/sdb
# mount /dev/sdb /mnt/btrfs/

If you do not have /dev/sdb create an image using the dd or fallocate command as follows:
# fallocate -l 10G /nixcraft-btrfs.img
# losetup /dev/loop0 /nixcraft-btrfs.img
# mkfs.btrfs /dev/loop0
# mount /dev/loop0 /mnt/btrfs/
# btrfs filesystem show

Sample outputs:

Label: none  uuid: 4deee098-94ca-472a-a0b5-0cd36a205c35
	Total devices 1 FS bytes used 361.53MiB
	devid    1 size 10.00GiB used 3.02GiB path /dev/loop0

Now create a CentOS 6 LXC:
# lxc-create -B btrfs -P /mnt/btrfs/ -t download -n centos6-c1 -- -d centos -r 6 -a amd64
# chroot /mnt/btrfs/centos6-c1/rootfs/ passwd
# lxc-start -P /mnt/btrfs/ -n centos6-c1
# lxc-console -P /mnt/btrfs -n centos6-c1
# lxc-ls -P /mnt/btrfs/ -f

Sample outputs:

NAME       STATE   AUTOSTART GROUPS IPV4            IPV6 
centos6-c1 RUNNING 0         -      192.168.122.145 -    
How do I see a list of all available images?

Type the following command:
$ lxc-create -t download -n NULL -- --list
Sample outputs:

Setting up the GPG keyring
Downloading the image index

---
DIST	RELEASE	ARCH	VARIANT	BUILD
---
alpine	3.1	amd64	default	20170319_17:50
alpine	3.1	armhf	default	20161230_08:09
alpine	3.1	i386	default	20170319_17:50
alpine	3.2	amd64	default	20170504_18:43
alpine	3.2	armhf	default	20161230_08:09
alpine	3.2	i386	default	20170504_17:50
alpine	3.3	amd64	default	20170712_17:50
alpine	3.3	armhf	default	20170103_17:50
alpine	3.3	i386	default	20170712_17:50
alpine	3.4	amd64	default	20170712_17:50
alpine	3.4	armhf	default	20170111_20:27
alpine	3.4	i386	default	20170712_17:50
alpine	3.5	amd64	default	20170712_17:50
alpine	3.5	i386	default	20170712_17:50
alpine	3.6	amd64	default	20170712_17:50
alpine	3.6	i386	default	20170712_17:50
alpine	edge	amd64	default	20170712_17:50
alpine	edge	armhf	default	20170111_20:27
alpine	edge	i386	default	20170712_17:50
archlinux	current	amd64	default	20170529_01:27
archlinux	current	i386	default	20170529_01:27
centos	6	amd64	default	20170713_02:16
centos	6	i386	default	20170713_02:16
centos	7	amd64	default	20170713_02:16
debian	jessie	amd64	default	20170712_22:42
debian	jessie	arm64	default	20170712_22:42
debian	jessie	armel	default	20170711_22:42
debian	jessie	armhf	default	20170712_22:42
debian	jessie	i386	default	20170712_22:42
debian	jessie	powerpc	default	20170712_22:42
debian	jessie	ppc64el	default	20170712_22:42
debian	jessie	s390x	default	20170712_22:42
debian	sid	amd64	default	20170712_22:42
debian	sid	arm64	default	20170712_22:42
debian	sid	armel	default	20170712_22:42
debian	sid	armhf	default	20170711_22:42
debian	sid	i386	default	20170712_22:42
debian	sid	powerpc	default	20170712_22:42
debian	sid	ppc64el	default	20170712_22:42
debian	sid	s390x	default	20170712_22:42
debian	stretch	amd64	default	20170712_22:42
debian	stretch	arm64	default	20170712_22:42
debian	stretch	armel	default	20170711_22:42
debian	stretch	armhf	default	20170712_22:42
debian	stretch	i386	default	20170712_22:42
debian	stretch	powerpc	default	20161104_22:42
debian	stretch	ppc64el	default	20170712_22:42
debian	stretch	s390x	default	20170712_22:42
debian	wheezy	amd64	default	20170712_22:42
debian	wheezy	armel	default	20170712_22:42
debian	wheezy	armhf	default	20170712_22:42
debian	wheezy	i386	default	20170712_22:42
debian	wheezy	powerpc	default	20170712_22:42
debian	wheezy	s390x	default	20170712_22:42
fedora	22	amd64	default	20170216_01:27
fedora	22	i386	default	20170216_02:15
fedora	23	amd64	default	20170215_03:33
fedora	23	i386	default	20170215_01:27
fedora	24	amd64	default	20170713_01:27
fedora	24	i386	default	20170713_01:27
fedora	25	amd64	default	20170713_01:27
fedora	25	i386	default	20170713_01:27
gentoo	current	amd64	default	20170712_14:12
gentoo	current	i386	default	20170712_14:12
opensuse	13.2	amd64	default	20170320_00:53
opensuse	42.2	amd64	default	20170713_00:53
oracle	6	amd64	default	20170712_11:40
oracle	6	i386	default	20170712_11:40
oracle	7	amd64	default	20170712_11:40
plamo	5.x	amd64	default	20170712_21:36
plamo	5.x	i386	default	20170712_21:36
plamo	6.x	amd64	default	20170712_21:36
plamo	6.x	i386	default	20170712_21:36
ubuntu	artful	amd64	default	20170713_03:49
ubuntu	artful	arm64	default	20170713_03:49
ubuntu	artful	armhf	default	20170713_03:49
ubuntu	artful	i386	default	20170713_03:49
ubuntu	artful	ppc64el	default	20170713_03:49
ubuntu	artful	s390x	default	20170713_03:49
ubuntu	precise	amd64	default	20170713_03:49
ubuntu	precise	armel	default	20170713_03:49
ubuntu	precise	armhf	default	20170713_03:49
ubuntu	precise	i386	default	20170713_03:49
ubuntu	precise	powerpc	default	20170713_03:49
ubuntu	trusty	amd64	default	20170713_03:49
ubuntu	trusty	arm64	default	20170713_03:49
ubuntu	trusty	armhf	default	20170713_03:49
ubuntu	trusty	i386	default	20170713_03:49
ubuntu	trusty	powerpc	default	20170713_03:49
ubuntu	trusty	ppc64el	default	20170713_03:49
ubuntu	xenial	amd64	default	20170713_03:49
ubuntu	xenial	arm64	default	20170713_03:49
ubuntu	xenial	armhf	default	20170713_03:49
ubuntu	xenial	i386	default	20170713_03:49
ubuntu	xenial	powerpc	default	20170713_03:49
ubuntu	xenial	ppc64el	default	20170713_03:49
ubuntu	xenial	s390x	default	20170713_03:49
ubuntu	yakkety	amd64	default	20170713_03:49
ubuntu	yakkety	arm64	default	20170713_03:49
ubuntu	yakkety	armhf	default	20170713_03:49
ubuntu	yakkety	i386	default	20170713_03:49
ubuntu	yakkety	powerpc	default	20170713_03:49
ubuntu	yakkety	ppc64el	default	20170713_03:49
ubuntu	yakkety	s390x	default	20170713_03:49
ubuntu	zesty	amd64	default	20170713_03:49
ubuntu	zesty	arm64	default	20170713_03:49
ubuntu	zesty	armhf	default	20170713_03:49
ubuntu	zesty	i386	default	20170713_03:49
ubuntu	zesty	powerpc	default	20170317_03:49
ubuntu	zesty	ppc64el	default	20170713_03:49
ubuntu	zesty	s390x	default	20170713_03:49
---
How do I list the containers existing on the system?

Type the following command:
$ lxc-ls -f
Sample outputs:

NAME      STATE   AUTOSTART GROUPS IPV4            IPV6 
centos-c1 RUNNING 0         -      192.168.122.174 -    
debian-c1 RUNNING 0         -      192.168.122.241 -    
fedora-c1 RUNNING 0         -      192.168.122.176 -    
ubuntu-c1 RUNNING 0         -      192.168.122.56  - 
How do I query information about a container?

The syntax is:
$ lxc-info -n {container}
$ lxc-info -n centos-c1

Sample outputs:

Name:           centos-c1
State:          RUNNING
PID:            5749
IP:             192.168.122.174
CPU use:        0.87 seconds
BlkIO use:      6.51 MiB
Memory use:     31.66 MiB
KMem use:       3.01 MiB
Link:           vethQIP1US
 TX bytes:      2.04 KiB
 RX bytes:      8.77 KiB
 Total bytes:   10.81 KiB
How do I stop/start/restart a container?

The syntax is:
$ sudo lxc-start -n {container}
$ sudo lxc-start -n fedora-c1
$ sudo lxc-stop -n {container}
$ sudo lxc-stop -n fedora-c1

How do I monitor container statistics?

To display containers, updating every second, sorted by memory use:
$ lxc-top --delay 1 --sort m
To display containers, updating every second, sorted by cpu use:
$ lxc-top --delay 1 --sort c
To display containers, updating every second, sorted by block I/O use:
$ lxc-top --delay 1 --sort b
Sample outputs:

Fig.03: Shows container statistics with lxc-top

How do I destroy/delete a container?

The syntax is:
$ sudo lxc-destroy -n {container}
$ sudo lxc-stop -n fedora-c2
$ sudo lxc-destroy -n fedora-c2

If a container is running, stop it first and destroy it:
$ sudo lxc-destroy -f -n fedora-c2

How do I create, list, and restore container snapshots?

The syntax is as follows for each snapshot operation. Please note that you must use a snapshot-aware file system such as BTRFS, ZFS, or LVM.

Create snapshot for a container

$ sudo lxc-snapshot -n {container} -c "comment for snapshot"
$ sudo lxc-snapshot -n centos-c1 -c "13/July/17 before applying patches"

List snapshot for a container

$ sudo lxc-snapshot -n centos-c1 -L -C

Restore snapshot for a container

$ sudo lxc-snapshot -n centos-c1 -r snap0

Destroy/Delete snapshot for a container

$ sudo lxc-snapshot -n centos-c1 -d snap0

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on Twitter , Facebook , Google+ .

Recommended Links


Softpanorama Recommended

Top articles

[Feb 11, 2019] Solving Docker permission denied while trying to connect to the Docker daemon socket Published on Jan 26, 2019 | techoverflow.net

[Jan 27, 2019] If you are using the Docker package supplied by Red Hat / CentOS, the dockerroot group is automatically added to the system. You will need to edit (or create) /etc/docker/daemon.json to include the following: group : dockerroot Published on Jan 27, 2019 | rancher.com

[Jan 26, 2019] You need to add user to dockerroot group and create daemon.json file to be able to use docker from a regular user account after Docker installation from Red Hat executables by Aslan Brooke Published on Oct 15, 2018 | blog.aslanbrooke.com

[Jan 26, 2019] How do I download Docker images without using the pull command when you are behind firewall Published on Jan 26, 2019 | stackoverflow.com





The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~Archibald Putt, Ph.D.


Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site.

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Last modified: June, 11, 2020