Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Containers

Getting started with Docker by Dockerizing this Blog Benjamin Cane

Containers have a much smaller overhead than traditional virtual machines.

Containers and virtual machines are often seen as competing technologies, but this is a misunderstanding. A virtual machine takes a physical server and provides a fully functional operating environment that shares the physical resources with other virtual machines. A container is generally used to isolate a running process within a single host so that the isolated process cannot interact with other processes on that same system. In fact, containers are closer to BSD Jails and chroot'ed processes than to full virtual machines.

What Docker provides on top of containers

Docker itself is not a container runtime environment; in fact, Docker is container-technology agnostic, with efforts planned for Docker to support Solaris Zones and BSD Jails. What Docker provides is a method of managing, packaging, and deploying containers. While these types of functions may exist to some degree for virtual machines, they traditionally have not existed for most container solutions, and the ones that did exist were not as easy to use or as fully featured as Docker.

Now that we know what Docker is, let's start learning how Docker works by first installing Docker and deploying a public pre-built container.

Starting with Installation

As Docker is not installed by default, step 1 will be to install the Docker package; since our example system is running Ubuntu 14.04, we will do this using the Apt package manager.

# apt-get install docker.io
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  aufs-tools cgroup-lite git git-man liberror-perl
Suggested packages:
  btrfs-tools debootstrap lxc rinse git-daemon-run git-daemon-sysvinit git-doc
  git-el git-email git-gui gitk gitweb git-arch git-bzr git-cvs git-mediawiki
  git-svn
The following NEW packages will be installed:
  aufs-tools cgroup-lite docker.io git git-man liberror-perl
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 7,553 kB of archives.
After this operation, 46.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

To check if any containers are running we can execute the docker command using the ps option.

# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

The ps function of the docker command works similarly to the Linux ps command: it shows available Docker containers and their current status. Since we have not started any Docker containers yet, the command shows no running containers.

Deploying a pre-built nginx Docker container

One of my favorite features of Docker is the ability to deploy a pre-built container in the same way you would deploy a package with yum or apt-get. To explain this better let's deploy a pre-built container running the nginx web server. We can do this by executing the docker command again, however, this time with the run option.

# docker run -d nginx
Unable to find image 'nginx' locally
Pulling repository nginx
5c82215b03d1: Download complete 
e2a4fb18da48: Download complete 
58016a5acc80: Download complete 
657abfa43d82: Download complete 
dcb2fe003d16: Download complete 
c79a417d7c6f: Download complete 
abb90243122c: Download complete 
d6137c9e2964: Download complete 
85e566ddc7ef: Download complete 
69f100eb42b5: Download complete 
cd720b803060: Download complete 
7cc81e9a118a: Download complete 

The run function of the docker command tells Docker to find a specified Docker image and start a container running that image. By default, Docker containers run in the foreground, meaning when you execute docker run your shell will be bound to the container's console and the process running within the container. In order to launch this Docker container in the background I included the -d (detach) flag.

By executing docker ps again we can see the nginx container running.

# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
f6d31ab01fc9        nginx:latest        nginx -g 'daemon off   4 seconds ago       Up 3 seconds        443/tcp, 80/tcp     desperate_lalande 

In the above output we can see the running container desperate_lalande and that this container has been built from the nginx:latest image.

Docker Images

Images are one of Docker's key features and are similar to virtual machine images. Like a virtual machine image, a Docker image is a container that has been saved and packaged. Docker, however, doesn't stop at the ability to create images; it also includes the ability to distribute those images via Docker repositories, which are a similar concept to package repositories. This is what gives Docker the ability to deploy an image the way you would deploy a package with yum. To get a better understanding of how this works, let's look back at the output of the docker run execution.

# docker run -d nginx
Unable to find image 'nginx' locally

The first message we see is that docker could not find an image named nginx locally. The reason we see this message is that when we executed docker run we told Docker to start a container based on an image named nginx. Since Docker is starting a container based on a specified image, it first needs to find that image. Before checking any remote repository, Docker checks locally to see if there is a local image with the specified name.

Since this system is brand new there is no Docker image with the name nginx, which means Docker will need to download it from a Docker repository.

Pulling repository nginx
5c82215b03d1: Download complete 
e2a4fb18da48: Download complete 
58016a5acc80: Download complete 
657abfa43d82: Download complete 
dcb2fe003d16: Download complete 
c79a417d7c6f: Download complete 
abb90243122c: Download complete 
d6137c9e2964: Download complete 
85e566ddc7ef: Download complete 
69f100eb42b5: Download complete 
cd720b803060: Download complete 
7cc81e9a118a: Download complete 

This is exactly what the second part of the output is showing us. By default, Docker uses the Docker Hub repository, which is a repository service that Docker (the company) runs.

Like GitHub, Docker Hub is free for public repositories but requires a subscription for private repositories. It is possible, however, to deploy your own Docker repository; in fact, it is as easy as docker run registry. For this article we will not be deploying a custom registry service.
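
As a rough illustration (we will not use this later in the article), a self-hosted registry based on the official registry image could be started and used along these lines; port 5000 is that image's default listening port, and the localhost:5000/nginx tag is only an example:

# docker run -d -p 5000:5000 --name registry registry:2
# docker tag nginx localhost:5000/nginx
# docker push localhost:5000/nginx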

Stopping and Removing the Container

Before moving on to building a custom Docker container let's first clean up our Docker environment. We will do this by stopping the container from earlier and removing it.

To start a container we executed docker with the run option; in order to stop this same container we simply need to execute docker with the kill option, specifying the container name.

# docker kill desperate_lalande
desperate_lalande

If we execute docker ps again we will see that the container is no longer running.

# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

However, at this point we have only stopped the container; while it may no longer be running, it still exists. By default, docker ps will only show running containers; if we add the -a (all) flag it will show all containers, running or not.

# docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                           PORTS               NAMES
f6d31ab01fc9        5c82215b03d1        nginx -g 'daemon off   4 weeks ago         Exited (-1) About a minute ago                       desperate_lalande  

In order to fully remove the container we can use the docker command with the rm option.

# docker rm desperate_lalande
desperate_lalande

While this container has been removed, we still have an nginx image available. If we were to run docker run -d nginx again, the container would be started without having to fetch the nginx image again, because Docker already has a saved copy on our local system.

To see a full list of local images we can simply run the docker command with the images option.

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
nginx               latest              9fab4090484a        5 days ago          132.8 MB
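
If we also wanted to remove that cached image, for example to reclaim disk space, a quick sketch using the rmi option would look like this; it works here because the container that used the image has already been removed:

# docker rmi nginx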

Building our own custom image

At this point we have used a few basic Docker commands to start, stop and remove a common pre-built image. In order to "Dockerize" this blog however, we are going to have to build our own Docker image and that means creating a Dockerfile.

With most virtual machine environments if you wish to create an image of a machine you need to first create a new virtual machine, install the OS, install the application and then finally convert it to a template or image. With Docker however, these steps are automated via a Dockerfile. A Dockerfile is a way of providing build instructions to Docker for the creation of a custom image. In this section we are going to build a custom Dockerfile that can be used to deploy this blog.

Understanding the Application

Before we can jump into creating a Dockerfile we first need to understand what is required to deploy this blog.

The blog itself is actually static HTML pages generated by a custom static site generator that I wrote, named hamerkop. The generator is very simple and is more about getting the job done for this blog specifically. All the code and source files for this blog are available via a public GitHub repository. In order to deploy this blog we simply need to grab the contents of the GitHub repository, install Python along with some Python modules, and execute the hamerkop application. To serve the generated content we will use nginx, which means we will also need nginx to be installed.

So far this should be a pretty simple Dockerfile, but it will show us quite a bit of the Dockerfile syntax. To get started we can clone the GitHub repository and create a Dockerfile with our favorite editor; vi in my case.

# git clone https://github.com/madflojo/blog.git
Cloning into 'blog'...
remote: Counting objects: 622, done.
remote: Total 622 (delta 0), reused 0 (delta 0), pack-reused 622
Receiving objects: 100% (622/622), 14.80 MiB | 1.06 MiB/s, done.
Resolving deltas: 100% (242/242), done.
Checking connectivity... done.
# cd blog/
# vi Dockerfile

FROM - Inheriting a Docker image

The first instruction of a Dockerfile is the FROM instruction. This is used to specify an existing Docker image to use as our base image, which basically provides us with a way to inherit another Docker image. In this case we will be starting with the same nginx image we were using before; if we wanted to start with a blank slate we could use the Ubuntu Docker image by specifying ubuntu:latest.

## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

In addition to the FROM instruction, I also included a MAINTAINER instruction, which is used to show the author of the Dockerfile.

As Docker supports using # as a comment marker, I will be using this syntax quite a bit to explain the sections of this Dockerfile.

Running a test build

Since we inherited the nginx Docker image, our current Dockerfile also inherited all the instructions within the Dockerfile used to build that nginx image. What this means is that even at this point we are able to build a Docker image from this Dockerfile and run a container from that image. The resulting image will essentially be the same as the nginx image, but we will run through a build of this Dockerfile now, and a few more times as we go, to help explain the Docker build process.

In order to start the build from a Dockerfile we can simply execute the docker command with the build option.

# docker build -t blog /root/blog 
Sending build context to Docker daemon  23.6 MB
Sending build context to Docker daemon 
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <[email protected]>
 ---> Running in c97f36450343
 ---> 60a44f78d194
Removing intermediate container c97f36450343
Successfully built 60a44f78d194

In the above example I used the -t (tag) flag to "tag" the image as "blog". This essentially allows us to name the image; without specifying a tag, the image would only be callable via the Image ID that Docker assigns. In this case the Image ID is 60a44f78d194, which we can see in the docker command's build success message.
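
As a side note, the -t flag can also carry an explicit tag after a colon; when no tag is given, Docker applies the latest tag, which is why the image later shows up as blog:latest. A hypothetical example with an arbitrary v1 tag:

# docker build -t blog:v1 /root/blog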

In addition to the -t flag, I also specified the directory /root/blog. This directory is the "build directory", which is the directory that contains the Dockerfile and any other files necessary to build this container.

Now that we have run through a successful build, let's start customizing this image.

Using RUN to execute apt-get

The static site generator used to generate the HTML pages is written in Python and because of this the first custom task we should perform within this Dockerfile is to install Python. To install the Python package we will use the Apt package manager. This means we will need to specify within the Dockerfile that apt-get update and apt-get install python-dev are executed; we can do this with the RUN instruction.

## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

In the above we are simply using the RUN instruction to tell Docker that when it builds this image it will need to execute the specified apt-get commands. The interesting part of this is that these commands are only executed within the context of this container. What this means is that even though python-dev and python-pip are being installed within the container, they are not being installed on the host itself. Or to put it more simply: within the container the pip command will execute; outside the container, the pip command does not exist.

It is also important to note that the Docker build process does not accept user input during the build. This means that any commands being executed by the RUN instruction must complete without user input. This adds a bit of complexity to the build process as many applications require user input during installation. For our example, none of the commands executed by RUN require user input.
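
This is exactly why the apt-get install command above carries the -y flag, which answers Apt's confirmation prompt automatically. For packages that ask debconf questions, the DEBIAN_FRONTEND variable can additionally be set; a small sketch of both, shown as the shell commands a RUN instruction would execute:

apt-get install -y python-dev python-pip
DEBIAN_FRONTEND=noninteractive apt-get install -y python-dev python-pip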

Installing Python modules

With Python installed we now need to install some Python modules. To do this outside of Docker, we would generally use the pip command and reference a file within the blog's Git repository named requirements.txt. In an earlier step we used the git command to "clone" the blog's GitHub repository to the /root/blog directory; this directory also happens to be the directory in which we created the Dockerfile. This is important as it means the contents of the Git repository are accessible to Docker during the build process.

When executing a build, Docker will set the context of the build to the specified "build directory". This means that any files within that directory and below can be used during the build process, files outside of that directory (outside of the build context), are inaccessible.

In order to install the required Python modules we will need to copy the requirements.txt file from the build directory into the container. We can do this using the COPY instruction within the Dockerfile.

## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt

Within the Dockerfile we added 3 instructions. The first instruction uses RUN to create a /build/ directory within the container. This directory will be used to copy any application files needed to generate the static HTML pages. The second instruction is the COPY instruction, which copies the requirements.txt file from the "build directory" (/root/blog) into the /build directory within the container. The third uses the RUN instruction to execute the pip command, installing all the modules specified within the requirements.txt file.

COPY is an important instruction to understand when building custom images. Without specifically copying the file within the Dockerfile, this Docker image would not contain the requirements.txt file. With Docker containers everything is isolated; unless something is specifically added within the Dockerfile, a container is not likely to include required dependencies.

Re-running a build

Now that we have a few customization tasks for Docker to perform let's try another build of the blog image again.

# docker build -t blog /root/blog
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon 
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <[email protected]>
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Running in bde05cf1e8fe
 ---> f4b66e09fa61
Removing intermediate container bde05cf1e8fe
Step 5 : COPY requirements.txt /build/
 ---> cef11c3fb97c
Removing intermediate container 9aa8ff43f4b0
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Running in c50b15ddd8b1
Downloading/unpacking jinja2 (from -r /build/requirements.txt (line 1))
Downloading/unpacking PyYaml (from -r /build/requirements.txt (line 2))
<truncated to reduce noise>
Successfully installed jinja2 PyYaml mistune markdown MarkupSafe
Cleaning up...
 ---> abab55c20962
Removing intermediate container c50b15ddd8b1
Successfully built abab55c20962

From the above build output we can see the build was successful, but we can also see another interesting message: ---> Using cache. What this message is telling us is that Docker was able to use its build cache during the build of this image.

Docker build cache

When Docker is building an image, it doesn't just build a single image; it actually builds multiple images throughout the build process. In fact, we can see from the above output that after each "Step" Docker is creating a new image.

 Step 5 : COPY requirements.txt /build/
  ---> cef11c3fb97c

The last line from the above snippet is actually Docker informing us of the creation of a new image; it does this by printing the Image ID, cef11c3fb97c. The useful thing about this approach is that Docker is able to use these images as a cache during subsequent builds of the blog image, which allows Docker to speed up the build process for new builds of the same container. If we look at the example above, we can see that rather than installing the python-dev and python-pip packages again, Docker was able to use a cached image. However, since Docker was unable to find a cached build that executed the mkdir command, that step and each subsequent step was executed anew.

The Docker build cache is a bit of a gift and a curse, because the decision to use the cache or to rerun the instruction is made within a very narrow scope. For example, if there was a change to the requirements.txt file, Docker would detect this change during the build and start fresh from that point forward. It does this because it can view the contents of the requirements.txt file. The execution of the apt-get commands, however, is another story. If the Apt repository that provides the Python packages were to contain a newer version of the python-pip package, Docker would not be able to detect the change and would simply use the build cache. This means that an older package may be installed. While this may not be a major issue for the python-pip package, it could be a problem if the cached installation included a package with a known vulnerability.

For this reason it is useful to periodically rebuild the image without using Docker's cache. To do this you can simply specify --no-cache=True when executing a Docker build.
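
Using the same build command from earlier, such a cache-free rebuild would look like this:

# docker build --no-cache=True -t blog /root/blog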

Deploying the rest of the blog

With the Python packages and modules installed this leaves us at the point of copying the required application files and running the hamerkop application. To do this we will simply use more COPY and RUN instructions.

## Dockerfile that generates an instance of http://bencane.com

FROM nginx:latest
MAINTAINER Benjamin Cane <[email protected]>

## Install python and pip
RUN apt-get update
RUN apt-get install -y python-dev python-pip

## Create a directory for required files
RUN mkdir -p /build/

## Add requirements file and run pip
COPY requirements.txt /build/
RUN pip install -r /build/requirements.txt

## Add blog code and required files
COPY static /build/static
COPY templates /build/templates
COPY hamerkop /build/
COPY config.yml /build/
COPY articles /build/articles

## Run Generator
RUN /build/hamerkop -c /build/config.yml

Now that we have the rest of the build instructions, let's run through another build and verify that the image builds successfully.

# docker build -t blog /root/blog/
Sending build context to Docker daemon 19.52 MB
Sending build context to Docker daemon 
Step 0 : FROM nginx:latest
 ---> 9fab4090484a
Step 1 : MAINTAINER Benjamin Cane <[email protected]>
 ---> Using cache
 ---> 8e0f1899d1eb
Step 2 : RUN apt-get update
 ---> Using cache
 ---> 78b36ef1a1a2
Step 3 : RUN apt-get install -y python-dev python-pip
 ---> Using cache
 ---> ef4f9382658a
Step 4 : RUN mkdir -p /build/
 ---> Using cache
 ---> f4b66e09fa61
Step 5 : COPY requirements.txt /build/
 ---> Using cache
 ---> cef11c3fb97c
Step 6 : RUN pip install -r /build/requirements.txt
 ---> Using cache
 ---> abab55c20962
Step 7 : COPY static /build/static
 ---> 15cb91531038
Removing intermediate container d478b42b7906
Step 8 : COPY templates /build/templates
 ---> ecded5d1a52e
Removing intermediate container ac2390607e9f
Step 9 : COPY hamerkop /build/
 ---> 59efd1ca1771
Removing intermediate container b5fbf7e817b7
Step 10 : COPY config.yml /build/
 ---> bfa3db6c05b7
Removing intermediate container 1aebef300933
Step 11 : COPY articles /build/articles
 ---> 6b61cc9dde27
Removing intermediate container be78d0eb1213
Step 12 : RUN /build/hamerkop -c /build/config.yml
 ---> Running in fbc0b5e574c5
Successfully created file /usr/share/nginx/html//2011/06/25/checking-the-number-of-lwp-threads-in-linux
Successfully created file /usr/share/nginx/html//2011/06/checking-the-number-of-lwp-threads-in-linux
<truncated to reduce noise>
Successfully created file /usr/share/nginx/html//archive.html
Successfully created file /usr/share/nginx/html//sitemap.xml
 ---> 3b25263113e1
Removing intermediate container fbc0b5e574c5
Successfully built 3b25263113e1

Running a custom container

With a successful build we can now start our custom container by running the docker command with the run option, similar to how we started the nginx container earlier.

# docker run -d -p 80:80 --name=blog blog
5f6c7a2217dcdc0da8af05225c4d1294e3e6bb28a41ea898a1c63fb821989ba1

Once again the -d (detach) flag was used to tell Docker to run the container in the background. However, there are also two new flags. The first new flag is --name, which is used to give the container a user specified name. In the earlier example we did not specify a name and because of that Docker randomly generated one. The second new flag is -p, this flag allows users to map a port from the host machine to a port within the container.

The base nginx image we used exposes port 80 for the HTTP service. By default, ports bound within a Docker container are not bound on the host system as a whole. In order for external systems to access ports exposed within a container the ports must be mapped from a host port to a container port using the -p flag. The command above maps port 80 from the host, to port 80 within the container. If we wished to map port 8080 from the host, to port 80 within the container we could do so by specifying the ports in the following syntax -p 8080:80.

From the above command it appears that our container was started successfully; we can verify this by executing docker ps.

# docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                         NAMES
d264c7ef92bd        blog:latest         nginx -g 'daemon off   3 seconds ago       Up 3 seconds        443/tcp, 0.0.0.0:80->80/tcp   blog  
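
As a quick illustration of the alternative port mapping described above, the same image could be published on host port 8080; the name blog2 is arbitrary, used only to avoid a conflict with the blog container that is already running:

# docker run -d -p 8080:80 --name=blog2 blog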

Wrapping up

At this point we have a running custom Docker container. While we touched on a few Dockerfile instructions within this article, we have not discussed all of them. For a full list of Dockerfile instructions you can check out Docker's reference page, which explains the instructions very well.

Another good resource is their Dockerfile Best Practices page, which contains quite a few best practices for building custom Dockerfiles. Some of these tips are very useful, such as strategically ordering the commands within the Dockerfile. In the above examples our Dockerfile has the COPY instruction for the articles directory as the last COPY instruction. The reason for this is that the articles directory will change quite often. It's best to put instructions that will change often at the lowest point possible within the Dockerfile to maximize the number of steps that can be cached.

In this article we covered how to start a pre-built container and how to build, then deploy a custom container. While there is quite a bit to learn about Docker this article should give you a good idea on how to get started. Of course, as always if you think there is anything that should be added drop it in the comments below.

Git, Docker, and continuous integration for TeX documents Opensource.com

Posted 08 Dec 2015 by Harsh Vakharia


The power of Git, Docker, and continuous integration (CI) can be leveraged to make TeX document compilation easy while keeping track of different variants and versions. On top of these technologies, a flexible workflow can be developed that reflects successive changes to the TeX documents in PDFs, each versioned with a progressive number (document-v4.pdf, say). So, let's create a workflow that can automate the process for us.

1. Objective

Use Git, Docker, and continuous integration (CI) to build TeX documents and upload the PDF to Dropbox with a proper branch structure and versioning.

2. Workflow

Git, Docker, and continuous integration diagram

3. Requirements

We need:

  1. Git service

    • Bitbucket
    • GitHub
  2. Continuous integration and deployment service
    • Semaphore
    • Travis
  3. Dropbox account

3.1. Dropbox Uploader

Note: While configuring the Dropbox API, using "App Folder" only as the access level is recommended.

To upload the generated PDF to Dropbox, we will use Dropbox Uploader.

  1. Follow Dropbox Uploader's README.md and configure it on your local machine.
  2. Verify that you have a ~/.dropbox_uploader file on your local machine. We will use this file later.

4. Prepare Git repository

See best practices for writing Dockerfiles.

Initialize Git (git init) and use the following directory structure for your Git repository:

├── docker
|   ├── Dockerfile
├── document.tex
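
As a minimal sketch, creating this layout from scratch could look like the following; the repository name tex-document is just a placeholder:

git init tex-document
cd tex-document
mkdir docker
touch docker/Dockerfile document.tex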

Now, for docker/Dockerfile, we will use the harshjv/texlive-2015 Docker image. This image contains TeX Live 2015.

4.1. Dockerfile

FROM harshjv/texlive-2015
RUN tlmgr update --self --all

This will build a Docker image with the latest packages for building TeX documents.

5. Add your TeX documents

Add some TeX files to your repository.

6. Configure Git service

For Webhooks/Services, we need to configure the Git services as follows:

6.1. For Bitbucket

No configuration needed as of now.

6.2. For GitHub

  1. Go to your repository's Settings -> Webhooks & services -> Services
  2. Select Travis CI

7. Configure continuous integration service

7.1. For Semaphore

  1. Do an initial commit and push to a remote branch (if the remote repository is empty), because the CI service requires at least one remote branch.
  2. Go to SemaphoreCI
  3. Add new project
  4. Select your Git service
  5. Select your repository
  6. Select branch (e.g., master)

  7. Select your account

After these initial steps, now it's time to configure build and deployment.

7.1.1. Setup thread

This setup script builds docker/Dockerfile with the latest TeX packages and fetches the Dropbox Uploader script.

sudo docker build -t texlive docker
curl "https://raw.githubusercontent.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh" -o dropbox_uploader.sh
chmod +x dropbox_uploader.sh

7.1.2. Build Thread

Note: For document.tex in the master branch, document.pdf will be uploaded as:

Dropbox/[YOUR_APP_FOLDER]/master/document-latest.pdf
Dropbox/[YOUR_APP_FOLDER]/master/document-v[BUILD_NUMBER].pdf

This build script compiles the document.tex TeX document into the document.pdf PDF. Then, using Dropbox Uploader, the versioned PDF files are stored in the branch's individual folder.

sudo docker run -it -v ${SEMAPHORE_PROJECT_DIR}:/var/texlive texlive sh -c "pdflatex document.tex"
./dropbox_uploader.sh upload document.pdf ${BRANCH_NAME}/document-latest.pdf
./dropbox_uploader.sh upload document.pdf ${BRANCH_NAME}/document-v${SEMAPHORE_BUILD_NUMBER}.pdf

Any successive commit will overwrite document-latest.pdf and will create a new document-v[BUILD_NUMBER].pdf file.

7.1.3. Configuration Files

  1. Visit Project Settings -> Configuration Files
  2. Add configuration file
  3. Add .dropbox_uploader to file path
  4. (Optional) Check encryption
  5. Paste content of ~/.dropbox_uploader
    • On Mac OS X: pbcopy < ~/.dropbox_uploader

7.2. For Travis

The setup and build scripts remain the same. Put these scripts into individual files and create a .travis.yml file. Refer to this post for the relevant instructions.

8. Git commit and push

It's time to push the documents/changes.

git add .
git commit -m "Add some details"
git push origin master

9. Success!

Tip: If you want to share only the latest PDF after changes, use a URL shortener service like Bit.ly to point to this latest document after getting a sharing link from Dropbox.

Why? Because for each branch folder, [DOCUMENT_NAME]-latest.pdf will always contain the latest PDF.

After a successful build, generated PDF files will be available at Dropbox/[YOUR_APP_FOLDER]/master folder.



NEWS CONTENTS

Old News ;-)

[Dec 20, 2018] The docker.io Debian package is back to life by Arnaud Rebillout

Notable quotes:
"... first time in two years ..."
"... one-year leap forward ..."
"... Debian Go Packaging Team ..."
"... If you're running Debian 8 Jessie, you can install Docker 1.6.2, through backports. This version was released on May 14, 2015. That's 3 years old, but Debian Jessie is fairly old as well. ..."
Apr 07, 2018 | www.collabora.com

Last week, a new version of docker.io, the Docker package provided by Debian, was uploaded to Debian Unstable. Quickly afterwards, the package moved to Debian Testing. This is good news for Debian users, as before that the package was more or less abandoned in "unstable", and the future was uncertain.

The most striking fact about this change: it's the first time in two years that docker.io has migrated to "testing". Another interesting fact is that, version-wise, the package is moving from 1.13.1 from early 2017 to version 18.03 from March 2018: that's a one-year leap forward.

Let me give you a very rough summary of how things came to be. I personally started to work on that early in 2018. I joined the Debian Go Packaging Team and I started to work on the many, many Docker dependencies that needed to be updated in order to update the Docker package itself. I could get some of this work uploaded to Debian, but ultimately I was a bit stuck on how to solve the circular dependencies that plague the Docker package. This is where another Debian Developer, Dmitry Smirnov, jumped in. We discussed the current status and issues, and then he basically did all the work, from updating the package to tackling all the long-standing open bugs.

That's the short story; let me now give you some more details.

The Docker package in Debian

To better understand why this update of the docker.io package is such good news, let's have a quick look at the current Debian offering:

    rmadison -u debian docker.io

If you're running Debian 8 Jessie, you can install Docker 1.6.2, through backports. This version was released on May 14, 2015. That's 3 years old, but Debian Jessie is fairly old as well.

If you're running Debian 9 Stretch (ie. Debian stable), then you have no install candidate. No-thing. The current Debian doesn't provide any package for Docker. That's a bit sad.

What's even more sad is that for quite a while, looking into Debian unstable didn't look promising either. There used to be a package there, but it had bugs that prevented it from migrating to Debian testing. This package was stuck at version 1.13.1, released on Feb 8, 2017. Looking at the git history, there was not much happening.

As for the reason for this sad state of things, I can only guess. Packaging Docker is tedious work, mainly due to a very big dependency tree. After handling all these dependencies, there are other issues to tackle, some related to Go packaging itself, and others due to Docker's release process and development workflow. In the end, it's quite difficult to find the right approach to package Docker, and it's easy to make mistakes that cost hours of work. I made this kind of mistake. More than once.

So packaging Docker is not for the faint of heart, and maybe it's too much of a burden for one developer alone. There was a docker-maint mailing list that suggests there was an attempt to coordinate the effort, however this list was already dead by the time I found it. It looks like the people involved walked away.

Another explanation for the disinterest in the Docker package could be that Docker itself already provides a Debian package on docker.com. One can always fall back to this solution, so why bother with the extra work of doing a proper Debian package?

That's what the next part is about!

Docker.io vs Docker-ce

You have two options to install Docker on Debian: you can get the package from docker.com (this package is named docker-ce), or you can get it from the Debian repositories (this package is named docker.io). You can rebuild both of these packages from source: for docker-ce you can fetch the source code with git (it includes the packaging files), and for docker.io you can just get the source package with apt, like for every other Debian package.
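
For example, fetching and rebuilding the Debian source package with the standard tooling would look roughly like this (a sketch that assumes deb-src entries are enabled in your sources.list):

    apt-get source docker.io
    apt-get build-dep docker.io
    cd docker.io-*/ && dpkg-buildpackage -us -uc -b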

So what's the difference between these two packages?

No suspense, straight answer: what differs is the build process, and mostly, the way dependencies are handled.

Docker is written in Go, and Golang comes with some tooling that allows applications to keep a local copy of their dependencies in their source tree. In Go-talk, this is called vendoring . Docker makes heavy use of that (like many other Go applications), which means that the code is more or less self-contained. You can build Docker without having to solve external dependencies, as everything needed is already in-tree.

That's how the docker-ce package provided by Docker is built, and that's what makes the packaging files for this package trivial. You can look at these files at https://github.com/docker/docker-ce/tree/master/components/packaging/deb . So everything is in-tree, there's almost no external build dependency, and hence it's real easy for Docker to provide a new package for 'docker-ce' every month.

On the other hand, the docker.io package provided by Debian takes a completely different approach: Docker is built against the libraries that are packaged in Debian, instead of using the local copies that are present in the Docker source tree. So if Docker is using libABC version 1.0, then it has a build dependency on libABC . You can have a look at the current build dependencies at https://salsa.debian.org/docker-team/docker/blob/master/debian/control .

There are more than 100 dependencies there, and that's one reason why the Debian package is quite time-consuming to maintain. To give you a rough estimate, in order to get the current "stable" release of Docker into Debian "unstable", it took up to 40 uploads of related packages to stabilize the dependency tree.

It's quite an effort. And once again, why bother? For this part I'll quote Dmitry as he puts it better than me:

> Debian cares about reusable libraries, and packaging them individually allows to
> build software from tested components, as Golang runs no tests for vendored
> libraries. It is a mind blowing argument given that perhaps there is more code
> in "vendor" than in the source tree.
>
> Private vendoring have all disadvantages of static linking ,
> making it impossible to provide meaningful security support. On top of that, it
> is easy to lose control of vendored tree; it is difficult to track changes in
> vendored dependencies and there is no incentive to upgrade vendored components.

That's about it; whether it matters is up to you and your use case. But it's definitely something you should know about if you want to make an informed decision on which package you're about to install and use.

To finish with this article, I'd like to give more details on the packaging of docker.io , and what was done to get this new version in Debian.

Under the hood of the docker.io package

Let's have a brief overview of the difficulties we had to tackle while packaging this new version of Docker.

The most outstanding one is circular dependencies. It's especially present in the top-level dependencies of Docker: docker/swarmkit , docker/libnetwork , containerd ... All of these are Docker build dependencies, and all of these depend on Docker to build. Good luck with that ;)

To solve this issue, the new docker.io package leverages MUT (Multiple Upstream Tarball) to have these different components downloaded and built all at once, instead of being packaged separately. In this particular case it definitely makes sense, as we're really talking about different parts of Docker. Even if they live in different git repositories, these components are not standalone libraries, and there's absolutely no good reason to package them separately.

Another issue with Docker is "micro-packaging", ie. wasting time packaging small git repositories that, in the end, are only used by one application (Docker in our case). This issue is quite interesting, really. Let me try to explain.

Golang makes it extremely easy to split a codebase among several git repositories. It's so easy that some projects (Docker in our case) do it extensively, as part of their daily workflow. And in the end, at a first glance you can't really say if a dependency of Docker is really a standalone project (that would require a proper packaging), or only just a part of Docker codebase, that happens to live in a different git repository. In this second case, there's really no reason to package it independently of Docker.

As a packager, if you're not a bit careful, you can easily fall into this trap and start packaging every single dependency without thinking: that's "micro-packaging". It's bad in the sense that it increases the maintenance cost in the long run, and doesn't bring any benefit. As I said before, docker.io currently has 100+ dependencies, and probably a few of them fall into this category.

While working on this new version of docker.io, we decided to stop packaging such dependencies. The guideline is that if a dependency has no semantic versioning and no consumer other than Docker, then it's not a library; it's just a part of the Docker codebase.

Even though some tools like dh-make-golang make it very easy to package simple Go packages, it doesn't mean that everything should be packaged. Understanding that, and taking a bit of time to think before packaging, is the key to successful Go packaging!

Last words

I could go on for a while on the technical details, there's a lot to say, but let's not bore you to death, so that's it. I hope by now you understand that:

  1. There's now an up-to-date docker.io package in Debian.
  2. docker.io and docker-ce both give you a Docker binary, but through a very different build process.
  3. Maintaining the 'docker.io' package is not an easy task.

If you care about having a Docker package in Debian, feel free to try it out, and feel free to join the maintenance effort!

Let's finish with a few credits. I've been working on this topic, albeit sparingly, for the last 4 months, thanks to the support of Collabora. As for Dmitry Smirnov, the work he did on the docker.io package represents a three-week, full-time effort, which was sponsored by Libre Solutions Pty Ltd.

I'd like to thank the Debian Go Packaging Team for their support, and also the reviewers of this article, namely Dmitry Smirnov and Héctor Orón Martínez.

Last but not least, I will attend DebConf18 in Taiwan, where I will give a talk on this topic. There's also a BoF on Go packaging planned.

See you there!

[Dec 20, 2018] Musings from the Chiefio

Notable quotes:
"... It isn't a full Virtual Machine, so it avoids that overhead and inefficiency, but it does isolate your applications from "update and die" problems, most of the time. "Docker" is a big one. ..."
Dec 20, 2018 | chiefio.wordpress.com

Sidebar on Containers: The basic idea is to isolate a bit of production application from all the rest of the system and make sure it has a consistent environment. So you package up your DNS server with the needed files and systems config and what-all and stick it in a container that runs under a host operating system.

It isn't a full Virtual Machine, so it avoids that overhead and inefficiency, but it does isolate your applications from "update and die" problems, most of the time. "Docker" is a big one.

Lately Red Hat et al. have been pushing for a strongly systemD dependent Kubernetes instead.

The need to rapidly toss a VM into production and bring up a 'container' application on it drove (IMHO) much of the push to move all sorts of stuff into systemD to make booting very fast (even if it then doesn't work reliably /snarc;)

Much of the commercial world has moved to putting things in Docker or other container systems.

On BSD their equivalent is called "jails" as it keeps each application instance isolated from the system and from other applications. On "my Cray" we used a precursor tech of change root "chroot" to isolate things for security; but I got off that train before it reached the "jails" and "docker" station.

[Dec 16, 2018] What are the benefits using Docker?

Dec 16, 2018 | www.quora.com

The main benefit of Docker is that it automatically solves the problems with versioning and cross-platform deployment, as the images can be easily recombined to form any version and can run in any environment where Docker is installed. "Run anywhere" meme...


James Lee , former Software Engineer at Google (2013-2016) Answered Jul 12 · Author has 106 answers and 258.1k answer views

There are many benefits of Docker. First, I will mention the benefits of Docker and then tell you about its future. The content mentioned here is from my recent article on Docker.

Docker Benefits:

Docker is an open-source project based on Linux containers. It uses features of the Linux kernel, such as namespaces and control groups, to create containers. But are containers new? No, Google has been using them for years! They have their own container technology. There are some other container technologies as well, like Solaris Zones, LXC, etc.

These container technologies are already there before Docker came into existence. Then why Docker? What difference did it make? Why is it on the rise? Ok, I will tell you why!

Number 1: Docker offers ease of use

Taking advantage of containers wasn't an easy task with earlier technologies. Docker has made it easy for everyone: developers, system admins, architects, and more. Portable test applications are easy to build: anyone can package an application on their laptop and then run it unmodified on any public or private cloud, or on bare metal. The slogan is, "build once, run anywhere"!

Number 2: Docker offers speed

Being lightweight, containers are fast. They also consume fewer resources. One can easily run a Docker container in seconds. On the other hand, virtual machines usually take longer as they go through the whole process of booting up the complete virtual operating system, every time!

Number 3: The Docker Hub

Docker offers an ecosystem known as the Docker Hub. You can consider it as an app store for Docker images. It contains many public images created by the community. These images are ready to use. You can easily search the images as per your requirements.
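
For instance, finding and fetching a ready-made image from Docker Hub is as simple as the following (nginx is used purely as an example):

docker search nginx
docker pull nginx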

Number 4: Docker gives modularity and scalability

It is possible to break down the application functionality into individual containers. Docker gives this freedom! It is easy to link containers together and create your application with Docker. One can easily scale and update components independently in the future.
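
As a rough sketch of this, two containers can be placed on a shared user-defined network and reach each other by name; the network name app-net, the container names, and the redis/nginx images are only examples:

docker network create app-net
docker run -d --network app-net --name db redis
docker run -d --network app-net --name web nginx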

The Future

A lot of people come and ask me, "Will Docker eat up virtual machines?" I don't think so! Docker is gaining a lot of momentum, but this won't affect virtual machines, because virtual machines are better under certain circumstances. For example, if there is a requirement to run multiple applications on multiple servers, then virtual machines are a better choice. On the contrary, if there is a requirement to run multiple copies of a single application, Docker is a better choice.

Docker containers could create a problem when it comes to security because containers share the same kernel, and the barriers between containers are quite thin. But I do believe that security and management improve with experience and exposure. Docker certainly has a great future! I hope that this Docker tutorial has helped you understand the basics of containers, VMs, and Docker. But Docker in itself is an ocean; it isn't possible to study Docker in just one article. For an in-depth study of Docker, I recommend this Docker course.


David Polstra , Person at ReactiveOps (2016-present) Updated Oct 5, 2017 · Author has 65 answers and 53.7k answer views

I work at ReactiveOps where we specialize in DevOps-as-a-Service and Kubernetes Consulting. One of our engineers, EJ Etherington , recently addressed this in a blog post:

"Docker is both a daemon (a process running in the background) and a client command. It's like a virtual machine but it's different in important ways. First, there's less duplication. With each extra VM you run, you duplicate the virtualization of CPU and memory and quickly run out resources when running locally. Docker is great at setting up a local development environment because it easily adds the running process without duplicating the virtualized resource. Second, it's more modular. Docker makes it easy to run multiple versions or instances of the same program without configuration headaches and port collisions. Try that in a VM!

With Docker, developers can focus on writing code without worrying about the system on which their code will run. Applications become truly portable. You can repeatably run your application on any other machine running Docker with confidence. For operations staff, Docker is lightweight, easily allowing the running and management of applications with different requirements side by side in isolated containers. This flexibility can increase resource use per server and may reduce the number of systems needed because of its lower overhead, which in turn reduces cost.

Docker has made Linux containerization technology easy to use.

There are a dozen reasons to use Docker. I'll focus here on three: consistency, speed and isolation. By consistency , I mean that Docker provides a consistent environment for your application from development all the way through production – you run from the same starting point every time. By speed , I mean you can rapidly run a new process on a server. Because the image is preconfigured and installed with the process you want to run, it takes the challenge of running a process out of the equation. By isolation , I mean that by default each Docker container that's running is isolated from the network, the file system and other running processes.

A fourth reason is Docker's layered file system. Starting from a base image, every change you make to a container or image becomes a new layer in the file system. As a result, file system layers are cached, reducing the number of repetitive steps during the Docker build process AND reducing the time it takes to upload and download similar images. It also allows you to save the container state if, for example, you need troubleshoot why a container is failing. The file system layers are like Git, but at the file system level. Each Docker image is a particular combination of layers in the same way that each Git branch is a particular combination of commits."
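
Two standard commands illustrate these points: docker history lists the layers that make up an image, and docker commit saves a container's current state as a new image for troubleshooting (failing_container and debug/snapshot are placeholder names):

docker history nginx
docker commit failing_container debug/snapshot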

I hope this was helpful. If you would like to learn more, you can read the entire post: Docker Is a Valuable DevOps Tool - One That's Worth Using

Bill William, M.C.A Software and Applications & Java, SRM University, Kattankulathur (2006) Answered Jan 5, 2018

Docker is the most popular file format for Linux-based container development and deployments. If you're using containers, you're most likely familiar with the container-specific toolset of Docker tools that enable you to create and deploy container images to a cloud-based container hosting environment.

This can work great for brand-new environments, but it can be a challenge to mix container tooling with the systems and tools you need to manage your traditional IT environments. And, if you're deploying your containers locally, you still need to manage the underlying infrastructure and environment.

Portability: let's suppose, in the case of Linux, you have your own customized Nginx container. You can run that Nginx container anywhere, whether it's a cloud, a data center, or even your own laptop, as long as you have a Docker engine running on a Linux OS.

Rollback: you can just run your previous build image and all changes will automatically roll back.

Image Simplicity: every image has a tree hierarchy and all the child images depend upon their parent image. For example, let's suppose there is a vulnerability in a Docker container: you can easily identify and patch the parent image, and when you rebuild the children, the vulnerability will automatically be removed from the child images as well.

Container Registry: you can store all images in a central location, apply ACLs, and do vulnerability scanning and image signing.

Runtime: even if you want to run thousands of containers, you can start them all within five seconds.

Isolation: we can run hundreds of processes in one OS and they will all be isolated from each other.


[Dec 16, 2018] What are some disadvantages of using Docker - Quora

Dec 16, 2018 | www.quora.com

Ethen , Web Designer (2015-present) Answered Aug 30, 2018 · Author has 154 answers and 56.2k answer views

Docker is an open platform for developers, offering a large number of open source projects, including open source Docker tools and a management framework, with more than 85,000 Dockerized applications. Docker is today considered to be something more than just an application platform. And the container ecosystem is continuing to grow so fast, with so many Docker tools available on the web, that it can feel like an overwhelming undertaking just to understand the options laid out in front of you.

Disadvantages Of Docker

Containers don't run at bare-metal speeds.

The container ecosystem is fractured.

Persistent data storage is complicated.

Graphical applications don't work well.

Not all applications benefit from containers.


Swapnil Kulkarni , Engineering Lead at Persistent Systems (2018-present) Answered Nov 9, 2017 · Author has 58 answers and 24.9k answer views

From my personal experience, I think people just want to containerize everything without looking at how the architectural considerations change, which basically ruins the technology.

e.g., how will someone benefit from creating fat container images the size of a VM, when the basic advantage of Docker is to ship lightweight images?

[Nov 28, 2018] Getting started with Kubernetes 5 misunderstandings, explained by Kevin Casey

Nov 19, 2018 | enterprisersproject.com
Among growing container trends , here's an important one: As containers go, so goes container orchestration. That's because most organizations quickly realize that managing containers in production can get complicated in a hurry. Orchestration solves that problem, and while there are multiple options, Kubernetes has become the de facto leader .

[ Want to help others understand Kubernetes? Check out our related article, How to explain Kubernetes in plain English. ]

Kubernetes' star appeal does lead to some misunderstandings and outright myths, though. We asked a range of IT leaders and container experts to identify the biggest misconceptions about Kubernetes – and the realities behind each of them – to help people who are just getting going with the technology. Here are five important ones to know before you get your hands dirty.

Misunderstanding #1: Kubernetes is only for public cloud

Reality: Kubernetes is commonly referred to as a cloud-native technology, and for good reason. The project, which was first developed by a team at Google , currently calls the Cloud Native Computing Foundation home. ( Red Hat , one of the first companies to work with Google on Kubernetes, has become the second-leading contributor to Kubernetes upstream project.)

"Kubernetes is cloud-native in the sense that it has been designed to take advantage of cloud computing architecture [and] to support scale and resilience for distributed applications," says Raghu Kishore Vempati, principal systems engineer at Aricent .

Just remember that "cloud-native" is not wholly synonymous with "public cloud."

"Kubernetes can run on different platforms, be it a personal laptop, VM, rack of bare-metal servers, public/private cloud environment, et cetera," Vempati says.

Notes Red Hat technology evangelist Gordon Haff , "You can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters. These clusters can span hosts across public, private, and hybrid clouds ."

Misunderstanding #2: Kubernetes is a finished product

Reality: Kubernetes isn't really a product at all, much less a finished one.

"Kubernetes is an open source project, not a product," says Murli Thirumale, co-founder and CEO at Portworx . (Portworx co-founder and VP of product management Eric Han was the first Kubernetes product manager while at Google.)


New users should understand a fundamental reality here: The Kubernetes ecosystem moves very quickly. It's even been dubbed the fastest-moving project in open source history.

"Take your eyes off of it for only one moment, and everything changes," Frank Reno, senior technical product manager at Sumo Logic . "It is a fast-paced, highly active community that develops Kubernetes and the related projects. As it changes, it also changes the way you need to look at and develop things. It's all for the better, but still, much to keep up on."

Misunderstanding #3: Kubernetes is simple to run out of the box

"For those new to Kubernetes there's often an 'aha' moment as they realize it's not that easy to do right."

Reality: It may be "easy" to get it up and running on a local machine, but it can quickly get more complicated from there. "For those new to Kubernetes, there's often an 'aha' moment as they realize it's not that easy to do right," says Amir Jerbi, co-founder and CTO at Aqua Security .

Jerbi notes that this is a key reason for the growth of commercial Kubernetes platforms on top of the open source project, as well as managed services and consultancies. "Setting up and managing K8s correctly requires time, knowledge, and skills, and the skill gap should not be underestimated," Jerbi says.

Some organizations are still going to learn that the hard way, drawn in by the considerable potential of Kubernetes and the table-stakes necessity of using a container management or orchestration tool for running containers at scale in a production environment.

"Kubernetes is a very popular and very powerful platform," says Wei Lien Dang, VP of products at StackRox . "Given the DIY mindset that comes along with open source software, users often think they should be working directly in the Kubernetes system itself. But this understanding is misguided."

Dang points to needs such as supporting high availability and resilience. Both, he says, become easier when using abstraction layers on top of the core Kubernetes platform, such as a UX layer to enable various end users to get the most value out of the technology.

"One of the major benefits of open source software is that it can be downloaded and used with no license cost – but very often, making this community software usable in a corporate environment will require a significant investment in technical effort to integrate [or] bundle with other technologies," says Andy Kennedy, managing director at Tier 2 Consulting . "For example, in order to provide a full set of orchestrated services, Kubernetes relies on other services provided by open source projects, such as registry, security, telemetry, networking, and automation."

Complete container application platforms, such as Red Hat OpenShift , eliminate the need to assemble those pieces yourself.

This gets back to the difference between the Kubernetes project and the maturing Kubernetes platforms built on top of that project.

"Do-it-yourself Kubernetes can work with some dedicated resources, but consider a more productized and supported [platform]," says Portworx's Thirumale. "These will help you go to production faster." Misunderstanding #4: Kubernetes is an all-encompassing framework for building and deploying applications

Reality: "By itself, Kubernetes does not provide any primitives for applications such as databases, middleware, storage, [and so forth]," says Aricent's Vempati.

Developers still need to include the necessary services and components for their respective applications, Vempati notes, yet some people overlook this.

"Kubernetes is a platform for managing containerized workloads and services with independent and composable processes," Vempati says. "How the applications and services are orchestrated on the platform is for the developers to define."


In a similar vein, some folks simply misunderstand what Kubernetes does in a more fundamental way. Jared Sikander, CTO at NetEnrich, encounters a key misconception in the marketplace that Kubernetes "provides containerization and microservices." That's a misconception: it's a tool for deploying and managing containers and containerized microservices. You can't just "lift and shift" a monolithic app into Kubernetes and say, boom, we have a microservices architecture now.

"In reality, you have to refactor your applications into microservices," Sikander says. "Kubernetes provides the platform to deploy and scale your microservices."

[ Want more advice? Read Microservices and containers: 5 pitfalls to avoid . ]

Misunderstanding #5: Kubernetes inherently secures your containers

Reality: Container security is one of the brave new worlds in the broader threat landscape. (That's evident in the growing number of container security firms, such as Aqua, StackRox, and others.)

Kubernetes does have critical capabilities for managing the security of your containers, but keep in mind that it is not, in and of itself, a security platform.

"Kubernetes has a lot of powerful controls built in for network policy enforcement, for example, but accessing them natively in Kubernetes means working in a YAML file," says Dang from StackRox. This also gets back to leveraging the right tools or abstraction layers on top of Kubernetes to make its security-oriented features more consumable.

It's also a matter of rethinking your old security playbook for containers and for hybrid cloud and multi-cloud environments in general.

[ Read our related article: Container security fundamentals: 5 things to know . ]

"As enterprises increasingly flock to Kubernetes, too many organizations are still making the dangerous mistake of relying on their previously used security measures – which really aren't suited to protecting Kubernetes and containerized environments," says Gary Duan, CTO at NeuVector . "While traditional firewalls and endpoint security are postured to defend against external threats, malicious threats to containers often grow and expand laterally via internal traffic, where more traditional tools have zero visibility."


Security, like other considerations with containers and Kubernetes, is also a very different animal when you're ready to move into production.

In part two of this series, we clear up some of the misconceptions about running Kubernetes in a production environment versus experimenting with it in a test or dev environment. The differences can be significant.

[Nov 15, 2018] Behind the scenes with Linux containers by Seth Kenlon

Nov 12, 2018 | opensource.com

Become a better container troubleshooter by using LXC to understand how they work.

Can you have Linux containers without Docker ? Without OpenShift ? Without Kubernetes ?

Yes, you can. Years before Docker made containers a household term (if you live in a data center, that is), the LXC project developed the concept of running a kind of virtual operating system, sharing the same kernel, but contained within defined groups of processes.

Docker built on LXC, and today there are plenty of platforms that leverage the work of LXC both directly and indirectly. Most of these platforms make creating and maintaining containers sublimely simple, and for large deployments, it makes sense to use such specialized services. However, not everyone is managing a large deployment or has access to big services to learn about containerization. The good news is that you can create, use, and learn containers with nothing more than a PC running Linux. This article will help you understand containers by looking at LXC: how it works, why it works, and how to troubleshoot when something goes wrong.

Sidestepping the simplicity

If you're looking for a quick-start guide to LXC, refer to the excellent Linux Containers website.

Installing LXC

If it's not already installed, you can install LXC with your package manager.

On Fedora or similar, enter:

$ sudo dnf install lxc lxc-templates lxc-doc

On Debian, Ubuntu, and similar, enter:

$ sudo apt install lxc
Creating a network bridge

Most containers assume a network will be available, and most container tools expect the user to be able to create virtual network devices. The most basic unit required for containers is the network bridge, which is more or less the software equivalent of a network switch. A network switch is a little like a smart Y-adapter used to split a headphone jack so two people can hear the same thing with separate headsets, except instead of an audio signal, a network switch bridges network data.

You can create your own software network bridge so your host computer and your container OS can both send and receive different network data over a single network device (either your Ethernet port or your wireless card). This is an important concept that often gets lost once you graduate from manually generating containers, because no matter the size of your deployment, it's highly unlikely you have a dedicated physical network card for each container you run. It's vital to understand that containers talk to virtual network devices, so you know where to start troubleshooting if a container loses its network connection.

To create a network bridge on your machine, you must have the appropriate permissions. For this article, use the sudo command to operate with root privileges. (However, LXC docs provide a configuration to grant users permission to do this without using sudo .)

$ sudo ip link add br0 type bridge

Verify that the imaginary network interface has been created:

$ sudo ip addr show br0
7: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc
noop state DOWN group default qlen 1000
link/ether 26:fa:21:5f:cf:99 brd ff:ff:ff:ff:ff:ff

Since br0 is seen as a network interface, it requires its own IP address. Choose a valid local IP address that doesn't conflict with any existing IP address on your network and assign it to the br0 device:

$ sudo ip addr add 192.168.168.168 dev br0

And finally, ensure that br0 is up and running:

$ sudo ip link set br0 up
Setting the container config

The config file for an LXC container can be as complex as it needs to be to define a container's place in your network and the host system, but for this example the config is simple. Create a file in your favorite text editor and define a name for the container and the network's required settings:

lxc.utsname = opensourcedotcom
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 4a:49:43:49:79:bd
lxc.network.ipv4 = 192.168.168.1/24
lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596

Save this file in your home directory as mycontainer.conf .

The lxc.utsname is arbitrary. You can call your container whatever you like; it's the name you'll use when starting and stopping it.

The network type is set to veth , which is a kind of virtual Ethernet patch cable. The idea is that the veth connection goes from the container to the bridge device, which is defined by the lxc.network.link property, set to br0 . The IP address for the container is in the same network as the bridge device but unique to avoid collisions.

With the exception of the veth network type and the up network flag, you invent all the values in the config file. The list of properties is available from man lxc.container.conf . (If it's missing on your system, check your package manager for separate LXC documentation packages.) There are several example config files in /usr/share/doc/lxc/examples , which you should review later.
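For example (exact paths can vary between distributions):

$ man lxc.container.conf              # full list of configuration properties
$ ls /usr/share/doc/lxc/examples      # sample config files shipped with LXC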

Launching a container shell

At this point, you're two-thirds of the way to an operable container: you have the network infrastructure, and you've installed the imaginary network cards in an imaginary PC. All you need now is to install an operating system.

However, even at this stage, you can see LXC at work by launching a shell within a container space.

$ sudo lxc-execute --name basic \
--rcfile ~/mycontainer.conf /bin/bash \
--logfile mycontainer.log
#

In this very bare container, look at your network configuration. It should look familiar, yet unique, to you.

# /usr/sbin/ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state [...]
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
[...]
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> [...] qlen 1000
link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2003:db8:1:0:214:1234:fe0b:3596/64 scope global
valid_lft forever preferred_lft forever
[...]

Your container is aware of its fake network infrastructure and of a familiar-yet-unique kernel.

# uname -av
Linux opensourcedotcom 4.18.13-100.fc27.x86_64 #1 SMP Wed Oct 10 18:34:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Use the exit command to leave the container:

# exit
Installing the container operating system

Building out a fully containerized environment is a lot more complex than the networking and config steps, so you can borrow a container template from LXC. If you don't have any templates, look for a separate LXC template package in your software repository.

The default LXC templates are available in /usr/share/lxc/templates .

$ ls -m /usr/share/lxc/templates/
lxc-alpine, lxc-altlinux, lxc-archlinux, lxc-busybox, lxc-centos, lxc-cirros, lxc-debian, lxc-download, lxc-fedora, lxc-gentoo, lxc-openmandriva, lxc-opensuse, lxc-oracle, lxc-plamo, lxc-slackware, lxc-sparclinux, lxc-sshd, lxc-ubuntu, lxc-ubuntu-cloud

Pick your favorite, then create the container. This example uses Slackware.

$ sudo lxc-create --name slackware --template slackware

Watching a template being executed is almost as educational as building one from scratch; it's very verbose, and you can see that lxc-create sets the "root" of the container to /var/lib/lxc/slackware/rootfs and several packages are being downloaded and installed to that directory.

Reading through the template files gives you an even better idea of what's involved: LXC sets up a minimal device tree, common spool files, a file systems table (fstab), init files, and so on. It also prevents some services that make no sense in a container (like udev for hardware detection) from starting. Since the templates cover a wide spectrum of typical Linux configurations, if you intend to design your own, it's wise to base your work on a template closest to what you want to set up; otherwise, you're sure to make errors of omission (if nothing else) that the LXC project has already stumbled over and accounted for.
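The templates themselves are ordinary shell scripts, so you can read them directly; for instance (the exact file name depends on which template interests you):

$ less /usr/share/lxc/templates/lxc-slackware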

Once you've installed the minimal operating system environment, you can start your container.

$ sudo lxc-start --name slackware \
--rcfile ~/mycontainer.conf

You have started the container, but you have not attached to it. (Unlike the previous basic example, you're not just running a shell this time, but a containerized operating system.) Attach to it by name.

$ sudo lxc-attach --name slackware
#

Check that the IP address of your environment matches the one in your config file.

# /usr/sbin/ip addr show | grep eth
34: eth0@if35: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 [...] 1000
    link/ether 4a:49:43:49:79:bd brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.168.167/24 brd 192.168.168.255 scope global eth0

Exit the container, and shut it down.

# exit
$ sudo lxc-stop --name slackware

Running real-world containers with LXC

In real life, LXC makes it easy to create and run safe and secure containers. Containers have come a long way since the introduction of LXC in 2008, so use its developers' expertise to your advantage.

While the LXC instructions on linuxcontainers.org make the process simple, this tour of the manual side of things should help you understand what's going on behind the scenes.

[Sep 05, 2018] A sysadmin's guide to containers - Opensource.com

Notable quotes:
"... Linux container internals. Illustration by Scott McCarty. CC BY-SA 4.0 ..."
Sep 05, 2018 | opensource.com

A sysadmin's guide to containers: what you need to know to understand how containers work.

27 Aug 2018, by Daniel J Walsh (Red Hat)

The term "containers" is heavily overused. Also, depending on the context, it can mean different things to different people.

Traditional Linux containers are really just ordinary processes on a Linux system. These groups of processes are isolated from other groups of processes using resource constraints (control groups [cgroups]), Linux security constraints (Unix permissions, capabilities, SELinux, AppArmor, seccomp, etc.), and namespaces (PID, network, mount, etc.).


If you boot a modern Linux system and take a look at any process with cat /proc/PID/cgroup, you see that the process is in a cgroup. If you look at /proc/PID/status, you see capabilities. If you look at /proc/self/attr/current, you see SELinux labels. If you look at /proc/PID/ns, you see the list of namespaces the process is in. So, if you define a container as a process with resource constraints, Linux security constraints, and namespaces, by definition every process on a Linux system is in a container. This is why we often say Linux is containers, containers are Linux. Container runtimes are tools that modify these resource constraints, security, and namespaces and launch the container.
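You can verify this on any Linux box by inspecting your own shell process; a quick sketch (output varies by distribution, and /proc/self/attr/current is only meaningful on SELinux-enabled systems):

$ cat /proc/self/cgroup          # the cgroups the current process belongs to
$ grep Cap /proc/self/status     # the capability sets of the current process
$ cat /proc/self/attr/current    # the SELinux label, on SELinux systems
$ ls /proc/self/ns               # the namespaces the current process is in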

Docker introduced the concept of a container image, which is a standard TAR file that combines two things: the container's root filesystem (rootfs) and a JSON file describing the container's configuration.

Docker "tar's up" the rootfs and the JSON file to create the base image. This enables you to install additional content on the rootfs, create a new JSON file, and tar the difference between the original image and the new image with the updated JSON file. This creates a layered image.

The definition of a container image was eventually standardized by the Open Container Initiative (OCI) standards body as the OCI Image Specification .

Tools used to create container images are called container image builders . Sometimes container engines perform this task, but several standalone tools are available that can build container images.

Docker took these container images ( tarballs ) and moved them to a web service from which they could be pulled, developed a protocol to pull them, and called the web service a container registry .
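To see the registry side of this without pulling anything, a tool such as skopeo (if installed) can query an image's manifest and configuration straight from a registry:

$ skopeo inspect docker://docker.io/library/nginx:latest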

Container engines are programs that can pull container images from container registries and reassemble them onto container storage . Container engines also launch container runtimes (see below).


Linux container internals. Illustration by Scott McCarty. CC BY-SA 4.0

Container storage is usually a copy-on-write (COW) layered filesystem. When you pull down a container image from a container registry, you first need to untar the rootfs and place it on disk. If you have multiple layers that make up your image, each layer is downloaded and stored on a different layer on the COW filesystem. The COW filesystem allows each layer to be stored separately, which maximizes sharing for layered images. Container engines often support multiple types of container storage, including overlay , devicemapper , btrfs , aufs , and zfs .
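The same copy-on-write behavior can be demonstrated by hand with an overlay mount; a minimal sketch using temporary directories (run as root):

# mkdir -p /tmp/cow/{lower,upper,work,merged}
# echo "from the lower layer" > /tmp/cow/lower/file.txt
# mount -t overlay overlay -o lowerdir=/tmp/cow/lower,upperdir=/tmp/cow/upper,workdir=/tmp/cow/work /tmp/cow/merged
# echo "modified copy" > /tmp/cow/merged/file.txt    # the change lands in upper/, while lower/ stays untouched
# umount /tmp/cow/merged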


After the container engine downloads the container image to container storage, it needs to create a container runtime configuration. The runtime configuration combines input from the caller/user along with the content of the container image specification. For example, the caller might want to specify modifications to a running container's security, add additional environment variables, or mount volumes to the container.

The layout of the container runtime configuration and the exploded rootfs have also been standardized by the OCI standards body as the OCI Runtime Specification .

Finally, the container engine launches a container runtime that reads the container runtime specification; modifies the Linux cgroups, Linux security constraints, and namespaces; and launches the container command to create the container's PID 1 . At this point, the container engine can relay stdin / stdout back to the caller and control the container (e.g., stop, start, attach).
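You can do the container engine's job by hand with the reference OCI runtime, runc; a rough sketch, assuming runc and Docker are both installed (Docker is only used here as a convenient way to obtain a rootfs):

$ mkdir -p mycontainer/rootfs
$ docker export $(docker create nginx) | tar -C mycontainer/rootfs -xf -   # borrow a rootfs from an existing image
$ cd mycontainer && runc spec          # writes config.json, an OCI runtime configuration
$ sudo runc run demo                   # runc sets up cgroups/namespaces and launches the container's PID 1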

Note that many new container runtimes are being introduced to use different parts of Linux to isolate containers. People can now run containers using KVM separation (think mini virtual machines) or they can use other hypervisor strategies (like intercepting all system calls from processes in containers). Since we have a standard runtime specification, these tools can all be launched by the same container engines. Even Windows can use the OCI Runtime Specification for launching Windows containers.

At a much higher level are container orchestrators. Container orchestrators are tools used to coordinate the execution of containers on multiple different nodes. Container orchestrators talk to container engines to manage containers. Orchestrators tell the container engines to start containers and wire their networks together. Orchestrators can monitor the containers and launch additional containers as the load increases.

About the author: Daniel J Walsh has worked in the computer security field for almost 30 years. Dan joined Red Hat in August 2001 and has led the RHEL Docker enablement team since August 2013, but has been working on container technology for several years. He has led the SELinux project, concentrating on the application space and policy development. Dan helped develop sVirt (Secure Virtualization). He also created the SELinux Sandbox, the Xguest user, and the Secure Kiosk. Previously, Dan worked at Netect/Bindview...

[Jul 16, 2017] How to install and setup LXC (Linux Container) on Fedora Linux 26 – nixCraft

Jul 16, 2017 | www.cyberciti.biz
How to install and setup LXC (Linux Container) on Fedora Linux 26. Posted on July 13, 2017 in Categories Fedora Linux, Linux, Linux Containers (LXC); last updated July 13, 2017.

How do I install, create, and manage LXC (Linux Containers – an operating system-level virtualization technology) on a Fedora Linux version 26 server?

LXC is an acronym for Linux Containers. It is nothing but an operating system-level virtualization technology for running multiple isolated Linux distros (systems containers) on a single Linux host. This tutorial shows you how to install and manage LXC containers on Fedora Linux server.

Our sample setup

LXC is often described as a lightweight virtualization technology. You can think of LXC as a chroot jail on steroids. There is no guest operating system involved; you can only run Linux distros with LXC. You cannot run MS-Windows, *BSD, or any other operating system with LXC, but you can run CentOS, Fedora, Ubuntu, Debian, Gentoo, or any other Linux distro. Traditional virtualization such as KVM/Xen/VMware and paravirtualization needs a full operating system image for each instance and can run any operating system.

Installation

Type the following dnf command to install lxc and related packages on Fedora 26:
$ sudo dnf install lxc lxc-templates lxc-extra debootstrap libvirt perl gpg
Sample outputs:

Fig.01: LXC Installation on Fedora 26

Start and enable needed services

First start virtualization daemon named libvirtd and lxc using the systemctl command:
$ sudo systemctl start libvirtd.service
$ sudo systemctl start lxc.service
$ sudo systemctl enable lxc.service

Sample outputs:

Created symlink /etc/systemd/system/multi-user.target.wants/lxc.service → /usr/lib/systemd/system/lxc.service.

Verify that services are running:
$ sudo systemctl status libvirtd.service
Sample outputs:

● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2017-07-13 07:25:30 UTC; 40s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 3688 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─3688 /usr/sbin/libvirtd
           ├─3760 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           └─3761 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC loop-detect inotify
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, sockets bound exclusively to interface virbr0
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: reading /etc/resolv.conf
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.11.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.13.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: using nameserver 139.162.14.5#53
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /etc/hosts - 3 addresses
Jul 13 07:25:31 nixcraft-f26 dnsmasq[3760]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: read /var/lib/libvirt/dnsmasq/default.hostsfile

And:
$ sudo systemctl status lxc.service
Sample outputs:

● lxc.service - LXC Container Initialization and Autoboot Code
   Loaded: loaded (/usr/lib/systemd/system/lxc.service; enabled; vendor preset: disabled)
   Active: active (exited) since Thu 2017-07-13 07:25:34 UTC; 1min 3s ago
     Docs: man:lxc-autostart
           man:lxc
 Main PID: 3830 (code=exited, status=0/SUCCESS)
      CPU: 9ms

Jul 13 07:25:34 nixcraft-f26 systemd[1]: Starting LXC Container Initialization and Autoboot Code...
Jul 13 07:25:34 nixcraft-f26 systemd[1]: Started LXC Container Initialization and Autoboot Code.

LXC networking

To view configured networking interface for lxc, run:
$ sudo brctl show
Sample outputs:

bridge name	bridge id		STP enabled	interfaces
virbr0		8000.525400293323	yes		virbr0-nic

You must set default bridge to virbr0 in the file /etc/lxc/default.conf:
$ sudo vi /etc/lxc/default.conf
Sample config (replace lxcbr0 with virbr0 for lxc.network.link):

lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx

Save and close the file. To see DHCP range used by containers, enter:
$ sudo systemctl status libvirtd.service | grep range
Sample outputs:

Jul 13 07:25:31 nixcraft-f26 dnsmasq-dhcp[3760]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h

To check the current kernel for lxc support, enter:
$ lxc-checkconfig
Sample outputs:

Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.11.9-300.fc26.x86_64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
Bridges: enabled
Advanced netfilter: enabled
CONFIG_NF_NAT_IPV4: enabled
CONFIG_NF_NAT_IPV6: enabled
CONFIG_IP_NF_TARGET_MASQUERADE: enabled
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled
FUSE (for use with lxcfs): enabled

--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
How can I create an Ubuntu Linux container?

Type the following command to create Ubuntu 16.04 LTS container:
$ sudo lxc-create -t download -n ubuntu-c1 -- -d ubuntu -r xenial -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=xenial, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

To set the password for the default ubuntu user, run:
$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd ubuntu

Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully

Make sure the root account is locked out:
$ sudo chroot /var/lib/lxc/ubuntu-c1/rootfs/ passwd -l root
To start container run:
$ sudo lxc-start -n ubuntu-c1
To log in to the container named ubuntu-c1, use the ubuntu user and the password set earlier:
$ lxc-console -n ubuntu-c1
Sample outputs:

Fig.02:  Launch a console for the specified container
Fig.02: Launch a console for the specified container

You can now install packages and configure your server. For example, to enable sshd, run the apt-get command (or the apt command):
ubuntu@ubuntu-c1:~$ sudo apt-get install openssh-server
To exit from lxc-console, type Ctrl+a q to end the console session and return to the host.

How do I create a Debian Linux container?

Type the following command to create Debian 9 ("stretch") container:
$ sudo lxc-create -t download -n debian-c1 -- -d debian -r stretch -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a Debian container (release=stretch, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

To set the root account password, run:
$ sudo chroot /var/lib/lxc/debian-c1/rootfs/ passwd
To start the container and log in to it for management purposes, run:
$ sudo lxc-start -n debian-c1
$ lxc-console -n debian-c1

How do I create a CentOS Linux container?

Type the following command to create CentOS 7 container:
$ sudo lxc-create -t download -n centos-c1 -- -d centos -r 7 -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a CentOS container (release=7, arch=amd64, variant=default)

To enable sshd, run: yum install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Set the root account password and start the container:
$ sudo chroot /var/lib/lxc/centos-c1/rootfs/ passwd
$ sudo lxc-start -n centos-c1
$ lxc-console -n centos-c1

How do I create a Fedora Linux container?

Type the following command to create Fedora 25 container:
$ sudo lxc-create -t download -n fedora-c1 -- -d fedora -r 25 -a amd64
Sample outputs:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created a Fedora container (release=25, arch=amd64, variant=default)

To enable sshd, run: dnf install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.

Set the root account password and start the container:
$ sudo chroot /var/lib/lxc/fedora-c1/rootfs/ passwd
$ sudo lxc-start -n fedora-c1
$ lxc-console -n fedora-c1

How do I create a CentOS 6 Linux container and store it in btrfs ?

You need to create or format a hard disk as Btrfs and use it:
# mkfs.btrfs /dev/sdb
# mount /dev/sdb /mnt/btrfs/

If you do not have /dev/sdb, create an image using the dd or fallocate command as follows:
# fallocate -l 10G /nixcraft-btrfs.img
# losetup /dev/loop0 /nixcraft-btrfs.img
# mkfs.btrfs /dev/loop0
# mount /dev/loop0 /mnt/btrfs/
# btrfs filesystem show

Sample outputs:

Label: none  uuid: 4deee098-94ca-472a-a0b5-0cd36a205c35
	Total devices 1 FS bytes used 361.53MiB
	devid    1 size 10.00GiB used 3.02GiB path /dev/loop0

Now create a CentOS 6 LXC:
# lxc-create -B btrfs -P /mnt/btrfs/ -t download -n centos6-c1 -- -d centos -r 6 -a amd64
# chroot /mnt/btrfs/centos6-c1/rootfs/ passwd
# lxc-start -P /mnt/btrfs/ -n centos6-c1
# lxc-console -P /mnt/btrfs -n centos6-c1
# lxc-ls -P /mnt/btrfs/ -f

Sample outputs:

NAME       STATE   AUTOSTART GROUPS IPV4            IPV6 
centos6-c1 RUNNING 0         -      192.168.122.145 -    
How do I see a list of all available images?

Type the following command:
$ lxc-create -t download -n NULL -- --list
Sample outputs:

Setting up the GPG keyring
Downloading the image index

---
DIST	RELEASE	ARCH	VARIANT	BUILD
---
alpine	3.1	amd64	default	20170319_17:50
alpine	3.1	armhf	default	20161230_08:09
alpine	3.1	i386	default	20170319_17:50
alpine	3.2	amd64	default	20170504_18:43
alpine	3.2	armhf	default	20161230_08:09
alpine	3.2	i386	default	20170504_17:50
alpine	3.3	amd64	default	20170712_17:50
alpine	3.3	armhf	default	20170103_17:50
alpine	3.3	i386	default	20170712_17:50
alpine	3.4	amd64	default	20170712_17:50
alpine	3.4	armhf	default	20170111_20:27
alpine	3.4	i386	default	20170712_17:50
alpine	3.5	amd64	default	20170712_17:50
alpine	3.5	i386	default	20170712_17:50
alpine	3.6	amd64	default	20170712_17:50
alpine	3.6	i386	default	20170712_17:50
alpine	edge	amd64	default	20170712_17:50
alpine	edge	armhf	default	20170111_20:27
alpine	edge	i386	default	20170712_17:50
archlinux	current	amd64	default	20170529_01:27
archlinux	current	i386	default	20170529_01:27
centos	6	amd64	default	20170713_02:16
centos	6	i386	default	20170713_02:16
centos	7	amd64	default	20170713_02:16
debian	jessie	amd64	default	20170712_22:42
debian	jessie	arm64	default	20170712_22:42
debian	jessie	armel	default	20170711_22:42
debian	jessie	armhf	default	20170712_22:42
debian	jessie	i386	default	20170712_22:42
debian	jessie	powerpc	default	20170712_22:42
debian	jessie	ppc64el	default	20170712_22:42
debian	jessie	s390x	default	20170712_22:42
debian	sid	amd64	default	20170712_22:42
debian	sid	arm64	default	20170712_22:42
debian	sid	armel	default	20170712_22:42
debian	sid	armhf	default	20170711_22:42
debian	sid	i386	default	20170712_22:42
debian	sid	powerpc	default	20170712_22:42
debian	sid	ppc64el	default	20170712_22:42
debian	sid	s390x	default	20170712_22:42
debian	stretch	amd64	default	20170712_22:42
debian	stretch	arm64	default	20170712_22:42
debian	stretch	armel	default	20170711_22:42
debian	stretch	armhf	default	20170712_22:42
debian	stretch	i386	default	20170712_22:42
debian	stretch	powerpc	default	20161104_22:42
debian	stretch	ppc64el	default	20170712_22:42
debian	stretch	s390x	default	20170712_22:42
debian	wheezy	amd64	default	20170712_22:42
debian	wheezy	armel	default	20170712_22:42
debian	wheezy	armhf	default	20170712_22:42
debian	wheezy	i386	default	20170712_22:42
debian	wheezy	powerpc	default	20170712_22:42
debian	wheezy	s390x	default	20170712_22:42
fedora	22	amd64	default	20170216_01:27
fedora	22	i386	default	20170216_02:15
fedora	23	amd64	default	20170215_03:33
fedora	23	i386	default	20170215_01:27
fedora	24	amd64	default	20170713_01:27
fedora	24	i386	default	20170713_01:27
fedora	25	amd64	default	20170713_01:27
fedora	25	i386	default	20170713_01:27
gentoo	current	amd64	default	20170712_14:12
gentoo	current	i386	default	20170712_14:12
opensuse	13.2	amd64	default	20170320_00:53
opensuse	42.2	amd64	default	20170713_00:53
oracle	6	amd64	default	20170712_11:40
oracle	6	i386	default	20170712_11:40
oracle	7	amd64	default	20170712_11:40
plamo	5.x	amd64	default	20170712_21:36
plamo	5.x	i386	default	20170712_21:36
plamo	6.x	amd64	default	20170712_21:36
plamo	6.x	i386	default	20170712_21:36
ubuntu	artful	amd64	default	20170713_03:49
ubuntu	artful	arm64	default	20170713_03:49
ubuntu	artful	armhf	default	20170713_03:49
ubuntu	artful	i386	default	20170713_03:49
ubuntu	artful	ppc64el	default	20170713_03:49
ubuntu	artful	s390x	default	20170713_03:49
ubuntu	precise	amd64	default	20170713_03:49
ubuntu	precise	armel	default	20170713_03:49
ubuntu	precise	armhf	default	20170713_03:49
ubuntu	precise	i386	default	20170713_03:49
ubuntu	precise	powerpc	default	20170713_03:49
ubuntu	trusty	amd64	default	20170713_03:49
ubuntu	trusty	arm64	default	20170713_03:49
ubuntu	trusty	armhf	default	20170713_03:49
ubuntu	trusty	i386	default	20170713_03:49
ubuntu	trusty	powerpc	default	20170713_03:49
ubuntu	trusty	ppc64el	default	20170713_03:49
ubuntu	xenial	amd64	default	20170713_03:49
ubuntu	xenial	arm64	default	20170713_03:49
ubuntu	xenial	armhf	default	20170713_03:49
ubuntu	xenial	i386	default	20170713_03:49
ubuntu	xenial	powerpc	default	20170713_03:49
ubuntu	xenial	ppc64el	default	20170713_03:49
ubuntu	xenial	s390x	default	20170713_03:49
ubuntu	yakkety	amd64	default	20170713_03:49
ubuntu	yakkety	arm64	default	20170713_03:49
ubuntu	yakkety	armhf	default	20170713_03:49
ubuntu	yakkety	i386	default	20170713_03:49
ubuntu	yakkety	powerpc	default	20170713_03:49
ubuntu	yakkety	ppc64el	default	20170713_03:49
ubuntu	yakkety	s390x	default	20170713_03:49
ubuntu	zesty	amd64	default	20170713_03:49
ubuntu	zesty	arm64	default	20170713_03:49
ubuntu	zesty	armhf	default	20170713_03:49
ubuntu	zesty	i386	default	20170713_03:49
ubuntu	zesty	powerpc	default	20170317_03:49
ubuntu	zesty	ppc64el	default	20170713_03:49
ubuntu	zesty	s390x	default	20170713_03:49
---
How do I list the containers existing on the system?

Type the following command:
$ lxc-ls -f
Sample outputs:

NAME      STATE   AUTOSTART GROUPS IPV4            IPV6 
centos-c1 RUNNING 0         -      192.168.122.174 -    
debian-c1 RUNNING 0         -      192.168.122.241 -    
fedora-c1 RUNNING 0         -      192.168.122.176 -    
ubuntu-c1 RUNNING 0         -      192.168.122.56  - 
How do I query information about a container?

The syntax is:
$ lxc-info -n {container}
$ lxc-info -n centos-c1

Sample outputs:

Name:           centos-c1
State:          RUNNING
PID:            5749
IP:             192.168.122.174
CPU use:        0.87 seconds
BlkIO use:      6.51 MiB
Memory use:     31.66 MiB
KMem use:       3.01 MiB
Link:           vethQIP1US
 TX bytes:      2.04 KiB
 RX bytes:      8.77 KiB
 Total bytes:   10.81 KiB
How do I stop/start/restart a container?

The syntax is:
$ sudo lxc-start -n {container}
$ sudo lxc-start -n fedora-c1
$ sudo lxc-stop -n {container}
$ sudo lxc-stop -n fedora-c1

How do I monitor container statistics?

To display containers, updating every second, sorted by memory use:
$ lxc-top --delay 1 --sort m
To display containers, updating every second, sorted by cpu use:
$ lxc-top --delay 1 --sort c
To display containers, updating every second, sorted by block I/O use:
$ lxc-top --delay 1 --sort b
Sample outputs:

Fig.03: Shows  container  statistics with lxc-top
Fig.03: Shows container statistics with lxc-top

How do I destroy/delete a container?

The syntax is:
$ sudo lxc-destroy -n {container}
$ sudo lxc-stop -n fedora-c2
$ sudo lxc-destroy -n fedora-c2

If a container is running, stop it first and destroy it:
$ sudo lxc-destroy -f -n fedora-c2

How do I create, list, and restore container snapshots?

The syntax is as follows for each snapshot operation. Please note that you must use a snapshot-aware filesystem such as Btrfs, ZFS, or LVM.

Create snapshot for a container

$ sudo lxc-snapshot -n {container} -c "comment for snapshot"
$ sudo lxc-snapshot -n centos-c1 -c "13/July/17 before applying patches"

List snapshot for a container

$ sudo lxc-snapshot -n centos-c1 -L -C

Restore snapshot for a container

$ sudo lxc-snapshot -n centos-c1 -r snap0

Destroy/Delete snapshot for a container

$ sudo lxc-snapshot -n centos-c1 -d snap0

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin and a trainer for the Linux operating system/Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on Twitter , Facebook , Google+ .




Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site.

Disclaimer:

The statements, views, and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Last modified: December, 22, 2018