This guide is intended to help partners get started using and developing for the OpenShift Container Platform.
Quick Start
Terminology - If you’re new to OpenShift, start here to get acquainted with the common terms.
Installation - If you don’t have access to a running installation or want to set up a local environment, check out the installation guide for more information on installing the Red Hat Container Development Kit (CDK).
Deployment - If you already have a container deployed to a public registry (such as Docker Hub) or a Dockerfile, view the container deployment guide. This guide also covers having OpenShift perform the image build steps for you against your source code repository.
Basic Usage - Once you’ve got an application deployed, there are some basic commands that can be run to understand all of its pieces and how to interact with it.
Integration - If you’re just beginning to port an application to be run on containers, or beginning new development, the integration guide can provide more information on how to use OpenShift in an active development environment.
Installation
While there are a number of ways to install the upstream OpenShift Origin releases, it is recommended that partners test their integrations against an enterprise installation of OpenShift Container Platform. There are two suggested approaches based on the desired deployment.
In either case, developers can access the Red Hat bits via the no-cost Red Hat Enterprise Linux Developer Suite subscription, accessed through http://developers.redhat.com/.
Installation in a VM - CDK
The Container Development Kit (CDK) is a pre-built environment for running and developing containers using Red Hat OpenShift. It runs as a virtual machine and supports a number of different virtualization providers and host operating systems.
There are two pieces that must be downloaded, with both being found at http://developers.redhat.com/products/cdk/download/.
- The Red Hat Container Tools are used in conjunction with Vagrant to start the pre-built VM images. These tools include the necessary vagrant files and plugins for starting a VM running either Red Hat OpenShift or upstream Kubernetes on its own, as well as registering the VM with RHN.
- The virtual machine image appropriate for your preferred hypervisor (the currently supported hypervisors include VirtualBox, libvirt, and Hyper-V).
Details on the CDK installation process can be found on the Red Hat Customer Portal. Additional information on interacting with the CDK VMs can be found at the Vagrant Documentation.
Installation in Docker - OpenShift Client
If a VM is not feasible or desired, OpenShift can be run directly inside of Docker on the host machine. There are two steps required:
- Download the OpenShift client from the Red Hat Customer Portal product page. Builds are provided for Linux, macOS, and Windows. The client is a single executable named oc and can be used both for setting up the cluster and for all further command line interaction with the running server.
- Run the cluster creation command, specifying the appropriate Red Hat hosted image:
$ oc cluster up --image=registry.access.redhat.com/openshift3/ose
Once the oc command finishes, information will be provided on how to access the server. For example:
-- Server Information ...
OpenShift server started.
The server is accessible via web console at:
https://159.203.119.95:8443
You are logged in as:
User: developer
Password: developer
To login as administrator:
oc login -u system:admin
As expected, running docker ps shows a number of deployed containers, all of which service the running installation:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
06576c15b6a3 registry.access.redhat.com/openshift3/ose-docker-registry:v3.4.0.40 "/bin/sh -c 'DOCKER_R" 9 minutes ago Up 9 minutes k8s_registry.a8db0f16_docker-registry-1-rnk4b_default_5644c474-e7e4-11e6-a0c8-362219689e3e_ea45467b
7a4093685a82 registry.access.redhat.com/openshift3/ose-haproxy-router:v3.4.0.40 "/usr/bin/openshift-r" 9 minutes ago Up 9 minutes k8s_router.a21b2f8_router-1-8tg8h_default_58b1ede7-e7e4-11e6-a0c8-362219689e3e_da907d85
9b13ed6c7d2d registry.access.redhat.com/openshift3/ose-pod:v3.4.0.40 "/pod" 9 minutes ago Up 9 minutes k8s_POD.8f3ae681_router-1-8tg8h_default_58b1ede7-e7e4-11e6-a0c8-362219689e3e_279eb0a6
7850f7da7bd3 registry.access.redhat.com/openshift3/ose-pod:v3.4.0.40 "/pod" 9 minutes ago Up 9 minutes k8s_POD.b6fc0873_docker-registry-1-rnk4b_default_5644c474-e7e4-11e6-a0c8-362219689e3e_034c5b1d
5d6c6d7ed3b0 registry.access.redhat.com/openshift3/ose:v3.4.0.40 "/usr/bin/openshift s" 10 minutes ago Up 10 minutes origin
Building or Deploying an Existing Container
OpenShift can deploy existing code in a number of ways. Pre-built containers stored in a Docker registry, such as Docker Hub, can be downloaded and deployed directly to OpenShift. OpenShift can also build images from source code in a git repository, regardless of whether or not a Dockerfile is present.
Deploying from Docker Hub
The simplest way to get an existing image into OpenShift is to retrieve the image from Docker Hub. OpenShift will automatically create a new image stream and map it to the image in Docker Hub.
Example
Create a new application in the current project, specifying the name of the image in Docker Hub:
$ oc new-app jdob/python-web
That’s it. OpenShift will take care of retrieving the image and setting up all of the necessary resources to deploy pods for the application.
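To confirm the deployment, the newly created pods can be listed; the pod name suffix and timings below are illustrative and will differ in your environment:
$ oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
python-web-1-xxxxx   1/1       Running   0          1m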
Building from a Dockerfile in OpenShift
When creating a new application, OpenShift can be passed the location of a git repository to automatically trigger a new image build. The type of build performed will depend on what is found in the repository.
If a Dockerfile is present, OpenShift will perform the following steps:
- A build configuration is created to correspond to building an image from the repository’s Dockerfile. The location of the repository, as well as the settings identifying the build as a Docker build (Strategy: Docker), will be present.
- An image stream is created to reference images built by the configuration, which is later used as a trigger for new deployments. In practice, when a new build is performed, the application will be redeployed with the new image.
- The first build of the created configuration is started.
- The remainder of the necessary components, as described in Anatomy of a Project, are created, including the replication controller and service.
Example
The standard command for creating an application is used, but instead of referencing a specific image to deploy, the URL to the git repository is provided:
$ oc new-app https://github.com/jdob-openshift/python-web
Of particular interest in this example is the created build configuration. The list of build configurations can be found using the commands in the query commands section and specifying the buildconfig (or bc for short) type:
$ oc get buildconfig
NAME TYPE FROM LATEST
python-web Docker Git 1
Specific details about the build configuration are retrieved using the describe command and the name of the configuration itself:
$ oc describe bc python-web
Name: python-web
Namespace: dockerfile-build
Created: 49 minutes ago
Labels: app=python-web
Annotations: openshift.io/generated-by=OpenShiftNewApp
Latest Version: 1
Strategy: Docker (1)
URL: https://github.com/jdob-openshift/python-web (2)
From Image: ImageStreamTag openshift/python:latest (3)
Output to: ImageStreamTag python-web:latest
Build Run Policy: Serial
Triggered by: Config, ImageChange
Webhook GitHub:
URL: https://localhost:8443/oapi/v1/namespaces/dockerfile-build/buildconfigs/python-web/webhooks/iqQhagYGyA4OrZ3jXZpa/github
Webhook Generic:
URL: https://localhost:8443/oapi/v1/namespaces/dockerfile-build/buildconfigs/python-web/webhooks/B1KFkXvJmi_5KycoFYkw/generic
AllowEnv: false
1 | The strategy is set to Docker, indicating that the image should be built using a Dockerfile found in the repository. For comparison, see the Source to Image section. |
2 | The URL value corresponds to the git repository where the source is found. |
3 | The From Image is the base image for the build and is derived directly from the Dockerfile. |
It is also worth noting that webhook URLs are provided. These URLs can be used to inform OpenShift of changes to the source code and trigger a new build. Depending on the deployment configuration, new images built from this trigger will automatically be deployed (this is the default behavior for the deployment configuration automatically created by this process). GitHub, for example, supports adding this URL to a repository to automatically trigger a new build when a commit is pushed.
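For example, a build can be triggered manually by sending a POST request to the generic webhook URL shown in the build configuration above (the -k flag skips TLS verification against the cluster’s self-signed certificate):
$ curl -X POST -k \
    https://localhost:8443/oapi/v1/namespaces/dockerfile-build/buildconfigs/python-web/webhooks/B1KFkXvJmi_5KycoFYkw/generic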
Source to Image
OpenShift’s Source-to-Image (S2I for short) functionality takes the ability to build images from a repository one step further by removing the need for an explicit Dockerfile.
Not all Docker images can be used as the basis for an S2I build. Builder Images, as they are known, have minimal but specific requirements regarding files that OpenShift will invoke during the build. More information on creating new builder images can be found in the section on creating builder images.
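As a rough sketch of those requirements, a builder image typically bundles S2I scripts named assemble (which installs the injected source) and run (which launches the application) and advertises their location with an image label. The paths below follow a common convention and are illustrative only:
FROM registry.access.redhat.com/rhel7
# Tell S2I where to find the assemble/run scripts inside this image.
LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"
# Copy in the scripts that install the source and start the application.
COPY s2i/assemble s2i/run /usr/libexec/s2i/
# Run builds and the resulting application as a non-root user.
USER 1001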
There are two ways to determine the base image that will be used in the build:
- Explicitly specifying the image when the application is created.
- If no base image is indicated, OpenShift will attempt to choose an appropriate base. For example, the presence of a requirements.txt file will cause OpenShift to attempt to use the latest Python builder image as the base.
Once the git repository (and optionally, an explicit base image) are specified, OpenShift takes the following steps:
- A container is started using the builder image.
- The source code is downloaded from the specified repository and injected into the container.
- The builder container performs the appropriate steps depending on the base image. Each builder image includes scripts that perform the necessary steps to install applications of the supported type. For example, this may include installing Python packages via pip or copying HTML pages to the configured hosting directory.
- The builder container, now containing the installed source code, is committed to an image. The image’s entrypoint is set based on the builder image (in most cases, this defaults to a standard value but can be overridden using environment variables).
As with Building from a Dockerfile in OpenShift, the other components of a service are automatically created, a build is triggered, and a new deployment performed.
It is important to understand how base images are configured. Since there is no user-defined Dockerfile in place, applications are restricted to using ports exposed by the builder image. Depending on the application, it may be useful to define a custom builder image with the appropriate ports exposed.
Example
The standard command for creating an application is used, but instead of referencing a specific image to deploy, the URL to the git repository is provided:
$ oc new-app python:3.4~https://github.com/jdob-openshift/python-web-source
Note the snippet preceding the git repository URL. This is used to tell OpenShift the base image to build from and is indicated by adding the image and tag before the repository URL, separated by a ~.
The build configuration for an S2I will differ slightly from one built from a Dockerfile:
$ oc describe bc python-web-source
Name: python-web-source
Namespace: source-build
Created: 6 minutes ago
Labels: app=python-web-source
Annotations: openshift.io/generated-by=OpenShiftNewApp
Latest Version: 5
Strategy: Source (1)
URL: https://github.com/jdob-openshift/python-web-source (2)
From Image: ImageStreamTag openshift/python:3.4 (3)
Output to: ImageStreamTag python-web-source:latest
Build Run Policy: Serial
Triggered by: Config, ImageChange
Webhook GitHub:
URL: https://localhost:8443/oapi/v1/namespaces/web2/buildconfigs/python-web-source/webhooks/AipYevj9pknT6SDWJNR0/github
Webhook Generic:
URL: https://localhost:8443/oapi/v1/namespaces/web2/buildconfigs/python-web-source/webhooks/R0U9P0QgPy3ncXMQphnC/generic
AllowEnv: false
1 | The strategy is set to Source, indicating that the image should be built against a builder image. |
2 | The URL value corresponds to the git repository where the source is found. |
3 | The From Image is the builder image for the build. This value can be derived automatically by OpenShift or, in this example, explicitly set when the application is created. |
As noted in Building from a Dockerfile in OpenShift, the webhook URLs can be used to have the source repository automatically trigger a new build when new commits are made.
Unlike deploying an existing image, applications created through S2I are automatically configured with a route by default.
Next Steps
Now that an application is deployed, see the Anatomy of a Project section for more information on the different resources that were created, or the Basic Usage guide for other ways to interact with the newly deployed application. Information on enhancing container images to utilize OpenShift’s features can be found in the Integrating with OpenShift guide.
Basic Usage
Resource Query Commands
The CLI provides two useful commands for listing and inspecting resources in the OpenShift installation.
Get
The get command displays a list of a particular resource type, along with some basic summary information on each entry. The desired resource type is specified as an argument to the get call and the output will vary based on the type of resource being queried. The full list of resource types can be found by calling oc get with no arguments.
For example, retrieving the list of image stream resources will display tagging and repository information:
$ oc get is
NAME DOCKER REPO TAGS UPDATED
python-web 172.30.53.244:5000/python-web/python-web latest 2 hours ago
Retrieving the list of services, however, displays information on the associated IP addresses and ports:
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
python-web 172.30.167.215 <none> 8080/TCP 2h
Describe
The describe command is used to retrieve specific details on a particular resource. In addition to the resource’s name, the type of resource must also be provided (in many cases, resources of different types will share the same name to ease understanding their relationship). If the resource name is omitted, details about all resources of that type are displayed (depending on your environment, the output may be very long and unwieldy).
Using the image stream found above, detailed information can be displayed using:
$ oc describe is python-web
Name: python-web
Namespace: python-web
Created: 2 hours ago
Labels: app=python-web
Annotations: openshift.io/generated-by=OpenShiftNewApp
openshift.io/image.dockerRepositoryCheck=2016-10-14T15:05:43Z
Docker Pull Spec: 172.30.53.244:5000/python-web/python-web
Unique Images: 1
Tags: 1
latest
tagged from jdob/python-web
* jdob/python-web@sha256:3f87be1825405ee8c7da23d7a6916090ecbb2d6e7b04fcd0fd1dc194173d2bc0
2 hours ago
As with get, the output will vary based on the type of resource being described.
Routes
When a container is created in OpenShift, it is initially assigned an IP address and an internal service name within the scope of its project. The service name allows it to be accessed by other applications running inside of the same project. This becomes a useful default for large projects that have a number of internal services but only a small number of public endpoints.
An explicit step is required to make a container publicly accessible. The application must be exposed by creating a route. When a route is exposed, the host name can be specified. In most cases, the DNS resolution for the hostname is handled outside of OpenShift. If a hostname is not provided, OpenShift will generate an xip.io address that can be used locally to the OpenShift instance.
By default, HAProxy is used to manage the routes; however, plugins for other providers are available. Routes may optionally be configured with TLS credentials for secure communications.
Routes are created through the expose command. Arguments are supported for customizing the route (the most common being --hostname when using an existing DNS server), but for development purposes, the defaults are usually sufficient:
$ oc expose service python-web
route "python-web" exposed
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION
python-web python-web-python-web.apps.10.2.2.2.xip.io python-web 8080-tcp
$ oc describe route
Name: python-web
Namespace: python-web
Created: 6 seconds ago
Labels: app=python-web
Annotations: openshift.io/host.generated=true
Requested Host: python-web-python-web.apps.10.2.2.2.xip.io exposed on router router 6 seconds ago
Path: <none>
TLS Termination: <none>
Insecure Policy: <none>
Endpoint Port: 8080-tcp
Service: python-web
Weight: 100 (100%)
Endpoints: 172.17.0.12:8080
In the above example, OpenShift generated a corresponding xip.io address that can be used to access the service. A quick test from the host running the OpenShift VM shows the service can be accessed:
$ curl python-web-python-web.apps.10.2.2.2.xip.io
Hello World
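Routes can also be secured with TLS when they are created; the following is a minimal sketch using edge termination, assuming a certificate and key are already on hand (the route name, file names, and hostname below are placeholders):
$ oc create route edge python-web-tls \
    --service=python-web \
    --cert=tls.crt --key=tls.key \
    --hostname=python-web.example.com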
More information on routes can be found in the corresponding section of the OpenShift documentation.
Persistent Storage
Along with routes, a common configuration made to pods is the addition of persistent storage volumes. As this is a lengthy topic, it can be found in the Persistent Storage section of this guide.
Remote Shell into a Container
While debugging an application, there are many times where it can be useful to open a terminal into a running container. Both the web UI and command line interface support this directly; there is no need to look up IP addresses or manually deal with SSH keys.
The rsh command is used to connect to a specific pod by its name. The pod names can be retrieved using the get command under the pods resource type:
$ oc get pods
NAME READY STATUS RESTARTS AGE
python-web-4-hwwub 1/1 Running 1 6d
$ oc rsh python-web-4-hwwub
# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin src srv sys tmp usr var
Copying Files into a Running Container
Similar to opening a remote shell, the OpenShift interfaces provide built-in support for copying files from a local system into a running container.
Keep in mind that containers are typically treated as ephemeral. Files copied to a running container in this fashion are not guaranteed to survive pod restarts or scaling operations. This functionality is primarily intended for debugging or development situations.
Apropos of its name, the rsync command will attempt to use rsync to transmit files if the service is available on the destination container. If it is not, OpenShift will fall back to sending a tarfile with the contents. Keep in mind that the normal restrictions when using tar over rsync will be present; the destination directory must exist and the entire contents will be transmitted rather than only sending changed files.
The rsync command works similarly to the traditional rsync command, accepting the source and destination directories. Instead of specifying a hostname or IP address, the pod name is used in the destination:
$ oc get pods
NAME READY STATUS RESTARTS AGE
python-web-4-hwwub 1/1 Running 1 6d
$ oc rsync . python-web-4-hwwub:/doc
WARNING: cannot use rsync: rsync not available in container
[output truncated]
$ oc rsh python-web-4-hwwub
# ls /doc
Makefile README.md _build advanced.rst basic-usage.rst
In the above example, rsync was not supported by the container, so the contents were sent as a tarfile instead.
Persistent Storage
There are two main concepts to be aware of regarding persistent storage in OpenShift:
- A persistent volume is the actual allocation of storage in OpenShift. These are created by the cluster administrator and will define behavior such as capacity, access modes, reclamation policy, and the type of storage (NFS, GlusterFS, AWS Elastic Block Store, etc.)
- A persistent volume claim is the assignment and usage of a persistent volume by a user. A claim is made within the scope of a project and can be attached to a deployment configuration (in practice, this effectively attaches it to all of the pods deployed by that configuration).
The creation of the persistent volumes is outside the scope of this guide (by default, the CDK installation provides a few volumes for testing purposes). This section will cover the creation and usage of the claims. Detailed information on configuring persistent volumes can be found in the online documentation.
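For reference, a claim can also be defined declaratively rather than through the volume command shown later in this section; a minimal sketch of such a definition (the name and size are arbitrary), which could be submitted with oc create -f, looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi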
Persistent Volumes
The first thing to keep in mind is that non-cluster admins do not have permissions to view the configured persistent volumes. This is considered an administrative task; users only need to be concerned with their requested and assigned claims:
$ oc whoami
user
$ oc get pv
User "user" cannot list all persistentvolumes in the cluster
As a reference, the same command, when run as the cluster admin, displays details on the volumes available:
$ oc whoami
admin
$ oc get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv01 10Gi RWO,RWX Available 39d
pv02 10Gi RWO,RWX Available 39d
pv03 10Gi RWO,RWX Available 39d
pv04 10Gi RWO,RWX Available 39d
pv05 10Gi RWO,RWX Available 39d
Creating a Persistent Volume Claim
For context, the following example will be run against an existing project with a previously deployed application. The actual behavior of the application is irrelevant; this example will look directly at the filesystem across multiple pods:
$ oc project
Using project "python-web" on server "https://10.2.2.2:8443".
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
python-web 172.30.29.153 <none> 8080/TCP 40m
$ oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
python-web 2 1 1 config,image(python-web:latest)
The last line above is important to note. Volumes are not allocated directly against a pod (or pods) but rather the deployment configuration responsible for their creation.
Before the volume is created, a quick sanity check run directly on the pod shows that the mount point does not already exist:
$ oc get pods
NAME READY STATUS RESTARTS AGE
python-web-3-sox29 1/1 Running 0 17m
$ oc rsh python-web-3-sox29
sh-4.2$ ls /demo
ls: cannot access /demo: No such file or directory
The volume command can be used to both request a volume and specify its attachment point into the running containers:
$ oc volume \
dc python-web \ (1)
--add \ (2)
--claim-size 512M \ (3)
--mount-path /demo \ (4)
--name demo-vol (5)
persistentvolumeclaims/pvc-axv7b
deploymentconfigs/python-web
1 | Indicates where the volume will be added. As with many other commands in the client, two pieces of information are needed: the resource type ("dc" is shorthand for "deployment configuration") and the name of the resource. Alternatively, they could be joined with a forward slash into a single term (dc/python-web). |
2 | The volume action being performed. In this case, a volume is being added to the configuration. By comparison, --remove is used to detach a volume. |
3 | The request is for a volume of at least 512MB. The actual allocated volume may be larger, as OpenShift will fulfill the claim with the best fit available. |
4 | Dictates where in the pod(s) to mount the storage volume. |
5 | Identifier used to reference the volume at a later time. |
The output shows that a new claim has been created (pvc-axv7b) and the deployment configuration python-web has been edited.
There are a few things to verify at this point. Details about a claim can be found under the pvc resource type:
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc-axv7b Bound pv05 10Gi RWO,RWX 40m
$ oc describe pvc pvc-axv7b
Name: pvc-axv7b
Namespace: python-web
Status: Bound
Volume: pv05
Labels: <none>
Capacity: 10Gi
Access Modes: RWO,RWX
No events.
Notice that, despite the claim only requesting 512M, the provided capacity is 10Gi. The output of the get pv command above shows that the installation is only configured with 10Gi volumes, which makes it the "best fit" for the claim.
The volume command can also provide details on the attached volume, including which deployment configurations have volumes and where they are mounted:
$ oc volume dc --all
deploymentconfigs/python-web
pvc/pvc-axv7b (allocated 10GiB) as demo-vol
mounted at /demo
Repeating the earlier test on the pod, there is now a /demo mount point available:
$ oc rsh python-web-3-sox29
sh-4.2$ ls /demo
sh-4.2$
Persistent Volumes Across Pods
To reinforce the concept that the volume is attached to the deployment configuration, and thus all pods spawned by it, the application can be scaled and used to show the mount points refer to the same volume:
$ oc scale dc python-web --replicas 2
deploymentconfig "python-web" scaled
$ oc get pods
NAME READY STATUS RESTARTS AGE
python-web-3-ka3y2 1/1 Running 0 1m
python-web-3-sox29 1/1 Running 0 1h
The newly created pod, ka3y2, will have the same configuration as the previous pod since they were created from the same deployment configuration. In particular, this includes the mounted volume:
$ oc rsh python-web-3-ka3y2
sh-4.2$ ls /demo
sh-4.2$
Proof that they refer to the same volume can be seen by adding a file to the volume on one of the pods and verifying its existence on the other:
$ oc rsh python-web-3-ka3y2
sh-4.2$ echo "Hello World" > /demo/test
sh-4.2$ exit
$ oc rsh python-web-3-sox29
sh-4.2$ ls /demo
test
sh-4.2$ cat /demo/test
Hello World
sh-4.2$
Detaching a Persistent Volume
In addition to demonstrating the volume is shared across pods, it is also important to emphasize the "persistent" aspect of it. The volume command is also used to detach a volume from a configuration (and thus all of its pods):
$ oc volume dc python-web --remove --name demo-vol
deploymentconfigs/python-web
Note that the --name demo-vol argument refers to the name specified during creation above.
Attempting to reconnect to the pod to verify the volume was detached shows a potentially surprising result:
$ oc rsh python-web-3-sox29
Error from server: pods "python-web-3-sox29" not found
Keep in mind that changing a pod’s volumes is no different than any other change to the configuration. The pods are not updated, but rather the old pods are scaled down while new pods, with the updated configuration, are deployed. The deployment configuration event log shows that a new deployment was created for the change (the format of the output below is heavily modified for readability, but the data was returned from the call):
$ oc describe dc python-web
Events:
FirstSeen Reason Message
--------- ------ -------
1h DeploymentCreated Created new deployment "python-web-2" for version 2
1h DeploymentCreated Created new deployment "python-web-3" for version 3
11m DeploymentScaled Scaled deployment "python-web-3" from 1 to 2
2m DeploymentCreated Created new deployment "python-web-4" for version 4
The four events listed correspond to the examples run in this section:
- Initial successful deployment (the "version 1" intentionally skipped).
- The "version 3" deployment corresponds to adding the volume.
- The scaling operation retains the deployment version (the "3" in python-web-3) and creates a new pod.
- The "version 4" deployment was made to activate the change to remove the volume.
Getting back to verifying the volume was removed, one of the new pods can be accessed to check for the presence of the mount point:
$ oc get pods
NAME READY STATUS RESTARTS AGE
python-web-4-j3u0r 1/1 Running 0 11m
python-web-4-vrq2t 1/1 Running 0 11m
$ oc rsh python-web-4-j3u0r
sh-4.2$ ls /demo
ls: cannot access /demo: No such file or directory
Reattaching a Persistent Volume
Detaching a volume from a deployment configuration does not release the volume or reclaim its space. Listing the volumes as above (again as a cluster admin) shows the volume is still in use:
$ oc get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv01 10Gi RWO,RWX Available 39d
pv02 10Gi RWO,RWX Available 39d
pv03 10Gi RWO,RWX Available 39d
pv04 10Gi RWO,RWX Available 39d
pv05 10Gi RWO,RWX Bound python-web/pvc-axv7b 39d
Additionally, as the non-cluster admin user, the claim is still present as a resource:
$ oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc-axv7b Bound pv05 10Gi RWO,RWX 1h
The volume can be added back into the deployment configuration (or a different one if so desired) using a variation of the volume command initially used:
$ oc volume \
dc python-web \
--add \
--type pvc \
--claim-name pvc-axv7b \
--mount-path /demo-2 \
--name demo-vol-2
deploymentconfigs/python-web
The difference in this call is that instead of specifying details about the claim being requested (such as its capacity), a specific claim is referenced (the name being found using the get pvc command above). For demo purposes, it has been mounted to a slightly different path and using a different volume name.
As with the previous configuration changes, new pods have been deployed. Connecting to one of these pods shows the contents of the volume were untouched:
$ oc get pods
NAME READY STATUS RESTARTS AGE
python-web-5-49tsa 1/1 Running 0 2m
python-web-5-s8yni 1/1 Running 0 2m
$ oc rsh python-web-5-49tsa
sh-4.2$ ls /demo-2
test
sh-4.2$ cat /demo-2/test
Hello World
sh-4.2$
Releasing a Persistent Volume
The example above demonstrates that removing a volume does not release the volume nor delete its contents. That requires another step. Remember that claims are resources, similar to routes or deployment configurations, and can be deleted in the same fashion (using the updated name from the reattach example):
$ oc volume dc python-web --remove --name demo-vol-2
deploymentconfigs/python-web
$ oc delete pvc pvc-axv7b
persistentvolumeclaim "pvc-axv7b" deleted
Listing the claims for the user shows none allocated:
$ oc get pvc
Listing the volumes as the cluster administrator shows that the previously bound volume is now free:
$ oc get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pv01 10Gi RWO,RWX Available 39d
pv02 10Gi RWO,RWX Available 39d
pv03 10Gi RWO,RWX Available 39d
pv04 10Gi RWO,RWX Available 39d
pv05 10Gi RWO,RWX Available 39d
Integrating with OpenShift
While it is simple to deploy an image on OpenShift, there are certain guidelines and integration techniques that should be adhered to.
Creating Images
The OpenShift documentation contains a section on image creation guidelines, so rather than repeat that information here, it’s recommended to be familiar with the practices in that document. Points of particular interest will be included in the rest of this guide.
Recommended Practices
There are some recommended practices for your images to ensure full compatibility with Red Hat Products.
- Containers should not run as root.
  - Containers running as root represent a security vulnerability, as a process that breaks out of the container will retain root privileges on the host system.
  - Containers run as root by default. More information on users can be found here.
- Containers should not request host-level privileges.
  - Containers requiring host-level privileges may not function correctly in all environments, namely those in which the application deployer does not have full control over the host system.
  - OpenShift Online and OpenShift Dedicated do not support privileged containers.
- Containers should use a Red Hat provided base image. They should also not modify content provided by Red Hat packages or layers.
  - The following Dockerfile snippet shows a way to build an image on RHEL:
FROM registry.access.redhat.com/rhel7
  - This ensures that the application’s runtime dependencies are fully supported by Red Hat.
Environment Variables
It is impractical to rebuild an image for each possible configuration, especially if the image is owned by someone else. The recommended mechanism for configuring a service running in a container is through environment variables.
Environment variables are set in the deployment configuration. They can be set when the configuration is first created or can be added/modified/removed after an application has been deployed. It is important to realize that changes to the environment variables constitute a deployment configuration change. The existing pods are not modified; rather, new pods are deployed with the changed configuration and will automatically replace the existing ones (this is easily seen on the UI overview page with the graphical depictions of scaling up and down).
This paradigm is used both for images created for deployment as well as those intended to be used as builder images with source to image. For example, the Python builder image will, by default, attempt to run a file named app.py. If an application cannot be modified to use this naming scheme, the environment variable APP_FILE can be specified to indicate a new script to run when the container is launched.
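For instance, such a variable could be supplied when the application is first created; the repository URL and script name below are hypothetical:
$ oc new-app python:3.4~https://github.com/example/my-app -e APP_FILE=run.py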
Example
Below is the output for the deployment configuration of a simple web application. The application is written such that it will output "Hello World" when it is not overridden with environment variables.
As a reminder, the list of a particular resource can be viewed using the get command, followed by the resource type in question (dc is an abbreviation for deployment configuration):
$ oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
python-web 3 1 1 config,image(python-web:latest)
The details of the configuration can be displayed with the describe command (some of the irrelevant information has been removed for brevity):
$ oc describe dc python-web
Name: python-web
Namespace: python-web
Created: 10 days ago
Labels: app=python-web
Annotations: openshift.io/generated-by=OpenShiftNewApp
Latest Version: 3
Selector: app=python-web,deploymentconfig=python-web
Replicas: 1
Triggers: Config, Image(python-web@latest, auto=true)
Strategy: Rolling
Template:
Labels: app=python-web
deploymentconfig=python-web
Annotations: openshift.io/container.python-web.image.entrypoint=["/bin/sh","-c","cd /src/www; /bin/bash -c 'python3 -u /src/web.py'"]
openshift.io/generated-by=OpenShiftNewApp
Containers:
python-web:
Image: jdob/python-web@sha256:3f87be1825405ee8c7da23d7a6916090ecbb2d6e7b04fcd0fd1dc194173d2bc0
Port: 8080/TCP
Volume Mounts: <none>
Environment Variables: <none>
No volumes.
Note that there are no environment variables set for the application. Viewing the application (through its route), displays the default "Hello World" text:
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION
python-web python-web-python-web.apps.10.2.2.2.xip.io python-web 8080-tcp
$ curl http://python-web-python-web.apps.10.2.2.2.xip.io
Hello World
There are a few options for editing environment variables. The UI can be used to navigate to the deployment configuration. The "Environment" tab can be used to view and modify environment variables for the configuration. When changes are saved by pressing the "Save" button, a new deployment is triggered using the new configuration values.
Alternatively, the CLI’s edit command can be used to interactively edit the YAML representation of many resources. This command, called by specifying a resource type and name, opens a text editor in which changes can be made. When the file is saved and the editor is closed, the changes are sent to the server and the appropriate action is taken. In this case, the change in configuration will cause a redeployment.
Below is a snippet of the deployment configuration while being edited (removed sections are replaced with [snip] for readability). The changes made are highlighted:
$ oc edit dc python-web
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: DeploymentConfig
metadata:
[snip]
spec:
[snip]
template:
metadata:
[snip]
spec:
containers:
- env:
- name: TEXT (1)
value: Goodbye World (1)
image: jdob/python-web@sha256:3f87be1825405ee8c7da23d7a6916090ecbb2d6e7b04fcd0fd1dc194173d2bc0
imagePullPolicy: Always
name: python-web
1 | A new environment variable named TEXT is introduced into the container. |
If present, the value of the TEXT variable is output by the web server when it is accessed. For reference, the relevant Python line in the application is:
m = os.environ.get('TEXT', None) or 'Hello World'
At this point, there are a few ways to monitor the changes being made. The UI presents a graphical view of the existing pods scaling down while new ones are created with the new configuration. The CLI’s status command can be used to show that a new deployment was made:
$ oc status
In project python-web on server https://localhost:8443
http://python-web-python-web.apps.10.2.2.2.xip.io to pod port 8080-tcp (svc/python-web)
dc/python-web deploys istag/python-web:latest
deployment #2 deployed 9 minutes ago - 1 pod
deployment #1 deployed 48 minutes ago
Notice that a new deployment was made, corresponding to the updated deployment configuration that was submitted. As proof of the environment variable’s presence in the container, the previous curl command can be run again:
$ curl http://python-web-python-web.apps.10.2.2.2.xip.io
Goodbye World
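The same change could also be made non-interactively; assuming a client that provides the set env subcommand, a single command updates the deployment configuration and triggers a new deployment:
$ oc set env dc/python-web TEXT="Goodbye World"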
Users
One of the more common obstacles encountered with creating new images revolves around the user running the container process. By default, Docker containers are run as root. This can become a security issue as any process that breaks out of the container will retain the same privileges on the host machine; root in a container would have access to root on the host.
By default, OpenShift will issue a warning when loading an image defined to run as root and, in many cases, the deployment will fail with some form of permission error. These failures are due to the fact that OpenShift creates a random, non-privileged user (with no corresponding UID on the host machine) and runs the container with that user. This is an added security benefit provided by OpenShift and, while not difficult, must be acknowledged when creating images.
Since OpenShift is generating a random UID, the solution isn’t as simple as creating and using a user (by its name) within the container. There are potential security issues where a created user can still give itself root privileges. The use of a random ID, specified by OpenShift, also supports added security for multi-tenancy by forcing persistent storage volume UIDs to be unique for each project.
In short, since OpenShift runs containers as a randomized, non-privileged user, the image must be constructed with those limitations in mind.
The common solution is to make the necessary files and directories writable by the root group.
Example
Below is a snippet from a Dockerfile used to run httpd as a non-privileged container. This setup will host pages from the /opt/app-root directory. For brevity, the Dockerfile EXPOSE and corresponding httpd configuration changes to serve on a non-privileged port are not included in the snippet.
# Create a non root account called 'default' to be the owner of all the
# files which the Apache httpd server will be hosting. This account
# needs to be in group 'root' (gid=0) as that is the group that the
# Apache httpd server would use if the container is later run with a
# unique user ID not present in the host account database, using the
# command 'docker run -u'.
ENV HOME=/opt/app-root
RUN mkdir -p ${HOME} && \
useradd -u 1001 -r -g 0 -d ${HOME} -s /sbin/nologin \ (1)
-c "Default Application User" default
# Fixup all the directories under the account so they are group writable
# to the 'root' group (gid=0) so they can be updated if necessary, such
# as would occur if using 'oc rsync' to copy files into a container.
RUN chown -R 1001:0 /opt/app-root && \
find ${HOME} -type d -exec chmod g+ws {} \; (2)
# Ensure container runs as non root account from its home directory.
WORKDIR ${HOME}
USER 1001 (3)
1 | The user is created through a numeric ID. |
2 | Group permissions are given to the necessary directories. |
3 | Indicate that the image should be run as the non-root user. |
Note the usage of a numeric UID instead of the named user. This is done for portability across hosting providers and will pass checks to ensure that, at very least, the container is not being run as root (this check is impossible using named users).
Labels
In this context, labels refer to the Docker concept of labels: metadata on an image. These are specified in the Dockerfile and are included in the built image.
Partner container certification requires that images include the following labels:
- name
- vendor
- version
- release
The container partner team provides a sample Dockerfile that can be used as a template for images suitable for certification. It can be found on GitHub.
Example
The following snippet uses the Dockerfile LABEL directive to define the minimum required labels:
LABEL name="jdob/python-web" \
vendor="Red Hat" \
version="1.0" \
release="1"
The labels can be viewed using the docker inspect command (the output below is truncated):
$ docker inspect --format {{.ContainerConfig.Labels}} jdob/python-web
map[name:jdob/python-web release:1 vendor:Red Hat version:1.0]
Authenticating to the OpenShift APIs
Service accounts may be used to authenticate against the OpenShift API without the need to use a regular user’s credentials. This can be used for integrations that require extra information about the running system in which they are deployed, such as for discovery or monitoring purposes. Service accounts are identified by a username, and their roles can be manipulated in the same way.
In order to properly configure permissions for a service account, some understanding of the security role system is required.
Security Context Constraints
Operations on security context constraints can only be performed by an admin user, including listing or describing existing SCCs.
Security Context Constraints (SCC for short) define a set of access permissions. Users and service accounts are added to SCCs to permit them the privileges defined by the SCC.
A list of all defined SCCs can be retrieved using the get command and the scc resource type:
$ oc get scc
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim secret]
hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim secret]
hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim secret]
hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim secret]
nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim secret]
privileged true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*]
restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim secret]
Specific details are displayed using the describe command. Below is the output for the default restricted SCC:
$ oc describe scc restricted
Name: restricted
Priority: <none>
Access:
Users: <none>
Groups: system:authenticated
Settings:
Allow Privileged: false
Default Add Capabilities: <none>
Required Drop Capabilities: KILL,MKNOD,SYS_CHROOT,SETUID,SETGID
Allowed Capabilities: <none>
Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,secret
Allow Host Network: false
Allow Host Ports: false
Allow Host PID: false
Allow Host IPC: false
Read Only Root Filesystem: false
Run As User Strategy: MustRunAsRange
UID: <none>
UID Range Min: <none>
UID Range Max: <none>
SELinux Context Strategy: MustRunAs
User: <none>
Role: <none>
Type: <none>
Level: <none>
FSGroup Strategy: MustRunAs
Ranges: <none>
Supplemental Groups Strategy: RunAsAny
Ranges: <none>
The SCC description includes information on what is permitted to users in the SCC. The Access section indicates which users are granted access to the SCC. Note that service accounts are treated as users in this context and will appear in this list as well.
Users are granted access to an SCC through the admin policy (adm policy) command:
$ oc adm policy add-scc-to-user restricted onboard
$ oc describe scc restricted
Name: restricted
Priority: <none>
Access:
Users: onboard
Groups: system:authenticated
[output truncated]
Service Accounts
Service accounts exist within the scope of a particular project. Given that, cluster admin privileges are not required. Like other API objects, they are created through the create command (and removed with delete):
$ oc create serviceaccount onboard-sa
serviceaccount "onboard-sa" created
$ oc get sa
NAME SECRETS AGE
builder 2 <invalid>
default 2 <invalid>
deployer 2 <invalid>
onboard-sa 2 <invalid>
In the example above, the service account will be created in the currently active project. A different project may be specified using the -n flag.
All projects are configured with three default service accounts:
- builder - Build pods use this service account to push images into the internal Docker registry and manipulate image streams.
- deployer - Used to view and edit replication controllers.
- default - Used to run all non-builder pods unless explicitly overridden.
Service accounts can be added to SCCs in the same way as users, with one notable exception: the username for the service account must be fully qualified, identifying both that it is a service account and the project in which it exists. The template for the user name is:
system:serviceaccount:<project>:<sa-name>
For example, to add the previously created service account (assuming it was under the project name demo):
$ oc adm policy add-scc-to-user restricted system:serviceaccount:demo:onboard-sa
$ oc describe scc restricted
Name: restricted
Priority: <none>
Access:
Users: system:serviceaccount:demo:onboard-sa
Groups: system:authenticated
Authenticating as a Service Account
There are two ways to retrieve an API token for the service account.
Externally Retrieving a Token
The describe command can be used to show the tokens that were created for a service account:
$ oc describe sa onboard-sa
Name: onboard-sa
Namespace: guestbook
Labels: <none>
Image pull secrets: onboard-sa-dockercfg-myuk7
Mountable secrets: onboard-sa-token-tuwfj
onboard-sa-dockercfg-myuk7
Tokens: onboard-sa-token-n79y5
onboard-sa-token-tuwfj
In this case, the list of Tokens is of interest. The token itself can be retrieved through the describe command for the secret (the actual token value is truncated for brevity):
$ oc describe secret onboard-sa-token-n79y5
Name: onboard-sa-token-n79y5
Namespace: guestbook
Labels: <none>
Annotations: kubernetes.io/service-account.name=onboard-sa
kubernetes.io/service-account.uid=efe81599-bd6f-11e6-b14e-5254009f9a8b
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 9 bytes
token: eyJhbGciOi...
Assuming the token value is saved to an environment variable named TOKEN, the list of users can be retrieved with the following curl command:
$ curl -k "https://10.1.2.2:8443/oapi/v1/users/~" -H "Authorization: Bearer $TOKEN"
{
"kind": "User",
"apiVersion": "v1",
"metadata": {
"name": "system:serviceaccount:demo:onboard-sa",
"selfLink": "/oapi/v1/users/system:serviceaccount:demo:onboard-sa",
"creationTimestamp": null
},
"identities": null,
"groups": [
"system:serviceaccounts",
"system:serviceaccounts:demo"
]
}
From Within a Container
The API token for the service account associated with a deployment configuration is automatically injected into each container when it is created. The service account for a container can be changed from the default to an account with the proper permissions based on the need. The token is stored inside the container at:
/var/run/secrets/kubernetes.io/serviceaccount/token
Using the same curl command as above:
$ TOKEN="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
$ curl -k "https://10.1.2.2:8443/oapi/v1/users/~" -H "Authorization: Bearer $TOKEN" 1 ↵
{
"kind": "User",
"apiVersion": "v1",
"metadata": {
"name": "system:serviceaccount:demo:default",
"selfLink": "/oapi/v1/users/system:serviceaccount:demo:default",
"creationTimestamp": null
},
"identities": null,
"groups": [
"system:serviceaccounts",
"system:serviceaccounts:demo"
]
}
This output was taken from a container with no additional configuration, so the self reference refers to the project’s default service account.
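The injected token belongs to whichever service account the pod’s deployment configuration names, so one way to use the onboard-sa account created earlier is to patch that configuration; the following is a sketch, with a placeholder configuration name:
$ oc patch dc/my-app \
    -p '{"spec":{"template":{"spec":{"serviceAccountName":"onboard-sa"}}}}'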
Example
An example PHP script using the container’s token to access the API of the OpenShift instance in which it is deployed can be found on GitHub. It can be built and deployed as a source to image application.
While it is not practical to repeat the code here, there are a few sections of note.
$token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token";
$f = fopen($token_file, "r");
$token = fread($f, filesize($token_file));
fclose($f);
$auth = "Authorization: Bearer $token";
The block above reads in the contents of the token file to use for authenticating against the API. The token is passed in the Authorization: Bearer header.
$url = "https://openshift.default.svc.cluster.local/oapi/v1/users/~";
The alias openshift.default.svc.cluster.local is made available to all containers by default and can be used to access the control plane for that container.
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
The API is run on HTTPS. For simplicity and portability of this example, verification of the SSL certificate is disabled. Extra steps may be necessary to provide the proper CA for the container.
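For example, instead of disabling verification, the cluster CA that is mounted alongside the token could be passed to curl from within the container:
$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    -H "Authorization: Bearer $TOKEN" \
    https://openshift.default.svc.cluster.local/oapi/v1/users/~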
OpenShift Network Plugins
For instances where the default network behaviour does not meet the user’s needs, OpenShift supports the same plugin model as Kubernetes does for networking pods.
Container Network Interfaces
CNIs are the recommended method for using network plugins. Any external networking solution can be used to plumb networking for OpenShift as long as it follows the CNI spec. OpenShift then needs to be launched with the network plugin name set to cni in the master/node config yaml files:
networkConfig:
networkPluginName: "cni"
When done through the Ansible installer, specify sdn_network_plugin_name=cni as an option when installing OpenShift.
The default behavior of the OpenShift Ansible installation allows a firewall passthrough for the VXLAN port (4789), so if a plugin needs other ports (for management/control/data) to be open, then the installer needs to be changed accordingly.
More information about CNIs can be found in the Kubernetes documentation and in the CNI spec.
Requirements
OpenShift networking requires the following items to be kept in mind when writing CNI plugins in addition to the basic Kubernetes requirements.
Not all of the items below are strictly necessary for a functional OpenShift cluster, but omitting some of them will require workarounds on the administrator’s part.
- A plugin can follow the NetworkPolicy objects from Kubernetes and implement the user/admin intent on multi-tenancy. Alternatively, plugins can ignore multi-tenancy completely or implement a model where multi-tenancy is based on projects (i.e. Kubernetes namespaces), where:
  - Each namespace should be treated like a tenant where its pods and services are isolated from another project’s pods/services
  - Support exists for operations like merge/join networks even when they belong to different namespaces
- Certain services in the cluster will be run as infrastructure services (e.g. load balancer, registry, DNS server). The plugin should allow for a 'global' tenant which is-accessible-by/can-access all pods of the cluster. For example, a load balancer can run in two modes - private and global. The global load balancer should have access to all tenants/namespaces of the cluster. A private load balancer is one that is launched as a pod by a particular namespace, and this should obey tenant isolation rules.
- Access to all pods from the host, which is particularly important if kube-proxy is used by the SDN solution to support Kubernetes services. Please note that iptables based kube-proxy will be enabled by default in OpenShift. This will have to be overridden explicitly if the plugin wants a different behaviour.
  - The proxy can be disabled by giving the option --disable proxy to OpenShift’s node process. For example, the proxy may be disabled for the OpenShift node’s systemd service by adding the option OPTIONS="--loglevel=2 --disable proxy" to the sysconfig file (/etc/sysconfig/origin-node in case of origin).
- Access to external networks by pods, whether through NAT or direct access.
  - OpenShift builds docker images as part of the developer workflow. The build is run through the docker build call. This means that docker’s default networking will be invoked for this container (CNI/kube-plugin will not run as this is not a pod). These containers still need a network and access to an external network.
- Respect the PodSecurityContext::HostNetwork=true setting for infrastructure pods. Another option is to provide an externally routable IP address to the pod. This is used for the load balancer pods which are the entry point for all external traffic funneling into the cluster.
  - Note that the HostPort ←→ ContainerPort mapping will not be available by default if the CNI plugin is enabled (as the default docker networking is turned off). The plugin will have to implement this functionality on its own.
Anatomy of a Project
When applications are deployed, either directly from an image or built using source to image, there are a number of resources that OpenShift creates to support the application.
Image Stream
If an existing image stream is not found, OpenShift will create one. The image stream is used to provide images when creating new containers. Additionally, triggers can be created to automatically react to changes in an image stream’s contents and roll out updated pods.
The type name imagestreams (or is for short) is used with CLI query commands:
$ oc get is
NAME DOCKER REPO TAGS UPDATED
python-web 172.30.53.244:5000/python-web/python-web latest 50 minutes ago
In the output above, the 172.30.53.244 address corresponds to the internal Docker registry created and managed by the OpenShift installation. It runs in a container under the default project, which can be accessed by a user with cluster admin privileges:
$ oc project default
Now using project "default" on server "https://localhost:8443".
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry 172.30.53.244 <none> 5000/TCP 20d
kubernetes 172.30.0.1 <none> 443/TCP,53/UDP,53/TCP 20d
router 172.30.198.186 <none> 80/TCP,443/TCP,1936/TCP 20d
More information about the created image stream can be viewed using the describe command:
$ oc describe is python-web
Name: python-web
Namespace: python-web
Created: 56 minutes ago
Labels: app=python-web
Annotations: openshift.io/generated-by=OpenShiftNewApp
openshift.io/image.dockerRepositoryCheck=2016-10-14T15:05:43Z
Docker Pull Spec: 172.30.53.244:5000/python-web/python-web
Unique Images: 1
Tags: 1
latest
tagged from jdob/python-web
* jdob/python-web@sha256:3f87be1825405ee8c7da23d7a6916090ecbb2d6e7b04fcd0fd1dc194173d2bc0
56 minutes ago
Replication Controller
A replication controller is created when an application is deployed and is used to control the number of running pods. Each application deployment gets its own replication controller.
The resource type replicationcontrollers (or rc for short) is used with the CLI query commands:
$ oc get rc
NAME DESIRED CURRENT AGE
python-web-1 1 1 1h
The describe command displays extra information, including details on the image used to provision the pods and on the running and desired pods:
$ oc describe rc python-web
Name: python-web-1
Namespace: python-web
Image(s): jdob/python-web@sha256:3f87be1825405ee8c7da23d7a6916090ecbb2d6e7b04fcd0fd1dc194173d2bc0
Selector: app=python-web,deployment=python-web-1,deploymentconfig=python-web
Labels: app=python-web
openshift.io/deployment-config.name=python-web
Replicas: 1 current / 1 desired
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
No events.
Deployment Configuration
The next level up is the deployment configuration which describes when and how deployments of the application will be run.
The resource type deploymentconfigs (or dc for short) is used with the CLI query commands:
$ oc get dc
NAME REVISION DESIRED CURRENT TRIGGERED BY
python-web 1 1 1 config,image(python-web:latest)
Information on the deployment’s triggers and update strategy, as well as details on deployments done using the configuration, are displayed by the describe command:
$ oc describe dc
Name: python-web
Namespace: python-web
Created: 2 hours ago
Labels: app=python-web
Annotations: openshift.io/generated-by=OpenShiftNewApp
Latest Version: 1
Selector: app=python-web,deploymentconfig=python-web
Replicas: 1
Triggers: Config, Image(python-web@latest, auto=true)
Strategy: Rolling
Template:
Labels: app=python-web
deploymentconfig=python-web
Annotations: openshift.io/container.python-web.image.entrypoint=["/bin/sh","-c","cd /src/www; /bin/bash -c 'python3 -u /src/web.py'"]
openshift.io/generated-by=OpenShiftNewApp
Containers:
python-web:
Image: jdob/python-web@sha256:3f87be1825405ee8c7da23d7a6916090ecbb2d6e7b04fcd0fd1dc194173d2bc0
Port: 8080/TCP
Volume Mounts: <none>
Environment Variables: <none>
No volumes.
Deployment #1 (latest):
Name: python-web-1
Created: 2 hours ago
Status: Complete
Replicas: 1 current / 1 desired
Selector: app=python-web,deployment=python-web-1,deploymentconfig=python-web
Labels: app=python-web,openshift.io/deployment-config.name=python-web
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
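In addition to the automatic triggers listed above, a new deployment can be started and watched manually. A minimal sketch; the rollout subcommands are available on newer 3.x clients, while older clients use oc deploy --latest instead:
$ oc rollout latest dc/python-web
$ oc rollout status dc/python-web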
Service
The last layer of interest is the created service. The service acts as the entry point into the running application, taking care of distributing requests to the appropriate pod.
The resource type services
is used with the CLI query commands:
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
python-web 172.30.167.215 <none> 8080/TCP 1h
Details about a service include the internal IP address and ports in use:
$ oc describe service python-web
Name: python-web
Namespace: python-web
Labels: app=python-web
Selector: app=python-web,deploymentconfig=python-web
Type: ClusterIP
IP: 172.30.167.215
Port: 8080-tcp 8080/TCP
Endpoints: 172.17.0.12:8080
Session Affinity: None
No events.
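The service address shown above is only reachable from inside the cluster. To make the application available externally, the service can be exposed as a route (see the terminology section below). A minimal sketch:
$ oc expose service python-web
$ oc get route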
Terminology
build configuration
A build configuration describes a single build definition and a set of triggers for when a new build should be created.
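Build configurations can be listed and new builds started from the command line. A minimal sketch, assuming a source-to-image application named python-web:
$ oc get bc
$ oc start-build python-web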
container
The fundamental piece of an OpenShift application is a container. A container is a way to isolate and limit process interactions with minimal overhead and footprint. In most cases, a container will be limited to a single process providing a specific service (e.g. web server, database).
deployment configuration
A deployment configuration contains the details of a particular application deployment:
-
The configuration used in the replication controller definition, such as the number of replicas to ensure
-
Triggers for automatically performing an updated deployment, such as when an image is tagged or the source code in a source-to-image deployment is changed
-
The strategy for transitioning between deployments when upgrading
-
Lifecycle hooks
image
An image is a pre-built, binary file that contains all of the necessary components to run a single container; a container is the working instantiation of an image. Additionally, an image defines certain information on how to interact with containers created from the image, such as what ports are exposed by the container.
OpenShift uses the same image format as Docker; existing Docker images can easily be used to build containers through OpenShift. Additionally, OpenShift provides a number of ways to build images, either from a Dockerfile or directly from source hosted in a git repository.
image stream
An image stream is a series of Docker images identified by one or more tags. Image streams are capable of aggregating images from a variety of sources into a single view, including:
-
Images stored in OpenShift’s integrated Docker repository
-
Images from external Docker registries
-
Other image streams
pod
Pods come from the Kubernetes concept of the same name. A pod is a set of one or more containers deployed together to act as if they are on a single host, sharing an internal IP, ports, and local storage.
It is important to realize that OpenShift treats pods as immutable. Any change, whether to the underlying image, the pod configuration, or environment variable values, causes new pods to be created and the existing pods to be phased out. Immutability also means that no state is maintained between pods when they are recreated.
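This immutability can be observed by changing something as small as an environment variable. A minimal sketch, using a hypothetical variable name (newer clients use oc set env); the change causes replacement pods to be rolled out rather than the running pods being modified:
$ oc env dc/python-web GREETING=hello
$ oc get pods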
project
An OpenShift project corresponds to a Kubernetes namespace. Projects are used to organize and group objects in the system, such as services and deployment configurations, as well as to provide security policies specific to those resources.
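Projects are typically created and switched between from the command line. A minimal sketch using a hypothetical project name:
$ oc new-project demo --display-name="Demo Project"
$ oc project demo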
replication controller
A replication controller is used to ensure that a specified number of pods for an application are running at a given time. The replication controller automatically reacts to changes to deployed pods, both the removal of existing pods (deletion, crashing, etc.) and the addition of extra pods that are not desired. Pods are automatically added to or removed from the service to ensure its uptime.
route
A route is the method for accessing a service through a hostname. By default, services are only accessible to other pods within the project. Creating (or "exposing") a route makes the service publicly accessible through a hostname and, optionally, secure communications.
service
A service functions as a load balancer and proxy to the underlying pods. Services are assigned IP addresses and ports and delegate requests to an appropriate pod that can field them.
source-to-image
Source-to-image is a feature that allows OpenShift to build a Docker image from a source code repository. An application is created within a project that includes a URL to the repository and an optional builder image to base the build on. Web hooks may also be configured to trigger OpenShift to build a new image when the source repository is modified.
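A source-to-image build can be started directly from oc new-app by combining a builder image with a repository URL. A minimal sketch, assuming the python builder image stream is available and using a hypothetical repository:
$ oc new-app python~https://github.com/example/python-web.git --name=python-web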
Getting Involved
There are a number of ways to engage with the OpenShift teams and begin to promote new integrations and ideas.
OpenShift Blog
The OpenShift Blog hosts posts on a variety of topics. In particular, the ecosystem tag highlights posts written by ISV teams to showcase their integrations.
Commons
OpenShift Commons provides a community for partners, users, customers, and contributors to build connections and share best practices, facilitating collaboration and awareness of project involvement with OpenShift. Additionally, regular briefings are hosted and allow for deeper dives into various topics.
Primed
Red Hat OpenShift Primed is a technical readiness designation that acknowledges the first steps of an ISV’s technology working with OpenShift by providing the ISV a designated logo and awareness through OpenShift online properties such as Hub and OpenShift Commons. To earn this designation, ISVs must demonstrate an initial commitment to OpenShift.
To sign up for the program, visit https://www.openshift.com/primed.
-
Click on Apply Now.
-
Sign in with GitHub to get access to the Primed application.
-
Complete the Primed application, which asks for the following information:
Name: Name of the organization
Summary: Information about the OpenShift integration
More Information URL: Link to the organization's page
Evidence of integration: Material that demonstrates the integration; blog entries, how-to guides, source code, or videos will satisfy the requirement.
Primed For: The OpenShift offering
Version Number: The OpenShift version number
Red Hat Connect
A natural progression from the Primed program is Red Hat Connect for Technical Partners.
The program is designed to assist companies who are actively seeking to develop, test, certify and support their products and solutions with the Red Hat portfolio. Participants in the program will gain access to a strong ecosystem of companies that are building software, hardware and cloud-based solutions for enterprise customers across the world.
Partners developing solutions for OpenShift should apply for access to the containers zone. All members of the program are required to certify their images to gain access to the ecosystem.
Certification
Red Hat Container Certification is available for software vendors that offer commercial applications packaged and distributed as containers. The certification process is intended to help ensure that applications are built and packaged according to Red Hat’s enterprise standards.
In addition to the practices outlined under recommended practices, software vendors should ensure that their containers meet the following requirements.
-
Images should contain a directory named /licenses that holds all relevant licensing information. End users should be aware of the terms and conditions applicable to the image.
-
Image Dockerfiles should contain the following labels: name, vendor, version, and release.
-
Images should contain a help file as outlined here.
-
Images should have a tag other than latest so that they can be uniquely identified.
The policy describing these standards can be found here (Red Hat Connect login required).
Red Hat Partner Engineering maintains an example Dockerfile here that meets all the basic requirements for certification.
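As a starting point, the following is a minimal sketch of a Dockerfile that addresses the label, license, and help file requirements above; the base image, label values, and help file name are illustrative assumptions, and the certification policy and maintained example should be treated as authoritative:
$ cat Dockerfile
# Illustrative base image; substitute the appropriate Red Hat base image
FROM registry.access.redhat.com/rhel7
# Required labels (values are placeholders)
LABEL name="example/python-web" \
      vendor="Example, Inc." \
      version="1.0" \
      release="1"
# Licensing information included under /licenses
COPY licenses /licenses
# Help file as outlined in the certification guidelines (file name assumed)
COPY help.1 /
# ... remaining application installation steps follow
Images built from such a Dockerfile should also be tagged with something other than latest, for example example/python-web:1.0-1, so they can be uniquely identified.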