New concepts for deployment that we want to explain in a Conceptual Architectural Diagram:
AWS = Amazon Web Services
Amazon Web Services is a subsidiary of Amazon.com that provides on-demand cloud computing platforms to individuals, companies and governments, on a paid subscription basis with a free-tier option available for 12 months.
with subservices / products:
- RDS = (Amazon) Relational Database Service. Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.
- ECR = (Amazon) Elastic Container Registry. Amazon ECR is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon Elastic Container Service (ECS), simplifying your development to production workflow.
- VPC = (Amazon) Virtual Private Cloud. Amazon VPC lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.
- Amazon S3 = Cloud Storage Service, Object storage built to store and retrieve any amount of data from anywhere. A web service.
- Amazon Route 53 = a scalable Domain Name System (DNS) web service.
- Amazon Aurora = a MySQL- and PostgreSQL-compatible relational database engine, offered through RDS.
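As one concrete example of the interplay between these services, a typical deployment workflow pushes a locally built Docker image to ECR. A command sketch; the account ID, region, and image name are placeholders, not values from this project:

```
# Placeholder account ID, region, and image name -- replace with real values.
AWS_ACCOUNT=123456789012
AWS_REGION=eu-central-1
REGISTRY="$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com"

# Authenticate the local Docker client against ECR (AWS CLI v2).
aws ecr get-login-password --region "$AWS_REGION" \
  | docker login --username AWS --password-stdin "$REGISTRY"

# Tag the locally built image and push it to the registry.
docker tag myapp:latest "$REGISTRY/myapp:latest"
docker push "$REGISTRY/myapp:latest"
```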
It would be useful to have diagrams that explain the interplay of these subservices.
AWS to be used in which deployment environments?
Instead of building a virtual machine from scratch, which would be a slow and tedious process, Vagrant uses a base image to quickly clone a virtual machine. These base images are known as "boxes" in Vagrant, and specifying the box to use for your Vagrant environment is always the first step after creating a new Vagrantfile.
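A minimal Vagrantfile illustrating that first step; the box name `ubuntu/focal64` is just an example from the public catalog, not necessarily the box used here:

```ruby
# Vagrantfile -- minimal sketch; box name is an example
Vagrant.configure("2") do |config|
  # The base image ("box") from which Vagrant clones the VM
  config.vm.box = "ubuntu/focal64"
end
```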
A marketplace for Vagrant boxes: https://app.vagrantup.com/folio
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.
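A minimal playbook sketch; the host group `webservers` and the nginx package are illustrative, not part of this project's configuration:

```yaml
# playbook.yml -- illustrative sketch
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```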
Ansible Roles are ways of automatically loading certain vars_files, tasks, and handlers based on a known file structure. Grouping content by roles also allows easy sharing of roles with other users.
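The known file structure that roles rely on looks like this (the role name `webserver` is a made-up example):

```
roles/
  webserver/
    tasks/main.yml      # tasks loaded automatically for the role
    handlers/main.yml   # handlers, e.g. "restart nginx"
    vars/main.yml       # role variables
    defaults/main.yml   # default variables (lowest precedence)
    templates/          # Jinja2 templates used by the tasks
```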
Docker replaces XAMPP (LAMP).
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. By doing so, thanks to the container, the developer can rest assured that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
Docker provides an additional layer of abstraction and automation of operating-system-level virtualization on Windows and Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable file system such as OverlayFS and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines (VMs).
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.
Dockerfiles: scripts to build containers, step-by-step, layer-by-layer, automatically from a source (base) image.
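A minimal Dockerfile sketch showing the step-by-step, layer-by-layer build; the base image and application layout are assumptions for illustration:

```dockerfile
# Dockerfile -- illustrative sketch; base image and app layout are examples
# Source (base) image
FROM python:3.11-slim
WORKDIR /app
# Each instruction below produces one image layer
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```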
The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state.
A Docker image is built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Each layer except the very last one is read-only.
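The layer stack and the throwaway writable layer can be observed with standard Docker commands; a sketch (the image names are examples):

```
# Inspect the layer stack of an image, one line per Dockerfile instruction
docker history python:3.11-slim

# Writes land in the container's writable layer only
docker run --name demo alpine sh -c 'echo hi > /data.txt'
docker rm demo                     # deleting the container deletes that layer
docker run --rm alpine cat /data.txt   # fails: the image itself was never modified
```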
Official repositories: Docker Store is the new place to discover public Docker content: https://hub.docker.com/explore/
Scripts for Continuous Integration, e.g. Travis CI: https://docs.travis-ci.com/user/customizing-the-build/
The process of deploying multiple containers to implement an application can be optimized through automation. This becomes increasingly valuable as the number of containers and hosts grows. This type of automation is referred to as orchestration. Orchestration can include features such as:
- Instantiating a set of containers
- Rescheduling failed containers
- Linking containers together through agreed interfaces
- Exposing services to machines outside of the cluster
- Scaling the cluster out or in by adding or removing containers
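A minimal Docker Compose file already covers several of these points (instantiating a set of containers, linking them, exposing a service); the service names and images are illustrative assumptions:

```yaml
# docker-compose.yml -- illustrative sketch
version: "3"
services:
  web:
    image: myapp:latest        # instantiating a container
    ports:
      - "80:8000"              # exposing the service outside the cluster
    depends_on:
      - db                     # linking containers through an agreed interface
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```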
rkt is an application container engine developed for modern production cloud-native environments. It was introduced by CoreOS in December 2014 and integrates with Kubernetes.