
K8s Foundation

2022-01-27 03:56:26 Wei Xiaoya

K8s Introduction

Kubernetes (k8s) is Google's open-source container cluster management system (known inside Google as Borg) and is currently the leading solution for container-based distributed architectures. Building on Docker technology, it provides deployment and running of containerized applications, resource scheduling, service discovery, and dynamic scaling, greatly improving the convenience of managing large-scale container clusters.

K8s Overview

Kubernetes is a complete distributed system support platform with full cluster management capabilities. It provides multi-level security protection and access control, multi-tenant application support, a transparent service registration and discovery mechanism, a built-in intelligent load balancer, strong fault detection and self-healing, rolling service upgrades and online scaling, and extensible automatic resource scheduling with multi-granularity resource quota management. Kubernetes also provides comprehensive management tools covering development, deployment, testing, and operations monitoring.

In Kubernetes, the Service is the core of the distributed cluster architecture. A Service object has the following key features (a minimal manifest sketch follows this list):

  • It has a unique, assigned name
  • It has a virtual IP (Cluster IP, Service IP, or VIP) and port number
  • It provides some kind of remote service capability
  • It is mapped to a set of container applications that provide this service capability
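
A minimal sketch of a Service manifest that carries these properties (the resource name, label, and port values are illustrative, following the MySQL example used later in this article):

    apiVersion: v1
    kind: Service
    metadata:
      name: mysql              # the unique, assigned name of the Service
    spec:
      selector:
        name: mysql            # maps the Service to the set of Pods carrying this Label
      ports:
        - protocol: TCP
          port: 3306           # port exposed on the virtual IP (the Cluster IP is assigned automatically)
          targetPort: 3306     # port of the backing containers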

Service processes currently provide their services externally over sockets, for example Redis, Memcached, MySQL, a web server, or a custom TCP server process implementing some specific business logic. Although a Service is usually provided by multiple related service processes, each with its own Endpoint (IP + port) access point, Kubernetes lets us connect to the specified Service through a single entry. Thanks to Kubernetes' transparent load balancing and failure-recovery mechanisms, it does not matter how many service processes there are in the back end, or whether a service process is redeployed to another machine because of a failure; none of this affects normal calls to the service. More importantly, the Service itself does not change once it is created, which means that in a Kubernetes cluster we no longer have to worry about changes to service IP addresses.

Containers provide strong isolation, so it makes sense to isolate the group of processes that provide a Service inside containers. For this purpose, Kubernetes designed the Pod object, wrapping each service process into a corresponding Pod and making it a container running inside that Pod. To establish the relationship between Services and Pods, Kubernetes attaches a Label to every Pod: for example, a Pod running MySQL gets the label name=mysql, and a Pod running PHP gets the label name=php. The corresponding Service then defines a Label Selector, which neatly solves the problem of associating a Service with its Pods.
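
As a companion sketch (the Pod name and image tag are assumptions), a Pod carrying the name=mysql Label that the Service's Label Selector above would match might look like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mysql-pod          # illustrative Pod name
      labels:
        name: mysql            # the Label that the Service's Label Selector matches on
    spec:
      containers:
        - name: mysql
          image: mysql:5.7     # illustrative image tag
          ports:
            - containerPort: 3306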

In terms of cluster management, Kubernetes divides the machines in a cluster into a Master node and a group of worker nodes (Nodes). The Master runs a set of cluster-management processes: kube-apiserver, kube-controller-manager, and kube-scheduler. These processes implement resource management for the whole cluster, Pod scheduling, elastic scaling, security control, system monitoring, error correction, and other management capabilities, all fully automated. Nodes are the worker machines of the cluster and run the real applications; on a Node, the smallest unit managed by Kubernetes is the Pod. Each Node runs Kubernetes' kubelet and kube-proxy service processes, which are responsible for creating, starting, monitoring, restarting, and destroying Pods, and for implementing software-mode load balancing.

A Kubernetes cluster addresses the two classic problems of traditional IT systems: service scaling and service upgrades. You only need to create a Replication Controller (RC) for the Pods associated with the Service to be scaled, and both scaling and subsequent upgrades of that Service are taken care of. An RC definition file contains the following 3 key pieces of information.

  • The definition of the target Pod
  • The number of replicas (Replicas) of the target Pod that should run
  • The Label of the target Pods to monitor

After an RC has been created, Kubernetes uses the RC's Label to filter out the corresponding Pod instances and monitors their status and number in real time. If the number of instances falls below the defined replica count, a new Pod is created from the Pod template defined in the RC and scheduled to a suitable Node to start, until the number of Pod instances reaches the target. This process is completely automated, as sketched below.
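
A minimal ReplicationController sketch containing the 3 key pieces of information listed above (the names, image tag, and replica count are placeholders):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: mysql-rc
    spec:
      replicas: 3              # number of target Pod replicas to keep running
      selector:
        name: mysql            # Label of the target Pods to monitor
      template:                # definition (template) of the target Pod
        metadata:
          labels:
            name: mysql        # must match the selector above
        spec:
          containers:
            - name: mysql
              image: mysql:5.7 # illustrative image tag
              ports:
                - containerPort: 3306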

Kubernetes advantages:

  • Container orchestration

  • Lightweight

  • Open source

  • Elastic scaling

  • Load balancing

K8s Features

  • Self-healing
    When a node fails, K8s restarts failed containers and replaces or redeploys them to guarantee the expected number of replicas; it kills containers that fail their health checks and does not route client requests to them until they are ready, ensuring that online services are not interrupted.

  • Elastic scaling
    Scale application instances out and in quickly via commands, the UI, or automatically based on CPU usage, ensuring high availability at peak business concurrency and reclaiming resources when business is at a low ebb, so that services run at minimum cost.

  • Automatic deployment and rollback
    K8s uses rolling updates for applications, updating one Pod at a time instead of deleting all Pods simultaneously. If a problem occurs during the update, the change is rolled back, ensuring that the upgrade does not affect the business.

  • Service discovery and load balancing
    K8s provides a unified access entry for multiple containers and load-balances across all of the associated containers, so users do not need to worry about container IP addresses or networking details.

  • Secrets and configuration management
    Manage confidential data and application configuration without exposing sensitive data in images, which improves security. Common configuration can also be stored in K8s where applications can easily consume it (see the ConfigMap sketch after this list).
    The biggest problem with containerized applications is that it is hard to configure the application inside the container. After a container has been started from an image, what if you want the application inside it to change its configuration? One option is to define an entrypoint script that accepts parameters passed in by the user and converts the values of those variables into configuration the application inside the container can read, thereby completing the configuration of the containerized application.
    The reason this is troublesome is that early applications were not developed as cloud-native software, so they have to read a configuration file to obtain their configuration, whereas a cloud-native application can obtain its configuration from environment variables.
    This approach lets one image satisfy users who want to run the same image as containers with different configurations in different environments. Inside an orchestration tool, however, this style of configuration raises some questions, for example: where should the configuration information be stored? If we use the orchestration platform to start containers automatically but still have to pass the environment variable values by hand every time a container starts, that is very tedious. So we need an external component to store the configuration information; when the image is started as a container, it simply loads the configuration from this external configuration center, and the configuration of the application is complete.

  • Storage orchestration
    Mount external storage systems, whether local storage, public cloud storage (AWS), or network storage (NFS, Ceph, GFS), as part of the cluster's resources, greatly improving the flexibility of storage use. Storage volumes are provisioned dynamically: when a container needs a storage volume, a volume that meets the container's needs is created.

  • Batch processing
    Provides one-off and scheduled (cron-style) tasks to cover batch data processing and analysis scenarios.
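
As a sketch of the configuration-management idea described in the "Secrets and configuration management" item above (the ConfigMap name, keys, and image are made up for illustration), common configuration can be stored in the cluster and injected into a container as environment variables:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      DB_HOST: mysql           # illustrative key/value pairs
      DB_PORT: "3306"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-pod
    spec:
      containers:
        - name: app
          image: php:7.4-apache        # illustrative image
          envFrom:
            - configMapRef:
                name: app-config       # exposes every key of the ConfigMap as an environment variable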

K8s Architecture and components

K8s combines the resources of multiple hosts into one large resource pool and provides the cluster's compute, storage, and other capabilities to the outside world in a unified way.

Schematic architecture:

1.Master

The Master is the management node of the k8s cluster. It is responsible for cluster management and provides access to the cluster's resource data. It hosts the etcd storage service (optional) and runs the API Server process, the Controller Manager service process, and the Scheduler service process, and it is associated with the worker nodes (Nodes). The Kubernetes API Server is the key service process providing the HTTP REST interface; it is the only entry point for create, delete, update, and query operations on all Kubernetes resources, and it is also the entry point for cluster control. The Kubernetes Controller Manager is the automated control center for all of Kubernetes' resource objects. The Kubernetes Scheduler is the process responsible for resource scheduling (Pod scheduling).

API Server: responsible for receiving and handling requests
Scheduler: schedules container-creation requests
Controller: ensures that the created containers are in a healthy state
Controller Manager: ensures that the controllers on the back-end nodes are in a healthy state

2.Node

A Node is a service node in the Kubernetes cluster architecture on which Pods run (also called an agent or minion). A Node is the unit of operation of the Kubernetes cluster: it carries the Pods assigned to it and is the host on which they run. It is associated with the Master management node and has a name, an IP address, and system resource information. It runs the Docker Engine service, the kubelet daemon, and the load balancer kube-proxy.

Each Node runs the following set of key processes:
kubelet: responsible for creating, starting, and stopping the containers of a Pod
kube-proxy: an important component that implements the communication and load-balancing mechanism for Kubernetes Services
Docker Engine (Docker): the Docker engine, responsible for creating and managing containers on the local machine
  Nodes can be added to a Kubernetes cluster dynamically at runtime. By default, the kubelet registers itself with the Master, which is also the Node management style recommended by Kubernetes. The kubelet process periodically reports its own information to the Master, such as the operating system, Docker version, CPU and memory, and which Pods are running, so that the Master knows the resource usage of every Node and can implement an efficient and balanced resource scheduling strategy.

  • kubelet
    The kubelet is the Master's agent on a Node. It manages the life cycle of the containers running on the machine, for example creating containers, mounting data volumes for Pods, downloading Secrets, and obtaining the status of containers and the node. The kubelet turns each Pod into a set of containers.
  • kube-proxy
    Implements the Pod network proxy on a Node, maintaining network rules and layer-4 load balancing.
  • docker or rocket
    The container engine, which runs the containers.

K8s Terminology

pod

A pod is the smallest logical scheduling unit in K8s. A pod can be understood as a shell around containers: it adds a layer of abstraction and encapsulation on top of them, and its main purpose is to hold containers. The defining feature of a pod is that multiple containers can be placed in the same network namespace: one pod can contain multiple containers, and different containers in the same pod can share storage volumes. Whether a pod holds one container or several, once the pod is scheduled onto a node, all of the containers in that pod can only run on that same node.

A Pod runs on a Node and is a combination of several related containers. The containers contained in a Pod run on the same host and use the same network namespace, IP address, and port space, so they can communicate via localhost. The Pod is the smallest unit that Kubernetes creates, schedules, and manages; it provides a higher level of abstraction than the container, making deployment and management more flexible. A Pod can contain a single container or several related containers.
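
A minimal sketch of a Pod with two related containers that share the same network namespace and a storage volume (the images and volume name are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-pod
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}                   # a volume shared by all containers in the Pod
      containers:
        - name: web
          image: nginx:1.21              # illustrative image
          volumeMounts:
            - name: shared-data
              mountPath: /usr/share/nginx/html
        - name: php
          image: php:7.4-fpm             # illustrative image; reachable from the nginx container via localhost
          volumeMounts:
            - name: shared-data
              mountPath: /var/www/html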

There are actually two types of Pods: ordinary Pods and static Pods. The latter are special: they are not stored in Kubernetes' etcd but in a specific file on a particular Node, and they are started only on that Node. Once an ordinary Pod has been created, it is stored in etcd, then scheduled by the Kubernetes Master and bound to a specific Node, after which it is instantiated by the kubelet process on that Node into a group of related Docker containers and started. By default, when one of the containers in a Pod stops, Kubernetes automatically detects this problem and restarts the Pod (restarting all of the containers in the Pod); if the Node on which the Pod runs goes down, all of the Pods on that Node are rescheduled to other nodes.

node

A node is a worker node in the k8s cluster, responsible for running the various tasks assigned by the master. Its core task is to run containers in the form of pods.

A node can be any form of computing device. As long as the device has, in the traditional sense, CPU, memory, and storage space, and can run the k8s cluster agent, it can join the k8s cluster as a member.

Label selector

When many pods are running in a cluster, how do you manage them by category? For example, how do you delete a certain class of pods, or have a controller manage a particular set of pods? How do you select and detect those pods? Clearly, with so many pods we cannot identify containers by pod name, because pods are created and deleted all the time: when a pod is deleted because of a failure, the regenerated pod will certainly have a different name than the deleted one, even though the program running inside is the same, so we cannot rely on pod names. At the same time, we may want to group a class of pods, for example create 4 nginx pods and have one controller manage them all: deleting the controller deletes all 4 pods, and the controller also ensures that all 4 pods keep running; if one is missing it creates a replacement, and if there is one too many it kills one, so that exactly the 4 pods we expect are running.

To be able to distinguish pods, we need to attach some metadata to them, similar to the way labels are added in a Dockerfile. For example, when creating a pod we can attach a key called app and set its value to nginx. Then, when scheduling and managing pods in batches, we can check whether a pod has the app key and whether its value is nginx, and in this way identify whether the pod is one of the pods we want to control.

Labels are a very important way to classify, identify, and manage resources when managing large numbers of pods in k8s.

So how do we select the pods we want from among many pods? This is implemented by the label selector component. A label selector is a mechanism for filtering the resource objects that meet certain conditions according to their labels.
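
As a sketch of how a controller uses a label selector to pick out the pods it manages, following the app=nginx example above (a ReplicaSet is used here because its selector supports both equality-based and set-based rules; the names, image, and the tier label are illustrative):

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: nginx-rs
    spec:
      replicas: 4                        # the controller keeps exactly 4 matching pods running
      selector:
        matchLabels:
          app: nginx                     # equality-based rule: key "app" must equal "nginx"
        matchExpressions:
          - key: tier                    # set-based rule: key "tier" must have one of the listed values
            operator: In
            values: ["frontend"]
      template:
        metadata:
          labels:
            app: nginx
            tier: frontend               # the template's labels must satisfy the selector
        spec:
          containers:
            - name: nginx
              image: nginx:1.21          # illustrative image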

Copyright notice
Author: Wei Xiaoya. Please include the original link when reprinting, thank you.
https://en.cdmana.com/2022/01/202201270356180137.html
