· ☕ 2 min

https://istio.io/latest/docs/tasks/observability/metrics/tcp-metrics/

Understanding TCP telemetry collection

In this task, you used Istio configuration to automatically generate and report metrics for all traffic to a TCP service within the mesh. TCP metrics for all active connections are recorded every 15 seconds by default; this interval is configurable via tcpReportingDuration. Metrics for a connection are also recorded at the end of the connection.
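As a hedged sketch of tuning that interval, newer Istio releases expose a reportingInterval field on the Telemetry API (documented as applying to TCP metrics); the provider name and interval below are illustrative:

```shell
# Sketch only: assumes the Telemetry API (telemetry.istio.io/v1alpha1) is
# available in your Istio version; "prometheus" and "5s" are illustrative.
kubectl apply -f - <<EOF
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  metrics:
  - providers:
    - name: prometheus
    reportingInterval: 5s
EOF
```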

TCP attributes

Several TCP-specific attributes enable TCP policy and control within Istio. These attributes are generated by Envoy Proxies and obtained from Istio using Envoy’s Node Metadata. Envoy forwards Node Metadata to Peer Envoys using ALPN based tunneling and a prefix based protocol. We define a new protocol istio-peer-exchange, that is advertised and prioritized by the client and the server sidecars in the mesh. ALPN negotiation resolves the protocol to istio-peer-exchange for connections between Istio enabled proxies, but not between an Istio enabled proxy and any other proxy. This protocol extends TCP as follows:


· ☕ 1 min

https://istio.io/latest/docs/tasks/traffic-management/ingress/secure-ingress/

From a worker node

TLS without certificate verification

export DOMAIN=fortio-server.idm-mark.svc.cluster.local
export INGRESS_IP=10.97.117.127
export SECURE_INGRESS_PORT=8080

curl -v -HHost:$DOMAIN --resolve "$DOMAIN:$SECURE_INGRESS_PORT:$INGRESS_IP" \
-k "https://$DOMAIN:$SECURE_INGRESS_PORT/fortio/"

simple TLS

curl -v -HHost:httpbin.example.com --resolve "httpbin.example.com:$SECURE_INGRESS_PORT:$INGRESS_HOST" \
--cacert example.com.crt "https://httpbin.example.com:$SECURE_INGRESS_PORT/status/418"

Through the gateway

TLS without certificate verification

export DOMAIN=fortio-server.idm-mark.svc.cluster.local
export INGRESS_IP=10.100.122.140
export SECURE_INGRESS_PORT=80

curl -v -HHost:$DOMAIN --resolve "$DOMAIN:$SECURE_INGRESS_PORT:$INGRESS_IP" \
-k "https://$DOMAIN:$SECURE_INGRESS_PORT/fortio/"



· ☕ 3 min

https://istio.io/latest/docs/ops/configuration/traffic-management/tls-configuration/

Sidecars

Sidecar traffic has a variety of associated connections. Let’s break them down one at a time.


Sidecar proxy network connections

  1. External inbound traffic This is traffic coming from an outside client that is captured by the sidecar. If the client is inside the mesh, this traffic may be encrypted with Istio mutual TLS. By default, the sidecar will be configured to accept both mTLS and non-mTLS traffic, known as PERMISSIVE mode. The mode can alternatively be configured to STRICT, where traffic must be mTLS, or DISABLE, where traffic must be plaintext. The mTLS mode is configured using a PeerAuthentication resource.
  2. Local inbound traffic This is traffic going to your application service, from the sidecar. This traffic will always be forwarded as-is. Note that this does not mean it’s always plaintext; the sidecar may pass a TLS connection through. It just means that a new TLS connection will never be originated from the sidecar.
  3. Local outbound traffic This is outgoing traffic from your application service that is intercepted by the sidecar. Your application may be sending plaintext or TLS traffic. If automatic protocol selection is enabled, Istio will automatically detect the protocol. Otherwise you should use the port name in the destination service to manually specify the protocol.
  4. External outbound traffic This is traffic leaving the sidecar to some external destination. Traffic can be forwarded as is, or a TLS connection can be initiated (mTLS or standard TLS). This is controlled using the TLS mode setting in the trafficPolicy of a DestinationRule resource. A mode setting of DISABLE will send plaintext, while SIMPLE, MUTUAL, and ISTIO_MUTUAL will originate a TLS connection.
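The two control points above can be sketched with the resources the text names: PeerAuthentication governs inbound mTLS (connection 1), and a DestinationRule trafficPolicy governs outbound TLS (connection 4). The namespace and host below are illustrative:

```shell
# Inbound: require mTLS for workloads in the namespace (connection 1).
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo          # illustrative namespace
spec:
  mtls:
    mode: STRICT
EOF

# Outbound: originate Istio mutual TLS toward a service (connection 4).
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myservice-tls     # illustrative name
  namespace: foo
spec:
  host: myservice.foo.svc.cluster.local   # illustrative host
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
EOF
```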

The key takeaways are:

  • PeerAuthentication is used to configure what type of mTLS traffic the sidecar will accept.
  • DestinationRule is used to configure what type of TLS traffic the sidecar will send.
  • Port names, or automatic protocol selection, determine which protocol the sidecar will parse traffic as.


· ☕ 1 min

https://istio.io/v1.4/docs/tasks/security/authentication/mtls-migration/

Ensure that your cluster is in PERMISSIVE mode before migrating to mutual TLS. Run the following command to check:

$ kubectl get meshpolicy default -o yaml
...
spec:
  peers:
  - mtls:
      mode: PERMISSIVE

In PERMISSIVE mode, the Envoy sidecar relies on the ALPN value istio to decide whether to terminate the mutual TLS traffic. If your workloads (without Envoy sidecar) have enabled mutual TLS directly to the services with Envoy sidecars, enabling PERMISSIVE mode may cause these connections to fail.
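A minimal sketch of the next migration step under this v1.4-era API (the namespace is illustrative; later Istio releases replace MeshPolicy/Policy with PeerAuthentication). Locking one namespace to mutual TLS at a time limits the blast radius of the migration:

```shell
# Sketch only: v1.4-era authentication API; "foo" is an illustrative namespace.
kubectl apply -f - <<EOF
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: foo
spec:
  peers:
  - mtls: {}     # require mutual TLS for all workloads in this namespace
EOF
```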


· ☕ 2 min

SPIFFE

https://spiffe.io/docs/latest/spiffe-about/overview/

https://spiffe.io/docs/latest/spiffe-about/spiffe-concepts/

Old-school official SPIFFE method (via SPIRE):

https://blog.envoyproxy.io/securing-the-service-mesh-with-spire-0-3-abb45cd79810


Workload

A workload is a single piece of software, deployed with a particular configuration for a single purpose; it may comprise multiple running instances of software, all of which perform the same task. The term “workload” may encompass a range of different definitions of a software system, including:

  • A web server running a Python web application, running on a cluster of virtual machines with a load-balancer in front of it.
  • An instance of a MySQL database.
  • A worker program processing items on a queue.
  • A collection of independently deployed systems that work together, such as a web application that uses a database service. The web application and database could also individually be considered workloads.

SPIFFE ID

A SPIFFE ID is a string that uniquely and specifically identifies a workload. SPIFFE IDs may also be assigned to intermediate systems that a workload runs on (such as a group of virtual machines). For example, spiffe://acme.com/billing/payments is a valid SPIFFE ID.
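Since a SPIFFE ID is just a URI, its parts can be pulled out with plain shell string operations; a minimal sketch using the example ID above:

```shell
# Parse a SPIFFE ID into its trust domain and workload path using POSIX
# parameter expansion. The ID is the example from the text above.
spiffe_id="spiffe://acme.com/billing/payments"

rest="${spiffe_id#spiffe://}"   # drop the scheme -> acme.com/billing/payments
trust_domain="${rest%%/*}"      # text before the first "/" -> acme.com
workload_path="/${rest#*/}"     # text after it, with the leading "/" restored

echo "trust domain:  $trust_domain"
echo "workload path: $workload_path"
```

For reference, Istio issues workload identities in the form spiffe://&lt;trust-domain&gt;/ns/&lt;namespace&gt;/sa/&lt;service-account&gt;.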


· ☕ 2 min

x-forwarded-client-cert

https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-client-cert

x-forwarded-client-cert (XFCC) is a proxy header which indicates certificate information of part or all of the clients or proxies that a request has flowed through, on its way from the client to the server. A proxy may choose to sanitize/append/forward the XFCC header before proxying the request.

The XFCC header value is a comma (",") separated string. Each substring is an XFCC element, which holds information added by a single proxy. A proxy can append the current client certificate information as an XFCC element, to the end of the request’s XFCC header after a comma.
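A minimal sketch of splitting an XFCC value into its elements and key=value pairs with shell string operations; the header value below is invented for illustration:

```shell
# XFCC elements are comma-separated (one per proxy hop); within an element,
# key=value pairs are semicolon-separated. All identities here are made up.
xfcc='By=spiffe://mesh/ns/a/sa/front;URI=spiffe://mesh/ns/b/sa/client,By=spiffe://mesh/ns/c/sa/back;URI=spiffe://mesh/ns/a/sa/front'

first_element="${xfcc%%,*}"          # element added closest to the client
last_element="${xfcc##*,}"           # element appended by the most recent proxy
last_uri="${last_element##*URI=}"    # value of the URI key in that element

echo "most recent proxy saw client: $last_uri"
```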


· ☕ 2 min

https://istio.io/latest/docs/ops/common-problems/network-issues/#double-tls

Double TLS (TLS origination for a TLS request)

When configuring Istio to perform TLS origination, you need to make sure that the application sends plaintext requests to the sidecar, which will then originate the TLS.

TLS Origination

TLS origination occurs when an Istio proxy (sidecar or egress gateway) is configured to accept unencrypted internal HTTP connections, encrypt the requests, and then forward them to HTTPS servers that are secured using simple or mutual TLS. This is the opposite of TLS termination where an ingress proxy accepts incoming TLS connections, decrypts the TLS, and passes unencrypted requests on to internal mesh services.
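The description above can be sketched as a DestinationRule that originates simple TLS on port 443 (the host is illustrative; the application must still send plaintext HTTP to the sidecar, typically with a VirtualService rewriting port 80 to 443):

```shell
# Sketch only: originate simple TLS at the proxy for an external HTTPS service.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: external-api-tls        # illustrative name
spec:
  host: api.example.com         # illustrative external host
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE            # the proxy, not the app, does the TLS handshake
EOF
```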


· ☕ 1 min

How Does the CPU Manager Work?

When CPU manager is enabled with the “static” policy, it manages a shared pool of CPUs. Initially this shared pool contains all the CPUs in the compute node. When a container with integer CPU request in a Guaranteed pod is created by the Kubelet, CPUs for that container are removed from the shared pool and assigned exclusively for the lifetime of the container. Other containers are migrated off these exclusively allocated CPUs.
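A minimal sketch of a pod that would receive exclusive CPUs under the static policy: requests equal limits (so the pod is Guaranteed QoS) and the CPU request is an integer. The pod name and image are illustrative:

```shell
# Sketch only: Guaranteed QoS with an integer CPU request -> exclusive cores.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cpu-pinned          # illustrative name
spec:
  containers:
  - name: app
    image: nginx            # illustrative image
    resources:
      requests:
        cpu: "2"            # integer CPU count, equal to the limit...
        memory: "1Gi"
      limits:
        cpu: "2"            # ...so the kubelet pins 2 exclusive CPUs
        memory: "1Gi"
EOF
```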


· ☕ 1 min

Note: when changing the kubelet's cpu_manager policy, you must stop the kubelet service and delete the /var/lib/kubelet/cpu_manager_state file before restarting kubelet; otherwise the kubelet service will fail to restart.
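The note above as a command sequence (assuming a systemd-managed kubelet):

```shell
sudo systemctl stop kubelet                       # stop kubelet first
sudo rm -f /var/lib/kubelet/cpu_manager_state     # drop the stale checkpoint
# ...edit the kubelet config to change the CPU manager policy here...
sudo systemctl start kubelet                      # restart with the new policy
```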


· ☕ 1 min

Memory Manager Goals

  • Use the minimum number of NUMA nodes to satisfy a pod's memory needs: Offer guaranteed memory (and hugepages) allocation over a minimum number of NUMA nodes for containers (within a pod).

  • Long term, have all containers in a pod run on as few NUMA nodes as possible: Guaranteeing the affinity of memory and hugepages to the same NUMA node for the whole group of containers (within a pod). This is a long-term goal which will be achieved along with PR #1752 and the implementation of the hintprovider.GetPodLevelTopologyHints() API in the Memory Manager.


· ☕ 2 min

K8s Memory Manager

Requirements

Your Kubernetes server must be at or later than version v1.21. To check the version, enter kubectl version.

To align memory resources with other requested resources in a Pod Spec:

Starting from v1.22, the Memory Manager is enabled by default through the MemoryManager feature gate.
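A minimal KubeletConfiguration sketch for switching on the Static policy (values are illustrative; the Static policy requires reservedMemory to be set):

```yaml
# KubeletConfiguration fragment (illustrative values)
memoryManagerPolicy: Static
reservedMemory:
- numaNode: 0
  limits:
    memory: 1100Mi    # must cover kube-reserved + system-reserved + eviction threshold
```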


· ☕ 2 min

Topology Manager Scopes and Policies

Topology Manager provides two distinct knobs: scope and policy.

The scope defines the granularity at which you would like resource alignment to be performed (e.g. at the pod or container level). And the policy defines the actual strategy used to carry out the alignment (e.g. best-effort, restricted, single-numa-node, etc.).
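As a sketch, both knobs map to KubeletConfiguration fields (values illustrative):

```yaml
# KubeletConfiguration fragment (illustrative values)
topologyManagerScope: pod                  # or "container" (the default)
topologyManagerPolicy: single-numa-node    # or none / best-effort / restricted
```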

Topology Manager Scopes

The Topology Manager can deal with the alignment of resources in a couple of distinct scopes: