Write something, you lazybones.

Istio Canary Rollout
· ☕ 1 min

Splitting traffic between old and new versions by weight

VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 75
    - destination:
        host: reviews
        subset: v2
      weight: 25

DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: reviews
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN

Routing by the source pod's labels

With a sourceLabels match rule, traffic can be routed according to the labels of the source pod. Here the version label is used, i.e. requests are routed by the calling pod's application version.
Such routing rules are in fact applied by the sidecar of the calling side, as sketched below.
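A hypothetical sketch (the subset names and labels are assumptions for illustration, not taken from the original post): requests coming from caller pods labeled version: v1 are sent to the v2 subset, while all other traffic falls through to v1.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - match:
    - sourceLabels:
        version: v1        # label on the calling pod, matched by its sidecar
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                 # default route for all other callers
    - destination:
        host: reviews
        subset: v1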


Getting Started with bpftrace, a System-Level eBPF Tracing Tool
· ☕ 1 min

Introduction to bpftrace

Basic usage of bpftrace

List the kernel functions and tracepoints that can be traced, using open as the keyword:

$ bpftrace -l '*open*'

tracepoint:syscalls:sys_exit_open_tree
tracepoint:syscalls:sys_enter_open
...
kprobe:vfs_open
kprobe:tcp_try_fastopen
...

Trace all sys_enter_open() system calls:

$ bpftrace -e 'tracepoint:syscalls:sys_enter_open { printf("%s %s\n", comm, str(args->filename)); }' | grep vi

Then, in another terminal:

$ vi /etc/hosts

The corresponding output then appears in the bpftrace terminal.
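As a further sketch (an addition, not from the original post), the same tracepoint can be aggregated instead of printed, counting opens per process; bpftrace dumps the @opens map when interrupted with Ctrl-C:

$ bpftrace -e 'tracepoint:syscalls:sys_enter_open { @opens[comm] = count(); }'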


Kernel - Page Frame Reclaiming
· ☕ 4 min

From [Understanding The Linux Kernel]

Page frame reclaiming

As we saw earlier, Linux tends to use as much memory as possible for the page cache. This forces us to think about how to reclaim memory before it runs out. The problem is that the code doing the reclaiming may itself perform I/O, and may itself need memory.


Kernel - Pagecache
· ☕ 1 min

Introduction

Types of data stored in the page cache

  • Regular files
  • Directory data
  • Data read directly from a block device file
  • Data of user process memory that has been swapped out (the kernel can be forced to keep some swapped-out data in the page cache)
  • Memory pages belonging to special filesystems, such as the shm filesystem used for inter-process communication

How pages in the page cache are identified

Every page in the page cache belongs to a file. That file, or more precisely the file's inode, is called the owner of the page.


Kernel - Pagecache - Core
· ☕ 3 min

The address_space data structure

The core data structure of the page cache is address_space. In general, each inode (the in-memory data structure the kernel uses to hold a file's metadata; it can be seen as the description of a file) contains one address_space, sketched below.
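A simplified sketch of the structure, following the 2.6-era kernel the book describes (the exact field set and types vary across kernel versions):

struct address_space {
    struct inode           *host;       /* owning inode: the pages' owner */
    struct radix_tree_root  page_tree;  /* radix tree of all cached pages */
    spinlock_t              tree_lock;  /* protects the radix tree */
    unsigned long           nrpages;    /* total number of cached pages */
    const struct address_space_operations *a_ops;  /* readpage, writepage, ... */
};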


· ☕ 4 min

https://tenzir.com/blog/production-debugging-bpftrace-uprobes/
https://shaharmike.com/cpp/vtable-part1/

#include <iostream>

class Parent {
 public:
  virtual void Foo() {}
  virtual void FooNotOverridden() {}
};

class Derived : public Parent {
 public:
  void Foo() override {}
};

int main() {
  Parent p1, p2;
  Derived d1, d2;

  std::cout << "done" << std::endl;
}

$ # compile our code with debug symbols and start debugging using gdb
$ clang++ -std=c++14 -stdlib=libc++ -g main.cpp && gdb ./a.out
...
(gdb) # ask gdb to automatically demangle C++ symbols
(gdb) set print asm-demangle on
(gdb) set print demangle on
(gdb) # set breakpoint at main
(gdb) b main
Breakpoint 1 at 0x4009ac: file main.cpp, line 15.
(gdb) run
Starting program: /home/shmike/cpp/a.out

Breakpoint 1, main () at main.cpp:15
15	  Parent p1, p2;
(gdb) # skip to next line
(gdb) n
16	  Derived d1, d2;
(gdb) # skip to next line
(gdb) n
18	  std::cout << "done" << std::endl;
(gdb) # print p1, p2, d1, d2 - we'll talk about what the output means soon
(gdb) p p1
$1 = {_vptr$Parent = 0x400bb8 <vtable for Parent+16>}
(gdb) p p2
$2 = {_vptr$Parent = 0x400bb8 <vtable for Parent+16>}
(gdb) p d1
$3 = {<Parent> = {_vptr$Parent = 0x400b50 <vtable for Derived+16>}, <No data fields>}
(gdb) p d2
$4 = {<Parent> = {_vptr$Parent = 0x400b50 <vtable for Derived+16>}, <No data fields>}

Here’s what we learned from the above:


· ☕ 3 min

Terminology

  • Cluster: a logical service with a set of endpoints that Envoy forwards requests to.

  • Downstream: an entity connecting to Envoy. This may be a local application (in a sidecar model) or a network node. In non-sidecar models, this is a remote client.

  • Endpoints: network nodes that implement a logical service. They are grouped into clusters. Endpoints in a cluster are upstream of an Envoy proxy.

  • Filter: a module in the connection or request processing pipeline providing some aspect of request handling. An analogy from Unix is the composition of small utilities (filters) with Unix pipes (filter chains).
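To make these terms concrete, here is a minimal hypothetical sketch of an Envoy static cluster definition (the cluster name, address, and port are assumptions for illustration): Envoy forwards requests for this cluster to the single endpoint listed.

static_resources:
  clusters:
  - name: reviews_cluster              # Cluster: a logical service
    type: STRICT_DNS
    connect_timeout: 1s
    load_assignment:
      cluster_name: reviews_cluster
      endpoints:                       # Endpoints: nodes implementing the service
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: reviews.default.svc.cluster.local
                port_value: 9080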

