
This article analyzes the implementation of the HTTP proxy main flow in Agentgateway, so that readers can understand how Agentgateway works as an L7 HTTP proxy. Agentgateway is essentially an HTTP proxy that adds support for stateful AI protocols (LLM/MCP/A2A) on top of HTTP; analyzing the main flow of the HTTP proxy layer is therefore analyzing the main flow of Agentgateway itself.
This article is excerpted from the "HTTP Proxy Main Flow" section of the open-source book "Agentgateway Insider" that I am writing, and has been reorganized and supplemented for publication. For more details, please refer to the book.
Agentgateway Introduction
Agentgateway is an open-source, cross-platform data plane designed for AI agent systems. It establishes secure, scalable, and maintainable bidirectional connections between agents, MCP tool servers, and LLM providers. It makes up for the deficiencies of traditional gateways in handling the MCP/A2A protocols (state management, long-lived sessions, asynchronous messaging, security, observability, multi-tenancy, and so on), providing enterprise-grade capabilities such as unified access, protocol upgrades, tool virtualization, authentication and permission control, traffic governance, and metrics and tracing. It also supports the Kubernetes Gateway API, dynamic configuration updates, and an embedded developer self-service portal, helping teams quickly build and scale agent-based AI environments. In my view, Agentgateway at its current stage is best described as an outbound bus (an external-dependency bus) for AI agent applications.
HTTP Proxy Analysis
Agentgateway Configuration File
The HTTP proxy main flow analyzed in this section is based on the following Agentgateway configuration file:
https://github.com/labilezhu/pub-diy/blob/main/ai/agentgateway/ag-dev/devcontainer-config.yaml
Trigger LLM Request
The request is a `curl -v` POST of a 104-byte OpenAI-compatible JSON chat payload (model `qwen-plus`) to `http://localhost:3100/compatible-mode/v1/chat/completions`, as the trace below shows.
Response:
* Host localhost:3100 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:3100...
* Connected to localhost (::1) port 3100
* using HTTP/1.x
> POST /compatible-mode/v1/chat/completions HTTP/1.1
> Host: localhost:3100
> User-Agent: curl/8.14.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 104
>
* upload completely sent off: 104 bytes
< HTTP/1.1 200 OK
< vary: Origin,Access-Control-Request-Method,Access-Control-Request-Headers, Accept-Encoding
< x-request-id: 120e5847-3394-923c-a494-8eb9f81cb36e
< x-dashscope-call-gateway: true
< content-type: application/json
< server: istio-envoy
< req-cost-time: 873
< req-arrive-time: 1766635349727
< resp-start-time: 1766635350601
< x-envoy-upstream-service-time: 873
< date: Thu, 25 Dec 2025 04:02:30 GMT
< transfer-encoding: chunked
<
* Connection #0 to host localhost left intact
{"model":"qwen-plus","usage":{"prompt_tokens":10,"completion_tokens":20,"total_tokens":30,"prompt_tokens_details":{"cached_tokens":0}},"choices":[{"message":{"content":"Hello! ٩(◕‿◕。)۶ How can I assist you today?","role":"assistant"},"finish_reason":"stop","index":0,"logprobs":null}],"object":"chat.completion","created":1766635351,"system_fingerprint":null,"id":"chatcmpl-120e5847-3394-923c-a494-8eb9f81cb36e"}
HTTP Proxy Main Flow Chart
1. L4 Connection Accept Flow Chart
Through VS Code debugging, we can see the HTTP proxy main flow as shown in the figure below:
Double-clicking a ⚓ icon in the figure jumps to the corresponding source code in a local VS Code; see the "Source Code Navigation Diagram Links to VSCode Source Code" section in the book.
It can be seen that the main HTTP proxy logic is placed in the `Gateway` struct. There are two key spawn points (see the sketch after this list):
- In `Gateway::run()`, a `Gateway::run_bind()` async future is spawned for each listening port. This task is responsible for listening on the port and accepting new connections.
- After `Gateway::run_bind()` accepts a new connection, it spawns a `Gateway::handle_tunnel()` async future for that connection. This task is responsible for handling all events on the connection.
- If the connection’s tunnel protocol is `Direct` (i.e., a direct connection), it calls `Gateway::proxy_bind()` to hand the connection over to the `HTTPProxy` module for processing.
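To make the structure concrete, here is a minimal, self-contained sketch of this two-level spawn pattern written directly against tokio. The method names mirror the ones above, but the bodies are simplified assumptions for illustration, not Agentgateway’s actual implementation:

```rust
use tokio::net::{TcpListener, TcpStream};

// Hypothetical, heavily simplified skeleton; the real Gateway carries
// configuration, drain/shutdown handles, metrics, and so on.
struct Gateway;

impl Gateway {
    // Gateway::run(): spawn one accept loop per configured listening port.
    async fn run(self, ports: Vec<u16>) {
        for port in ports {
            tokio::spawn(Self::run_bind(port));
        }
    }

    // Gateway::run_bind(): listen on the port and accept new connections,
    // spawning an independent task per connection.
    async fn run_bind(port: u16) {
        let listener = TcpListener::bind(("0.0.0.0", port))
            .await
            .expect("failed to bind listening port");
        loop {
            match listener.accept().await {
                Ok((stream, _peer)) => {
                    tokio::spawn(Self::handle_tunnel(stream));
                }
                Err(e) => eprintln!("accept error: {e}"),
            }
        }
    }

    // Gateway::handle_tunnel(): per-connection entry point.
    async fn handle_tunnel(stream: TcpStream) {
        // For a Direct tunnel, hand the stream to the HTTP proxy layer.
        Self::proxy_bind(stream).await;
    }

    // Gateway::proxy_bind(): hand-off to the HTTPProxy module (elided).
    async fn proxy_bind(_stream: TcpStream) {}
}

#[tokio::main]
async fn main() {
    Gateway.run(vec![3100]).await;
    // Keep the process alive so the spawned accept loops keep running.
    std::future::pending::<()>().await;
}
```

The important design point is that accept loops and per-connection handlers are independent tasks, so a slow or long-lived connection never blocks the acceptance of new ones.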
2. L7 HTTP Layer Flow
`Gateway::proxy()` calls the HTTP server module of `hyper-util` to read and interpret HTTP request headers. After interpretation is complete, it calls back to `HTTPProxy::proxy()`.
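For intuition, below is a minimal sketch of this hand-off pattern with `hyper-util`’s auto-negotiating server: the library parses each request and then invokes the service. The `proxy` function here is a hypothetical stand-in for `HTTPProxy::proxy()`, not Agentgateway’s real signature:

```rust
use std::convert::Infallible;

use http_body_util::Full;
use hyper::body::{Bytes, Incoming};
use hyper::service::service_fn;
use hyper::{Request, Response};
use hyper_util::rt::{TokioExecutor, TokioIo};
use hyper_util::server::conn::auto::Builder;
use tokio::net::TcpListener;

// Hypothetical stand-in for HTTPProxy::proxy(): by the time this runs,
// hyper-util has already read and parsed the request head.
async fn proxy(req: Request<Incoming>) -> Result<Response<Full<Bytes>>, Infallible> {
    println!("{} {}", req.method(), req.uri().path());
    Ok(Response::new(Full::new(Bytes::from("ok"))))
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:3100").await?;
    loop {
        let (stream, _) = listener.accept().await?;
        tokio::spawn(async move {
            // auto::Builder negotiates HTTP/1.1 vs HTTP/2 and calls the
            // service once per request on this connection.
            let _ = Builder::new(TokioExecutor::new())
                .serve_connection(TokioIo::new(stream), service_fn(proxy))
                .await;
        });
    }
}
```

Note that `serve_connection()` is per-connection, which matches the per-connection task spawned in `Gateway::handle_tunnel()` above.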
3. L8 AI Proxy Route Layer
`HTTPProxy::proxy_internal()` executes the various policies and routes, until `HTTPProxy::attempt_upstream()` initiates a call to the upstream (in the current configuration, the LLM AI provider backend).
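As a rough mental model of this layer, the sketch below selects a route, applies its policies in order, and then hands the chosen backend to the upstream-call step. All types here are hypothetical simplifications; only the `proxy_internal()`/`attempt_upstream()` names come from the source:

```rust
// Hypothetical simplifications; agentgateway's real route and policy
// types are far richer than this.
struct Backend { name: &'static str }
enum Policy { Authn, HeaderRewrite }
struct Route {
    path_prefix: &'static str,
    policies: Vec<Policy>,
    backend: Backend,
}

// Roughly the shape of HTTPProxy::proxy_internal(): select a route,
// run its policies in order, then hand the backend to attempt_upstream().
fn proxy_internal<'a>(routes: &'a [Route], path: &str) -> Result<&'a Backend, &'static str> {
    // 1. Route selection: first route whose prefix matches the request path.
    let route = routes
        .iter()
        .find(|r| path.starts_with(r.path_prefix))
        .ok_or("no matching route")?;
    // 2. Policy execution: each policy may mutate or reject the request.
    for policy in &route.policies {
        match policy {
            Policy::Authn => { /* verify credentials, or return Err(...) */ }
            Policy::HeaderRewrite => { /* mutate request headers */ }
        }
    }
    // 3. The selected backend goes to attempt_upstream() for the actual call.
    Ok(&route.backend)
}

fn main() {
    let routes = vec![Route {
        path_prefix: "/compatible-mode/",
        policies: vec![Policy::Authn, Policy::HeaderRewrite],
        backend: Backend { name: "llm-provider" },
    }];
    match proxy_internal(&routes, "/compatible-mode/v1/chat/completions") {
        Ok(b) => println!("attempt_upstream -> {}", b.name),
        Err(e) => println!("error: {e}"),
    }
}
```

Policies run after route selection because they are attached per-route; a request that matches no route is rejected before any policy executes.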
4. Upstream (Backend) Call
`HTTPProxy::make_backend_call()` calls the HTTP client module of `hyper-util` to build and send HTTP requests to the upstream. This path includes connection pool management logic.
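`hyper-util`’s "legacy" client maintains an internal keep-alive connection pool, which is one way to get the pooling behavior described above. Below is a minimal sketch of such an upstream call; the URI, headers, and body are examples only, not what Agentgateway actually sends:

```rust
use http_body_util::Full;
use hyper::body::Bytes;
use hyper::Request;
use hyper_util::client::legacy::Client;
use hyper_util::rt::TokioExecutor;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The legacy hyper-util client keeps an internal keep-alive connection
    // pool, so repeated calls to the same upstream reuse connections.
    let client: Client<_, Full<Bytes>> =
        Client::builder(TokioExecutor::new()).build_http();

    // Example request only: in agentgateway, the URI, headers, and body are
    // assembled by HTTPProxy::make_backend_call() from the route config.
    let req = Request::post("http://127.0.0.1:8080/v1/chat/completions")
        .header("content-type", "application/json")
        .body(Full::new(Bytes::from(r#"{"model":"qwen-plus"}"#)))?;

    let resp = client.request(req).await?;
    println!("upstream status: {}", resp.status());
    Ok(())
}
```

Because the pool is keyed by upstream, repeated requests through the same route reuse TCP connections instead of re-handshaking for every request.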