
bug: ai plugins log summaries and payloads even when logging options are set to false #13118

@mikyll

Description


Current Behavior

When using ai-proxy (and similarly ai-proxy-multi), setting:

  • logging.summaries = false
  • logging.payloads = false

does not fully disable AI request logging at info level in APISIX runtime logs (error.log).

I still see request-related log entries emitted from the AI driver at info level, including request metadata and payload-bearing structures (e.g. the API key in the Authorization: Bearer header, and the request body).

Although this does not occur at log levels higher than info, it differs from what the documentation states.
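For reference, this is the logging block set on the route's ai-proxy plugin in the failing case (it matches the matched-route dump in the logs below):

```yaml
ai-proxy:
  logging:
    summaries: false
    payloads: false
```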

Expected Behavior

According to the ai-proxy documentation, logging.summaries and logging.payloads default to false, so information about the requests should not be logged.

  • logging.summaries:

    If true, logs request LLM model, duration, request, and response tokens.

  • logging.payloads:

    If true, logs request and response payload.

Error Logs

With logging.summaries = true and logging.payloads = true:

2026/03/25 17:14:10 [info] 89#89: *2112 [lua] trusted-addresses.lua:46: is_trusted(): trusted_addresses_matcher is not initialized, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] ai.lua:243: handler(): use ai plane to match route, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] ai.lua:247: handler(): renew route cache: count=3001, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] radixtree.lua:493: common_route_data(): path: /chat/completions operator: =, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] ai.lua:77: match(): route match mode: ai_match, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] ai.lua:80: match(): route cache key: /chat/completions, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] radixtree_host_uri.lua:161: orig_router_http_matching(): route match mode: radixtree_host_uri, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] init.lua:733: http_access_phase(): matched route: {"clean_handlers":{},"orig_modifiedIndex":1774458846,"has_domain":false,"modifiedIndex":1774458846,"key":"/routes/ai_endpoint","value":{"plugins":{"ai-proxy":{"_meta":{},"keepalive_pool":30,"auth":{"header":{"Authorization":"Bearer myapikey"}},"logging":{"payloads":true,"summaries":true},"ssl_verify":true,"options":{"model":"gemini-2.5-flash"},"provider":"openai-compatible","override":{"endpoint":"https://generativelanguage.googleapis.com/v1beta/openai/chat/completions"},"timeout":30000,"keepalive_timeout":60000,"keepalive":true}},"uri":"/chat/completions","priority":0,"id":"ai_endpoint","status":1}}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] openai-base.lua:258: request extra_opts to LLM server: {"model_options":{"model":"gemini-2.5-flash"},"endpoint":"https://generativelanguage.googleapis.com/v1beta/openai/chat/completions","auth":{"header":{"Authorization":"Bearer myapikey"}},"conf":{}}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] openai-base.lua:336: sending request to LLM server: {"method":"POST","scheme":"https","ssl_verify":true,"headers":{"Authorization":"Bearer myapikey","Content-Type":"application/json"},"query":{},"port":443,"ssl_server_name":"generativelanguage.googleapis.com","host":"generativelanguage.googleapis.com","path":"/v1beta/openai/chat/completions","body":{"messages":[{"content":"Explain to me how AI works","role":"user"}],"model":"gemini-2.5-flash"}}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] client.lua:123: dns_parse(): dns resolve generativelanguage.googleapis.com, result: {"type":1,"name":"generativelanguage.googleapis.com","address":"172.217.23.74","ttl":96,"class":1,"section":1}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] resolver.lua:84: parse_domain(): parse addr: {"type":1,"name":"generativelanguage.googleapis.com","address":"172.217.23.74","ttl":96,"class":1,"section":1}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] resolver.lua:85: parse_domain(): resolver: ["10.89.29.1"], client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] resolver.lua:86: parse_domain(): host: generativelanguage.googleapis.com, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] resolver.lua:88: parse_domain(): dns resolver domain: generativelanguage.googleapis.com to 172.217.23.74, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] openai-base.lua:186: read_response(): got token usage from ai service: null, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 client 10.89.29.187 closed keepalive connection
[apisix]  | 10.89.29.187 - - [25/Mar/2026:17:14:10 +0000] localhost:9080 "POST /chat/completions HTTP/1.1" 400 595 0.111 "-" "curl/8.5.0" - - - "http://localhost:9080"

With logging.summaries = false and logging.payloads = false:

2026/03/25 17:14:40 [info] 63#63: *11948 [lua] trusted-addresses.lua:46: is_trusted(): trusted_addresses_matcher is not initialized, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] ai.lua:243: handler(): use ai plane to match route, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] ai.lua:247: handler(): renew route cache: count=3001, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] radixtree.lua:493: common_route_data(): path: /chat/completions operator: =, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] ai.lua:77: match(): route match mode: ai_match, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] ai.lua:80: match(): route cache key: /chat/completions, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] radixtree_host_uri.lua:161: orig_router_http_matching(): route match mode: radixtree_host_uri, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] init.lua:733: http_access_phase(): matched route: {"clean_handlers":{},"orig_modifiedIndex":1774458877,"has_domain":false,"modifiedIndex":1774458877,"key":"/routes/ai_endpoint","value":{"plugins":{"ai-proxy":{"_meta":{},"keepalive_pool":30,"auth":{"header":{"Authorization":"Bearer myapikey"}},"logging":{"payloads":false,"summaries":false},"ssl_verify":true,"options":{"model":"gemini-2.5-flash"},"provider":"openai-compatible","override":{"endpoint":"https://generativelanguage.googleapis.com/v1beta/openai/chat/completions"},"timeout":30000,"keepalive_timeout":60000,"keepalive":true}},"uri":"/chat/completions","priority":0,"id":"ai_endpoint","status":1}}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] openai-base.lua:258: request extra_opts to LLM server: {"model_options":{"model":"gemini-2.5-flash"},"endpoint":"https://generativelanguage.googleapis.com/v1beta/openai/chat/completions","auth":{"header":{"Authorization":"Bearer myapikey"}},"conf":{}}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] openai-base.lua:336: sending request to LLM server: {"method":"POST","scheme":"https","ssl_verify":true,"headers":{"Authorization":"Bearer myapikey","Content-Type":"application/json"},"query":{},"port":443,"ssl_server_name":"generativelanguage.googleapis.com","host":"generativelanguage.googleapis.com","path":"/v1beta/openai/chat/completions","body":{"messages":[{"content":"Explain to me how AI works","role":"user"}],"model":"gemini-2.5-flash"}}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] client.lua:123: dns_parse(): dns resolve generativelanguage.googleapis.com, result: {"type":1,"name":"generativelanguage.googleapis.com","address":"172.217.23.170","ttl":66,"class":1,"section":1}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] resolver.lua:84: parse_domain(): parse addr: {"type":1,"name":"generativelanguage.googleapis.com","address":"172.217.23.170","ttl":66,"class":1,"section":1}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] resolver.lua:85: parse_domain(): resolver: ["10.89.29.1"], client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] resolver.lua:86: parse_domain(): host: generativelanguage.googleapis.com, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] resolver.lua:88: parse_domain(): dns resolver domain: generativelanguage.googleapis.com to 172.217.23.170, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 [lua] openai-base.lua:186: read_response(): got token usage from ai service: null, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "4a65c35421c5a84506a1375d4f8b24a8"
2026/03/25 17:14:40 [info] 63#63: *11948 client 10.89.29.187 closed keepalive connection
[apisix]  | 10.89.29.187 - - [25/Mar/2026:17:14:40 +0000] localhost:9080 "POST /chat/completions HTTP/1.1" 400 595 0.119 "-" "curl/8.5.0" - - - "http://localhost:9080"

Notice that the following entries, which include the Authorization header with the API key as well as the full request body, appear in both cases:

2026/03/25 17:14:10 [info] 89#89: *2112 [lua] openai-base.lua:258: request extra_opts to LLM server: {"model_options":{"model":"gemini-2.5-flash"},"endpoint":"https://generativelanguage.googleapis.com/v1beta/openai/chat/completions","auth":{"header":{"Authorization":"Bearer myapikey"}},"conf":{}}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"
2026/03/25 17:14:10 [info] 89#89: *2112 [lua] openai-base.lua:336: sending request to LLM server: {"method":"POST","scheme":"https","ssl_verify":true,"headers":{"Authorization":"Bearer myapikey","Content-Type":"application/json"},"query":{},"port":443,"ssl_server_name":"generativelanguage.googleapis.com","host":"generativelanguage.googleapis.com","path":"/v1beta/openai/chat/completions","body":{"messages":[{"content":"Explain to me how AI works","role":"user"}],"model":"gemini-2.5-flash"}}, client: 10.89.29.187, server: _, request: "POST /chat/completions HTTP/1.1", host: "localhost:9080", request_id: "a96099056c57a2dfe01abd8b776708c8"

Steps to Reproduce

MRE (APISIX standalone):

  • File compose.yaml:

    services:
      apisix:
        container_name: apisix
        image: apache/apisix:3.15.0-ubuntu
        volumes:
          - ./apisix/conf/config.yaml:/usr/local/apisix/conf/config.yaml
          - ./apisix/conf/apisix.yaml:/usr/local/apisix/conf/apisix.yaml
        ports:
          - "9080:9080/tcp"
          - "9443:9443/tcp"
        networks:
          - apisix
    
      httpbin:
        container_name: httpbin
        image: kennethreitz/httpbin:latest
        networks:
          - apisix
    
    networks:
      apisix:
        driver: bridge
  • File conf/config.yaml:

    deployment:
      role: data_plane
      role_data_plane:
        config_provider: yaml
    
    nginx_config:
      error_log_level:  info
  • File conf/apisix.yaml:

    upstreams:
      - id: httpbin
        nodes:
          "httpbin:80": 1
        type: roundrobin
    
    routes:
      - id: ai_endpoint
        uri: /chat/completions
        plugins:
          ai-proxy:
            provider: openai-compatible
            auth:
              header:
                Authorization: Bearer myapikey
            options:
              model: gemini-2.5-flash
            override:
              endpoint: https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
    
    #END
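Note that the route above omits the logging block entirely, so both options fall back to their documented default of false. To make the disabled state explicit, the plugin section can also set them directly (same schema as in the matched-route dump above):

```yaml
plugins:
  ai-proxy:
    provider: openai-compatible
    logging:
      summaries: false
      payloads: false
```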

Run the containers (I use podman):

podman-compose up

Send a request (ignore the 400 response; it is not relevant to this example):

curl "localhost:9080/chat/completions" -H 'Content-Type: application/json' -d '{"messages":[{"role":"user","content":"Explain to me how AI works"}]}'

Have a look at the logs.
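A quick way to check whether the secret leaks is to grep the error log for the token. The sketch below is self-contained (it greps a sample line copied from the logs above, so the file path is not an assumption here); against a live deployment you would grep the container's error.log instead:

```shell
# Sample info-level line as emitted by openai-base.lua (copied from the logs above)
line='2026/03/25 17:14:40 [info] 63#63: *11948 [lua] openai-base.lua:336: sending request to LLM server: {"headers":{"Authorization":"Bearer myapikey","Content-Type":"application/json"}}'

# With logging.payloads = false this grep should find nothing; currently it still matches:
printf '%s\n' "$line" | grep -o 'Bearer myapikey'
```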

Environment

  • APISIX version (run apisix version): 3.15.0
  • Operating system (run uname -a): Ubuntu 24.04

Metadata


Labels

bug (Something isn't working), doc (Documentation things), plugin


Status

🏗 In progress
