[Repost] Dynamic A/B Testing with NGINX Plus

Source: https://www.nginx.com/blog/dynamic-a-b-testing-with-nginx-plus/


The key‑value store feature was introduced in NGINX Plus R13 for HTTP traffic and extended to TCP/UDP (stream) traffic in NGINX Plus R14. It provides an API for dynamically maintaining values that the NGINX Plus configuration can reference at runtime, without requiring a configuration reload. There are many possible use cases for this feature, and I have no doubt that our customers will find a variety of ways to take advantage of it.

This blog post describes one use case, dynamically altering how the Split Clients module is used to do A/B testing.

The Key-Value Store

The NGINX Plus API can be used to maintain a set of key‑value pairs which NGINX Plus can access at runtime. For example, let’s look at the use case where you want to keep a denylist of client IP addresses that are not allowed to access your site (or particular URLs).

The key is the client IP address, which is captured in the $remote_addr variable. The value is a variable named $denylist_status, set to 1 to indicate that the client IP address is denylisted and 0 to indicate that it is not.

To configure this, we follow these steps:

  • Set up a shared‑memory zone to store the key‑value pairs (the keyval_zone directive)
  • Give the zone a name
  • Specify the maximum amount of memory to allocate for it
  • Optionally, specify a state file to store the entries so they persist across NGINX Plus restarts

For the state file, we have previously created the /etc/nginx/state_files directory and made it writable by the unprivileged user that runs the NGINX worker processes (as defined by the user directive elsewhere in the configuration). Here we include the state parameter to the keyval_zone directive to create the file denylist.json for storing the key‑value pairs:

keyval_zone zone=denylist:64k 
            state=/etc/nginx/state_files/denylist.json;

In NGINX Plus R16 and later, we can take advantage of two additional key‑value features:

  • Set an expiration time for the entries in a key‑value store, by adding the timeout parameter to the keyval_zone directive. For example, to denylist addresses for two hours, add timeout=2h.
  • Synchronize the key‑value store across a cluster of NGINX Plus instances, by adding the sync parameter to the keyval_zone directive. You must also include the timeout parameter in this case.

So to expand our example to use a synchronized key‑value store of addresses that are denylisted for two hours, the directive becomes:

keyval_zone zone=denylist:64k timeout=2h sync
            state=/etc/nginx/state_files/denylist.json;

For detailed instructions on setting up synchronization for key-value stores, see the NGINX Plus Admin Guide.

Next we add the keyval directive to define the key‑value pair. We specify that the key is the client IP address ($remote_addr) and that the value is assigned to the $denylist_status variable:

keyval $remote_addr $denylist_status zone=denylist;
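With the pair defined, the $denylist_status variable can drive an access decision. Here is a minimal sketch of one way to do that; the server block and the my_backend upstream name are illustrative assumptions, not part of the example above:

```nginx
server {
    listen 80;

    # $denylist_status is "1" for denylisted addresses. An empty value or
    # "0" evaluates as false in an 'if' condition, so other clients pass.
    if ($denylist_status) {
        return 403;
    }

    location / {
        proxy_pass http://my_backend;  # hypothetical upstream
    }
}
```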

To create a pair in the key‑value store, use an HTTP POST request. For example:

# curl -iX POST -d '{"10.11.12.13":1}' http://localhost/api/3/http/keyvals/denylist

To modify the value in an existing key‑value pair, use an HTTP PATCH request. For example:

# curl -iX PATCH -d '{"10.11.12.13":0}' http://localhost/api/3/http/keyvals/denylist

To remove a key‑value pair, use an HTTP PATCH request to set the value to null. For example:

# curl -iX PATCH -d '{"10.11.12.13":null}' http://localhost/api/3/http/keyvals/denylist

Split Clients for A/B Testing

The Split Clients module allows you to split incoming traffic between upstream groups based on a request characteristic of your choice. You define the split as the percentage of incoming traffic to forward to the different upstream groups. A common use case is testing the new version of an application by sending a small proportion of traffic to it and the remainder to the current version. In our example, we’re sending 5% of the traffic to the upstream group for the new version, appversion2, and the remainder (95%) to the current version, appversion1.

We’re splitting the traffic based on the client IP address in the request, so we set the split_clients directive’s first parameter to the NGINX variable $remote_addr. With the second parameter we set the variable $upstream to the name of the upstream group.

Here’s the basic configuration:

split_clients $remote_addr $upstream {
    5% appversion2;
    *  appversion1;
}

upstream appversion1 {
   # ...
}

upstream appversion2 {
   # ...
}

server {
    listen 80;
    location / {
        proxy_pass http://$upstream;
    }
}
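Conceptually, split_clients hashes the variable's value (NGINX uses MurmurHash2) and maps fixed ranges of the hash space to the listed groups, so a given key consistently lands in the same group. The following is a rough Python sketch of that idea only, substituting MD5 for NGINX's actual hash:

```python
import hashlib

def split_clients(key: str, percent_v2: float = 5.0) -> str:
    """Map a request characteristic (e.g. a client IP) to an upstream group.

    Hashes the key into 10,000 buckets and sends roughly `percent_v2`
    percent of keys to appversion2, the rest to appversion1. Illustration
    only; NGINX's split_clients uses MurmurHash2, not MD5.
    """
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % 10000
    return "appversion2" if bucket < percent_v2 * 100 else "appversion1"

# The same key always maps to the same group:
assert split_clients("10.11.12.13") == split_clients("10.11.12.13")
```

Across many distinct keys the proportion sent to appversion2 approaches the configured percentage, which is why a single test client (one IP address, one hash bucket) never shows a split.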

Using the Key-Value Store with Split Clients

Prior to NGINX Plus R13, if you wanted to change the percentages for the split, you had to edit the configuration file and reload the configuration. Using the key‑value store, you simply change the percentage value stored in the key‑value pair and the split changes accordingly, without the need for a reload.

Building on the use case in the previous section, let’s say we have decided that we want NGINX Plus to support the following options for how much traffic gets sent to appversion2: 0%, 5%, 10%, 25%, 50%, and 100%. We also want to base the split on the Host header (captured in the NGINX variable $host). The following NGINX Plus configuration implements this functionality.

First we set up the key‑value store:

keyval_zone zone=split:64k state=/etc/nginx/state_files/split.json;
keyval      $host $split_level zone=split;

As mentioned for the initial use case, in an actual deployment it makes sense to base the split on a request characteristic like the client IP address, $remote_addr. In a simple test using a tool like curl, however, all the requests come from a single IP address, so there is no split to observe.

For the test, we instead base the split on a value that is more random: $request_id. To make it easy to transition the configuration from test to production, we create a new variable in the server block, $client_ip, setting it to $request_id for testing and to $remote_addr for production. Then we set up the split_clients configuration.

The variable for each split percentage ($split0 for 0%, $split5 for 5%, and so on) is set in a separate split_clients directive:

split_clients $client_ip $split0 {
    *   appversion1;
}
split_clients $client_ip $split5 {
    5%  appversion2;
    *   appversion1;
}
split_clients $client_ip $split10 {
    10% appversion2;
    *   appversion1;
}
split_clients $client_ip $split25 {
    25% appversion2;
    *   appversion1;
}
split_clients $client_ip $split50 {
    50% appversion2;
    *   appversion1;
}
split_clients $client_ip $split100 {
    *   appversion2;
}

Now that we have the key‑value store and split_clients configured, we can set up a map to set the $upstream variable to the upstream group specified in the appropriate split variable:

map $split_level $upstream {
    0        $split0;
    5        $split5;
    10       $split10;
    25       $split25;
    50       $split50;
    100      $split100;
    default  $split0;
}

Finally, we have the rest of the configuration for the upstream groups and the virtual server. Note that we have also configured the NGINX Plus API, which is used for the key‑value store, and the live activity monitoring dashboard, which is new in NGINX Plus R14:

upstream appversion1 {
    zone appversion1 64k;
    server 192.168.50.100;
    server 192.168.50.101;
}

upstream appversion2 {
    zone appversion2 64k;
    server 192.168.50.102;
    server 192.168.50.103;
}

server {
    listen 80;
    status_zone test;
    #set $client_ip $remote_addr; # Production
    set $client_ip $request_id; # For testing only

    location / {
        proxy_pass http://$upstream;
    }

    location /api {
        api write=on;
        # in production, directives restricting access
    }

    location = /dashboard.html {
        root /usr/share/nginx/html;
    }
}
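For production, the access restriction mentioned in the /api location's comment might look like the following; the 10.0.0.0/8 admin network is only an example, not a recommendation:

```nginx
location /api {
    api write=on;
    allow 10.0.0.0/8;  # example: permit only the admin network
    deny  all;         # reject everyone else
}
```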

Using this configuration, we can now control how traffic is split between the appversion1 and appversion2 upstream groups by sending an API request that sets the $split_level value for a hostname. For example, the following two requests send 5% of the traffic for www.example.com and 25% of the traffic for www2.example.com to the appversion2 upstream group:

# curl -iX POST -d '{"www.example.com":5}' http://localhost/api/3/http/keyvals/split
# curl -iX POST -d '{"www2.example.com":25}' http://localhost/api/3/http/keyvals/split

To change the value for www.example.com to 10:

# curl -iX PATCH -d '{"www.example.com":10}' http://localhost/api/3/http/keyvals/split

To clear a value:

# curl -iX PATCH -d '{"www.example.com":null}' http://localhost/api/3/http/keyvals/split

After each one of these requests, NGINX Plus immediately starts using the new split value.

Here is the full configuration file:

# Set up a key‑value store to specify the percentage to send to each
# upstream group based on the 'Host' header.
keyval_zone zone=split:64k state=/etc/nginx/state_files/split.json;
keyval      $host $split_level zone=split;

split_clients $client_ip $split0 {
    *   appversion1;
}
split_clients $client_ip $split5 {
    5%  appversion2;
    *   appversion1;
}
split_clients $client_ip $split10 {
    10% appversion2;
    *   appversion1;
}
split_clients $client_ip $split25 {
    25% appversion2;
    *   appversion1;
}
split_clients $client_ip $split50 {
    50% appversion2;
    *   appversion1;
}
split_clients $client_ip $split100 {
    *   appversion2;
}

map $split_level $upstream {
    0        $split0;
    5        $split5;
    10       $split10;
    25       $split25;
    50       $split50;
    100      $split100;
    default  $split0;
}

upstream appversion1 {
    zone appversion1 64k;
    server 192.168.50.100;
    server 192.168.50.101;
}

upstream appversion2 {
    zone appversion2 64k;
    server 192.168.50.102;
    server 192.168.50.103;
}

server {
    listen 80;
    status_zone test;

    # In each 'split_clients' block above, '$client_ip' controls which
    # application receives each request. For a production application, we set it
    # to '$remote_addr' (the client IP address). But when testing from just one
    # client, '$remote_addr' is always the same; to get some randomness, we set
    # it to '$request_id' instead.
    #set $client_ip $remote_addr; # Production
    set $client_ip $request_id;  # Testing only

    location / {
        proxy_pass http://$upstream;
    }

    # Configure the NGINX Plus API and dashboard. For production, add directives
    # to restrict access to the API, for example 'allow' and 'deny'.
    location /api {
        api write=on;
        # in production, directives restricting access
    }

    location = /dashboard.html {
        root /usr/share/nginx/html;
    }
}

Conclusion

This is just one example of what you can do with the key‑value store. You can use a similar approach for request‑rate limiting, bandwidth limiting, or connection limiting.

If you don’t already have NGINX Plus, start your free 30‑day trial and give it a try.
