2 changes: 1 addition & 1 deletion .formatter.exs
@@ -3,7 +3,7 @@
"mix.exs",
".formatter.exs",
"config/*.exs",
"lib/**/*.ex"
"{lib,test}/**/*.{ex,exs}"
],
line_length: 120,
plugins: [Styler],
61 changes: 61 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,61 @@
name: Elixir CI

on:
push:
branches: ["master"]
pull_request:
branches: ["master"]

permissions:
contents: read

jobs:
build:
name: Build and test
runs-on: ubuntu-latest
services:
memcached:
image: memcached:alpine
ports:
- 11211:11211
strategy:
matrix:
include:
- elixir: "1.17"
otp: "27"
- elixir: "1.19"
otp: "28"
lint: true
steps:
- name: Checkout code
uses: actions/checkout@v4

- name: Set up Elixir
uses: erlef/setup-beam@v1
with:
otp-version: ${{matrix.otp}}
elixir-version: ${{matrix.elixir}}

- name: Restore dependencies cache
uses: actions/cache@v3
with:
path: deps
key: ${{ runner.os }}-mix-${{ hashFiles('**/mix.lock') }}
restore-keys: ${{ runner.os }}-mix-

- name: Install dependencies
run: mix deps.get

- name: Compile
run: mix compile --warnings-as-errors

- name: Run tests
run: mix test

- name: Check that mix.lock has no unused dependencies
run: mix deps.unlock --check-unused
if: ${{ matrix.lint }}

- name: Check that files are formatted
run: mix format --check-formatted
if: ${{ matrix.lint }}
13 changes: 7 additions & 6 deletions README.md
@@ -1,18 +1,19 @@
# ex_limiter
Rate Limiter written in elixir with configurable backends

Implements leaky bucket rate limiting ([wiki](https://en.wikipedia.org/wiki/Leaky_bucket)), which is superior to most naive approaches by handling bursts even around time windows. You can define your own storage backend by implementing the `ExLimiter.Storage` behaviour, and configuring it with
Rate limiter written in Elixir with configurable backends.

Implements leaky bucket rate limiting ([wiki](https://en.wikipedia.org/wiki/Leaky_bucket)), which is superior to most naive approaches by handling bursts even around time windows. You can define your own storage backend by implementing the `ExLimiter.Storage` behaviour, and configuring it with

```elixir
config :ex_limiter, :storage, MyStorage
```

usage once configured is:
Usage once configured is:

```elixir
case ExLimiter.consume(bucket, 1, scale: 1000, limit: 5) do
{:ok, bucket} -> #do some work
{:error, :rate_limited} -> #fail
{:ok, bucket} -> # do some work
{:error, :rate_limited} -> # fail
end
```

@@ -32,4 +33,4 @@ ExLimiter also ships with a simple plug implementation. Usage is
plug ExLimiter.Plug, scale: 5000, limit: 20
```

You can also configure how the bucket is inferred from the given conn, how many tokens to consume and what limiter to use.
You can also configure how the bucket is inferred from the given `conn`, how many tokens to consume and what limiter to use.
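For illustration, a minimal sketch of wrapping the limiter in a per-user helper. The module name and bucket-naming scheme are hypothetical; `consume/3` and its `{:ok, bucket} | {:error, :rate_limited}` return shape come from the README above, and `remaining/2` appears in the plug changes further down.

```elixir
defmodule MyApp.ApiThrottle do
  # One bucket per user: at most 5 requests per second.
  @opts [scale: 1_000, limit: 5]

  def check(user_id) do
    case ExLimiter.consume("api:#{user_id}", 1, @opts) do
      {:ok, bucket} -> {:allow, ExLimiter.remaining(bucket, @opts)}
      {:error, :rate_limited} -> :deny
    end
  end
end
```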
10 changes: 0 additions & 10 deletions config/config.exs

This file was deleted.

29 changes: 13 additions & 16 deletions lib/ex_limiter.ex
@@ -1,29 +1,26 @@
defmodule ExLimiter do
@moduledoc """
Configurable, leaky bucket rate limiting. You can define your own storage backend by
Configurable, leaky bucket rate limiting.

You can define your own storage backend by
implementing the `ExLimiter.Storage` behaviour, and configuring it with

```
config :ex_limiter, :storage, MyStorage
```
config :ex_limiter, :storage, MyStorage


usage once configured is:

```
case ExLimiter.consume(bucket, 1, scale: 1000, limit: 5) do
{:ok, bucket} -> #do some work
{:error, :rate_limited} -> #fail
end
```
case ExLimiter.consume(bucket, 1, scale: 1000, limit: 5) do
{:ok, bucket} -> # do some work
{:error, :rate_limited} -> # fail
end

Additionally, if you want to have multiple rate limiters with diverse backend implementations,
you can use the `ExLimiter.Base` macro, like so:

```
defmodule MyLimiter do
use ExLimiter.Base, storage: MyStorage
end
```
defmodule MyLimiter do
use ExLimiter.Base, storage: MyStorage
end
"""
use ExLimiter.Base, storage: Application.get_env(:ex_limiter, :storage)
use ExLimiter.Base, storage: Application.compile_env(:ex_limiter, :storage, ExLimiter.Storage.Memcache)
end
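To make the point about multiple limiters concrete, a sketch that pairs a second limiter with the `ExLimiter.Storage.PG2Shard` backend changed later in this PR; the application modules are illustrative, and the sharded backend assumes its shard supervisor is running (see its moduledoc below).

```elixir
defmodule MyApp.ShardedLimiter do
  use ExLimiter.Base, storage: ExLimiter.Storage.PG2Shard
end

defmodule MyApp.Uploads do
  # 100 uploads per user per minute, tracked on the PG2-sharded backend.
  def allow?(user_id) do
    case MyApp.ShardedLimiter.consume("uploads:#{user_id}", 1, scale: 60_000, limit: 100) do
      {:ok, _bucket} -> true
      {:error, :rate_limited} -> false
    end
  end
end
```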
39 changes: 22 additions & 17 deletions lib/ex_limiter/base.ex
@@ -1,15 +1,14 @@
defmodule ExLimiter.Base do
@moduledoc """
Base module for arbitrary rate limiter implementations. Usage is:
Base module for arbitrary rate limiter implementations.

```
defmodule MyLimiter do
use ExLimiterBase, storage: MyCustomStorage
end
```
Usage is:

defmodule MyLimiter do
use ExLimiter.Base, storage: MyCustomStorage
end
"""
alias ExLimiter.Bucket
alias ExLimiter.Utils

defmacro __using__(storage: storage) do
quote do
@@ -28,12 +27,14 @@ defmodule ExLimiter.Base do
Consumes `amount` from the rate limiter aliased by bucket.

`opts` params are:

* `:limit` - the maximum amount for the rate limiter (default 10)
* `:scale` - the duration under which `:limit` applies in milliseconds
"""
@spec consume(bucket :: binary, amount :: integer, opts :: keyword) :: {:ok, Bucket.t()} | {:error, :rate_limited}
def consume(bucket, amount \\ 1, opts \\ []), do: consume(@storage, bucket, amount, opts)

@doc "Deletes the bucket from the storage"
def delete(bucket), do: @storage.delete(%Bucket{key: bucket})
end
end
@@ -51,17 +52,21 @@ defmodule ExLimiter.Base do

storage.leak_and_consume(
bucket,
fn %Bucket{value: value, last: time} = b ->
now = Utils.now()
amount = max(value - (now - time), 0)

%{b | last: now, value: amount}
end,
fn
%Bucket{value: v} = b when v + incr <= scale -> b
_ -> {:error, :rate_limited}
end,
&__MODULE__.update/1,
&__MODULE__.boundary(&1, incr, scale),
incr
)
end

@doc false
def update(%Bucket{value: value, last: time} = b) do
now = System.system_time(:millisecond)
amount = max(value - (now - time), 0)

%{b | last: now, value: amount}
end

@doc false
def boundary(%Bucket{value: v} = b, incr, scale) when v + incr <= scale, do: b
def boundary(_, _, _), do: {:error, :rate_limited}
end
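The refactor above pulls the inline anonymous functions out into the named `update/1` (leak) and `boundary/3` (limit check) helpers. A worked illustration of their semantics, with made-up numbers and the assumption that both helpers are public on `ExLimiter.Base`, as the `def` clauses suggest:

```elixir
alias ExLimiter.Bucket

# Leak: the stored value decays by one unit per elapsed millisecond, floored at zero.
stale = %Bucket{key: "k", value: 800, last: System.system_time(:millisecond) - 1_000}
%Bucket{value: 0} = ExLimiter.Base.update(stale)

# Boundary: consuming `incr` is allowed only while value + incr stays within `scale`.
%Bucket{value: 100} = ExLimiter.Base.boundary(%Bucket{value: 100}, 200, 1_000)
{:error, :rate_limited} = ExLimiter.Base.boundary(%Bucket{value: 900}, 200, 1_000)
```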
4 changes: 1 addition & 3 deletions lib/ex_limiter/bucket.ex
@@ -1,15 +1,13 @@
defmodule ExLimiter.Bucket do
@moduledoc false
alias ExLimiter.Utils

@type t :: %__MODULE__{}

defstruct key: nil,
value: 0,
last: nil,
version: %{}

def new(key), do: %__MODULE__{key: key, last: Utils.now()}
def new(key), do: %__MODULE__{key: key, last: System.system_time(:millisecond)}

def new(contents, key) when is_map(contents) do
struct(__MODULE__, Map.put(contents, :key, key))
30 changes: 13 additions & 17 deletions lib/ex_limiter/plug.ex
@@ -1,10 +1,10 @@
defmodule ExLimiter.Plug do
@moduledoc """
Plug for enforcing rate limits. The usage should be something like
Plug for enforcing rate limits.

```
plug ExLimiter.Plug, scale: 1000, limit: 5
```
The usage should be something like

plug ExLimiter.Plug, scale: 1000, limit: 5

Additionally, you can pass the following options:

@@ -25,27 +25,25 @@ defmodule ExLimiter.Plug do

Additionally, you can configure a custom limiter with

```
config :ex_limiter, ExLimiter.Plug, limiter: MyLimiter
```
config :ex_limiter, ExLimiter.Plug, limiter: MyLimiter

and you can also configure the rate limited response with

```
config :ex_limiter, ExLimiter.Plug, fallback: MyFallback
```
config :ex_limiter, ExLimiter.Plug, fallback: MyFallback

`MyFallback` needs to implement a function `render_error(conn, :rate_limited)`
"""
import Plug.Conn

@limiter Application.get_env(:ex_limiter, __MODULE__)[:limiter]
@compile_opts Application.compile_env(:ex_limiter, __MODULE__, [])
@limiter @compile_opts[:limiter] || ExLimiter

defmodule Config do
@moduledoc false
@limit Application.get_env(:ex_limiter, ExLimiter.Plug)[:limit]
@scale Application.get_env(:ex_limiter, ExLimiter.Plug)[:scale]
@fallback Application.get_env(:ex_limiter, ExLimiter.Plug)[:fallback]
@compile_opts Application.compile_env(:ex_limiter, ExLimiter.Plug, [])
@limit @compile_opts[:limit] || 10
@scale @compile_opts[:scale] || 1000
@fallback @compile_opts[:fallback] || ExLimiter.Plug

defstruct scale: @scale,
limit: @limit,
@@ -90,9 +88,7 @@ defmodule ExLimiter.Plug do
}) do
bucket_name = bucket_fun.(conn)

bucket_name
|> @limiter.consume(consume_fun.(conn), scale: scale, limit: limit)
|> case do
case @limiter.consume(bucket_name, consume_fun.(conn), scale: scale, limit: limit) do
{:ok, bucket} = response ->
remaining = @limiter.remaining(bucket, scale: scale, limit: limit)

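Per the moduledoc above, a custom fallback only has to implement `render_error(conn, :rate_limited)`. A minimal sketch, with the module name and response body chosen for illustration:

```elixir
defmodule MyApp.RateLimitFallback do
  import Plug.Conn

  # Called by ExLimiter.Plug when consume/3 returns {:error, :rate_limited}.
  def render_error(conn, :rate_limited) do
    conn
    |> put_resp_content_type("application/json")
    |> send_resp(429, ~s({"error":"rate_limited"}))
    |> halt()
  end
end
```

It would then be wired up with `config :ex_limiter, ExLimiter.Plug, fallback: MyApp.RateLimitFallback`.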
10 changes: 6 additions & 4 deletions lib/ex_limiter/storage.ex
@@ -26,15 +26,17 @@ defmodule ExLimiter.Storage do
@callback fetch(bucket :: Bucket.t()) :: Bucket.t()

@doc """
Set the current state of the given bucket. Specify hard if you want to
force a write
Set the current state of the given bucket.

Specify `:hard` if you want to force a write.
"""
@callback refresh(bucket :: Bucket.t()) :: response
@callback refresh(bucket :: Bucket.t(), type :: :hard | :soft) :: response

@doc """
Atomically update the bucket denoted by `key` with `fun`. Leverage whatever
concurrency controls are available in the given storage mechanism (eg cas for memcached)
Atomically update the bucket denoted by `key` with `fun`.

Leverage whatever concurrency controls are available in the given storage mechanism (e.g. CAS for memcached).
"""
@callback update(key :: binary, fun :: (Bucket.t() -> Bucket.t())) :: Bucket.t()

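For orientation, a sketch of a custom backend that stubs only the callbacks visible in this hunk (`fetch/1`, `refresh/1`, `refresh/2`, `update/2`); the rest of the PR also calls `delete/1` and `leak_and_consume/4`, which a real backend would need too. The ETS table, the return shapes, and the lack of real concurrency control are all illustrative.

```elixir
defmodule MyApp.EtsStorage do
  @moduledoc "Sketch only: a single-node, ETS-backed storage."
  use ExLimiter.Storage

  alias ExLimiter.Bucket

  @table :ex_limiter_buckets

  def fetch(%Bucket{key: key} = bucket) do
    ensure_table()

    case :ets.lookup(@table, key) do
      [{^key, stored}] -> stored
      [] -> bucket
    end
  end

  def refresh(bucket), do: refresh(bucket, :soft)

  def refresh(%Bucket{key: key} = bucket, _type) do
    ensure_table()
    :ets.insert(@table, {key, bucket})
    bucket
  end

  # NB: lookup-then-insert is not atomic; a real backend would serialize
  # updates (e.g. through a process or CAS) as the behaviour doc asks.
  def update(key, fun) when is_binary(key) and is_function(fun, 1) do
    key |> Bucket.new() |> fetch() |> fun.() |> refresh(:hard)
  end

  defp ensure_table do
    if :ets.whereis(@table) == :undefined do
      :ets.new(@table, [:named_table, :public, :set])
    end

    :ok
  end
end
```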
11 changes: 7 additions & 4 deletions lib/ex_limiter/storage/memcache.ex
@@ -5,8 +5,6 @@ defmodule ExLimiter.Storage.Memcache do
"""
use ExLimiter.Storage

alias ExLimiter.Utils

def fetch(%Bucket{key: key}) do
key_map = keys(key)

@@ -88,12 +86,17 @@ defmodule ExLimiter.Storage.Memcache do

defp add_result(%{version: versions} = acc, bucket_key, {val, cas}) do
acc
|> Map.put(bucket_key, Utils.parse_integer(val))
|> Map.put(bucket_key, parse_integer(val))
|> Map.put(:version, Map.put(versions, bucket_key, cas))
end

defp add_result(acc, bucket_key, _), do: add_result(acc, bucket_key, default(bucket_key))

defp default(:value), do: {0, 0}
defp default(:last), do: {Utils.now(), 0}
defp default(:last), do: {System.system_time(:millisecond), 0}

def parse_integer(val) when is_binary(val), do: val |> Integer.parse() |> parse_integer()
def parse_integer(val) when is_integer(val), do: val
def parse_integer(:error), do: :error
def parse_integer({val, _}), do: val
end
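For reference, the inlined `parse_integer/1` above behaves like this (values chosen for illustration):

```elixir
7 = ExLimiter.Storage.Memcache.parse_integer(7)
42 = ExLimiter.Storage.Memcache.parse_integer("42")
:error = ExLimiter.Storage.Memcache.parse_integer("not a number")
```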
11 changes: 3 additions & 8 deletions lib/ex_limiter/storage/pg2_shard.ex
@@ -5,18 +5,13 @@ defmodule ExLimiter.Storage.PG2Shard do

To configure the pool size, do:

```
config :ex_limit, ExLimiter.Storage.PG2Shard,
shard_count: 20
```
config :ex_limiter, ExLimiter.Storage.PG2Shard,
shard_count: 20

You must also include the shard supervisor in your app supervision tree, with
something like:

```
...
supervise(ExLimiter.Storage.PG2Shard.Supervisor, [])
```
supervise(ExLimiter.Storage.PG2Shard.Supervisor, [])
"""
use ExLimiter.Storage

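The moduledoc above still shows the old `supervise/2` helper; on current Elixir the equivalent wiring would look roughly like the sketch below, assuming `ExLimiter.Storage.PG2Shard.Supervisor` exposes a standard `child_spec/1`.

```elixir
children = [
  ExLimiter.Storage.PG2Shard.Supervisor
]

Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
```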