19 changes: 6 additions & 13 deletions README.md
@@ -12,6 +12,12 @@ Visit the [documentation](https://opes.pages.dev) for detailed insights on OPES.

---

## Project Methodology

This project follows an Agile development approach. Every feature is designed to be extensible, exploratory and open to modification as the system evolves. Each GitHub commit represents a usable and coherent version of OPES. While not every commit is feature-complete or fully refined, each serves as a stable minimum viable product and a reliable snapshot of progress. Features marked as *experimental* are subject to active evaluation and will be either validated and promoted or removed entirely based on feasibility and empirical performance.

---

## Disclaimer

The information provided by OPES is for educational, research and informational purposes only. It is not intended as financial, investment or legal advice. Users should conduct their own due diligence and consult with licensed financial professionals before making any investment decisions. OPES and its contributors are not liable for any financial losses or decisions made based on this content. Past performance is not indicative of future results.
@@ -192,16 +198,3 @@ GOOG, AAPL, AMZN, MSFT
```

The price data is stored in the `prices.csv` file within the `tests/` directory. The number of tickers is limited to 4 because computationally heavy portfolio objectives (like `UniversalPortfolios`) are included, which may take an eternity to test with more tickers.

Also, it eats up RAM like Pac-Man.

---

## Upcoming Features (Unconfirmed)

These features are still in the works and may or may not appear in later updates:

| **Objective Name (Category)** |
| ------------------------------------------------ |
| Online Newton Step (Online Learning) |
| ADA-BARRONS (Online Learning) |
63 changes: 39 additions & 24 deletions docs/docs/backtesting.md
@@ -71,33 +71,33 @@ It also stores transaction cost parameters for portfolio simulations.
```python
def backtest(
optimizer,
rebalance_freq=1,
reopt_freq=1,
seed=100,
weight_bounds=None,
clean_weights=False
)
```

Execute a portfolio backtest over the test dataset using a given optimizer.

This method performs a walk-forward backtest using the user-defined `rebalance_freq`
and `reopt_freq`. It also applies transaction costs and ensures no lookahead bias.
For a rolling backtest, any duplicate date values are dropped; the first occurrence
is considered original and kept.

!!! warning "Warning:"
Some online learning methods such as `ExponentialGradient` update weights based
on the most recent observations. Setting `reopt_freq` to any value other
than `1` may result in suboptimal performance, as intermediate data points will
be ignored and not used for weight updates.
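
For instance, a sketch of pairing an online learner with `reopt_freq=1`, assuming `ExponentialGradient` follows the same `opes.objectives` import pattern as the other objectives in these docs and that its constructor defaults are sensible:

```python
from opes.objectives import ExponentialGradient  # import path assumed
from opes import Backtester

# Placeholder for your price data
from some_random_module import trainData, testData

eg = ExponentialGradient()  # constructor arguments assumed to default sensibly
tester = Backtester(train_data=trainData, test_data=testData)

# reopt_freq=1 lets the online learner update on every observation
eg_backtest = tester.backtest(optimizer=eg, rebalance_freq=1, reopt_freq=1)
```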

**Args:**

- `optimizer`: An optimizer object containing the optimization strategy. Accepts both OPES built-in objectives and externally constructed optimizer objects.
- `rebalance_freq` (*int, optional*): Frequency of rebalancing in time steps. Must be `>= 1`. Defaults to `1`.
- `reopt_freq` (*int, optional*): Frequency of re-optimization in time steps. Must be `>= 1`. Defaults to `1`.
- `seed` (*int or None, optional*): Random seed for reproducible cost simulations. Defaults to `100`.
- `weight_bounds` (*tuple, optional*): Bounds for portfolio weights passed to the optimizer if supported.

!!! abstract "Rules for `optimizer` Object"
@@ -107,33 +107,44 @@
- `**kwargs`: For safety against breaking changes.
- `optimize` must output weights for the timestep.
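
As a sketch, a custom optimizer satisfying these rules could look like the following; the exact keyword arguments the backtester passes are not shown in this excerpt, so the `train_data` name below is an assumption:

```python
import numpy as np

class EqualWeightOptimizer:
    """Hypothetical external optimizer obeying the rules above (sketch only)."""

    def optimize(self, **kwargs):
        # `**kwargs` absorbs whatever the backtester passes,
        # guarding against breaking changes in its call signature.
        data = kwargs.get("train_data")  # assumed kwarg name
        n_assets = data.shape[1] if data is not None else 4
        # Output weights for the timestep: a uniform allocation
        return np.full(n_assets, 1.0 / n_assets)
```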

!!! note "Note"
- Re-optimization does not automatically imply rebalancing. When the portfolio is re-optimized at a given timestep, weights may or may not be updated depending on the value of `rebalance_freq`.
- To ensure a coherent backtest, a common practice is to choose frequencies such that `reopt_freq % rebalance_freq == 0`. This guarantees that whenever optimization occurs, a rebalance is also performed.
- Within a given timestep, when optimization is scheduled, rebalancing (if it occurs) is performed after optimization.

!!! tip "Tip"
Common portfolio styles can be constructed by appropriate choices of `rebalance_freq` and `reopt_freq`, as illustrated in the sketch after this tip:

- Buy-and-Hold: `rebalance_freq > horizon`, `reopt_freq > horizon`
- Constantly Rebalanced: `rebalance_freq = 1`, `reopt_freq > horizon`
- Fully Dynamic: `rebalance_freq = 1`, `reopt_freq = 1`
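
A minimal sketch of these three styles, assuming a `tester` and an objective `opt` constructed as in the example further below, with `testData` as the placeholder test set:

```python
# Horizon is the length of the test dataset
horizon = len(testData)

# Buy-and-Hold: weights are set once and never touched again
bh = tester.backtest(optimizer=opt, rebalance_freq=horizon + 1, reopt_freq=horizon + 1)

# Constantly Rebalanced: rebalance back to the initial weights every step
crp = tester.backtest(optimizer=opt, rebalance_freq=1, reopt_freq=horizon + 1)

# Fully Dynamic: re-optimize and rebalance at every step
dyn = tester.backtest(optimizer=opt, rebalance_freq=1, reopt_freq=1)
```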

**Returns:**

- `dict`: Backtest results containing the following keys:
- `'returns'` (*np.ndarray*): Portfolio returns after accounting for costs.
- `'weights'` (*np.ndarray*): Portfolio weights at each timestep.
- `'costs'` (*np.ndarray*): Transaction costs applied at each timestep.
- `'timeline'` (*np.ndarray*): Timeline on which the backtest was conducted.

**Raises:**

- `DataError`: If the optimizer does not accept weight bounds but `weight_bounds` are provided.
- `PortfolioError`: If input validation fails (via `_backtest_integrity_check`).
- `OptimizationError`: If the underlying optimizer performs optimization and the optimization fails.


!!! note "Notes:"
- All returned arrays are aligned in time and have length equal to the test dataset.
- Returns and weights are stored in arrays aligned with test data indices.

!!! example "Example:"
```python
import numpy as np

# Importing necessary OPES modules
from opes.objectives import Kelly
from opes import Backtester

# Placeholder for your price data
from some_random_module import trainData, testData
@@ -149,7 +160,11 @@
tester = Backtester(train_data=training, test_data=testing)

# Obtaining backtest data for kelly optimizer
kelly_backtest = tester.backtest(
optimizer=kelly_optimizer,
rebalance_freq=1, # Rebalance daily
reopt_freq=21 # Re-optimize monthly
)

# Printing results
for key in kelly_backtest:
@@ -214,8 +229,8 @@ commonly used in finance, including volatility, drawdowns and tail risk metrics.
!!! example "Example:"
```python
# Importing portfolio method and backtester
from opes.objectives import MaxSharpe
from opes import Backtester

# Placeholder for your price data
from some_random_module import trainData, testData
@@ -280,8 +295,8 @@ a file.
!!! example "Example:"
```python
# Importing portfolio methods and backtester
from opes.objectives import MaxMean, MeanVariance
from opes import Backtester

# Placeholder for your price data
from some_random_module import trainData, testData
@@ -297,9 +312,9 @@
# Initializing Backtest with constant costs
tester = Backtester(train_data=training, test_data=testing)

# Obtaining returns array from backtest for both optimizers
scenario_1 = tester.backtest(optimizer=maxmeanl2)
scenario_2 = tester.backtest(optimizer=mvo1_5)['returns']

# Plotting wealth
tester.plot_wealth(
10 changes: 8 additions & 2 deletions docs/docs/examples/good_strategy.md
@@ -95,10 +95,16 @@ tester = Backtester(train_data=train, test_data=test, cost={'gamma' : (5, 1)})

# Obtaining returns
# For now, weights and costs don't matter, so we discard them
return_scenario = tester.backtest(
optimizer=mvo_ra08,
rebalance_freq=1,
reopt_freq=1,
clean_weights=True,
seed=100
)['returns']
```

We use `rebalance_freq=1` and `reopt_freq=1` so we can see how the portfolio adapts to changes quickly. `seed=100` guarantees reproducibility, and gamma slippage captures asymmetric execution costs where extreme liquidity events are rare but painful. After obtaining `return_scenario` we can get the metrics and plot wealth.
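
As a sketch of the metrics step, simple statistics can be computed directly from the returned array with numpy (this avoids assuming any `opes` metric method names not shown in this excerpt):

```python
import numpy as np

# Per-period returns -> cumulative wealth curve
wealth = np.cumprod(1 + return_scenario)

# Annualized volatility, assuming 252 trading days per year
ann_vol = np.std(return_scenario) * np.sqrt(252)

# Maximum drawdown from the wealth curve
running_peak = np.maximum.accumulate(wealth)
max_drawdown = np.min(wealth / running_peak - 1)

print(f"Final wealth: {wealth[-1]:.3f}")
print(f"Annualized volatility: {ann_vol:.3%}")
print(f"Maximum drawdown: {max_drawdown:.3%}")
```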

---

10 changes: 5 additions & 5 deletions docs/docs/examples/if_you_knew_the_future.md
@@ -90,14 +90,14 @@ The in-sample backtester can be constructed by enforcing `train_data=test` as we
# In-sample backtester
# zero-cost backtesting
tester_in_sample = Backtester(train_data=test, test_data=test, cost={'const' : 0})
in_sample_results = tester_in_sample.backtest(optimizer=mean_variance, clean_weights=True, reopt_freq=1000)

# Obtaining weights and returns from the backtest
in_weights = in_sample_results["weights"][0]
return_scenario_in = in_sample_results["returns"]
```

The `rebalance_freq` parameter defaults to `1` and `reopt_freq` is set to `1000`, imposing a constant rebalanced backtest.

### Out-of-Sample Backtester

@@ -107,21 +107,21 @@ The out-of-sample backtester is normally written by feeding training and testing
# Out-of-sample backtester
# Zero-cost backtesting
tester_out_of_sample = Backtester(train_data=train, test_data=test, cost={'const' : 0})
out_of_sample_results = tester_out_of_sample.backtest(optimizer=mean_variance, clean_weights=True, reopt_freq=1000)

# Obtaining weights and returns from the backtest
out_weights = out_of_sample_results["weights"][0]
return_scenario_out = out_of_sample_results["returns"]
```

This is also a constant rebalanced backtest.

### Uniform Portfolio Backtester

Since the uniform equal-weight portfolio has constant weights regardless of train and test data, we can use either backtester to obtain returns. Here we use `tester_in_sample`.

```python
uniform_results = tester_in_sample.backtest(optimizer=uniform_port, reopt_freq=1000)
uniform_weights = uniform_results["weights"][0]
uniform_scenario = uniform_results["returns"]
```
4 changes: 2 additions & 2 deletions docs/docs/examples/the_alpha_engine.md
@@ -192,8 +192,8 @@ alpha_strategy = SuperDuperAlphaEngine()
# Initialize our backtester
tester = Backtester(train_data=train, test_data=test, cost={'const': 40})

# Backtest with `rebalance_freq` and `reopt_freq` set to 1 for daily momentum
alpha_returns = tester.backtest(optimizer=alpha_strategy, rebalance_freq=1, reopt_freq=1)
```

Upon having `alpha_returns` we can use it to plot wealth and get metrics.
16 changes: 8 additions & 8 deletions docs/docs/examples/which_kelly_is_best.md
@@ -6,7 +6,7 @@ The Kelly Criterion, proposed by John Larry Kelly Jr., is the mathematically opt
There are numerous variants of the Kelly Criterion introduced to combat this fragile dependency, such as fractional Kelly, popularized by Ed Thorp, and distributionally robust Kelly models. In this example, we compare several of the most well-known Kelly variants under identical out-of-sample conditions, evaluating their realized performance and wealth dynamics using `opes`.

!!! warning "Warning:"
This example may be computationally heavy because of multiple optimization models running with a low `reopt_freq=5`. If you prefer better performance, increase `reopt_freq` to monthly (`21`) or any value much greater than `5`.

---

@@ -121,20 +121,20 @@ for distributionally robust variants, we utilize `KLradius` for the ambiguity ra

## Backtesting

Using the `Backtester` class from `opes`, we backtest these strategies under a constant, but high, cost of 20 bps and `reopt_freq=5` (weekly). `rebalance_freq` defaults to `1`. Oh, and we clean weights too.

```python
# A constant slippage backtest
tester = Backtester(train_data=train, test_data=test, cost={'const' : 20})

# Obtaining returns
# For now, weights and costs don't matter, so we discard them
ck_scenario = tester.backtest(optimizer=classic_kelly, reopt_freq=5, clean_weights=True)['returns']
hk_scenario = tester.backtest(optimizer=half_kelly, reopt_freq=5, clean_weights=True)['returns']
qk_scenario = tester.backtest(optimizer=quarter_kelly, reopt_freq=5, clean_weights=True)['returns']
kldrk_scenario = tester.backtest(optimizer=kldr_kelly, reopt_freq=5, clean_weights=True)['returns']
kldrhk_scenario = tester.backtest(optimizer=kldr_halfkelly, reopt_freq=5, clean_weights=True)['returns']
kldrqk_scenario = tester.backtest(optimizer=kldr_quarterkelly, reopt_freq=5, clean_weights=True)['returns']
```

---
2 changes: 1 addition & 1 deletion docs/docs/objectives/heuristics.md
@@ -23,7 +23,7 @@ class HierarchicalRiskParity(cluster_method='average')

Hierarchical Risk Parity (HRP) optimization.

Hierarchical Risk Parity (HRP), introduced by Lopez de Prado,
is a portfolio construction methodology that allocates capital
through hierarchical clustering and recursive risk balancing
rather than direct optimization of a scalar objective. HRP
2 changes: 1 addition & 1 deletion opes/__init__.py
@@ -1,5 +1,5 @@
# Version Log
__version__ = "0.10.0"
__version__ = "0.11.0"

# Backtester easy import
from .backtester import Backtester