This major release brings significant enhancements to the trueskillthroughtime package while maintaining the same module name and package structure.
- Ordinal Model: traditional ranking-based outcomes (win/loss/draw)
- Continuous Model: continuous score differences (e.g., time, distance)
- Discrete Model: discrete count-based scores (e.g., goals, points)
  - Implements a Poisson-based likelihood with a fixed-point approximation (Guo et al. 2012)
  - Supports score-based Bayesian skill learning
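The Gaussian treatment of count data rests on the fact that a Poisson(λ) variable has mean λ and variance λ, so moment matching gives N(λ, λ). A minimal stdlib sketch of that idea follows; it illustrates the general approximation only, not the package's actual `fixed_point_approx()` routine:

```python
import math

def poisson_pmf(k, lam):
    # Exact Poisson probability mass at integer count k.
    return math.exp(-lam) * lam ** k / math.factorial(k)

def gaussian_pdf(x, mu, var):
    # Gaussian density with the matched mean and variance.
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Moment-matched Gaussian for a Poisson(25): mean 25, variance 25.
lam = 25.0
for k in (15, 20, 25, 30, 35):
    print(f"k={k:2d}  Poisson={poisson_pmf(k, lam):.5f}  "
          f"Gaussian={gaussian_pdf(k, lam, lam):.5f}")
```

For moderate λ the two densities track each other closely, which is what makes a Gaussian message a workable stand-in for a Poisson observation.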
Mixed Observation Models in History:
The `obs` parameter in `History` allows flexible observation model specification:
- Single model for all games: pass a list with one element, e.g., `obs=["Ordinal"]`
- Per-game model: pass a list with one element per game, e.g., `obs=["Ordinal", "Discrete", "Continuous"]`
```python
# Example: mix different observation models in the same history
import trueskillthroughtime as ttt

composition = [
    [["team_a"], ["team_b"]],       # Ordinal game (win/loss)
    [["player_1"], ["player_2"]],   # Discrete game (score difference)
    [["athlete_x"], ["athlete_y"]]  # Continuous game (time difference)
]
results = [
    [1, 0],       # Ordinal result
    [5, 3],       # Discrete scores
    [10.5, 11.2]  # Continuous scores (seconds)
]
obs = ["Ordinal", "Discrete", "Continuous"]  # One per game
h = ttt.History(composition, results, obs=obs)
h.convergence()
```

- Added `obs` parameter to the `Game` and `History` classes
  - Accepts `"Ordinal"`, `"Continuous"`, or `"Discrete"`
- Per-game observation model specification supported in `History`
- Enhanced `Game` class initialization with clearer parameter documentation
- Improved `History` class with better batch handling
- `add_history()` method: add new games to an existing `History`
  - Allows incremental updates without recreating the entire history
  - Maintains all previous skill estimates and convergence
  - Supports adding new players dynamically
  - Well suited to real-time applications and streaming data
Example Usage:
```python
import trueskillthroughtime as ttt

# Create initial history
composition = [[["alice"], ["bob"]], [["bob"], ["charlie"]]]
results = [[1, 0], [1, 0]]
h = ttt.History(composition, results, times=[1, 2])
h.convergence()

# Later, add new games
new_composition = [[["alice"], ["charlie"]], [["bob"], ["alice"]]]
new_results = [[1, 0], [0, 1]]
h.add_history(new_composition, new_results, times=[3, 4])
h.convergence()  # Re-converge with new data

# All learning curves now include the new games
lc = h.learning_curves()
```

- Complete English docstrings for all public APIs:
- Module-level documentation
- All classes (Gaussian, Player, Game, History, Skill, GameType)
- All public methods and functions
- Comprehensive parameter descriptions and examples
- PEP 8 compliance: Code formatting improvements
- Consistent spacing around operators
- Proper line lengths and indentation
- Improved variable naming (English translations)
- Cleaner code structure with better separation of concerns
- `fixed_point_approx()`: new function for Gaussian approximation of Poisson observations
- Improved numerical stability in likelihood computations
- Better convergence tracking with the `iteration()` method
- Enhanced evidence computation for all observation models
- Comprehensive test suite documentation (`runtest.py`)
- Detailed docstrings with usage examples
- Clear parameter descriptions and return types
- Better inline comments throughout codebase
- Optimized message passing in `likelihood_convergence()`
- More efficient batch processing in `History`
- Improved memory usage in skill tracking
- More robust error handling in input validation
- Fixed edge cases in draw probability computation
- Improved handling of extreme skill values
- Expanded test coverage for new observation models
- Added tests for discrete score outcomes
- Improved test documentation with descriptive docstrings
- All tests passing with new implementation
- Convergence now evaluates likelihood changes instead of parameter changes
- The verbose output during `convergence()` will show different step values
- The final skill estimates remain the same
- Impact: if you were monitoring step values in verbose mode, the numbers will differ, but the results are equivalent
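As a toy illustration of the new stopping rule (a stand-in objective, not the package's internal loop): the iteration below stops when the change in a log-likelihood falls below a tolerance, rather than when the parameter stops moving.

```python
# Toy "evidence": log-likelihood of a Gaussian mean for three observations,
# maximised at the sample mean. This is only a stand-in objective to show
# the stopping rule, not trueskillthroughtime's actual objective.
data = [1.0, 2.0, 4.0]

def log_lik(mu):
    return sum(-0.5 * (x - mu) ** 2 for x in data)

mu, tol, steps = 0.0, 1e-6, 0
prev_ll = log_lik(mu)
while True:
    mu += 0.5 * (sum(data) / len(data) - mu)  # damped update toward the optimum
    ll = log_lik(mu)
    steps += 1
    # Likelihood-based stop: terminate when the objective barely changes,
    # regardless of how far the parameter moved on this step.
    if abs(ll - prev_ll) < tol:
        break
    prev_ll = ll
print(f"converged after {steps} steps, mu={mu:.4f}")
```

Near an optimum the likelihood is flat, so this criterion can trigger earlier (and with different step values) than a parameter-change criterion, while the converged estimates are the same.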
- Behavior change when the `times` parameter is not provided to `History`
  - Previous version: may have had different default time handling
  - Current version: uses sequential indices as time points when not specified
  - Recommendation: always pass the `times` parameter to `History` explicitly for consistent behavior
  - This ensures deterministic results across versions
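The defaulting described above can be mimicked roughly as follows. `resolve_times` is a hypothetical helper, and starting the sequential indices at 0 is an assumption, not something the changelog specifies:

```python
def resolve_times(composition, times=None):
    # Fall back to sequential indices (0, 1, 2, ...) when no times are given,
    # mirroring the default behaviour described in the changelog. The starting
    # index is an assumption for illustration.
    if times is None:
        return list(range(len(composition)))
    if len(times) != len(composition):
        raise ValueError("times must have one entry per game")
    return list(times)

games = [[["a"], ["b"]], [["b"], ["c"]], [["a"], ["c"]]]
print(resolve_times(games))             # [0, 1, 2]
print(resolve_times(games, [5, 6, 9]))  # [5, 6, 9]
```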
- `History.add_history()`: add new games to an existing history incrementally
- `History.iteration()`: improved convergence tracking method
- `Game.__init__(obs=...)`: observation model parameter (`"Ordinal"`, `"Continuous"`, or `"Discrete"`)
- `History.__init__(obs=...)`: observation model(s) for games (single model or per-game list)
- `History.add_history(obs=...)`: observation model(s) for the new games being added
- `History.learning_curves(who=..., online=...)`:
  - `who`: filter learning curves by player names (default `None` returns all players)
  - `online`: switch between batch posteriors (default `False`) and online estimates (`True`)
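A plain-Python sketch of the `who` filtering semantics described for `learning_curves()`; the dict-of-curves shape and the `filter_curves` helper are assumptions made for illustration, not the package's actual return type:

```python
def filter_curves(curves, who=None):
    # curves: mapping player name -> list of (time, estimate) pairs
    # (an assumed shape, used only to illustrate the `who` semantics).
    if who is None:
        return curves  # default: return every player's curve
    keep = set(who)
    return {player: curve for player, curve in curves.items() if player in keep}

curves = {"alice": [(1, 0.2), (3, 0.5)], "bob": [(1, -0.2)]}
print(filter_curves(curves, who=["alice"]))  # only alice's curve
```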
- `History.convergence()`: now tracks likelihood changes instead of parameter changes
  - Return values unchanged
  - Verbose output format changed
- `History.__init__(times=...)`: behavior changed when `times` is not provided
  - Now uses sequential indices as the default
  - Strongly recommended to always pass an explicit `times` parameter
- None - all previous API calls remain compatible
- None - backward compatibility maintained
- Guo, S., Zoeter, O., & Archambeau, C. (2012). "Score-based Bayesian skill learning"
- Enabled the weight procedure (#6)
- Removed the dependency on the numba package
- Fixed multiplayer evidence computation