- Overview
- Basic Example Structure
- Required Components
- Example Patterns
- Settings Configuration
- Formula Syntax
- Advanced Examples
- Best Practices
- Common Patterns
- Troubleshooting
Example files are Python modules that define logical formulas, settings, and theory configurations for the ModelChecker to process. They serve as the primary input format for testing logical inferences, exploring semantic theories, and validating philosophical arguments.
The examples.py structure serves as both documentation and executable code, demonstrating theory capabilities through carefully crafted logical formulas. These files are essential for validating semantic implementations, providing test cases, and offering practical usage examples for researchers and developers.
Architecture Context: For how example files are processed through the system, see Pipeline Architecture and Builder Architecture. For formula parsing details, see Syntactic Architecture.
Every ModelChecker example file must export two dictionaries:
# Minimal example file
from model_checker.theory_lib.logos import get_theory
# Get the semantic theory (loads all subtheories by default)
theory = get_theory()
# Or load specific subtheories for logos
# theory = get_theory(['modal', 'counterfactual']) # Just modal and counterfactual
# theory = get_theory(['extensional']) # Just extensional operators
# Define the example
MY_EXAMPLE_premises = ["A", "(A \\rightarrow B)"]
MY_EXAMPLE_conclusions = ["B"]
MY_EXAMPLE_settings = {'N': 3}
MY_EXAMPLE_example = [MY_EXAMPLE_premises, MY_EXAMPLE_conclusions, MY_EXAMPLE_settings]
# Required exports
example_range = {
    "MY_EXAMPLE": MY_EXAMPLE_example
}

semantic_theories = {
    "logos": theory
}

# Single theory
from model_checker.theory_lib.logos import get_theory
theory = get_theory()
# Multiple theories
from model_checker.theory_lib.logos import get_theory as get_logos
from model_checker.theory_lib.exclusion import get_theory as get_exclusion
logos_theory = get_logos()
exclusion_theory = get_exclusion()
# Theory with specific subtheories (logos)
from model_checker.theory_lib.logos import get_theory
theory = get_theory(['extensional', 'modal'])

Each example follows the PREFIX_TYPE_NUMBER naming convention:
# PREFIX: Theory abbreviation (2-3 characters)
# TYPE: CM (countermodel) or TH (theorem)
# NUMBER: Sequential number starting from 1
# Standard examples:
EXT_TH_1_premises = [...] # Extensional theorem 1
LOG_CM_3_premises = [...] # Logos countermodel 3
IMP_TH_2_premises = [...] # Imposition theorem 2
# Complete structure:
EXAMPLE_NAME_premises = [...] # List of premise formulas
EXAMPLE_NAME_conclusions = [...] # List of conclusion formulas
EXAMPLE_NAME_settings = {...} # Dictionary of settings
EXAMPLE_NAME_example = [           # Combined list
    EXAMPLE_NAME_premises,
    EXAMPLE_NAME_conclusions,
    EXAMPLE_NAME_settings
]

| Theory | Prefix | Full Name |
|---|---|---|
| Extensional | EXT | Basic extensional logic |
| Modal | MOD | Modal operators |
| Counterfactual | CF | Counterfactual conditionals |
| Constitutive | CON | Identity and essence |
| Relevance | REL | Relevance logic |
| Logos (general) | LOG | Hyperintensional logic |
| Exclusion | EX | Unilateral semantics |
| Imposition | IMP | Fine's counterfactuals |
| Bimodal | BIM | Temporal-modal logic |
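The PREFIX_TYPE_NUMBER convention can be checked mechanically. The helper below is a hypothetical sketch, not part of the ModelChecker API, and only covers this strict convention (descriptive names such as MODUS_PONENS, used later in this guide, are also acceptable):

```python
import re

# PREFIX: 2-3 uppercase letters, TYPE: CM or TH, NUMBER: starts from 1
# (hypothetical helper; not part of ModelChecker itself)
NAME_PATTERN = re.compile(r"^[A-Z]{2,3}_(CM|TH)_[1-9][0-9]*$")

def is_valid_example_name(name: str) -> bool:
    """Return True if name follows the PREFIX_TYPE_NUMBER convention."""
    return NAME_PATTERN.match(name) is not None
```

For instance, is_valid_example_name("EXT_TH_1") holds, while "test1" or a zero-numbered name does not.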
# Map example names to example data
example_range = {
    "EXAMPLE_NAME": EXAMPLE_NAME_example,
    # Add more examples as needed
}

# Map theory names to theory objects
semantic_theories = {
    "theory_name": theory_object,
    # Add more theories for comparison
}

Basic pattern for testing one inference:
from model_checker.theory_lib.logos import get_theory
theory = get_theory()
# Modus Ponens
MP_premises = ["P", "(P \\rightarrow Q)"]
MP_conclusions = ["Q"]
MP_settings = {'N': 3}
MP_example = [MP_premises, MP_conclusions, MP_settings]
example_range = {"MP": MP_example}
semantic_theories = {"logos": theory}

Testing several inferences in one file:
from model_checker.theory_lib.logos import get_theory
theory = get_theory(['extensional', 'modal'])
# Example 1: Conjunction Elimination
CONJ_ELIM_premises = ["(A \\wedge B)"]
CONJ_ELIM_conclusions = ["A", "B"]
CONJ_ELIM_settings = {'N': 3}
CONJ_ELIM_example = [CONJ_ELIM_premises, CONJ_ELIM_conclusions, CONJ_ELIM_settings]
# Example 2: Disjunction Introduction
DISJ_INTRO_premises = ["A"]
DISJ_INTRO_conclusions = ["(A \\vee B)"]
DISJ_INTRO_settings = {'N': 3}
DISJ_INTRO_example = [DISJ_INTRO_premises, DISJ_INTRO_conclusions, DISJ_INTRO_settings]
# Example 3: Modal Distribution
MODAL_DIST_premises = ["\\Box (A \\rightarrow B)", "\\Box A"]
MODAL_DIST_conclusions = ["\\Box B"]
MODAL_DIST_settings = {'N': 3}
MODAL_DIST_example = [MODAL_DIST_premises, MODAL_DIST_conclusions, MODAL_DIST_settings]
example_range = {
    "CONJ_ELIM": CONJ_ELIM_example,
    "DISJ_INTRO": DISJ_INTRO_example,
    "MODAL_DIST": MODAL_DIST_example
}

semantic_theories = {"logos_modal": theory}

Compare how different theories handle the same inference:
from model_checker.theory_lib.logos import get_theory as get_logos
from model_checker.theory_lib.exclusion import get_theory as get_exclusion
from model_checker.theory_lib.imposition import get_theory as get_imposition
# Initialize theories
logos = get_logos(['extensional'])
exclusion = get_exclusion()
imposition = get_imposition()
# Define test case
NEGATION_TEST_premises = ["\\neg \\neg A"]
NEGATION_TEST_conclusions = ["A"]
NEGATION_TEST_settings = {'N': 3}
NEGATION_TEST_example = [
    NEGATION_TEST_premises,
    NEGATION_TEST_conclusions,
    NEGATION_TEST_settings
]

# Export for comparison
example_range = {
    "DOUBLE_NEGATION": NEGATION_TEST_example
}

semantic_theories = {
    "Logos": logos,
    "Exclusion": exclusion,
    "Imposition": imposition
}

Generate examples programmatically:
from model_checker.theory_lib.logos import get_theory
theory = get_theory(['extensional', 'modal'])
# Generate chain of implications
def create_implication_chain(length):
    premises = []
    for i in range(length - 1):
        prop_from = chr(65 + i)  # A, B, C, ...
        prop_to = chr(65 + i + 1)
        premises.append(f"({prop_from} \\rightarrow {prop_to})")
    premises.append("A")  # Starting point
    conclusion = chr(65 + length - 1)  # Final proposition
    settings = {'N': length + 1}
    return [premises, [conclusion], settings]
# Create examples of different lengths
example_range = {}
for n in [3, 5, 7]:
    example_name = f"CHAIN_{n}"
    example_range[example_name] = create_implication_chain(n)

semantic_theories = {"logos": theory}

Every example must include these settings with descriptive comments:
settings = {
    'N': 3,                # Max number of atomic propositions
    'contingent': False,   # True: require contingent propositions
    'non_null': False,     # True: exclude the null state from verifiers/falsifiers
    'non_empty': False,    # True: require non-empty verifier/falsifier sets
    'disjoint': False,     # True: require disjoint verifier/falsifier sets
    'max_time': 10,        # Timeout in seconds
    'iterate': 1,          # Number of models to find
    'expectation': False,  # True = expect countermodel, False = expect theorem
}

Important: All settings dictionaries MUST include inline comments after each setting.
Some theories define additional settings:
# Bimodal settings (temporal-modal logic)
'M': 2,                   # Number of time points/steps

# Exclusion settings (unilateral semantics)
'possible': False,        # Require possible propositions
'fusion_closure': False,  # Enforce fusion closure constraints

# Logos modal settings (when using modal subtheory)
'reflexive': True,        # Reflexive accessibility
'transitive': True,       # Transitive accessibility
'euclidean': False,       # Euclidean accessibility

- Always include 'expectation': Explicitly state whether expecting countermodel (True) or theorem (False)
- Use descriptive comments: Every setting needs a brief explanation
- Order consistently: Core settings first, theory-specific settings after
- Document constraints: If settings interact, explain in comments
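The checklist above lends itself to a simple pre-flight check. The helper below is a hypothetical sketch, not part of the ModelChecker API; it flags the two settings the checklist treats as mandatory:

```python
# Hypothetical pre-flight check reflecting the practices above;
# not part of the ModelChecker API.
def check_settings(settings: dict) -> list:
    """Return a list of warnings for missing or suspicious settings."""
    warnings = []
    if 'N' not in settings:
        warnings.append("missing required setting 'N'")
    elif not isinstance(settings['N'], int) or settings['N'] < 1:
        warnings.append("'N' should be a positive integer")
    if 'expectation' not in settings:
        warnings.append("missing 'expectation': state whether a countermodel is expected")
    return warnings
```

A well-formed settings dictionary such as {'N': 3, 'expectation': False} produces no warnings.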
# Propositional operators
"\\neg A"                 # Negation
"(A \\wedge B)"           # Conjunction
"(A \\vee B)"             # Disjunction
"(A \\rightarrow B)"      # Implication
"(A \\leftrightarrow B)"  # Biconditional

# Modal operators (requires modal subtheory)
"\\Box A"      # Necessity
"\\Diamond A"  # Possibility

# Counterfactual operators (requires counterfactual subtheory)
"(A \\boxright B)"  # Counterfactual conditional

# Nested operators
"\\Box (A \\rightarrow B)"
"((A \\vee B) \\wedge \\neg C)"
"(\\Diamond (A \\wedge B) \\rightarrow \\Box C)"

# Multiple premises/conclusions
premises = [
    "(A \\rightarrow B)",
    "(B \\rightarrow C)",
    "(C \\rightarrow D)"
]
conclusions = ["(A \\rightarrow D)"]

Binary operators require outer parentheses:
# CORRECT - Outer parentheses required for binary operators
formulas = [
    "(A \\wedge B)",           # Conjunction
    "(A \\vee B)",             # Disjunction
    "(A \\rightarrow B)",      # Implication
    "(A \\leftrightarrow B)",  # Biconditional
    "(A \\boxright B)",        # Counterfactual
]

# INCORRECT - Missing parentheses
formulas = [
    "A \\wedge B",  # Will cause parsing errors
    "A \\vee B",
]

# Unary operators do not need outer parentheses
"\\neg A"      # Correct as-is
"\\Box A"      # Correct as-is
"\\Diamond A"  # Correct as-is

Unicode symbols may appear in output or explanatory text:
- ¬ for NOT (use \neg in code)
- ∧ for AND (use \wedge in code)
- ∨ for OR (use \vee in code)
- → for IMPLIES (use \rightarrow in code)
- ↔ for IFF (use \leftrightarrow in code)
- □ for NECESSARY (use \Box in code)
- ◇ for POSSIBLE (use \Diamond in code)
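The outer-parenthesis rule for binary operators is the most common source of parsing errors, and the worst offenders can be caught before running the model checker. The helper below is a rough hypothetical heuristic, not the parser's actual validation; the BINARY_OPS tuple lists the binary operators covered in this guide:

```python
# Binary operators from this guide; hypothetical helper, not ModelChecker's parser.
BINARY_OPS = ("\\wedge", "\\vee", "\\rightarrow", "\\leftrightarrow", "\\boxright")

def looks_unparenthesized(formula: str) -> bool:
    """Heuristic: flag a top-level binary operator missing outer parentheses."""
    s = formula.strip()
    if s.startswith("(") and s.endswith(")"):
        return False  # already wrapped (does not verify balance)
    depth = 0
    for i, c in enumerate(s):
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
        elif c == "\\" and depth == 0:
            # a binary operator at nesting depth 0 needs outer parentheses
            if any(s.startswith(op, i) for op in BINARY_OPS):
                return True
    return False
```

For example, "A \\wedge B" is flagged while "(A \\wedge B)", "\\neg A", and "\\Box (A \\rightarrow B)" pass.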
# Test frame conditions
REFLEXIVITY_TEST_premises = []
REFLEXIVITY_TEST_conclusions = ["(A \\rightarrow \\Box A)"]
REFLEXIVITY_TEST_settings = {
    'N': 2,
    'reflexive': True  # Should make this valid
}

TRANSITIVITY_TEST_premises = ["\\Box A"]
TRANSITIVITY_TEST_conclusions = ["\\Box \\Box A"]
TRANSITIVITY_TEST_settings = {
    'N': 2,
    'transitive': True  # Should make this valid
}

# Intentionally invalid for countermodel analysis
INVALID_premises = ["(A \\vee B)"]
INVALID_conclusions = ["A", "B"] # Can't derive both
INVALID_settings = {
    'N': 3,
    'expectation': True,  # Expect a countermodel
    'verbose': True       # Show countermodel details
}

# Large-scale example
PERFORMANCE_premises = [f"(P{i} \\rightarrow P{i+1})" for i in range(10)]
PERFORMANCE_premises.append("P0")
PERFORMANCE_conclusions = ["P10"]
PERFORMANCE_settings = {
    'N': 12,
    'max_time': 60,
    'optimize': True
}

# Use descriptive uppercase names
MODUS_PONENS_example = [...]  # Clear
MP_example = [...]            # Acceptable abbreviation
test1_example = [...]         # Avoid generic names

"""
Example file for testing classical propositional logic.
Theory: Logos (extensional subtheory)
Author: Your Name
Date: 2024-01-15
"""
# Document complex examples
# Test case: Verify distributivity of ∧ over ∨
DISTRIBUTIVITY_premises = ["(A \\wedge (B \\vee C))"]
DISTRIBUTIVITY_conclusions = ["((A \\wedge B) \\vee (A \\wedge C))"]

# Group related examples
# === Conjunction Tests ===
CONJ_INTRO_example = [...]
CONJ_ELIM_L_example = [...]
CONJ_ELIM_R_example = [...]
# === Disjunction Tests ===
DISJ_INTRO_L_example = [...]
DISJ_INTRO_R_example = [...]
DISJ_ELIM_example = [...]

# Define reusable settings
BASE_SETTINGS = {'N': 3, 'max_time': 30}
# Extend for specific tests
MODAL_SETTINGS = {**BASE_SETTINGS, 'reflexive': True}
PERFORMANCE_SETTINGS = {**BASE_SETTINGS, 'N': 10, 'max_time': 120}

# Standard logical laws
EXCLUDED_MIDDLE_premises = []
EXCLUDED_MIDDLE_conclusions = ["(A \\vee \\neg A)"]
CONTRADICTION_premises = ["A", "\\neg A"]
CONTRADICTION_conclusions = ["B"] # Ex falso
DEMORGAN_1_premises = ["\\neg (A \\wedge B)"]
DEMORGAN_1_conclusions = ["(\\neg A \\vee \\neg B)"]

# Modal axioms
K_AXIOM_premises = ["\\Box (A \\rightarrow B)", "\\Box A"]
K_AXIOM_conclusions = ["\\Box B"]
T_AXIOM_premises = ["\\Box A"]
T_AXIOM_conclusions = ["A"]
S4_AXIOM_premises = ["\\Box A"]
S4_AXIOM_conclusions = ["\\Box \\Box A"]

# Common inference rules
HYPOTHETICAL_SYLLOGISM_premises = ["(A \\rightarrow B)", "(B \\rightarrow C)"]
HYPOTHETICAL_SYLLOGISM_conclusions = ["(A \\rightarrow C)"]

DISJUNCTIVE_SYLLOGISM_premises = ["(A \\vee B)", "\\neg A"]
DISJUNCTIVE_SYLLOGISM_conclusions = ["B"]

CONSTRUCTIVE_DILEMMA_premises = [
    "(A \\rightarrow B)",
    "(C \\rightarrow D)",
    "(A \\vee C)"
]
CONSTRUCTIVE_DILEMMA_conclusions = ["(B \\vee D)"]

When running counterfactual examples, the output shows detailed semantic evaluations:
========================================
EXAMPLE CF_CM_1: there is a countermodel.
Atomic States: 4
Semantic Theory: Brast-McKie
Premises:
1. \neg A
2. (A \boxright C)
Conclusion:
3. ((A \wedge B) \boxright C)
Z3 Run Time: 0.0341 seconds
========================================
State Space:
#b0101 = a.c (world)
#b0110 = b.c (world)
#b1001 = a.d (world)
The evaluation world is: b.c
INTERPRETED PREMISES:
1. |\neg A| = < {b.c}, {a, a.b.c.d} > (True in b.c)
|A| = < {a, a.b.c.d}, {b.c} > (False in b.c)
2. |(A \boxright C)| = < {a.c, b.c}, {a.d} > (True in b.c)
|A|-alternatives to b.c = {a.c}
|C| = < {a.c}, {a.b.c.d, a.b.d, a.d, b} > (True in a.c)
INTERPRETED CONCLUSION:
3. |((A \wedge B) \boxright C)| = < {}, {a.c, a.d, b.c} > (False in b.c)
|(A \wedge B)|-alternatives to b.c = {a.d}
|C| = < {a.c}, {a.b.c.d, a.b.d, a.d, b} > (False in a.d)
Total Run Time: 0.389 seconds
========================================
This output demonstrates:
- Countermodel Found: Shows the argument is invalid
- State Space: Lists possible worlds and their properties
- Evaluation World: The world where the countermodel exists
- Interpreted Formulas: Shows verifiers/falsifiers for each premise and conclusion
- Counterfactual Alternatives: Displays the nearest possible worlds for counterfactual evaluation
Error: NameError: name 'example_range' is not defined
# Solution: Ensure you export both required dictionaries
example_range = {"EXAMPLE": example_data}
semantic_theories = {"theory": theory_object}

Error: Invalid operator: \implies
# Solution: Use correct LaTeX commands
# Wrong: "\implies", "\land", "\lor"
# Right: "\rightarrow", "\wedge", "\vee"

Error: Settings key 'N' not found
# Solution: Always include required settings
settings = {
    'N': 3  # Required: maximum atomic propositions
    # Other settings are optional
}

Error: Theory not loading
# Solution: Check import and initialization
from model_checker.theory_lib.logos import get_theory
theory = get_theory()  # Don't forget parentheses

- Start simple: Test with basic formulas first
- Use verbose mode: Add 'verbose': True to settings
- Print constraints: Add 'print_constraints': True
- Reduce N: Start with 'N': 2 or 'N': 3
- Check syntax: Validate LaTeX commands and escaping
- Workflow Guide - Running examples
- Settings Guide - Configuring examples
- Output Guide - Saving and formatting results
- ModelChecker Tools - Advanced features and multi-theory examples
- Constraints Testing - Advanced testing techniques
- Syntactic Processing - How formulas are parsed and AST is constructed
- Builder Architecture - How examples are orchestrated and executed
- Pipeline Overview - Complete data flow from examples to outputs
- Settings Architecture - How example settings are processed
- Models Architecture - How examples generate semantic models