
Commit a18133c

Reformats header.

Parent: c09e8aa
2 files changed: 4 additions & 4 deletions

README.md (2 additions & 2 deletions)
@@ -1,12 +1,12 @@
 # bead
 
-A Python framework for constructing, deploying, and analyzing large-scale linguistic judgment experiments with active learning.
-
 [![CI](https://github.com/FACTSlab/bead/actions/workflows/ci.yml/badge.svg)](https://github.com/FACTSlab/bead/actions/workflows/ci.yml)
 [![Python 3.13](https://img.shields.io/badge/python-3.13-blue.svg)](https://www.python.org/downloads/)
 [![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
 [![Documentation](https://img.shields.io/badge/docs-readthedocs-blue.svg)](https://bead.readthedocs.io)
 
+A Python framework for constructing, deploying, and analyzing large-scale linguistic judgment experiments with active learning.
+
 ## Overview
 
 `bead` implements a complete pipeline for linguistic research: from lexical resource construction through experimental deployment to model training with active learning. It handles the combinatorial explosion of linguistic stimuli while maintaining full provenance tracking.

bead/tokenization/tokenizers.py (2 additions & 2 deletions)
@@ -159,7 +159,7 @@ def _load(self) -> Callable[..., _SpacyDocProtocol]:
             return self._nlp
 
         try:
-            import spacy # noqa: PLC0415
+            import spacy # noqa: PLC0415 # type: ignore[reportMissingImports]
         except ImportError as e:
             raise ImportError(
                 "spaCy is required for SpacyTokenizer. "
@@ -233,7 +233,7 @@ def _load(self) -> _StanzaPipelineProtocol:
             return self._nlp
 
         try:
-            import stanza # noqa: PLC0415
+            import stanza # noqa: PLC0415 # type: ignore[reportMissingImports]
         except ImportError as e:
             raise ImportError(
                 "Stanza is required for StanzaTokenizer. "
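The annotated imports above follow a lazy optional-dependency pattern: spaCy and Stanza are imported only inside `_load`, so importing `bead` itself never requires them, and a missing install surfaces as an `ImportError` with an installation hint. A generalized sketch of that pattern, assuming nothing about `bead`'s actual API (the `LazyLoader` class and `install_hint` parameter are illustrative, not part of the library):

```python
import importlib
from types import ModuleType


class LazyLoader:
    """Defer importing an optional dependency until first use.

    Mirrors the try/except-ImportError shape used in
    bead/tokenization/tokenizers.py; all names here are hypothetical.
    """

    def __init__(self, module_name: str, install_hint: str) -> None:
        self._module_name = module_name
        self._install_hint = install_hint
        self._module: ModuleType | None = None

    def load(self) -> ModuleType:
        if self._module is not None:
            return self._module  # cached after the first successful import
        try:
            # Deferred import: nothing heavy happens at package import time.
            self._module = importlib.import_module(self._module_name)
        except ImportError as e:
            # Re-raise with an actionable message, chaining the original error.
            raise ImportError(
                f"{self._module_name} is required. {self._install_hint}"
            ) from e
        return self._module


# A stdlib module stands in for spaCy/Stanza so the sketch runs anywhere.
json_mod = LazyLoader("json", "Install with: pip install <package>").load()
print(json_mod.dumps({"ok": True}))  # → {"ok": true}
```

The `# type: ignore[reportMissingImports]` comments the commit adds presumably silence static-analysis complaints about these imports in environments where the optional dependency is not installed; the runtime behavior is unchanged.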
