::::{grid} 2
:gutter: 2


:::{grid-item-card}

Realizing Sociotechnical Machine Learning through Modeling, Explanations, and Reflections
^^^

This project will build on prior work in the lab, taking a human-in-the-loop approach to understanding how to make *socially* safer *technical* components in sociotechnical systems (people and technology interacting).

We will look at how model-based approaches and explanation techniques help develop sociotechnical foresight, the ability of technologists to anticipate the social impacts of their work, by examining their reflections on their processes and their work.

+++

:::

:::{grid-item-card}

Model-Based Fairness Intervention Assessment
^^^

In this project, we are using bias models to evaluate the effectiveness of different types of fair machine learning interventions. We hope to use these insights to give data scientists more actionable advice on how to select a fairness intervention.

+++

:::

:::{grid-item-card}

LLMs and Fair Data-Driven Decision-Making
^^^

We are building a benchmark to evaluate LLMs on making non-discriminatory decisions from data. This will include evaluating direct decisions, assistance on fair ML tasks, and agentic training of fair models.

+++

:::

:::{grid-item-card}
:img-top: _static/img/statistical_perceptions.png

Perceptions of AI Fairness
^^^

In collaboration with [Malik Boykin's lab](https://www.boykinlab.com/) in Cognitive, Linguistic, and Psychological Sciences at Brown, we are studying people's preferred definitions of fairness and what social and algorithmic factors influence these preferences. To power these tools, we are also developing techniques to interpolate between definitions of fairness.

+++

[{far}`file-pdf`](https://doi.org/10.1145/3465416.3483302)

:::

:::{grid-item-card}
:img-top: _static/img/taskfair.png

Task Level Fairness
^^^

In this project, we examine how fairness can be evaluated at the task and problem level in order to develop heuristics for the feasibility of a fair model prior to training.

This project will produce a Python library that anyone can use in the EDA step of their project.
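The library's interface is still being designed, but a minimal sketch of the kind of pre-training feasibility check it might support, with made-up function and column names, could look like:

```python
import pandas as pd

def group_base_rates(df: pd.DataFrame, label: str, group: str) -> pd.Series:
    """Positive-label rate per group: a quick EDA-stage signal for how
    feasible a fair model may be before any training happens."""
    return df.groupby(group)[label].mean()

# Toy dataset: hiring outcomes recorded for two demographic groups.
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b"],
    "hired": [1, 1, 0, 1, 0, 0],
})

rates = group_base_rates(df, label="hired", group="group")
gap = rates.max() - rates.min()  # a large gap flags demographic-parity tension
```

Here `group_base_rates` and the column names are illustrative only, not the library's actual API.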

+++

[{far}`file-pdf`](https://charliezhaoyinpeng.github.io/EAI-KDD22/camera_ready/information.pdf)

:::
::::