Commit 4ab058d

Revise fundamental AI webpage structure and content
Updated the structure and content of the fundamental AI webpage, including new sections on AI Safety and Human–AI Ecosystem.
1 parent a05d61f commit 4ab058d

File tree

1 file changed (+38, -207 lines)


fundamental-ai.html

Lines changed: 38 additions & 207 deletions
@@ -1,207 +1,38 @@
-<!DOCTYPE html>
-<html lang="en">
-<head>
-  <meta charset="UTF-8">
-  <meta name="viewport" content="width=device-width, initial-scale=1.0">
-  <title>Fundamental AI — BioIntelligence Lab</title>
-
-  <style>
-    body {
-      margin: 0;
-      font-family: "Inter", sans-serif;
-      color: #222;
-      line-height: 1.6;
-      background: #ffffff;
-    }
-
-    /* NAVIGATION BAR (same as homepage) */
-    nav {
-      width: 100%;
-      padding: 20px 40px;
-      box-sizing: border-box;
-      position: sticky;
-      top: 0;
-      background: rgba(255,255,255,0.90);
-      backdrop-filter: blur(8px);
-      display: flex;
-      justify-content: space-between;
-      align-items: center;
-      border-bottom: 1px solid #eee;
-      z-index: 1000;
-    }
-
-    nav .logo img {
-      height: 45px;
-      cursor: pointer;
-    }
-
-    nav ul {
-      list-style: none;
-      display: flex;
-      gap: 30px;
-      margin: 0;
-      padding: 0;
-    }
-
-    nav ul li a {
-      text-decoration: none;
-      color: #222;
-      font-weight: 500;
-      transition: color 0.2s;
-    }
-
-    nav ul li a:hover {
-      color: #003d99;
-    }
-
-    /* PAGE HEADER */
-    .header-block {
-      text-align: center;
-      padding: 120px 20px 80px;
-      background: #f7f9fc;
-    }
-
-    .header-block h1 {
-      font-size: 2.6em;
-      margin-bottom: 10px;
-      font-weight: 700;
-      color: #111;
-    }
-
-    .header-block p {
-      font-size: 1.2em;
-      color: #555;
-      max-width: 800px;
-      margin-left: auto;
-      margin-right: auto;
-    }
-
-    /* SECTION TITLE */
-    h2.section-title {
-      text-align: center;
-      margin-top: 70px;
-      margin-bottom: 25px;
-      font-size: 1.9em;
-      font-weight: 600;
-    }
-
-    /* CONTENT SECTIONS */
-    .content-block {
-      max-width: 900px;
-      margin: 0 auto;
-      padding: 10px 20px 40px;
-      color: #333;
-      font-size: 1.05em;
-    }
-
-    .placeholder-img {
-      width: 100%;
-      height: 220px;
-      background: #e9eef5;
-      border-radius: 8px;
-      margin: 30px 0;
-      display: flex;
-      align-items: center;
-      justify-content: center;
-      color: #708090;
-      font-size: 1.1em;
-      font-style: italic;
-    }
-
-    /* PUBLICATION LIST */
-    .pub-section h3 {
-      margin-top: 40px;
-      font-size: 1.3em;
-      color: #003d99;
-    }
-
-    .pub-section ul {
-      margin-top: 10px;
-      line-height: 1.55;
-    }
-  </style>
-</head>
-
-<body>
-
-<!-- NAVIGATION BAR -->
-<nav>
-  <div class="logo">
-    <a href="index.html">
-      <img src="https://github.com/BioIntelligence-Lab/BioIntelligence-Lab.github.io/blob/main/images/Lab_logo3.png?raw=true" alt="Lab Logo">
-    </a>
-  </div>
-
-  <ul>
-    <li><a href="index.html#research">Research</a></li>
-    <li><a href="index.html#tools">Software</a></li>
-    <li><a href="index.html#people">People</a></li>
-    <li><a href="index.html#contact">Contact</a></li>
-  </ul>
-</nav>
-
-<!-- HEADER -->
-<section class="header-block">
-  <h1>Fundamental AI Research</h1>
-  <p>
-    Advancing the foundations of Artificial Intelligence through safety, trustworthiness, human–AI ecosystems,
-    and multi-agent autonomy. Our work pushes beyond application-driven AI to explore how intelligent systems
-    learn, collaborate, and evolve over time.
-  </p>
-</section>
-
-<!-- AI SAFETY -->
-<h2 class="section-title">AI Safety & Trustworthiness</h2>
-
-<div class="content-block">
-  <p>
-    This sub-area focuses on ensuring AI systems are reliable, transparent, and equitable.
-    We investigate algorithmic bias, uncertainty modeling, robustness to real-world variation,
-    security vulnerabilities, demographic leakage in foundation models, and safe use of generative AI
-    in clinical decision-making.
-  </p>
-
-  <div class="placeholder-img">[ Placeholder: Diagram on AI Safety / Bias / Robustness ]</div>
-
-  <div class="pub-section">
-    <h3>Representative Publications</h3>
-    <ul>
-      <li>Beheshtian et al., Radiology, 2022 — Bias in pediatric bone age prediction.</li>
-      <li>Bachina et al., Radiology, 2023 — Coarse race labels masking underdiagnosis patterns.</li>
-      <li>Santomartino et al., Radiology: AI, 2024 — Stress testing and robustness evaluation.</li>
-      <li>Trang et al., Emergency Radiology, 2024 — Sociodemographic bias in ICH detection.</li>
-      <li>Kavandi et al., AJR, 2024 — Predictability of demographics from chest radiographs.</li>
-      <li>Santomartino et al., Radiology, 2024 — Bias in NLP tools for radiology reports.</li>
-      <li>Garin, Parekh, Sulam, Yi et al., Nature Medicine, 2023 — Need for demographic transparency.</li>
-      <li>Yi et al., Radiology, 2025 — Best practices for evaluating algorithmic bias.</li>
-      <li>Zheng, Jacobs, Parekh et al., arXiv, 2024 — Demographic predictability in CT embeddings.</li>
-      <li>Zheng, Jacobs, Braverman, Parekh et al., arXiv, 2025 — Adversarial debiasing in CT models.</li>
-      <li>Kulkarni et al., MIDL, 2024 — Hidden-in-plain-sight imperceptible bias attacks.</li>
-    </ul>
-  </div>
-</div>
-
-<!-- HUMAN–AI ECOSYSTEM -->
-<h2 class="section-title">Human–AI Ecosystem Modeling</h2>
-
-<div class="content-block">
-  <p>
-    We study how humans and AI systems can learn from each other, share experience, collaborate across institutions,
-    and form collective intelligence. This includes the development of SheLL (Shared Experience Lifelong Learning),
-    multi-agent reasoning frameworks, and the foundations needed to build autonomous research workflows.
-  </p>
-
-  <div class="placeholder-img">[ Placeholder: SheLL / Multi-Agent Collaboration Diagram ]</div>
-
-  <div class="pub-section">
-    <h3>Representative Publications</h3>
-    <ul>
-      <li>Uwaeze, Kulkarni, Braverman, Jacobs, Parekh, ICCV 2025 — Counterfactual augmentation for equitable learning.</li>
-      <li>Kulkarni et al., MIDL 2024 — Stealth bias attacks informing ecosystem resilience.</li>
-      <!-- Add more as papers emerge -->
-    </ul>
-  </div>
-</div>
-
-</body>
-</html>
+---
+layout: default
+title: Fundamental AI
+---
+
+<section class="header-block">
+  <h1>Fundamental AI Research</h1>
+  <p>
+    Advancing the foundations of Artificial Intelligence through safety, trustworthiness, human–AI ecosystems, and
+    multi-agent autonomy — exploring how intelligent systems learn, collaborate, and evolve over time.
+  </p>
+</section>
+
+<!-- AI SAFETY -->
+<h2 class="section-title">AI Safety & Trustworthiness</h2>
+
+<div class="content-block">
+  <p>
+    We develop methods to ensure AI systems are reliable, transparent, and equitable — addressing bias, uncertainty,
+    robustness to real-world clinical variation, security vulnerabilities, demographic leakage, and safe use of
+    generative AI in radiology and beyond.
+  </p>
+
+  <div class="placeholder-img">[ Placeholder: Diagram — Bias / Robustness / Security ]</div>
+</div>
+
+<!-- HUMAN-AI ECOSYSTEM -->
+<h2 class="section-title">Human–AI Ecosystem</h2>
+
+<div class="content-block">
+  <p>
+    We investigate how AI systems can collaborate with each other and with humans — learning across sites, agents,
+    and tasks. This includes the development of SheLL (Shared Experience Lifelong Learning), multi-agent reasoning,
+    and foundations for autonomous research workflows.
+  </p>
+
+  <div class="placeholder-img">[ Placeholder: Diagram — SheLL / Multi-Agent Learning ]</div>
+</div>
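In effect, the commit moves the page onto a shared Jekyll layout: the front matter's `layout: default` pulls in the navigation and styles that the deleted lines used to carry inline. A minimal sketch of what that layout file might provide (the `_layouts/default.html` path follows Jekyll convention; the markup shown is an assumption reconstructed from the removed page, not part of this commit):

```html
<!-- _layouts/default.html — assumed path (Jekyll convention), not part of this commit -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <!-- page.title comes from each page's front matter, e.g. "Fundamental AI" -->
  <title>{{ page.title }} — BioIntelligence Lab</title>
  <!-- shared styles for nav, .header-block, .content-block, .placeholder-img live here -->
</head>
<body>
  <nav><!-- shared site navigation, formerly duplicated per page --></nav>
  {{ content }} <!-- each page's body markup is injected here -->
</body>
</html>
```

With this structure, any page carrying the front matter is rendered inside the layout, so per-page files shrink to just their body markup, which is what the +38/-207 line counts reflect.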
