<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Bionic Vision Lab</title>
<link>https://bionicvisionlab.org/</link>
<description>Recent content on Bionic Vision Lab</description>
<generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator>
<language>en-us</language>
<copyright>&copy; {year}</copyright>
<lastBuildDate>Thu, 23 Apr 2026 18:49:00 +0000</lastBuildDate>
<atom:link href="https://bionicvisionlab.org/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>Teaching</title>
<link>https://bionicvisionlab.org/teaching/</link>
<pubDate>Sun, 01 Mar 2026 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/teaching/</guid>
<description></description>
</item>
<item>
<title>In the News</title>
<link>https://bionicvisionlab.org/news/</link>
<pubDate>Fri, 09 May 2025 09:59:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/news/</guid>
<description></description>
</item>
<item>
<title>Code</title>
<link>https://bionicvisionlab.org/code/</link>
<pubDate>Fri, 01 Mar 2024 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/code/</guid>
<description><p>To the best of our ability, we practice open science. The idea is not to simply tick a box of &ldquo;open science&rdquo;, but to make the stimuli, data, and analyses that accompany a publication readable and usable for reviewers and other researchers.
You can find code for most of our publications at <a href="https://github.com/bionicvisionlab">https://github.com/bionicvisionlab</a>.</p>
<p>We have also developed a number of open-source software tools in support of our research goals:</p>
</description>
</item>
<item>
<title>Research</title>
<link>https://bionicvisionlab.org/research/</link>
<pubDate>Sun, 12 Dec 2021 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/research/</guid>
<description><div class="d-none d-sm-block float-right col-5 m-0 ml-2">
<img class="m-0 mb-2 subtle-black-shadow" src="https://bionicvisionlab.org/img/visual-prostheses.jpg"/>
<p class="mb-1" style="line-height: 100%"><small>Main approaches for the design of a visual prosthesis <a href="https://doi.org/10.1186/s42234-018-0013-8" target="_blank">(Fernandez, 2018)</a> include retinal (A), optic nerve (B), lateral geniculate nucleus (LGN, C), and cortical approaches (D).</small></p>
</div>
<p><i>Rethinking sight restoration through models, data, and lived experience.</i></p>
<p>We experience the world through rich and varied forms of vision.
For people who are blind or have low vision, existing tools and technologies often fall short of supporting this diversity of experience. Our goal is to understand how biological and artificial vision systems work, and to use those insights to design neurotechnologies that expand access to visual information and support greater independence.</p>
<p>The Bionic Vision Lab tackles this challenge at the intersection of neuroscience, psychology, and computer science. We combine behavioral experiments, VR/AR, and neurophysiological methods (EEG, TMS, physiological sensing) with computational modeling, machine learning, and computer vision. This allows us to connect brain, behavior, and technology, asking how perception supports real-world action and how artificial systems can interface with the brain to restore or augment visual function.</p>
<p>Our work spans three interconnected research areas:</p>
<ul>
<li>
<p><strong>Understanding perception and behavior.</strong>
We study how people with low vision or blindness perceive and navigate complex environments, using psychophysics, immersive VR, and ambulatory head/eye/body tracking to link perception to real-world function.</p>
</li>
<li>
<p><strong>Modeling neural systems and prosthetic vision.</strong>
We develop biophysically informed and machine learning models of retinal and cortical stimulation, predicting what visual prosthesis users see and designing algorithms to optimize stimulation strategies.</p>
</li>
<li>
<p><strong>Building intelligent assistive technologies.</strong>
We build real-time XR testbeds and computer vision–based assistive tools, using insights from human and animal vision to power the next generation of wearable and implantable devices.</p>
</li>
</ul>
<p>This integrative approach welcomes students with diverse strengths: some focus on human behavior and perception, others on neural systems and computation, and others on designing interactive technologies. Together, they work toward a shared goal: creating tools and interfaces that bring meaningful, functional vision within reach.</p>
</description>
</item>
<item>
<title>Publications</title>
<link>https://bionicvisionlab.org/publications/</link>
<pubDate>Sun, 01 Jan 2017 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/publications/</guid>
<description></description>
</item>
<item>
<title></title>
<link>https://bionicvisionlab.org/alumni/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/alumni/</guid>
<description><p>Yolanda He is an undergraduate student in Psychological &amp; Brain Sciences at UC Santa Barbara.</p>
</description>
</item>
<item>
<title>Collaborators</title>
<link>https://bionicvisionlab.org/collaborators/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/collaborators/</guid>
<description></description>
</item>
<item>
<title>Grants</title>
<link>https://bionicvisionlab.org/grants/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/grants/</guid>
<description></description>
</item>
<item>
<title>Join Us</title>
<link>https://bionicvisionlab.org/join/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/join/</guid>
<description><p>In general, we are looking for enthusiastic, computationally minded individuals with a shared passion for bionic vision.<br>
If you are interested in joining us, please review our <a href="https://docs.google.com/document/d/1Y1wzFVdp-FCoGM47okaW5eYdOOfpgXD5nM9Q7DpwAMo/edit?usp=sharing">Lab Manual</a> to familiarize yourself with our lab culture, policies, and expectations.</p>
<h3 class="fill-width">Junior Lab Trainees (UCSB Undergraduates)</h3>
<p>The Junior Lab Trainee program is the primary entry point for current UCSB undergraduates (particularly PBS and CS students) to join the Bionic Vision Lab. This role serves as a structured introduction to research, allowing you to learn how we approach science while contributing meaningfully to ongoing studies.</p>
<p>This role entails:</p>
<ul>
<li>Research literacy: Attend bi-weekly meetings to read and discuss scientific papers related to visual prostheses and neuroengineering.</li>
<li>Active participation: Serve as a reliable participant in behavioral studies (XR, psychophysics, etc.).</li>
<li>Pathway to growth: While promotions are never guaranteed, successful completion of a quarter as a Junior Trainee is often a prerequisite for becoming a Senior Research Assistant.</li>
</ul>
<p>We use a <a href="https://forms.gle/XGuijmyZX9LSedwn6">centralized waitlist</a> for all prospective UCSB undergraduate lab members. To be considered, please join the waitlist. We accept a limited cohort each quarter.</p>
<h3 class="fill-width">All Other Positions</h3>
<p><strong>Our lab is currently at capacity for Senior RAs and PhD students.</strong></p>
<p>At this time, we are unable to offer:</p>
<ul>
<li>paid or unpaid summer internships</li>
<li>remote research roles</li>
<li>positions for high-school students</li>
</ul>
<p>The Bionic Vision Lab operates with a structured, cohort-based model. To ensure high-quality mentorship and training, we only open new positions when we have the capacity to support them fully.</p>
<p>Because we receive a large number of inquiries each year, we are unable to respond to unsolicited requests outside of posted openings. We appreciate your understanding.</p>
<p><small>This page was last updated on February 18, 2026.</small></p>
<small class="mt-4 mb-4" style="display:block">
<i class="fas fa-exclamation-triangle" aria-hidden="true"></i>
We receive many applications each year. To help us give fair consideration to all candidates:
<ul class="mb-0">
<li>Please do not send generic or copy-and-paste emails. Applications must demonstrate genuine familiarity with our work.</li>
<li><strong>Do not use AI/LLM-generated statements.</strong> As an AI-focused lab, we can recognize them, and they do not make a strong impression.</li>
<li>There is no need to send multiple follow-up emails.</li>
</ul>
Messages that do not follow these guidelines will not receive a response. Thank you for your understanding.
</small>
<!--
<h3 class="fill-width">Postdoc in Visual/Computational Neuroscience (NeuroAI)</h3>
We are excited to invite a self-driven and enthusiastic postdoctoral researcher to our <a href="https://bionicvisionlab.org/research/mouse-visual-navigation/">VisNav</a> team.
This NeuroAI project, at the exciting crossroads of visual and computational neuroscience, is part of the NIH BRAIN Initiative that seeks to elucidate how the mouse brain processes visual information during active exploration to support visual navigation. Our recent <a href="https://bionicvisionlab.org/publications/2023-12-v1-mouse/">NeurIPS paper</a> highlights how a multimodal recurrent net can integrate gaze-contingent visual input with behavioral and temporal dynamics to explain V1 activity in freely moving mice. We are now seeking to extend this work to higher-order visual areas (HVAs) in collaboration with the labs of Spencer Smith (UCSB), Michael Goard (UCSB), and Cris Niell (University of Oregon).
What we offer:
- A welcoming, inclusive, and collaborative environment that combines expertise in neuroscience, computer science, and cognitive sciences.
- A fully funded, unionized position, offering a competitive salary and excellent benefits, for a minimum 2-year commitment.
- An opportunity to work with diverse experts and engage with unique neural activity data sets, collected using state-of-the-art techniques.
We are looking for:
- A passionate individual with a solid foundation in visual and computational neuroscience.
- A PhD in (computational) neuroscience, computer science, cognitive sciences, statistics, or a related field
- Strong communication and teamwork skills, with proficiency in programming and statistical analysis
To apply, please email your CV and contacts for 2-3 references to <a href="mailto:mbeyeler@ucsb.edu">Michael</a>.
<h3 class="fill-width">Junior RA Positions</h3>
We are currently looking for junior research assistants (<a href="https://docs.google.com/document/d/1Y1wzFVdp-FCoGM47okaW5eYdOOfpgXD5nM9Q7DpwAMo/edit#heading=h.b7fsq3mt32v0" target="_blank">Junior RAs</a>). As a Junior RA, your responsibilities are three-fold:
- complete readings assigned by the lab manager/your grad student mentor; afterwards, meet with them to discuss
- attend project meetings/training sessions as your schedule allows (no need to attend lab meetings)
- participate in one or more of our ongoing behavioral studies
This is a great way to get introduced to the lab's work and the field of bionic vision.
In the future, this may lead to a Senior RA position where you can contribute more substantially to our research.
Junior RAs typically earn course credit via PSY-99/199 (P/NP or letter grade), CS-196, or your department's equivalent of Directed Research.
For more details, see our <a href="https://docs.google.com/document/d/1Y1wzFVdp-FCoGM47okaW5eYdOOfpgXD5nM9Q7DpwAMo/edit?usp=sharing">Lab Manual</a>.
If you are interested in becoming a Junior RA, please reach out to our <a href="https://bionicvisionlab.org/people">Lab Manager</a>.
-->
<!--
In the future, and more generally, we often accept students through the following competitive programs:
- In Computer Science (CS): <a href="https://cs.ucsb.edu/education/undergraduate/special-programs" target="_blank">Distinction in the Major Program (DIMAP)</a> and <a href="https://ersp.cs.ucsb.edu/" target="_blank">Early Research Scholars Program (ERSP)</a>
- In Psychological & Brain Sciences (PBS): <a href="https://psych.ucsb.edu/undergraduate/honors-program" target="_blank">PBS Honors Program</a>
- In the College of Engineering (CoE): <a href="https://engineering.ucsb.edu/undergraduate/college-engineering-honors-program" target="_blank">CoE Honors Program</a>
- Across campus: <a href="https://cbsr.ucsb.edu/seeds" target="_blank">Student Engagement and Enrichment in Data Science (SEEDS)</a> and <a href="https://blacksinneurocomp.com/" target="_blank">Black Researchers Advancing Neuroscience (BRAIN)</a>
Summer programs:
- for high school students: <a href="https://www.summer.ucsb.edu/programs/research-mentorship-program/overview" target="_blank">Research Mentorship Program (RMP)</a>
- for undergrad students: <a href="https://www.ucop.edu/graduate-studies/initiatives-outreach/uc-leads.html" target="_blank">UC Leadership Excellence through Advanced DegreeS (LEADS)</a>
-->
<!--
<h3 class="fill-width">PhD Positions for Fall 2024</h3>
We have a limited number of opportunities to join us as a PhD student in Fall 2024. The deadline is December 2023.
We are especially looking for (but not limited to):
1. <a href="https://cs.ucsb.edu" target="_blank">CS</a>/<a href="https://ece.ucsb.edu" target="_blank">ECE</a>: Computer vision (CV) / human-computer interaction (HCI) students interested in <a href="https://bionicvisionlab.org/research/information-needs-blind-low-vision/">assistive technologies</a> and/or <a href="https://bionicvisionlab.org/research/event-based-vision/">Edge AI</a>. You may develop novel scene understanding algorithms, event-based CV systems, and/or prototypes of near-future accessibility aids. Your goal will be to publish at top-tier venues such as CVPR, ASSETS, or CHI.
2. <a href="https://psych.ucsb.edu" target="_blank">PBS</a>: Vision science students interested in working with real <a href="https://bionicvisionlab.org/research/information-needs-blind-low-vision/">low vision</a> and <a href="https://bionicvisionlab.org/research/predicting-visual-outcomes-visual-prostheses/">bionic eye patients</a>. You may run behavioral studies and/or develop computational models to further our understanding of the neural code of vision in health and disease. You don't need a CS background for this, but should have some experience with vision/cognition, psychophysics, and/or clinical populations.
Please apply through the <a href="https://www.graddiv.ucsb.edu/how-apply/application-and-admission-checklist" target="_blank">Grad Admissions Portal</a>.
You will then be able to indicate your wish to work with us in the "Major and Degree Objective" tab under "Faculty Interests".
US citizens and permanent residents may apply for a <a href="https://www.graddiv.ucsb.edu/how-apply/faqs-applicants#Graduate-Fee">fee waiver</a>.
<small><i class="fas fa-info-circle" aria-hidden="true"></i>
Please note that we get a lot of emails from prospective PhD students, so Michael will not be able to respond to messages that aren't clearly tailored to the lab's vision, especially if it is apparent that the same "boilerplate" message is being sent to many different labs.
You can make your application stand out by demonstrating that you have spent time on our website, <i>seen this message</i>, and thought hard about why our lab is a good fit for your skills and interests.</small>
<h3 class="fill-width">All Other Positions</h3>
We are completely full. Undergraduate students, please check back in January 2024 for Winter Quarter positions.
<h2>Postdocs</h2>
Please <a href="../people/beyeler_michael">contact Michael</a> with your CV and a brief statement
of research accomplishments, interests, and career plans.
Although all applicants are welcome, we are especially looking for expertise at the intersection of computational neuroscience and data science.
To be considered, you will need at least one publication
in a high-quality international conference (computer science)
or journal (neuroscience)
and you need to have a reasonable chance of getting a fellowship
to support your stay at UCSB.
<h3 class="fill-width">CS/ECE Master's Students</h3>
We have several MS positions available for Fall 2021:
We are looking for students interested in applying their methodological skills to research problems in bionic vision.
- Human-computer interaction: Build VR/AR/XR applications applied to low vision and bionic vision (Unity, compute shaders, image processing, bionic vision simulations, eye tracking)
- Biomedical image analysis: Build predictive models applied to biomedical image datasets (registration, segmentation, self-supervised learning)
- Computer vision: Build predictive models applied to biomedical image datasets (registration, segmentation, retinal fundus imaging, optical coherence tomography).
- Machine learning/data science: Build predictive models applied to real-world datasets collected on retinal prosthesis
patients (classification, regression, time-series analysis, interpretable models, heterogeneous data).
- Neuromorphic engineering: Build brain-inspired event-based vision systems applied to scene understanding (silicon retina, spiking neural networks)
- Software engineering/parallel programming: Develop parallelization back ends for
<a href="https://github.com/pulse2percept/pulse2percept">pulse2percept</a>, our open-source Python-based simulation
framework (Python, Cython, SciPy, OpenMP, GPGPU, JAX).
Please <a href="../people/beyeler_michael">contact Michael</a> to set up a time to meet.
Typical responsibilities include one or more of the following:
- Assist in running behavioral studies (typically PSY-99 / 199)
- Develop applications for <a href="https://github.com/pulse2percept/pulse2percept">pulse2percept</a> (typically CS-196)
- Get hands-on research experience by shadowing a PhD student
- Process and analyze scientific data
- Perform literature reviews
- Attend weekly lab meetings and present once a quarter
<h3 class="fill-width">Undergraduate Students</h3>
In addition, we often accept students through the following competitive programs:
- In Computer Science (CS): <a href="https://cs.ucsb.edu/education/undergraduate/special-programs" target="_blank">Distinction in the Major Program (DIMAP)</a> and <a href="https://ersp.cs.ucsb.edu/" target="_blank">Early Research Scholars Program (ERSP)</a>
- In Psychological & Brain Sciences (PBS): <a href="https://psych.ucsb.edu/undergraduate/honors-program" target="_blank">PBS Honors Program</a>
- In the College of Engineering (CoE): <a href="https://engineering.ucsb.edu/undergraduate/college-engineering-honors-program" target="_blank">CoE Honors Program</a>
- Across campus: <a href="https://cbsr.ucsb.edu/seeds" target="_blank">Student Engagement and Enrichment in Data Science (SEEDS)</a> and <a href="https://blacksinneurocomp.com/" target="_blank">Black Researchers Advancing Neuroscience (BRAIN)</a>
-->
<p>
<div class="gallery2 caption-position-none caption-effect-slide hover-effect-zoom hover-transition" itemscope itemtype="http://schema.org/ImageGallery">
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//001%20At%20Fall%20Vision%20Meeting%20with%20Dan%20Adams%20Eduardo%20Fernandez%20Xing%20Chen%20Frederik%20Ceyssens%20Antonio%20Lozano%20Jan%20Antolik.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//001%20At%20Fall%20Vision%20Meeting%20with%20Dan%20Adams%20Eduardo%20Fernandez%20Xing%20Chen%20Frederik%20Ceyssens%20Antonio%20Lozano%20Jan%20Antolik.jpg" alt="001 at fall vision meeting with dan adams eduardo fernandez xing chen frederik ceyssens antonio lozano jan antolik" />
</div>
<figcaption>
<p>At Fall Vision Meeting with Dan Adams, Eduardo Fernandez, Xing Chen, Frederik Ceyssens, Antonio Lozano, and Jan Antolik</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//001%20At%20Fall%20Vision%20Meeting%20with%20Dan%20Adams%20Eduardo%20Fernandez%20Xing%20Chen%20Frederik%20Ceyssens%20Antonio%20Lozano%20Jan%20Antolik.jpg" itemprop="contentUrl" alt="001 at fall vision meeting with dan adams eduardo fernandez xing chen frederik ceyssens antonio lozano jan antolik"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//002%20Byron%20Johnson%20presenting%20at%20VSS.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//002%20Byron%20Johnson%20presenting%20at%20VSS.jpg" alt="002 byron johnson presenting at vss" />
</div>
<figcaption>
<p>Byron Johnson presenting at VSS</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//002%20Byron%20Johnson%20presenting%20at%20VSS.jpg" itemprop="contentUrl" alt="002 byron johnson presenting at vss"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//003%20Group%20picture%20of%20the%20BVL%20team%20celebrating%20Prof.%20Beyeler%27s%20Plous%20Award.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//003%20Group%20picture%20of%20the%20BVL%20team%20celebrating%20Prof.%20Beyeler%27s%20Plous%20Award.jpg" alt="003 group picture of the bvl team celebrating prof" />
</div>
<figcaption>
<p>Group picture of the BVL team celebrating Prof. Beyeler&#39;s Plous Award</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//003%20Group%20picture%20of%20the%20BVL%20team%20celebrating%20Prof.%20Beyeler%27s%20Plous%20Award.jpg" itemprop="contentUrl" alt="003 group picture of the bvl team celebrating prof"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//004%20Group%20picture%20of%20the%20BVL%20team%20at%20The%20Eye%20and%20The%20Chip%202023.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//004%20Group%20picture%20of%20the%20BVL%20team%20at%20The%20Eye%20and%20The%20Chip%202023.jpg" alt="004 group picture of the bvl team at the eye and the chip 2023" />
</div>
<figcaption>
<p>Group picture of the BVL team at The Eye and The Chip 2023</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//004%20Group%20picture%20of%20the%20BVL%20team%20at%20The%20Eye%20and%20The%20Chip%202023.jpg" itemprop="contentUrl" alt="004 group picture of the bvl team at the eye and the chip 2023"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//005%20Dr%20Justin%20Kasowski%20getting%20hooded%20at%20the%202023%20Commencement%20Ceremony.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//005%20Dr%20Justin%20Kasowski%20getting%20hooded%20at%20the%202023%20Commencement%20Ceremony.jpg" alt="005 dr justin kasowski getting hooded at the 2023 commencement ceremony" />
</div>
<figcaption>
<p>Dr. Justin Kasowski getting hooded at the 2023 Commencement Ceremony</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//005%20Dr%20Justin%20Kasowski%20getting%20hooded%20at%20the%202023%20Commencement%20Ceremony.jpg" itemprop="contentUrl" alt="005 dr justin kasowski getting hooded at the 2023 commencement ceremony"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//007%20Argus%20II%20user%20drawing%20a%20perceived%20phosphene%20on%20a%20touchscreen%20in%20front%20of%20him.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//007%20Argus%20II%20user%20drawing%20a%20perceived%20phosphene%20on%20a%20touchscreen%20in%20front%20of%20him.jpg" alt="007 argus ii user drawing a perceived phosphene on a touchscreen in front of him" />
</div>
<figcaption>
<p>Argus II user drawing a perceived phosphene on a touchscreen in front of him</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//007%20Argus%20II%20user%20drawing%20a%20perceived%20phosphene%20on%20a%20touchscreen%20in%20front%20of%20him.jpg" itemprop="contentUrl" alt="007 argus ii user drawing a perceived phosphene on a touchscreen in front of him"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//007%20Prof.%20Beyeler%20sitting%20on%20a%20panel%20with%20prosthesis%20experts%20at%20the%20Eye%20and%20the%20Chip%202023.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//007%20Prof.%20Beyeler%20sitting%20on%20a%20panel%20with%20prosthesis%20experts%20at%20the%20Eye%20and%20the%20Chip%202023.jpg" alt="007 prof" />
</div>
<figcaption>
<p>Prof. Beyeler sitting on a panel with prosthesis experts at The Eye and the Chip 2023</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//007%20Prof.%20Beyeler%20sitting%20on%20a%20panel%20with%20prosthesis%20experts%20at%20the%20Eye%20and%20the%20Chip%202023.jpg" itemprop="contentUrl" alt="007 prof"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//008%20Bionic%20Vision%20Lab%20cake%20designed%20by%20Tori%20LeVier.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//008%20Bionic%20Vision%20Lab%20cake%20designed%20by%20Tori%20LeVier.jpg" alt="008 bionic vision lab cake designed by tori le vier" />
</div>
<figcaption>
<p>Bionic Vision Lab cake designed by Tori LeVier</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//008%20Bionic%20Vision%20Lab%20cake%20designed%20by%20Tori%20LeVier.jpg" itemprop="contentUrl" alt="008 bionic vision lab cake designed by tori le vier"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//009%20Jacob%20Granley%20in%20front%20of%20his%20poster%20at%20NeurIPS%202022.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//009%20Jacob%20Granley%20in%20front%20of%20his%20poster%20at%20NeurIPS%202022.jpg" alt="009 jacob granley in front of his poster at neur IP s 2022" />
</div>
<figcaption>
<p>Jacob Granley in front of his poster at NeurIPS 2022</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//009%20Jacob%20Granley%20in%20front%20of%20his%20poster%20at%20NeurIPS%202022.jpg" itemprop="contentUrl" alt="009 jacob granley in front of his poster at neur IP s 2022"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//010%20Galen%20Pogoncheff%20using%20the%20Bionic%20Vision%20XR%20Simulator.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//010%20Galen%20Pogoncheff%20using%20the%20Bionic%20Vision%20XR%20Simulator.jpg" alt="010 galen pogoncheff using the bionic vision xr simulator" />
</div>
<figcaption>
<p>Galen Pogoncheff using the Bionic Vision XR Simulator</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//010%20Galen%20Pogoncheff%20using%20the%20Bionic%20Vision%20XR%20Simulator.jpg" itemprop="contentUrl" alt="010 galen pogoncheff using the bionic vision xr simulator"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//011%20Students%20drifting%20off%20in%20the%20sunset%20after%20a%20hard%20days%20work.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//011%20Students%20drifting%20off%20in%20the%20sunset%20after%20a%20hard%20days%20work.jpg" alt="011 students drifting off in the sunset after a hard days work" />
</div>
<figcaption>
<p>Students drifting off in the sunset after a hard day&#39;s work</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//011%20Students%20drifting%20off%20in%20the%20sunset%20after%20a%20hard%20days%20work.jpg" itemprop="contentUrl" alt="011 students drifting off in the sunset after a hard days work"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//012%20Prof.%20Beyeler%20demoing%20BionicVisionXR%20at%20VSS%202023%20while%20a%20participant%20is%20using%20the%20head-mounted%20display.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//012%20Prof.%20Beyeler%20demoing%20BionicVisionXR%20at%20VSS%202023%20while%20a%20participant%20is%20using%20the%20head-mounted%20display.jpg" alt="012 prof" />
</div>
<figcaption>
<p>Prof. Beyeler demoing BionicVisionXR at VSS 2023 while a participant is using the head-mounted display</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//012%20Prof.%20Beyeler%20demoing%20BionicVisionXR%20at%20VSS%202023%20while%20a%20participant%20is%20using%20the%20head-mounted%20display.jpg" itemprop="contentUrl" alt="012 prof"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//013%20Mac%20and%20Cheese%20trophy%20given%20to%20first%20place%20winner%20Lily%20Turkstra%20at%20the%20lab%27s%20first%20competitive%20Mac%20and%20Cheese%20cookoff.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//013%20Mac%20and%20Cheese%20trophy%20given%20to%20first%20place%20winner%20Lily%20Turkstra%20at%20the%20lab%27s%20first%20competitive%20Mac%20and%20Cheese%20cookoff.jpg" alt="013 Mac and Cheese trophy given to first place winner Lily Turkstra at the lab&#39;s first competitive Mac and Cheese cookoff" />
</div>
<figcaption>
<p>013 Mac and Cheese trophy given to first place winner Lily Turkstra at the lab&#39;s first competitive Mac and Cheese cookoff</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//013%20Mac%20and%20Cheese%20trophy%20given%20to%20first%20place%20winner%20Lily%20Turkstra%20at%20the%20lab%27s%20first%20competitive%20Mac%20and%20Cheese%20cookoff.jpg" itemprop="contentUrl" alt="013 Mac and Cheese trophy given to first place winner Lily Turkstra at the lab&#39;s first competitive Mac and Cheese cookoff"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//014%20Group%20picture%20of%20the%20BVL%20team%20at%20the%20Mac%20and%20Cheese%20cookoff.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//014%20Group%20picture%20of%20the%20BVL%20team%20at%20the%20Mac%20and%20Cheese%20cookoff.jpg" alt="014 Group picture of the BVL team at the Mac and Cheese cookoff" />
</div>
<figcaption>
<p>014 Group picture of the BVL team at the Mac and Cheese cookoff</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//014%20Group%20picture%20of%20the%20BVL%20team%20at%20the%20Mac%20and%20Cheese%20cookoff.jpg" itemprop="contentUrl" alt="014 Group picture of the BVL team at the Mac and Cheese cookoff"></a>
</figure>
</div>
<div class="box">
<figure itemprop="associatedMedia" itemscope itemtype="http://schema.org/ImageObject">
<div class="img" style="background-image: url('https://bionicvisionlab.org/img/gallery//015%20Dr%20Aiwen%20Xu%20getting%20hooded%20at%20the%202024%20Commencement%20Ceremony.jpg');" >
<img itemprop="thumbnail" src="https://bionicvisionlab.org/img/gallery//015%20Dr%20Aiwen%20Xu%20getting%20hooded%20at%20the%202024%20Commencement%20Ceremony.jpg" alt="015 Dr. Aiwen Xu getting hooded at the 2024 Commencement Ceremony" />
</div>
<figcaption>
<p>015 Dr. Aiwen Xu getting hooded at the 2024 Commencement Ceremony</p>
</figcaption>
<a href="https://bionicvisionlab.org/img/gallery//015%20Dr%20Aiwen%20Xu%20getting%20hooded%20at%20the%202024%20Commencement%20Ceremony.jpg" itemprop="contentUrl" alt="015 Dr. Aiwen Xu getting hooded at the 2024 Commencement Ceremony"></a>
</figure>
</div>
</div>
</p>
</description>
</item>
<item>
<title>Location</title>
<link>https://bionicvisionlab.org/contact/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/contact/</guid>
<description></description>
</item>
<item>
<title>People</title>
<link>https://bionicvisionlab.org/people/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/people/</guid>
<description></description>
</item>
<item>
<title>Recent &amp; Upcoming Talks</title>
<link>https://bionicvisionlab.org/talk/</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://bionicvisionlab.org/talk/</guid>
<description></description>
</item>
</channel>
</rss>