index.html · 840 lines (773 loc) · 50.5 KB
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="">
<meta name="author" content="Eric Fosler-Lussier">
<title>Eric Fosler-Lussier</title>
<!-- Bootstrap core CSS -->
<link href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
<!-- Font Awesome-->
<link href="https://stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN" crossorigin="anonymous">
<style>
.bd-placeholder-img {
font-size: 1.125rem;
text-anchor: middle;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
@media (min-width: 768px) {
.bd-placeholder-img-lg {
font-size: 3.5rem;
}
}
</style>
<!-- Custom styles for this template -->
<link href="efosler.css" rel="stylesheet">
</head>
<body>
<nav class="navbar bd-navbar navbar-dark fixed-top bg-dark flex-md-nowrap p-0 shadow">
<a class="navbar-brand col-sm-3 col-md-3 mr-0" href="#">Eric Fosler-Lussier</a>
<ul class="navbar-nav bd-navbar-nav flex-row" id="nav">
<li class="nav-item text-nowrap">
<a class="nav-link" href="#">Home</a>
</li>
<li class="nav-item text-nowrap">
<a class="nav-link" href="#bio">Bio</a>
</li>
<li class="nav-item text-nowrap">
<a class="nav-link" href="#research">Research</a>
</li>
<li class="nav-item text-nowrap">
<a class="nav-link" href="#teaching">Teaching</a>
</li>
<li class="nav-item text-nowrap">
<a class="nav-link" href="#pubs">Publications</a>
</li>
<li class="nav-item text-nowrap">
<a class="nav-link" href="#fun">Fun Stuff</a>
</li>
</ul>
</nav>
<div class="container-fluid">
<div class="row">
<nav class="col-md-3 d-none d-md-block bg-light sidebar">
<div class="sidebar-sticky">
<ul class="nav flex-column">
<li class="nav-item">
<div class="sidebar-pic">
<img class="rounded-circle img-fluid" src="img/eric.jpg" alt="Eric Fosler-Lussier">
</div>
</li>
<li class="nav-item">
<div class="academic-title"><a href="#makhoul">John I. Makhoul Professor,</a><br/>
Department of Computer Science &amp; Engineering<br/>
Assistant Dean for Faculty Lifecycle, <br/>
College of Engineering <br/>
The Ohio State University</div>
</li>
<li class="nav-item">
<div class="academic-subtitle">Professor by Courtesy of Linguistics, Biomedical Informatics</div>
</li>
<li class="nav-item">
<a class="nav-link" href="https://goo.gl/maps/9yq8UNsAoF1TCaEH8"><i class="fa fa-fw fa-map-marker"></i>Office: <span class="academic-subtitle"> 491 Dreese Lab</span>
</a>
</li>
<li class="nav-item nav-link">
<i class="fa fa-fw fa-phone"></i>Phone: <span class="academic-subtitle"> +1 614 292 4890</span>
</li>
<li class="nav-item nav-link">
<i class="fa fa-fw fa-envelope"></i>Email:<span class="academic-subtitle"> [first half of last name] @cse.osu.edu</span>
</li>
<li class="nav-item nav-link">
<i class="fa fa-fw fa-map-marker"></i>Mailing address: <br/><span class="academic-subtitle">395 Dreese Lab, 2015 Neil Ave, Columbus, OH 43210</span>
</li>
<li class="nav-item">
<a class="nav-link" href="https://github.com/OSU-slatelab"><i class="fa fa-fw fa-github"></i> Lab GitHub
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="https://efosler.github.io/Fosler-Lussier_cv.pdf">
<span><i class="fa fa-fw fa-file"></i> CV</span>
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="https://github.com/efosler">
<span><i class="fa fa-fw fa-github"></i> GitHub</span>
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="https://scholar.google.com/citations?hl=en&user=AlsMV98AAAAJ&view_op=list_works">
<span><i class="fa fa-fw fa-graduation-cap"></i> Google Scholar</span>
</a>
</li>
<li class="nav-item">
<a class="nav-link" href="https://orcid.org/0000-0001-8004-5169">
<span><i class="fa-brands fa-orcid"></i> ORCID 0000-0001-8004-5169</span>
</a>
</li>
</ul>
</div>
</nav>
<main role="main" class="col-md-9 ml-sm-auto col-lg-9 px-4" id='main-'>
<div class="card-columns">
<div class="card">
<div class="card-body">
<h5 class="card-title">Research</h5>
<h6 class="card-subtitle mb-2 text-muted">My students and I work in a number of areas in speech and language processing, including...</h6>
<ul>
<li>Novel statistical methods for speech recognition</li>
<li>Prediction of errors in ASR systems</li>
<li>Discriminative language/pronunciation models</li>
<li>Phonetically-aware speech enhancement</li>
<li>Statistical investigations of linguistic phenomena in large corpora</li>
<li>Spoken dialogue system design; spoken human-computer interface issues</li>
<li>Natural language generation for spoken dialogue systems</li>
<li>Information extraction from electronic medical records</li>
</ul>
<a href="https://cse.osu.edu/~fosler/slate" class="card-link">Lab Link</a>
<a href="#research" class="card-link">More about Research</a>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title">Teaching</h5>
<h6 class="card-subtitle mb-2 text-muted">I teach a number of the Artificial Intelligence Courses at OSU:</h6>
<ul>
<li>Intro to Artificial Intelligence</li>
<li>Neural Networks</li>
<li>Foundations of Speech and Language Processing</li>
</ul>
<a href="#teaching" class="card-link">More about Teaching</a>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title">Publications</h5>
<h6 class="card-subtitle mb-2 text-muted">Recent publications include</h6>
<div class="bibtex_display" bibtexkey="sunder2025non|wagner2025ohio|sunder2024improving|jones-etal-2024-multi|sunder2024end|chang-fosler-lussier-2023-selective" id="homepubs"></div>
<a href="#pubs" class="card-link">Full List</a>
<a href="https://scholar.google.com/citations?hl=en&user=AlsMV98AAAAJ&view_op=list_works" class="card-link">Google Scholar</a>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title">Recent News</h5>
<!--<h6 class="card-subtitle mb-2 text-muted">Recent publications include</h6>-->
<ul>
<li> <a href="https://www.linkedin.com/in/david-palzer-04397662/">David Palzer</a> finished his PhD degree!</li>
<li> <a href="https://www.linkedin.com/in/vishal-sunder-11a2a4193/">Vishal Sunder</a> finished his PhD degree!</li>
</ul>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title">Useful Stuff</h5>
<h6 class="card-subtitle mb-2 text-muted">Some things we've done</h6>
<ul>
<li> <a href="https://github.com/OSU-slatelab">OSU SLaTe Lab GitHub Repository</a> </li>
<li> <a href="https://childes.talkbank.org/access/Eng-NA/OCSC.html">Ohio Child Speech Corpus</a></li>
<li> <a href="https://www.youtube.com/channel/UClcHk9xPa2e0PQqJBtJ_3xg">OSU Virtual Patient Project</a></li>
<li> <a href="https://slate.cse.ohio-state.edu/JET/COVID-19/">COVID concept embeddings</a></li>
<li> <a href="http://speechkitchen.org">Speech Recognition Virtual Kitchen</a> </li>
<li> <a href="http://buckeyecorpus.osu.edu">Buckeye Corpus of Speech</a></li>
</ul>
</div>
</div>
</div>
<h2>Current Research Students</h2>
<div class="card-columns" id="currentstudents">
</div>
<h2>Ph.D. Graduates</h2>
<div class="card-columns" id='phdstudents'>
</div>
<h2>Postdocs</h2>
<div class="card-columns" id='postdocs'>
</div>
<h2>MS/BS Graduates</h2>
<div class="card-columns" id='msbsstudents'>
</div>
</main>
<main role="main" class="col-md-9 ml-sm-auto col-lg-9 px-4" id="main-makhoul"><h2>About the Makhoul Professorship</h2>
The Ohio State University Board of Trustees established the John I. Makhoul Professorship in Electrical and Computer Engineering in 2020 in support of signal processing and machine learning at Ohio State; I am the first holder of the professorship (2022-2026). Unusually, while the professorship is in Electrical and Computer Engineering, my tenure home remains in Computer Science and Engineering. I'm very grateful to the College of Engineering, ECE, CSE and particularly John Makhoul for the opportunity to serve as the first Makhoul Professor.
</main>
<main role="main" class="col-md-9 ml-sm-auto col-lg-9 px-4" id="main-bio">
<h2>Bio</h2>
<p>Eric Fosler-Lussier is the <a href="#makhoul">John I. Makhoul Professor</a> in Computer Science and Engineering and Assistant Dean for Faculty Lifecycle
in the College of Engineering at the Ohio State University. He holds courtesy appointments in Linguistics and Biomedical Informatics.
After receiving a B.A.S. (Computer and Cognitive Science) and B.A. (Linguistics) from the University of Pennsylvania in 1993,
he received his Ph.D. in 1999 from the University of California, Berkeley, performing his dissertation research
at the International Computer Science Institute under the tutelage of Prof. Nelson Morgan.
He has also been a Member of Technical Staff at Bell Labs, Lucent Technologies,
a Visiting Researcher at Columbia University, and a Visiting Professor at the University of Pennsylvania.
Awards include the NSF CAREER Award (2006), the Ohio State College of Engineering Lumley Research Award (2010, 2021),
the IEEE Signal Processing Society Best Paper Award (2011), and the IMIA Yearbook Best Paper Award in Natural Language Processing (2015, 2017).</p>
<p>He has published widely in speech and language processing, and is a Fellow of the International Speech Communication Association and the IEEE, and a member of the Association for Computational Linguistics.</p>
<p>Fosler-Lussier served as a senior area editor for the IEEE/ACM Transactions on Audio, Speech, and Language Processing; he has served three terms on the IEEE Speech and Language Technical Committee (Chair, 2019-2020).
He also served on the editorial board of the ACM Transactions on Speech and Language Processing, as an action editor for Transactions of the Association for Computational Linguistics, and was co-Program Chair for NAACL 2012.</p>
<h2> Appointments </h2>
<div class="card" style="width: 100%;">
<ul class="list-group list-group-flush">
<li class="list-group-item">2022 - 2026: John I. Makhoul Professor of Electrical and Computer Engineering</li>
<li class="list-group-item">2025 - present: Assistant Dean for Faculty Lifecycle, <a href="http://engineering.osu.edu/">College of Engineering</a></li>
<li class="list-group-item">2025: Acting Department Chair, <a href="http://www.cse.ohio-state.edu/">Dept. of Computer Science and Engineering</a></li>
<li class="list-group-item">2021 - present: Associate Chair for Academic Administration, <a href="http://www.cse.ohio-state.edu/">Dept. of Computer Science and Engineering</a></li>
<li class="list-group-item">2020 - 2023: Program co-Director, Foundations of Data Science and Artificial Intelligence, <a href="http://tdai.osu.edu/">Translational Data Analytics Institute</a></li>
<li class="list-group-item">2016 - present: Professor <a href="http://www.cse.ohio-state.edu/">Dept. of Computer Science and Engineering</a>, and Professor by Courtesy, Departments of <a href="http://www.ling.ohio-state.edu/">Linguistics</a> and <a href="http://bmi.osu.edu/">Biomedical Informatics</a>, OSU</li>
<li class="list-group-item">Jan - May 2019: Visiting Professor <a href="http://www.cis.upenn.edu">Dept. of Computer and Information Science</a>, University of Pennsylvania
</li>
<li class="list-group-item">2010 - 2016 : Associate Professor <a href="http://www.cse.ohio-state.edu/">Dept. of Computer Science and Engineering</a>, and Associate Professor by Courtesy, Departments of <a href="http://www.ling.ohio-state.edu/">Linguistics</a> and <a href="http://bmi.osu.edu/">Biomedical Informatics (since 2016)</a>, OSU</li>
<li class="list-group-item">2003 - 2010 : Assistant Professor <a href="http://www.cse.ohio-state.edu/">Dept. of Computer Science and Engineering</a>, and Assistant Professor by Courtesy, Department of <a href="http://www.ling.ohio-state.edu/">Linguistics</a> (since 2004), OSU</li>
<li class="list-group-item">2003 - present: Member, <a href="http://www.cog.ohio-state.edu/">Center for Cognitive Science</a>, OSU</li>
<li class="list-group-item">2003: Visiting Research Scientist, <a href="http://www.ee.columbia.edu">Dept. of Electrical Engineering</a>, Columbia University</li>
<li class="list-group-item">2000-2002: Member of Technical Staff, Bell Labs Research, Lucent Technologies</li>
<li class="list-group-item">1999-2000: Postdoctoral Researcher, International Computer Science Institute</li>
<li class="list-group-item">1994-1999: Graduate Student Researcher, U.C. Berkeley and International Computer Science Institute</li>
</ul>
</div>
<h2> Professional Activities </h2>
<div class="card" style="width: 100%;">
<ul class="list-group list-group-flush">
<li class="list-group-item">
IEEE James L. Flanagan Speech and Audio Processing Award Committee, member 2022-2024, chair 2025-2026, past chair 2027.
</li>
<li class="list-group-item">
ISCA Fellow Selection Committee, 2024-2027.
</li>
<li class="list-group-item">
Member of IEEE SPS <a href="http://www.signalprocessingsociety.org/technical-committees/list/sl-tc/">Speech and Language Technical Committee</a>, 2006-2008, 2011-2013, 2017-2021.
<ul>
<li> Vice Chair, 2018 </li>
<li> Chair, 2019-2020 </li>
<li> Past Chair, 2021 </li>
</ul>
</li>
<li class="list-group-item">
IEEE Signal Processing Society Awards Board, 2021-2023.
</li>
<li class="list-group-item">
Senior Area chair, <a href="http://naacl.org/naacl-hlt-2016/">NAACL HLT 2021</a>
</li>
<li class="list-group-item">
IEEE Signal Processing Society Technical Directions Board, 2019-2020.
</li>
<li class="list-group-item">
General Co-chair, IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Singapore, 2019.
</li>
<li class="list-group-item">
Associate Editor, <a href="https://signalprocessingsociety.org/publications-resources/ieeeacm-transactions-audio-speech-and-language-processing/ieeeacm">IEEE/ACM Transactions on Audio, Speech and Language Processing</a>, 2017-2021
</li>
<li class="list-group-item">
Action Editor, <a href="http://transacl.org">Transactions of the Association for Computational Linguistics</a>, 2012-2018
</li>
<li class="list-group-item">
Area chair, <a href="http://naacl.org/naacl-hlt-2016/">NAACL HLT 2018</a>
</li>
<li class="list-group-item">
Executive Committee, <a href="http://www.cog.ohio-state.edu">Center for Cognitive and Brain Sciences,</a> The Ohio State University, 2011-2014
</li>
<li class="list-group-item">
Tutorials chair, <a href="http://interspeech2016.org">Interspeech 2016</a>
</li>
<li class="list-group-item">
Area chair, <a href="http://naacl.org/naacl-hlt-2016/">NAACL HLT 2016</a>
</li>
<li class="list-group-item">
Program co-chair, <a href="http://www.naaclhlt2012.org">North American Association for Computational Linguistics Annual Meeting - Human Language Technologies Conference (NAACL HLT), 2012</a>
</li>
<li class="list-group-item">
Associate Editor, <a href="http://tslp.acm.org"> ACM Transactions on Speech and Language Processing</a>, 2011-2013
</li>
<li class="list-group-item">
Finance chair, <a href="http://www.asru2011.org"> IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2011)</a>
</li>
<li class="list-group-item">
Panels co-chair, <a href="http://www.slt2010.org"> IEEE Spoken Language Technology Workshop (SLT 2010)</a>
</li>
<li class="list-group-item">
Publication chair, <a href="http://www.lsi.upc.edu/events/emnlp2010"> 2010 Conference on Empirical Methods in Natural Language Processing </a>
</li>
<li class="list-group-item">
<a href="http://www.aclweb.org">ACL Archivist</a>, 2006-2010
</li>
<li class="list-group-item">
Executive committee, <a href="http://www.sigmorphon.org">ACL Special Interest Group on Computational Morphology and Phonology (SIGMORPHON)</a>, 2006-2007
</li>
<li class="list-group-item">
Publicity chair, IEEE/ACL Workshop on Spoken Language Technology, 2006.
</li>
<li class="list-group-item">
Student Workshop Faculty Co-advisor, <a href="http://www1.cs.columbia.edu/~pablo/hlt-naacl04">HLT/NAACL Conference</a>, 2004.
</li>
<li class="list-group-item">
Publicity chair, <a href="http://www.asru2003.org">IEEE Workshop on Automatic Speech Recognition and Understanding</a>, 2003.
</li>
<li class="list-group-item">
Co-organizer, <a href="http://www.clsp.jhu.edu/pmla2002">ISCA Tutorial and Research Workshop on Pronunciation Modeling and Lexicon Adaptation for Spoken Language Technology</a>, 2002.
</li>
<li class="list-group-item">
Reviewer/program committee, Annual Meeting of the Association for Computational Linguistics, Int'l Conference on Acoustics, Speech, and Signal Processing, IEEE Workshop on Automatic Speech Recognition and Understanding, IEEE/ACL Workshop on Spoken Language Technology, Interspeech, Human Language Technologies conference, Neural Information Processing Systems.
</li>
<li class="list-group-item">
Reviewer for the journals Speech Communication; Computer Speech and Language; Computational Linguistics; IEEE Transactions on Speech and Audio Processing; IEEE Transactions on Systems, Man, and Cybernetics; Machine Learning Journal; and the Journal of the Acoustical Society of America.
</li></ul>
</div>
</main>
<main role="main" class="col-md-9 ml-sm-auto col-lg-9 px-4" id="main-research">
<div class="card">
<div class="card-body">
<h5 class="card-title"> Introduction </h5>
<p>
This page gives an overview of and links to recent research papers that
describe some of the research of <a href="http://www.cse.ohio-state.edu/slate">my lab</a>.
The commentary for some papers gives links to follow-on work so that the reader can see the trajectories of the different research lines.
</p>
<p>
My group's current research covers a number of topics in speech and natural language processing.
The overall goal of my lab's research is to find meaningful ways to integrate acoustic, phonetic, lexical, and other
linguistic insights into the speech recognition process through a combination of statistical modeling and data/error analysis.
My goal is to train students to be flexible, independent thinkers
who can apply statistical techniques to a range of language-related
problems. </p>
</div>
</div>
<div class="card">
<div class="card-body">
<h5 class="card-title">Joining the lab </h5>
<p> The <a href="http://www.cse.ohio-state.edu/slate"> Speech and Language
Technologies Laboratory </a> is a group of dynamic researchers who are
interested in mixing aspects of machine learning with speech and
language processing. </p>
<p><b> If you are not an OSU student, but want to apply: </b> see <a data-toggle='modal' href='#exampleModal'>my note on the application process to OSU</a>.</p>
<p><b> If you are a current OSU student:</b> see the <a data-toggle='modal' href="#exampleModal">"once you are at OSU" section</a> of my note.
</p>
</div>
</div>
<div class="card-column">
<div class="card">
<div class="card-body">
<h5 class="card-title"> Selected papers (with commentary)</h5>
<h6>D. Bagchi, P. Plantinga, A. Stiff, and E. Fosler-Lussier, <a href="https://arxiv.org/pdf/1803.09816.pdf">"Spectral feature mapping with mimic loss for robust speech recognition,"</a> ICASSP 2018.</h6>
<p>
For the task of speech enhancement, local learning objectives are agnostic to phonetic structures helpful for speech recognition.
We propose to add a global criterion to ensure de-noised speech is useful for downstream tasks like ASR.
We first train a spectral classifier on clean speech to predict senone labels.
Then, the spectral classifier is joined with our speech enhancer as a noisy speech recognizer.
This model is taught to imitate the output of the spectral classifier alone on clean speech.
This mimic loss is combined with the traditional local criterion to train the speech enhancer to produce de-noised speech.
Feeding the de-noised speech to an off-the-shelf Kaldi training recipe for the CHiME-2 corpus shows significant improvements in WER.
</p>
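<p>In outline, training combines the traditional local spectral criterion with a "mimic" term that makes enhanced speech behave like clean speech under the frozen senone classifier. A minimal sketch, assuming a simple MSE form for both terms and an illustrative alpha weighting (not the paper's exact formulation):</p>

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mimic_loss(enhanced, clean, senone_classifier, alpha=0.5):
    """Local spectral loss plus a global 'mimic' term: the enhanced frame
    should produce the same senone scores as the clean frame under a
    classifier trained on clean speech and frozen here.

    enhanced, clean: spectral values for one frame (lists of floats).
    senone_classifier: maps a spectral frame to a list of senone scores.
    """
    local = mse(enhanced, clean)                  # traditional local criterion
    mimic = mse(senone_classifier(enhanced),
                senone_classifier(clean))         # behavioral (mimic) criterion
    return alpha * local + (1.0 - alpha) * mimic
```

<p>The enhancer is trained to minimize this combined loss, so de-noising is steered toward the distinctions the recognizer's acoustic model actually relies on.</p>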
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title"> D. Newman-Griffis, A. Lai, and E. Fosler-Lussier, <a href="">"Jointly embedding entities and text with distant supervision,"</a> Proceedings of the 3rd Workshop on Representation Learning for NLP, 2018.
</h6>
<p>
Learning representations for knowledge base entities and concepts is becoming increasingly important for NLP applications.
However, recent entity embedding methods have relied on structured resources that are expensive to create for new domains and corpora.
We present a distantly-supervised method for jointly learning embeddings of entities and text from an unannotated corpus,
using only a list of mappings between entities and surface forms. We learn embeddings from open-domain and biomedical corpora,
and compare against prior methods that rely on human-annotated text or large knowledge graph structure.
Our embeddings capture entity similarity and relatedness better than prior work, both in existing biomedical datasets
and a new Wikipedia-based dataset that we release to the community. Results on analogy completion and entity sense disambiguation
indicate that entities and words capture complementary information that can be effectively combined for downstream use.
</p>
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title"> J.K. Kim, Y.B. Kim, R. Sarikaya, and E. Fosler-Lussier <a href="">"Cross-lingual transfer learning for POS tagging without cross-lingual resources,"</a> EMNLP 2017.
</h6>
<p>
Training a POS tagging model with cross-lingual transfer learning usually requires linguistic knowledge
and resources about the relation between the source language and the target language. In this paper,
we introduce a cross-lingual transfer learning model for POS tagging without ancillary resources such as parallel corpora.
The proposed cross-lingual model utilizes a common BLSTM that enables knowledge transfer from other languages,
and private BLSTMs for language-specific representations. The cross-lingual model is trained with language-adversarial
training and bidirectional language modeling as auxiliary objectives to better represent language-general information
while not losing the information about a specific target language. Evaluating on POS datasets from 14 languages in the
Universal Dependencies corpus, we show that the proposed transfer learning model improves the POS tagging performance
of the target languages without exploiting any linguistic knowledge between the source language and the target language.
</p>
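<p>The shared/private architecture can be sketched with linear projections standing in for the common and language-specific BLSTMs; the adversarial and language-modeling auxiliary objectives are omitted, and all names here are illustrative:</p>

```python
import math

def softmax(xs):
    """Normalize scores into a probability distribution."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matvec(mat, vec):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, vec)) for row in mat]

def tag_posteriors(word_vec, shared_proj, private_proj, out_proj):
    """POS-tag posteriors for one word: concatenate features from a
    projection shared across languages (standing in for the common BLSTM)
    and a language-specific one (the private BLSTM), then apply a
    per-language output layer."""
    shared = [math.tanh(h) for h in matvec(shared_proj, word_vec)]    # language-general
    private = [math.tanh(h) for h in matvec(private_proj, word_vec)]  # language-specific
    return softmax(matvec(out_proj, shared + private))
```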
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title">
Y. He and E. Fosler-Lussier. <a href="papers/scrf_word.pdf">"Segmental Conditional Random Fields with Deep Neural Networks as Acoustic Models for First-Pass Word Recognition,"</a> <i>Interspeech 2015</i>, Dresden, Germany, 2015.
</h6>
<p>
In this line of research, my lab engaged in a series of studies to
build automatic speech recognition systems using direct discriminative
models that can combine correlated evidence of linguistic events.
This work is the latest step in this line of research: it provides a
discriminative framework for modeling longer trajectories in speech
through segmental models. The innovation in this particular paper is
the first one-pass discriminative segmental model for word recognition
(building on <a href="papers/scrf_phone.pdf">our previous work in phone
recognition</a>). We show that the monophone-based model improves
recognition over discriminatively trained monophone-based HMM and
frame-based CRF models for the Wall Street Journal read-speech task,
and starts to approach triphone-based performance. Thus, this serves
as a good intermediate point in building systems that can start to
compete with state-of-the-art systems.
</p>
<p>
See also:
</p>
<ul>
<li> Y. He and E. Fosler-Lussier, <a href="papers/scrf_phone.pdf">"Efficient Segmental Conditional Random Fields for One-Pass Phone Recognition,"</a> Interspeech 2012.</li>
</ul>
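<p>At its core, one-pass segmental decoding is a dynamic program over segment boundaries rather than frames. A toy sketch of that search, with a pluggable scoring function in place of the paper's CRF features and acoustic models:</p>

```python
def best_segmentation(n_frames, segment_score, max_dur):
    """Find the segmentation of frames [0, n_frames) that maximizes the
    sum of per-segment scores, considering segments up to max_dur frames.

    segment_score(start, end) scores frames[start:end] as one segment;
    in a segmental CRF this would combine acoustic and label features.
    """
    best = [float("-inf")] * (n_frames + 1)   # best score ending at each frame
    back = [0] * (n_frames + 1)               # back-pointer to segment start
    best[0] = 0.0
    for end in range(1, n_frames + 1):
        for start in range(max(0, end - max_dur), end):
            s = best[start] + segment_score(start, end)
            if s > best[end]:
                best[end], back[end] = s, start
    # Walk the back-pointers to recover the segment boundaries.
    bounds, t = [], n_frames
    while t > 0:
        bounds.append((back[t], t))
        t = back[t]
    return best[n_frames], bounds[::-1]
```

<p>With a scorer that rewards long homogeneous stretches, the program recovers segment boundaries in a single left-to-right pass, which is what makes first-pass (rather than rescoring-based) segmental recognition feasible.</p>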
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title">R. Prabhavalkar, E. Fosler-Lussier, and K. Livescu, <a href="papers/asru2011_prabhavalkar.pdf">"A Factored Conditional Random Field Model for Articulatory Feature Forced Transcription,"</a> IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2011.</h6>
<p><i>Recognized as a Spotlight Poster at ASRU 2011 (voted as a top 3 poster in its session by the attendees).</i></p>
<p>
Segmental modeling can be thought of as a type of linguistic
structural modeling (integrating linguistic structure over time).
Another linguistic-inspired modeling approach that we have
experimented with, in conjunction with partners at Toyota
Technological Institute at Chicago, explicitly models articulator
trajectories over time through a factored model -- unlike phone-based
systems, this paradigm allows models of asynchrony which can account
for different types of pronunciation variation commonly seen in
continuous speech. In this paper, we use factorized Conditional
Random Fields in order to learn patterns of asynchrony that can be
utilized in providing articulatory feature transcriptions that can be
expensive to obtain manually. Our experiments show that the
transcriptions can better account for pronunciation variations
observed by linguists in the Switchboard corpus. In subsequent
papers, we were able to utilize this framework for acoustic-based
keyword spotting, showing improvement over a HMM-based baseline.
</p>
<p> See also:
</p><ul>
<li> R. Prabhavalkar, J. Keshet, K. Livescu, and E. Fosler-Lussier, <a href="papers/prabhavalkar_etal_MLSLP2012.pdf">"Discriminative Spoken Term Detection with Limited Data,"</a> Symposium on Machine Learning in Speech and Language Processing (MLSLP), 2012. </li>
<li> R. Prabhavalkar, K. Livescu, E. Fosler-Lussier, J. Keshet, <a href="papers/prabhavalkar_ICASSP2013.pdf">"Discriminative Articulatory Models for Spoken Term Detection in Low-Resource Conversational Settings,"</a> Proceedings of ICASSP, 2013.</li>
</ul>
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title">
P. Jyothi, E. Fosler-Lussier, and K. Livescu, <a href="http://www.isca-speech.org/archive/interspeech_2012/i12_1063.html">"Discriminatively learning factorized finite state pronunciation models from dynamic Bayesian networks,"</a> Interspeech 2012.
</h6>
<p> <i> Best Student Paper Award, Interspeech 2012 </i> </p>
<p>
This paper takes a slightly different approach to articulatory
modeling than the Prabhavalkar work described above, starting from a
<a href="papers/jyothi_etal_ICASSP2011.pdf">previous Dynamic Bayesian
Network (DBN) approach</a> and efficiently derives, as well as
discriminatively trains, a weighted finite state transducer (WFST)
representation for the articulatory feature-based model of
pronunciation. We use the conditional independence assumptions
imposed by the DBN to efficiently convert it into a sequence of WFSTs
(factor FSTs) which, when composed, yield the same model as the
DBN. We then introduce a linear model of the arc weights of the factor
FSTs and discriminatively learn its weights using the averaged
perceptron algorithm. We demonstrate the approach using a lexical
access task in which we recognize a word given its surface
realization. This work subsequently led to <a href="http://www.isca-speech.org/archive/interspeech_2013/i13_1961.html">
discriminative training approaches for factorized WFSTs</a> that can
be used even in standard WFST-based ASR systems.
</p>
<p> See also:
</p><ul>
<li>P. Jyothi, K. Livescu, and E. Fosler-Lussier, <a href="papers/jyothi_etal_ICASSP2011.pdf">"Lexical access experiments with context-dependent articulatory feature-based model,"</a> ICASSP 2011.</li>
<li>P. Jyothi, E. Fosler-Lussier, and K. Livescu, <a href="http://www.isca-speech.org/archive/interspeech_2013/i13_1961.html">"Discriminative Training of WFST Factors with Application to Pronunciation Modeling,"</a> <i>Proceedings of Interspeech</i>, 2013. </li>
</ul>
<p></p>
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title">
W. Hartmann, A. Narayanan, E. Fosler-Lussier, and D. Wang, <a href="papers/hartmann_taslp13.pdf">"A Direct Masking Approach to Robust ASR,"</a> <i>IEEE Transactions on Audio, Speech, and Language Processing,</i> 21:10, pp 1993-2005, Oct 2013.
</h6>
<p>
One line of research that we have followed is to use some of the
discriminative techniques that we have developed in speech recognition
in concert with speech separation techniques inspired by (and often in
collaboration with) my colleague DeLiang Wang. The paper highlighted
here was an outgrowth of this work, in which my student Billy Hartmann
and I asked whether it was possible to use speech separation directly
on noisy speech data to mask out noise without any reconstruction of
the masked components in ASR. Previously it was assumed that
zero-energy "holes" would cause problems in spectrally-masked speech
that was not reconstructed or where the missing components were not
marginalized in the probability estimation. The baseline for these
latter techniques was usually just the recognition on the non-modified
(noisy) speech. In this paper we show that one can use masked speech
data directly in recognition, and argue that this should be the
"simple" baseline against which other techniques are compared.
</p>
<p>See also:
</p><ul>
<li>R. Prabhavalkar, Z. Jin, and E. Fosler-Lussier, <a href="http://www.isca-speech.org/archive/interspeech_2009/i09_0856.html">"Monaural Segregation of Voiced Speech using Discriminative Random Fields,"</a> Proceedings of Interspeech, Brighton, UK, 2009.</li>
<li>W. Hartmann and E. Fosler-Lussier, <a href="http://www.isca-speech.org/archive/interspeech_2012/i12_1203.html">"Improved Model Selection for the ASR-Driven Binary Mask,"</a> Interspeech 2012.</li>
<li>W. Hartmann and E. Fosler-Lussier, <a href="papers/hartmann_icassp2012.pdf">"ASR-Driven Top-Down Binary Mask Estimation Using Spectral Priors,"</a> Proc. ICASSP, 2012.</li>
</ul>
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title"> P. Raghavan, E. Fosler-Lussier, N. Elhadad, and A. Lai, <a href="http://acl2014.org/acl2014/P14-1/pdf/P14-1094.pdf">"Cross-narrative Temporal Ordering of Medical Events,"</a> Association for Computational Linguistics Annual Meeting, 2014.
</h6>
<p>My group has also been active in NLP research, particularly in the
domain of electronic health records (EHRs) in collaboration with
Albert Lai in Biomedical Informatics. This paper describes the
culmination of several pieces of work, where we extract medical events
from multiple clinical notes in an EHR, develop a timeline for each
note, and then align the events across notes to create an overall
summary timeline of the medical history.
</p>
<p>See also:
</p><ul>
<li>P. Raghavan, E. Fosler-Lussier, and A. Lai, <a href="http://www.aclweb.org/anthology/W12-2404.pdf">"Temporal Classification of Medical Events,"</a> BioNLP 2012.</li>
<li>P. Raghavan, E. Fosler-Lussier, and A. Lai, <a href="http://www.aclweb.org/anthology/N12-1091">"Exploring Semi-Supervised Coreference Resolution of Medical Concepts using Semantic and Temporal Features,"</a> North American Association for Computational Linguistics Annual Meeting - Human Language Technologies Conference (NAACL HLT 2012), 2012.</li>
</ul>
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title">
E. Fosler-Lussier, Y. He, P. Jyothi, and R. Prabhavalkar, <a href="papers/IEEE_CRF_SALP_FoslerEtalPreprint.pdf">"Conditional Random Fields in Speech, Audio and Language Processing,"</a> <i>Proceedings of the IEEE</i>, 101:5, pp 1054-1075, 2013.
</h6>
<p>I have also been active in developing review articles to help explain several current topics to wider audiences. This invited paper gives a broad overview of Conditional Random Fields and their use in various processing tasks.</p>
<p> See also:
</p><ul>
<li>M.J.F. Gales, S. Watanabe, and E. Fosler-Lussier, <a href="papers/segdisc_2012.pdf">"Structured Discriminative Models for Speech Recognition,"</a> <i>Signal Processing Magazine, </i> 29:6, pp 70-81, Nov. 2012.</li>
<li>K. Livescu, E. Fosler-Lussier, and F. Metze, <a href="papers/livescu_etal_SPM2012.pdf">"Subword Modeling for Automatic Speech Recognition: Past, Present, and Emerging Approaches,"</a> <i>Signal Processing Magazine,</i> 29:6, pp 44-57, Nov. 2012. </li>
</ul>
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title"> J. Morris and E. Fosler-Lussier. <a href="papers/morris_taslp2008.pdf"> "Conditional Random Fields for Integrating Local Discriminative Classifiers,"</a> <i>IEEE Transactions on Audio, Speech, and Language Processing,</i> 16:3, pp 617-628, March 2008.</h6>
<i> Awarded IEEE Signal Processing Society Best Paper Award in 2010.</i>
<p>
This paper details a model which can selectively pay attention to some phonological information and ignore other information using a discriminative model known as Conditional Random Fields (CRFs). While CRFs had been used in a few studies prior to this work, the contribution of this paper was to examine their utility as feature combiners, combining posterior estimates of phone classes and phonological feature classes to improve TIMIT phone recognition. We have continued this line of research since this paper, moving towards the first CRF-based <a href="http://www.isca-speech.org/archive/interspeech_2009/i09_3063.html">word recognition</a> experiments ever done.
</p>
<p> See also:
</p><ul>
<li>
E. Fosler-Lussier and J. Morris, <a href="papers/fosler_icassp2008.pdf">"CRANDEM systems: Conditional Random Field Acoustic Models for Hidden Markov Models,"</a> International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2008), Las Vegas, Nevada, 2008.
</li>
<li>
J. Morris and E. Fosler-Lussier, <a href="http://www.isca-speech.org/archive/interspeech_2009/i09_3063.html">"CRANDEM: Conditional Random Fields for Word Recognition,"</a> Proceedings of Interspeech, Brighton, UK, 2009.
</li>
<li>I. Heintz, E. Fosler-Lussier, and C. Brew. <a href="papers/heintz_taslp2009.pdf">"Discriminative Input Stream Combination for Conditional Random Field Phone Recognition,"</a> <i>IEEE Transactions on Audio, Speech, and Language Processing,</i> 18:8, pp 1533-1546, 2009.
</li>
</ul>
</div>
</div>
<div class="card">
<div class="card-body">
<h6 class="card-title">
E. Fosler-Lussier, I. Amdal, and H.-K. J. Kuo. <a href="papers/fosler_specom2005.pdf"> "A Framework for
Predicting Speech Recognition Errors," </a> <i>Speech
Communication</i> issue on Pronunciation Modeling and Lexicon Adaptation, 46:2, pp. 153-170, 2005.
</h6>
<p>
Much of the work above is devoted to methods of modeling the
acoustic-phonetic variation inherent in speech, in order to build
better speech recognition models. However, a slightly different way
of thinking about variation is to consider the variation in patterns
of errors made by a speech recognizer due to many factors (for
example, errors due to inherent speech variation, errors caused by
poor acoustic/lexical models, or search errors). This paper focuses
on methods to predict errors made by speech recognition systems, even
when we only have a text transcript (i.e., no audio); the proposed
framework is flexible enough to <a href="http://www.isca-speech.org/archive/interspeech_2009/i09_1211.html">allow
for different prediction models</a> to characterize system
performance. This framework has enabled us and others
to train <a href="http://www.isca-speech.org/archive/interspeech_2010/i10_1049.html">discriminative
language models</a> that directly optimize system error rate (rather
than data likelihood) using large amounts of textual data.
</p>
<p> See also:
</p><ul>
<li> P. Jyothi and E. Fosler-Lussier, <a href="http://www.isca-speech.org/archive/interspeech_2009/i09_1211.html">"A Comparison of Audio-free Speech Recognition Error Prediction Methods,"</a> Proceedings of Interspeech, Brighton, UK, 2009.</li>
<li> P. Jyothi and E. Fosler-Lussier, <a href="http://www.isca-speech.org/archive/interspeech_2010/i10_1049.html">"Discriminative Language Modeling Using Simulated ASR Errors,"</a> Proc. Interspeech, 2010.</li>
</ul>
</div>
</div>
</main>
<main role="main" class="col-md-9 ml-sm-auto col-lg-9 px-4" id="main-pubs">
<!--
<div class="bibtex_structure">
<div class="sections bibtextypekey">
<div class="section article">
<h2>Refereed Articles</h2>
<div class="sort year" extra="DESC number">
<div class="templates"></div>
</div>
</div>
<div class="section ^proceedings">
<h2>Collections</h2>
<div class="sort year" extra="DESC number">
<div class="templates"></div>
</div>
</div>
<div class="section inproceedings">
<h2>Refereed Conference and Workshop Papers</h2>
<div class="sort year" extra="DESC number">
<div class="templates"></div>
</div>
</div>
<div class="section misc|phdthesis|mastersthesis|bachelorsthesis|techreport">
<h2>Other Publications</h2>
<div class="sort year" extra="DESC number">
<div class="templates"></div>
</div>
</div>
</div>
</div>
-->
<div class="bibtex_structure">
<div class="group year" extra="DESC number">
<h4 class="title"></h4>
<div class="templates"></div>
</div>
</div>
<div class="bibtex_display">
<div class="if bibtex_template" style="display: none;" callback="updatebib(bibtexentry)" >
<ul class="list-group list-group-flush"> <li class="list-group-item">
<div class="if editor">
<span class="if BIBTEXTYPEKEY==PROCEEDINGS">
<span class="editor"></span> (editors),</span>
</div>
<div class="if author" bibcontent="yes">
<span class="venue"></span>
<span class="author"></span>,
</div>
<span class="if journal !nolink">
<a class="bibtexVar" href="http://www.cs.cmu.edu/~mmv/papers/+BIBTEXKEY+.pdf" extra="BIBTEXKEY">
<span style="text-decoration: underline;" class="title"></span>,
</a>
</span>
<span class="if title">
"<span class="title"></span>,"
</span>
<div bibcontent="yes">
<span class="if journal"><em><span class="journal"></span></em>,</span>
<span class="if booktitle">In <em><span class="booktitle"></span></em>,</span>
<span class="if !BIBTEXTYPEKEY==PROCEEDINGS">
<span class="if editor"><span class="editor"></span> (editors),</span></span>
<span class="if publisher"><em><span class="publisher"></span></em>,</span>
<!--<span class="if !journal number">Technical report <span class="number"></span>,</span>-->
<span class="if institution"><span class="institution"></span>,</span>
<span class="if address"><span class="address"></span>,</span>
<span class="if volume"><span class="volume"></span>,</span>
<span class="if journal number">(<span class="number"></span>),</span>
<span class="if pages"> <span class="pages"></span>,</span>
<span class="if month"><span class="month"></span>,</span>
<span class="if year"><span class="year"></span>.</span>
<span class="if note"><span class="note"></span>.</span>
<a class="bibtexVar" role="button" data-toggle="collapse" href="#bib+BIBTEXKEY+" aria-expanded="false" aria-controls="bib+BIBTEXKEY+" extra="BIBTEXKEY">
[bib]
</a>
</div>
<div class="bibtexVar collapse" id="bib+BIBTEXKEY+" extra="BIBTEXKEY">
<div class="well">
<pre><span class="bibtexraw noread"></span></pre>
</div>
</div>
<div style="display:none"><span class="bibtextype"></span></div>
<div style="display:none"><span class="if topic"><span class="topic"></span></span></div>
</li></ul>
</div>
</div>
</main>
<main role="main" class="col-md-9 ml-sm-auto col-lg-9 px-4" id="main-teaching">
<h2>Teaching: Classes taught at Ohio State</h2>
<div class="card-columns" id="classes">
</div>
<h2>Resources from my classes that might be useful</h2>
<div class="card-columns" id="teachingresources">
</div>
</main>
<main role="main" class="col-md-9 ml-sm-auto col-lg-9 px-4" id="main-fun">
<h2>Fun things about me</h2>
<div class="card-columns" id="funstuff">
</div>
</main>
<!-- Modal -->
<div class="modal fade" id="exampleModal" tabindex="-1" role="dialog" aria-labelledby="exampleModalLabel" aria-hidden="true">
<div class="modal-dialog" role="document">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title" id="exampleModalLabel">Joining the Lab</h5>
<button type="button" class="close" data-dismiss="modal" aria-label="Close">
<span aria-hidden="true">×</span>
</button>
</div>
<div class="modal-body">
<h6 class="modal-title" id="applying students">
If you are applying to Ohio State
</h6>
<p>I apologize that I usually cannot give personalized responses to students who contact me about joining the lab,
particularly those students applying to Ohio State who have not matriculated. You may feel free to contact me,
as I will be glad to know that you are interested, but you are likely to receive a form letter in response.</p>
<p><a href="http://www.cse.ohio-state.edu/grad/admissions.shtml">Click here for information on applying to Computer Science &
Engineering at Ohio State.</a> Please note the application deadlines, particularly for fellowships.</p>
<p><b>On the application process:</b> complete application files are reviewed by a department-level committee before they
are passed to the research areas; I only see files that have made it through the department-level committee.
There is a place on the form to indicate that you have contacted me, which will help with routing.
However, the strongest case you can make is a strong, focused statement of purpose; it is read by multiple
faculty members as we make recommendations for admission and funding.
The admissions process takes several months.
Research areas don't see folders until well into the new year (late January/early February). </p>
<p><b>On funding:</b> the department uses three sources of funding to help graduate students.
Not all students are admitted with funding, unfortunately: we just don't have enough resources for everyone.
Strong students are placed in a competition for university-level fellowship funding, which usually covers the first year.
The department also makes some offers with Teaching Assistant (TA) funding, but these resources vary from year to year.
Note that professors (like me) don't have the authority to make fellowship or TA offers.
Professors do often have Research Assistantship (RA) positions; these are typically funded by external research grants.
However, I do not usually offer funding to incoming students; see the next section for details.</p>
<h6>If you are already admitted to/attending Ohio State</h6>
<p>
Congratulations! We're glad to have you here.
</p><p>
As mentioned above, I rarely fund first-year students in their first term at OSU.
This is in part because the external funds available for RAships vary over time
and I need to fund students currently in the lab first, and in part because I like to get
to know students and their abilities before putting them into the main line of research.
Students also have quite a large coursework commitment in their first year; managing that
first term can be a challenge for some. It's also important for students to know whether they match my advising style.
</p>
<p>
The typical path for grad students to get to know me is to take one of the project-based courses and do well in that:
</p>
<ul>
<li>CSE 5522: Advanced Artificial Intelligence</li>
<li>CSE 5525: Foundations of Speech and Language Processing</li>
<li>CSE 5539: Advanced Studies in Artificial Intelligence</li>
</ul>
<p> Students who want to learn more about the area but get closed out of classes might want to enroll
for 1 credit of my section of CSE 6539 (typically meets Monday @ 9:10).
Students enrolling for credit will be expected to be active participants and contribute to the presentation of papers in the literature.</p>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-primary" data-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
</div>
</div>
<script src="https://code.jquery.com/jquery-3.5.1.min.js" integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=" crossorigin="anonymous"></script>
<script>window.jQuery || document.write('<script src="/docs/4.4/assets/js/vendor/jquery.slim.min.js"><\/script>')</script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js" integrity="sha384-wfSDF2E50Y2D1uUdj0O3uMBJnjuUD4Ih7YwaYd1iqfktj0Uod8GCExl3Og8ifwB6" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/feather-icons/4.9.0/feather.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.7.3/Chart.min.js"></script>
<script src="efosler.js"></script>
<script type="text/javascript" src="https://cdn.jsdelivr.net/gh/pcooksey/bibtex-js/src/bibtex_js.js"></script>
<bibtex src='fosler.bib'></bibtex>
</body>
</html>