[FirebaseAI] Add Live Audio screen #1776
Conversation
Summary of Changes

Hello @daymxn, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates a new 'Live Audio' feature into the Firebase AI example application. It provides a practical demonstration of how to leverage the Live API for interactive audio experiences with a Gemini model, encompassing real-time transcription, dynamic UI updates via function calls, and sophisticated audio management for various device configurations.

Highlights
Code Review
This pull request introduces a LiveAudioScreen to the AI sample app, demonstrating audio-to-audio communication with a Gemini model. The implementation is comprehensive, covering audio capture, playback, transcription, and function calling. The code is well-structured, particularly the audio handling logic in AudioController and the typewriter effect for transcripts in TranscriptViewModel. My review focuses on enhancing the robustness and thread safety of the new features. I've provided suggestions to replace fatalError calls with more graceful error handling to prevent application crashes and to convert the AudioController class to an actor to prevent potential race conditions. These changes will improve the stability of the sample app and make it a better reference for production-level code.
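For readers following along, a minimal sketch of the actor-based approach suggested here might look like the following. The type, its method names, and the error cases are illustrative rather than the actual code in this PR; the only assumption is that the controller wraps an AVAudioEngine.

```swift
import AVFoundation

/// A minimal sketch of the actor-based controller suggested above, assuming the
/// real AudioController wraps an AVAudioEngine. Actor isolation serializes all
/// access to the engine and mutable state, avoiding races without manual locks.
actor AudioControllerSketch {
  private let engine = AVAudioEngine()
  private var isRunning = false

  enum AudioError: Error {
    case alreadyRunning
    case engineStartFailed(underlying: Error)
  }

  /// Throws instead of calling fatalError so callers can surface the failure.
  func start() throws {
    guard !isRunning else { throw AudioError.alreadyRunning }
    do {
      try engine.start()
      isRunning = true
    } catch {
      throw AudioError.engineStartFailed(underlying: error)
    }
  }

  func stop() {
    engine.stop()
    isRunning = false
  }
}
```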
Review thread on firebaseai/FirebaseAIExample/Features/Live/ViewModels/LiveViewModel.swift (resolved)
peterfriese left a comment:
Great work on this PR! The live audio feature is a fantastic addition and works really well on a physical device. On the simulator, the model tends to interrupt itself; maybe this is something you can look into?
I think it still needs a bit of refinement before it's ready to merge. The main thing is the error handling; could you please address the fatalError calls throughout the audio code?
I've also left a couple of remarks on the SwiftUI side of things with some ideas for improvement.
Again, really great work on this!
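For context on the error-handling request, the usual pattern is to let the audio layer throw and have the view model present the failure instead of crashing. Below is a minimal sketch assuming a SwiftUI ObservableObject view model; LiveViewModelSketch and startAudio are hypothetical names, not code from this PR.

```swift
import SwiftUI

/// A sketch of surfacing audio failures in the UI instead of crashing with
/// fatalError. The names here are illustrative, not the actual types in this PR.
@MainActor
final class LiveViewModelSketch: ObservableObject {
  @Published var errorMessage: String?

  /// Injected audio start routine; the real code would call into AudioController.
  private let startAudio: () async throws -> Void

  init(startAudio: @escaping () async throws -> Void) {
    self.startAudio = startAudio
  }

  func connect() async {
    do {
      try await startAudio()
    } catch {
      // Instead of fatalError, record the failure so the view can present it.
      errorMessage = "Could not start audio: \(error.localizedDescription)"
    }
  }
}
```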
Review thread on firebaseai/LiveAudioExample/ViewModels/TranscriptViewModel.swift (outdated, resolved)
* Add audio playback toggle when running on the simulator
* Fix microphone input support with `NSMicrophoneUsageDescription`
* Remove unused `ConversationKit` import
* Fix typos
* Swap `#if targetEnvironment(simulator)` condition
* Only show toggle when not connected
* Extract toggle to `AudioOutputToggle`
* Remove padding on `AudioOutputToggle`

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
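The simulator-only playback toggle mentioned in the list above could be gated with `#if targetEnvironment(simulator)`. The following is a rough sketch of that idea, not the actual `AudioOutputToggle` implementation from this PR.

```swift
import SwiftUI

/// A sketch of a simulator-only toggle. The real AudioOutputToggle in this PR
/// may differ; this only illustrates gating the control at compile time.
struct AudioOutputToggleSketch: View {
  @Binding var playbackEnabled: Bool

  var body: some View {
    #if targetEnvironment(simulator)
      // Only shown in simulator builds, where audio routing often needs a manual switch.
      Toggle("Enable audio playback", isOn: $playbackEnabled)
    #else
      EmptyView()
    #endif
  }
}
```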
andrewheard left a comment:
LGTM, thanks!
Code Review
This pull request introduces a new 'Live Audio' feature, which is a great addition to showcase the capabilities of the Gemini Live API. The implementation is comprehensive, covering audio recording and playback, real-time communication with the model, function calling, and UI updates. The code is well-structured, particularly the use of an actor for the AudioController and the clean separation of concerns into different views and view models.
My review includes a few suggestions to improve robustness, maintainability, and UI quality. These include avoiding a potential crash from force-unwrapping, providing higher-resolution image assets, reducing code duplication, and using a consistent logging strategy. Overall, this is a solid contribution.
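As an illustration of the force-unwrap and logging suggestions, here is a small sketch; the subsystem string and the makeSessionURL helper are hypothetical, not code from this PR.

```swift
import Foundation
import os

/// A single shared logger keeps log output consistent across the feature.
private let logger = Logger(subsystem: "com.example.FirebaseAIExample", category: "LiveAudio")

/// guard-let instead of URL(string:)! avoids a crash on malformed input.
func makeSessionURL(from string: String) -> URL? {
  guard let url = URL(string: string) else {
    logger.error("Invalid session URL string: \(string, privacy: .public)")
    return nil
  }
  return url
}
```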
Review threads:
* firebaseai/FirebaseAIExample/Features/Live/ViewModels/LiveViewModel.swift (resolved)
* firebaseai/FirebaseAIExample/Assets.xcassets/gemini-logo.imageset/Contents.json (resolved)
* firebaseai/FirebaseAIExample/Shared/Audio/AudioController.swift (outdated, resolved)
peterfriese left a comment:
The app crashes due to a missing NSMicrophoneUsageDescription, both on the Simulator and on a physical device.
The model still seemed to interrupt itself quite a bit unless the volume is turned way down.
The styling of the ConnectButton looks a bit out of place on iOS. We haven't used gradient buttons anywhere else in the app, and this might make it harder to fully adopt Liquid Glass. Apart from that, the size of the button and its fonts don't match the rest of the UI and should be updated to match. The connected state of the button doesn't seem to have an outline, which looks confusing.
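On the microphone crash: besides declaring NSMicrophoneUsageDescription in Info.plist (which is required; iOS terminates the app on first microphone access without it), the app can request record permission explicitly before starting capture. A hedged sketch follows; requestMicrophoneAccess is an illustrative helper, not code from this PR.

```swift
import AVFoundation

/// Requests microphone access up front so capture code can bail out gracefully
/// when permission is denied. Info.plist must still declare
/// NSMicrophoneUsageDescription for the prompt to appear at all.
func requestMicrophoneAccess() async -> Bool {
  if #available(iOS 17.0, *) {
    return await AVAudioApplication.requestRecordPermission()
  } else {
    return await withCheckedContinuation { continuation in
      AVAudioSession.sharedInstance().requestRecordPermission { granted in
        continuation.resume(returning: granted)
      }
    }
  }
}
```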
Review threads on firebaseai/FirebaseAIExample/Features/Live/Views/ConnectButton.swift (4 threads, all outdated and resolved)
Fixed
This is due to the Apple AEC warming up. It should fix itself after a couple of seconds. There are technically ways to help alleviate this, but that would add a moderate amount of testing and code to our already moderately heavy code for …
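For readers unfamiliar with the AEC being referenced: if the sample uses AVAudioEngine, Apple's echo cancellation comes from enabling voice processing on the engine's I/O node, and it needs a moment to converge after the engine starts. The snippet below is an assumption about that setup, not necessarily how this PR configures it.

```swift
import AVFoundation

/// A minimal sketch of enabling Apple's voice processing (which includes AEC)
/// on an AVAudioEngine. This must be called before the engine is started.
func enableEchoCancellation(on engine: AVAudioEngine) throws {
  // Turns on the voice-processing I/O unit; the AEC takes a few seconds to
  // converge after start, which can sound like the model interrupting itself.
  try engine.inputNode.setVoiceProcessingEnabled(true)
}
```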
Further review threads (all outdated and resolved):
* firebaseai/FirebaseAIExample/Features/Live/Views/ConnectButton.swift (5 threads)
* firebaseai/FirebaseAIExample/Features/Live/Screens/LiveScreen.swift
* firebaseai/FirebaseAIExample/Features/Live/Views/AudioOutputToggle.swift
peterfriese left a comment:
Thanks for making those changes!
LGTM.
This adds the `LiveAudioScreen` to the AI sample app. Since the Live API was released with 12.4.0, this adds a sample app showcasing how you can do audio-to-audio communication with a Gemini model. The new screen also supports the following:
Note:
In a follow-up PR, we'll likely add a sub-screen for the Live API and provide other code samples (e.g., text-to-text, audio-to-audio without function calling, video, etc.). For those screens, we'll take advantage of the infra defined here.
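As background for the audio-to-audio flow, streaming microphone audio to a model typically involves capturing from AVAudioEngine and converting to a fixed PCM format. The sketch below assumes 16 kHz mono 16-bit PCM and a hypothetical onChunk callback; the actual format and send path are defined by the Live API and the code in this PR, so treat this purely as an illustration.

```swift
import AVFoundation

/// A sketch of capturing microphone audio and converting it to 16 kHz mono
/// 16-bit PCM, a format commonly used when streaming audio to a model. The
/// sample rate and the onChunk callback are assumptions for illustration.
final class MicCaptureSketch {
  private let engine = AVAudioEngine()

  func start(onChunk: @escaping (Data) -> Void) throws {
    let input = engine.inputNode
    let inputFormat = input.outputFormat(forBus: 0)
    guard let targetFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                           sampleRate: 16000,
                                           channels: 1,
                                           interleaved: true),
      let converter = AVAudioConverter(from: inputFormat, to: targetFormat) else {
      throw NSError(domain: "MicCaptureSketch", code: -1)
    }

    input.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { buffer, _ in
      // Size the output buffer for the sample-rate change, with a little headroom.
      let ratio = targetFormat.sampleRate / inputFormat.sampleRate
      let capacity = AVAudioFrameCount((Double(buffer.frameLength) * ratio).rounded(.up)) + 16
      guard let converted = AVAudioPCMBuffer(pcmFormat: targetFormat,
                                             frameCapacity: capacity) else { return }

      // Feed the tap buffer to the converter exactly once per callback.
      var consumed = false
      let status = converter.convert(to: converted, error: nil) { _, outStatus in
        if consumed {
          outStatus.pointee = .noDataNow
          return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return buffer
      }
      guard status != .error, let channel = converted.int16ChannelData?[0] else { return }

      let byteCount = Int(converted.frameLength) * MemoryLayout<Int16>.size
      onChunk(Data(bytes: channel, count: byteCount))
    }
    try engine.start()
  }

  func stop() {
    engine.inputNode.removeTap(onBus: 0)
    engine.stop()
  }
}
```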