I was originally going to implement this by simply passing the commands through to all synth drivers under control, but it turns out that won't work.
When calculating the changed value from a command instance, NVDA reads the default value from the currently selected synthDriver's config. It has to work that way at the moment, because the command objects carry no reference to the synthDriver the commands will eventually be sent to.
https://github.com/nvaccess/nvda/blob/51f5a38466daef7c92402e16f9898f1ecfb40fa5/source/speech/commands.py#L207
UML itself does not have voice attributes, and they can differ between the synths UML is currently using. Maybe I need to catch these commands in UML and translate them into set_xxx method calls for each synth under control.
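
One possible shape for that translation is sketched below. This is not working UML code, just a minimal sketch: it assumes the meta-synth keeps its wrapped SynthDriver instances in a hypothetical `synths` list, that each wrapped synth exposes the usual rate/pitch/volume properties for the settings it supports, and that the prosody commands expose the offset/multiplier they were constructed with. It also simplifies the semantics by computing the new value relative to each synth's current setting rather than its configured default.

```python
from speech.commands import (
	BaseProsodyCommand,
	PitchCommand,
	RateCommand,
	VolumeCommand,
)

# Which SynthDriver setting each prosody command controls.
PROSODY_SETTINGS = {
	PitchCommand: "pitch",
	RateCommand: "rate",
	VolumeCommand: "volume",
}


def applyProsodyCommand(synth, command):
	"""Translate one prosody command into a setter call on one wrapped synth.

	The new value is computed from this synth's own current setting instead of
	the currently selected synthDriver's config, which is what
	BaseProsodyCommand.newValue would otherwise read.
	"""
	setting = PROSODY_SETTINGS.get(type(command))
	if setting is None or not hasattr(synth, setting):
		# This synth does not support the setting; ignore the command for it.
		return
	base = getattr(synth, setting)
	# offset/multiplier are assumed to be the values the command was
	# constructed with. A proper "reset to default" (offset 0, multiplier 1)
	# would need each synth's base value remembered; omitted here for brevity.
	if command.multiplier != 1:
		newValue = int(base * command.multiplier)
	else:
		newValue = base + command.offset
	setattr(synth, setting, max(0, min(100, newValue)))


def translateProsodyCommands(synths, speechSequence):
	"""Strip prosody commands from a speech sequence, apply each of them to
	every wrapped synth, and return the remaining sequence to forward."""
	remaining = []
	for item in speechSequence:
		if isinstance(item, BaseProsodyCommand):
			for synth in synths:
				applyProsodyCommand(synth, item)
		else:
			remaining.append(item)
	return remaining
```

The meta-synth's speak() could then call `translateProsodyCommands(self.synths, speechSequence)` (with `self.synths` standing in for whatever structure UML actually uses to track the synths under control) and forward only the remaining items to each synth.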