By GrrimGrriefer, DZnnah and Barty.
Development progress board: https://trello.com/b/Biv7Si4l/unrealvoxta
⚠️ AI characters are not real, always use common sense! ⚠️
Third-party licenses:
A quick 2-minute video with an overview & recap of this patch (version 0.1.1) is available here:
https://www.youtube.com/watch?v=lxcALWdu3uA
ℹ️ If you do not plan on using Lipsync, you can ignore this section.
ℹ️ If you do not plan on using Voice input, you can ignore this section.
Audio input for this plugin relies on Unreal's VoiceInput system, which has to be turned on.
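Unreal's VoiceInput system is disabled by default and is typically enabled via the project's `Config/DefaultEngine.ini`. A minimal sketch (exact settings may vary by engine version and platform):

```ini
[Voice]
bEnabled=true
```

After editing the ini, restart the editor so the voice capture module picks up the change.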
All header files are fully documented and easily navigable via the online Doxygen documentation.
For a rough overview of what is included:
The high-level overviews are available as markdown files in the repository. These can be viewed without visiting the Doxygen link if needed.
Links are below:
Due to the alpha nature of the plugin, only C++ documentation is provided. However, each Blueprint node maps 1:1 to a UFUNCTION and will show the same description when hovered.
A full in-depth breakdown of each node will be made in video format once the plugin reaches beta (v0.2).
For alpha usage, it is advised to use the new Modular Template UI as a reference.
Note: All source files for the template are included (.psd) to allow for easy reskinning. Do keep in mind that for the Beta, all UI will be reskinned (again) by us as well.
Either select BP_ExampleGameMode, or manually assign the example HUD in your own GameMode if you already have one.
These icons are visible in the top right of the template HUD, and display Voice input, Audio output, VoxtaServer status, connection, and settings.
In general:
This menu contains all global meta-configuration elements: host IPv4, Server API version, configurable microphone gain & noise gate, log censoring, global audio (2D)...
After connection:
A screen with a dropdown to select the character, and an option to change the chat context. A button is available to edit the character in the system browser.
Note: Thumbnails will be resized if too large, but will preserve their aspect ratio.
Screen displaying the current chat, minimal UI to avoid blocking the rest of the screen.
Triggered with the cogwheel; shows the current chat and allows the context to be modified. Also displays the status of the VoxtaServer services.
Basic test coverage (85 tests at the time of writing) of the main VoxtaClient public C++ API. Additional test coverage for the Blueprint API and for audio input & output is scheduled for upcoming releases.
Note:
All tests are integration tests and require a valid instance of VoxtaServer to be running (configured for localhost by default, but this can easily be modified).
⚠️⚠️ Be mindful when running tests while using VoxtaCloud services, especially cloud TTS audio. It is highly advised to only execute relevant tests during development to avoid draining cloud credits. ⚠️⚠️
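To limit a test run to a relevant subset, Unreal's automation framework accepts a test-name filter on the command line. A sketch, assuming the tests are grouped under a "Voxta" prefix and a project named MyProject (both are hypothetical; substitute your own):

```shell
# Run only automation tests whose names start with "Voxta", then quit.
# "Voxta" and MyProject.uproject are placeholder names.
UnrealEditor-Cmd.exe MyProject.uproject \
  -ExecCmds="Automation RunTests Voxta; Quit" \
  -unattended -nullrhi -log
```

Running with a narrow filter like this is one way to avoid accidentally triggering cloud TTS tests and draining credits.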
⚠️⚠️ Be mindful that Audio2Face is experimental and will receive a full overhaul before going to beta. Do NOT build anything you need to support medium/long term with this. ⚠️⚠️
Keep in mind that for A2F lipsync, you will need to add the required plugin:
A2F Omniverse lipsync: direct link or download via the portal https://www.nvidia.com/en-us/omniverse/
Then, install A2F via Omniverse as follows:
After installation, locate your A2F installation and run the A2F_headless client:
Ensure your A2F_headless API is running and marked as "ready" for it to function correctly:
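One way to confirm the headless API is reachable is to query its status endpoint from a terminal. A sketch, assuming the default A2F headless REST port (8011); the actual host, port, and endpoint may differ per A2F version and installation:

```shell
# Should report a ready/OK status if A2F_headless is running
# (port 8011 and the /status endpoint are assumptions).
curl http://localhost:8011/status
```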
Once playback in Unreal is started, the VoxtaClient will automatically attempt to connect to A2F if it is available.
These are then applied in the animgraph using the custom node:
Additionally, ensure the A2F Pose mapping is correctly configured in your MetaHuman face animator blueprint.