Oh wow, I hadn’t even heard it was discontinued. Interesting.
Does it use a large language model? It says: “Willow users can now self-host the Willow Inference Server for lightning-fast language inference tasks with Willow and other applications (even WebRTC) including STT, TTS, LLM, and more!”
But I’m not sure whether that means it actually runs a large language model, or if “LLM” there refers to something else.