April 19th, 2024 was a company holiday. Instead of taking it easy, I decided to embark on a Twitch streaming adventure to build a Minimum Viable Product (MVP) Twitch bot powered by an LLM. The result? My Twitch GenAI bot was born in a single stream, built with Golang, Langchaingo, TwitchIRC, and Llamacpp. Let's dive into the journey of how this bot came to life.
Connecting to Twitch Chat IRC
The initial hurdle was establishing a seamless connection between my application and Twitch chat. Twitch chat uses the IRC protocol for all of its messaging, so I used a small Go library I had worked with before to set up the auth with Twitch and to manage the messages. I always struggle when setting up OAuth, but I managed to connect. After connecting, I needed a way to manage the messages. Because the package handles messages in its own goroutine, I chose a crude approach: a Go channel that accepts chat messages and acts as the interface between the Twitch bot and the GenAI model.
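Roughly, that hand-off looks like the sketch below. I'm showing it with the popular gempir/go-twitch-irc package as a stand-in for the library I used, and the account names, token, and channel are placeholders rather than my exact setup:

```go
package main

import (
	"fmt"

	twitch "github.com/gempir/go-twitch-irc/v4"
)

func main() {
	// Buffered channel that hands chat messages from the IRC goroutine
	// over to the GenAI side of the bot.
	chatMessages := make(chan string, 100)

	// The OAuth token must carry the "oauth:" prefix Twitch IRC expects.
	client := twitch.NewClient("my_bot_account", "oauth:replace-me")

	// The library invokes this callback from its own goroutine, so the
	// handler only prints and forwards; no model work happens here.
	client.OnPrivateMessage(func(msg twitch.PrivateMessage) {
		fmt.Printf("[Twitch: %s] %s\n", msg.User.DisplayName, msg.Message)
		chatMessages <- msg.Message
	})

	// Stand-in consumer; the real GenAI loop is sketched in a later section.
	go func() {
		for msg := range chatMessages {
			_ = msg // hand off to the model here
		}
	}()

	client.Join("soypetetech")

	if err := client.Connect(); err != nil {
		panic(err)
	}
}
```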
Incorporating the LLM Model with Llamacpp
With the Twitch connection set up, I needed to spin up the model and run a server to manage access and inference on it. Per the suggestion of Chris Brousseau, I acquired the Mistral 7B LLM from Hugging Face and deployed it as a server using Llamacpp. The `serve` mode of Llamacpp follows the OpenAI API specification, which lets the bot speak a standard, well-adopted protocol; it also means I can swap out the model on the backend without changing any calls from the bot. Initially, I attempted to run Llamacpp on a refurbished Windows machine serving as a local network server. However, technical challenges arose, prompting a pivot to my trusty M1 Mac so I could complete the MVP. After the stream, I was able to get it up and running on the Windows machine.
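Because the server speaks the OpenAI protocol, one way to point the bot at it is through Langchaingo's OpenAI client, as in the minimal sketch below. The host, port, token, and model name are placeholders for my local setup, not exact values:

```go
package main

import (
	"log"

	"github.com/tmc/langchaingo/llms/openai"
)

// newLocalLLM builds a langchaingo client that talks to the local
// llama.cpp server over its OpenAI-compatible HTTP API.
func newLocalLLM() (*openai.LLM, error) {
	return openai.New(
		// The llama.cpp server exposes OpenAI-style endpoints under /v1.
		openai.WithBaseURL("http://localhost:8080/v1"),
		// The local server doesn't check the key, but the client wants one.
		openai.WithToken("not-needed"),
		openai.WithModel("mistral-7b-instruct"),
	)
}

func main() {
	llm, err := newLocalLLM()
	if err != nil {
		log.Fatal(err)
	}
	_ = llm // handed off to the chat loop in the next section
}
```

Swapping the model is then just a matter of loading a different GGUF file into the llama.cpp server; the bot-side code above stays the same.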
Leveraging Langchaingo for Chat Interactions
With the LLM model operational, the next phase was to facilitate seamless interactions between Twitch chat messages and the AI model. Given my limited domain knowledge, I chose a Go port of the established tool Langchain. Langchaingo is a fairly robust library for connecting Go software services to a variety of GenAI models and LLMs. For the MVP I leaned on the library's `GenerateFromSinglePrompt()` helper, which let me take each chat message, send it to the model, and publish the response, all from a single Go binary.
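Stitched together, the MVP loop looks roughly like the sketch below: drain the chat channel, call `GenerateFromSinglePrompt()`, and send the reply back to chat. The function name, channel wiring, and Twitch channel are illustrative rather than the exact code from the stream:

```go
package bot

import (
	"context"
	"log"

	twitch "github.com/gempir/go-twitch-irc/v4"
	"github.com/tmc/langchaingo/llms"
)

// respondLoop drains the chat channel, sends each message to the model as a
// single prompt, and publishes the model's reply back to Twitch chat.
func respondLoop(client *twitch.Client, llm llms.Model, in <-chan string) {
	ctx := context.Background()
	for msg := range in {
		reply, err := llms.GenerateFromSinglePrompt(ctx, llm, msg)
		if err != nil {
			log.Printf("inference failed: %v", err)
			continue
		}
		// Channel name is a placeholder for my own stream.
		client.Say("soypetetech", reply)
	}
}
```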
Achieving Crude Yet Functional Results
The efforts paid off as my GenAI bot, powered by Langchaingo, TwitchIRC, and Llamacpp, began generating responses to Twitch chat inputs. Although the results were initially crude, they represented a significant milestone toward a fully functional and engaging AI companion for my Twitch channel.
Text snippet of the conversation.
[Twitch: TheShaggyD] tell another story
[Twitch: Bot] what story?
[Twitch: TheShaggyD] about a programmer
[Twitch: Bot] About a programmer?
[Twitch: TheShaggyD] Complement me
[Twitch: Bot] This is the assistant.
[Twitch: SoyPeteTech] tell a story about a famous streamer
[Twitch: Bot] I’ve heard of several famous streamers, including Ninja, Shroud, and Dr DisRespect. What do you want to know about them?
[Twitch: TheShaggyD] LOL
[Twitch: TheShaggyD] it’s quality
[Twitch: Bot] thanks
[Twitch: TheShaggyD] see
[Twitch: Bot] hi
With a working MVP, it's now on to developing a more robust Twitch chat application over the next few months. I intend to refine the GenAI bot so that it can respond to multiple chats instead of just one, and to store chats in a vector database so that history can be reused for training as well as for model comparison. Lastly, I want it to dynamically replace commands and timers like Nightbot. I'm not building this as a project to sell; it's mostly a fun side project to learn about large language models from the perspective of a software engineer, the techniques involved in integrating them into an application, and what it takes to self-host a model on my own stack. If you are interested in doing this at scale or as GenAI-as-a-service, I would suggest checking out the AltF4 Stream, where Blackglasses is using Go and serverless architecture to build a chat bot that people can install in their own Twitch chats.
In conclusion, the process of building my Twitch GenAI bot was a testament to perseverance, adaptability, and the power of innovative technologies. By leveraging Golang, Langchaingo, TwitchIRC, and Llamacpp, I turned a vision into reality, and the bot is now actively engaging with viewers and enhancing the overall Twitch experience. To view continued work on this project, check out the code repository on GitHub or follow me on Twitch to catch me working on it live.