A few months ago I built an automated twitch.tv stream to broadcast high-ranked one-trick pony League of Legends players using Python 3. I used the built-in socket package to handle Twitch chat, the requests package to handle Riot API HTTP requests, and pywinauto to handle interactions with OBS Studio and the League of Legends spectator client.
The result worked out fine, but the code was synchronous, so there were tons of calls to time.sleep() between stream operations, each of which blocked everything else. There were also many intermittent calls to bot.catchup(), a chatbot method that runs through dozens of loop iterations to read and respond to all the chat messages coming into the socket connection. The socket handling also relied on the select package to keep socket.recv() from blocking other synchronous operations when no new messages had appeared in chat.
The Riot API calls were handled with the threading package, which spawned a thread for each of the game's deployment regions, since API rate limiting is region-specific. That meant a separate requests.Session() for each region to track rate limiting independently. The entire script was sloppy and very difficult to debug.
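For a sense of the old control flow, it looked roughly like the sketch below. The names (run_stream, spectator, sock) are made up for illustration; only bot.catchup() comes from the actual code.

import select
import time

def run_stream(spectator, bot, sock):
    """Rough shape of the old synchronous loop (illustrative only)."""
    while True:
        spectator.start_next_game()        # launch the spectator client
        time.sleep(30)                     # block until the client loads
        # drain whatever chat piled up while we were blocked
        readable, _, _ = select.select([sock], [], [], 0)
        if readable:
            bot.catchup()                  # dozens of recv/respond iterations
        time.sleep(5)                      # block again before the next step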
Having recently built a REST API server and then an analytics Slack bot for work using Python's tornado package and web framework, I realized that I could solve all of these issues by making the code asynchronous. In comes the relatively new asyncio built-in package, first shipped in Python 3.4 and improved significantly through 3.6. Built on top of asyncio is the aiohttp package from the aio-libs development team, which provides an asynchronous HTTP client that I'll use in place of requests. Here are the key changes since the last version of OTPSpectate:
Perhaps the most painful part of the synchronous implementation was the way the stream controller code was woven together with the code that has the bot listen and respond to chat. Using asyncio, this logic can be cleanly separated: one coroutine for the stream controller, and one coroutine for the chat bot.
The result is that the chat bot sits inside a loop that constantly reads and responds to chat, while the stream controller manages the League of Legends spectator client subprocess, directs the broadcast, and handles the API requests. At a high level the main code looks something like this:
import asyncio

# ...

# Initialize some resources
riot = Riot(riot_config)
otp_bot = OTPBot(twitch_config, riot)
obs = OBSClient()


async def stream():
    """Handle the client, broadcast, and API."""
    # stream control code
    # ...


async def chat():
    """Handle Twitch chat."""
    await otp_bot.connect()
    while True:
        await otp_bot.respond()


if __name__ == '__main__':
    asyncio.get_event_loop().create_task(stream())
    asyncio.get_event_loop().create_task(chat())
    asyncio.get_event_loop().run_forever()
Prior to defining the coroutines, we create instances of the Riot API wrapper, the chat bot, and the OBS Studio controller.
There are two coroutines, stream and chat, marked by the async keyword before the function definition. In the main block we attach the coroutines to the event loop and start the loop: the create_task() method schedules a coroutine with the event loop, and run_forever() begins executing the event loop and, with it, the coroutines. The await keyword indicates either a call to another coroutine or a point where control is yielded back to the event loop for network I/O or sleeping.
One of the caveats I found with using asyncio on Windows is that CTRL-C doesn't break the event loop. To restore this functionality we need the signal package and a single line of code at the start of the program.
import signal
# ...
# Restore ctrl-c functionality on Windows
signal.signal(signal.SIGINT, signal.SIG_DFL)
# ...
The asyncio package comes with its own socket-level connection interface, which I used to completely replace the dependencies on the socket and select packages in the previous version of OTPSpectate.
To connect using asyncio we just need to call the open_connection() method, which returns reader and writer objects associated with the socket. Writing operations are synchronous, while reading operations are asynchronous, since the connection might have to wait to receive bytes through the socket. Messages are encoded and decoded on the way out and in, and PING messages are PONGed back automatically by the get_chat_line() method.
import asyncio
import re


class TwitchChatClient(object):
    # ...
    async def connect(self):
        """Connect to twitch chat."""
        # connect
        self.reader, self.writer = await asyncio.open_connection(
            self.host, self.port, loop=self.loop, ssl=self.ssl
        )
        # ...

    def send(self, message):
        """Send message through socket."""
        self.writer.write(message.encode('utf-8'))

    async def get_chat_line(self):
        """Read the channel chat."""
        line = await self.reader.readline()
        line = line.decode('utf-8')
        if line and line.startswith('PING'):
            return self.pong(line)
        return line

    def pong(self, ping):
        """Send PONG reply through socket."""
        ping_msg = re.search(r'^PING :(.+)$', ping)[1]
        self.send('PONG {}\r\n'.format(ping_msg))
        return None
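The bot's respond() coroutine from the main sketch can then sit on top of this client. Something along these lines, where the PRIVMSG parsing, self.chat_client, and self.channel are my own simplifications rather than the actual bot code:

import re


class OTPBot(object):
    # ...
    async def respond(self):
        """Read one chat line and react to any recognized command."""
        line = await self.chat_client.get_chat_line()
        if not line:
            return
        # Twitch IRC lines look like ':user!user@host PRIVMSG #channel :text'
        match = re.search(r'PRIVMSG #\w+ :(.+)', line)
        if match and match[1].strip() == '!vote':
            self.chat_client.send(
                'PRIVMSG #{} :Vote recorded!\r\n'.format(self.channel)
            )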
Replacing the requests session with an aiohttp client session was relatively straightforward. Their interfaces are similar, so the process required only a few adjustments. The other major change was converting all of the API wrapper methods to coroutines using async def.
As for the rate limiting, I added region-specific rate limit tracking, and all instances of time.sleep() became asynchronous calls of await asyncio.sleep(seconds).
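Put together, the region-aware request path looks roughly like the sketch below. The class name mirrors the Riot(riot_config) wrapper from the main code, but the get() method, the config key, and the back-off bookkeeping are simplified illustrations, not the actual implementation:

import asyncio
import aiohttp


class Riot(object):
    """Simplified sketch of the async Riot API wrapper."""

    def __init__(self, config):
        self.api_key = config['api_key']
        self.session = None           # aiohttp.ClientSession, created lazily
        self.region_delay = {}        # seconds to wait, tracked per region

    async def get(self, region, path):
        """GET an endpoint, waiting out any pending delay for the region."""
        if self.session is None:
            self.session = aiohttp.ClientSession()
        await asyncio.sleep(self.region_delay.pop(region, 0))
        url = 'https://{}.api.riotgames.com{}'.format(region, path)
        headers = {'X-Riot-Token': self.api_key}
        async with self.session.get(url, headers=headers) as resp:
            if resp.status == 429:
                # remember the back-off the API asked for, then retry
                self.region_delay[region] = int(resp.headers.get('Retry-After', 1))
                return await self.get(region, path)
            return await resp.json()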
One of the features I had wanted to add in the last iteration was chat commands: simple commands that give a bit more detail about the player and their expertise on their champion in League of Legends. The !vote command was part of the initial implementation; in this iteration I've added !rank, !mastery, and !dedication commands, for which the bot responds with the player's region/rank, champion mastery level/points, and percentage of ranked games played on their OTP, respectively. Each command has its own cooldown to prevent spam abuse.
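The cooldowns themselves don't need anything fancy. Conceptually, each command gets its own timer, something like the sketch below (the class and the 30-second values are illustrative, not the actual settings):

import time


class CommandCooldown(object):
    """Suppress a chat command if it fired too recently."""

    def __init__(self, seconds):
        self.seconds = seconds
        self.last_used = None

    def ready(self):
        """Return True and restart the cooldown if enough time has passed."""
        now = time.monotonic()
        if self.last_used is None or now - self.last_used >= self.seconds:
            self.last_used = now
            return True
        return False


# One cooldown per command, consulted in the bot's command dispatch:
cooldowns = {'!rank': CommandCooldown(30), '!mastery': CommandCooldown(30)}
if cooldowns['!rank'].ready():
    pass  # look up and announce the player's region and rank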
There are still a few features I would like to add in a future iteration.
Check out the stream on twitch! twitch.tv/otpspectate
OTPSpectate isn't endorsed by Riot Games and doesn't reflect the views or opinions of Riot Games or anyone officially involved in producing or managing League of Legends. League of Legends and Riot Games are trademarks or registered trademarks of Riot Games, Inc. League of Legends © Riot Games, Inc.