Feel free to reach out if you want to chat about LLM inference, training, GPU/kernel programming, or anything else interesting.