Large Language Models offer an unprecedentedly powerful "sidekick" for resolving incidents. Our talk opens by exploring what LLMs can do when things go wrong: parsing your codebase to assist with debugging, writing ad-hoc testing scripts, brainstorming solutions alongside engineers, and more. These capabilities, which are being rapidly developed and explored, deserve consideration by organizations of all sizes. Not only do they reduce the time sink and stress of incidents, they free up that time and energy for feature development, compounding the advantage.

However, LLMs aren't perfect, and their most common failure modes can be critical when applied to incident response. We'll next cover some of these potential failures and how they play out during an incident, including hallucination, misprioritization, and black-box reasoning. This isn't the end of the world, of course: organizations simply need to weigh these risks against the considerable speed and convenience LLMs bring to incident response.

To mitigate the risk, organizations need to invest in people. We'll close by showing how bolstering the resilience, adaptability, and knowledge of your incident response teams can compensate for the risks of LLMs.