Interest in and adoption of Artificial Intelligence (AI) continue to grow as companies find new ways to solve increasingly complex problems. Despite many advancements in the field, models are not perfect, and humans remain essential for interpreting and tuning model output. Explainable AI seeks to provide tools and methods that help practitioners understand why models fail and how they can be improved.
This event brings together leaders from industry, academia, and government to introduce Explainable AI and discuss how practitioners can build interpretability into their own AI workflows. Our panel will cover what Explainable AI is, why it matters, and how to get started integrating it into everyday practice. Following the panel discussion, a Q&A period will give the audience a chance to interact directly with our experts.