Kailash Budhathoki

Contact

  • Work: kaibud@ (append amazon.com)
  • Home: kailash.buki@ (append gmail.com)

Bio

Kailash currently leads a science team at Amazon Web Services (AWS) AI focused on optimizing foundation models for inference on AI accelerators (e.g., Amazon’s in-house Neuron devices). Check out the KDD’24 tutorial.

Before bootstrapping the inference optimization research effort, he led a cross-org science effort within Amazon to deliver bias mitigation solutions for Amazon’s in-house multimodal foundation models, the Amazon Titan Multimodal Embeddings and Amazon Titan Image Generator models, ahead of their re:Invent 2023 release (Barth, 2023; Ali et al., 2023).

He joined the Amazon Research Lablet in Tübingen (part of AWS AI) in 2020, where he developed algorithms and tools that help businesses explain the complex cause-effect relationships underlying their business problems, and led a cross-org effort within Amazon to launch them in production (Budhathoki & Blöbaum, 2022; Budhathoki, 2021; Götz & Budhathoki, 2022). Businesses like Amazon Supply Chain and Amazon Ads actively use those solutions for effect estimation and for root cause analysis of changes and outliers. Those algorithmic solutions were also open-sourced to the Python DoWhy library under a new package called gcm (Götz & Budhathoki, 2022; Blöbaum et al., 2023; Kiciman & Sharma, 2022). This collaboration with Microsoft Research led to a new GitHub organization, PyWhy, with the mission of building an open-source ecosystem for causal machine learning (Götz & Budhathoki, 2022; Kiciman & Sharma, 2022).
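For illustration, here is a minimal sketch of what root cause analysis with the open-source dowhy.gcm package can look like. The causal graph, data, and outlier values below are hypothetical and not drawn from the production systems mentioned above.

    # Minimal sketch of outlier root cause analysis with dowhy.gcm
    # (illustrative example, not the production setup described in the bio).
    import networkx as nx
    import numpy as np
    import pandas as pd
    from dowhy import gcm

    # Hypothetical causal graph: upstream metrics X and Y drive a KPI Z.
    causal_graph = nx.DiGraph([("X", "Y"), ("X", "Z"), ("Y", "Z")])
    causal_model = gcm.StructuralCausalModel(causal_graph)

    # Illustrative "normal" operating data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=1000)
    Y = 2 * X + rng.normal(size=1000)
    Z = 3 * Y + rng.normal(size=1000)
    normal_data = pd.DataFrame({"X": X, "Y": Y, "Z": Z})

    # Automatically assign a causal mechanism to each node and fit to the data.
    gcm.auto.assign_causal_mechanisms(causal_model, normal_data)
    gcm.fit(causal_model, normal_data)

    # Attribute an observed outlier in Z to its likely root cause nodes.
    outlier = pd.DataFrame({"X": [0.1], "Y": [0.2], "Z": [25.0]})
    attributions = gcm.attribute_anomalies(
        causal_model, target_node="Z", anomaly_samples=outlier
    )
    print(attributions)  # dict mapping each node to its attribution score(s)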

He has a PhD in Computer Science from the Max Planck Institute for Informatics and a Master of Computer Science with honours from Saarland University.

References

  1. Barth, A. (2023). Amazon Titan Image Generator, Multimodal Embeddings, and Text models are now available in Amazon Bedrock. https://aws.amazon.com/blogs/aws/amazon-titan-image-generator-multimodal-embeddings-and-text-models-are-now-available-in-amazon-bedrock
  2. Ali, J., Kleindessner, M., Wenzel, F., Budhathoki, K., Cevher, V., & Russell, C. (2023). Evaluating the Fairness of Discriminative Foundation Models in Computer Vision. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 809–833.
  3. Budhathoki, K., & Blöbaum, P. (2022). New method identifies the root causes of statistical outliers. https://www.amazon.science/blog/new-method-identifies-the-root-causes-of-statistical-outliers
  4. Budhathoki, K. (2021). Explaining changes in real-world data. https://www.amazon.science/blog/explaining-changes-in-real-world-data
  5. Götz, P., & Budhathoki, K. (2022). AWS contributes novel causal machine learning algorithms to DoWhy Python library. https://www.amazon.science/blog/aws-contributes-novel-causal-machine-learning-algorithms-to-dowhy
  6. Blöbaum, P., Budhathoki, K., & Götz, P. (2023). Root Cause Analysis with DoWhy, an Open Source Python Library for Causal Machine Learning. https://aws.amazon.com/blogs/opensource/root-cause-analysis-with-dowhy-an-open-source-python-library-for-causal-machine-learning/
  7. Kiciman, E., & Sharma, A. (2022). DoWhy evolves to independent PyWhy model to help causal inference grow. https://www.microsoft.com/en-us/research/blog/dowhy-evolves-to-independent-pywhy-model-to-help-causal-inference-grow/