I am a PhD candidate and data scientist working at the intersection of neuro-symbolic AI, large language models, and probabilistic reasoning. I am broadly interested in how AI systems can be made not only capable but also explainable and trustworthy, particularly in high-stakes institutional contexts. My current work explores how Bayesian networks can serve as explicit reasoning structures within LLM pipelines, enabling systems to justify their outputs through chains of conditional and causal dependencies extracted from domain knowledge.
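To make the idea of justifying an output through a chain of conditional dependencies concrete, here is a minimal toy sketch: a three-node chain Evidence → Claim → Answer, queried by exact enumeration. All variable names and probabilities are hypothetical illustrations, not values from my actual pipeline.

```python
from itertools import product

# Hypothetical toy network: Evidence -> Claim -> Answer.
# Every number below is made up for illustration only.
p_evidence = {True: 0.7, False: 0.3}                   # P(E)
p_claim = {True: {True: 0.9, False: 0.1},              # P(C=c | E=e) as p_claim[e][c]
           False: {True: 0.2, False: 0.8}}
p_answer = {True: {True: 0.95, False: 0.05},           # P(A=a | C=c) as p_answer[c][a]
            False: {True: 0.3, False: 0.7}}

def joint(e, c, a):
    """P(E=e, C=c, A=a) via the chain factorization P(E)P(C|E)P(A|C)."""
    return p_evidence[e] * p_claim[e][c] * p_answer[c][a]

def query(a, e):
    """P(A=a | E=e), marginalizing out the intermediate Claim node."""
    num = sum(joint(e, c, a) for c in (True, False))
    den = sum(joint(e, c, ap) for c, ap in product((True, False), repeat=2))
    return num / den

print(round(query(True, True), 3))  # ≈ 0.885
```

Because the answer is derived by marginalizing along an explicit dependency chain, each query comes with a traceable justification: which evidence node was conditioned on, and through which intermediate claims the probability mass flowed.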

Interests
  • Neuro-symbolic AI
  • Knowledge-grounded LLMs
  • Responsible AI
  • Explainability

Education
  • MSc, Computer Science, 2025
    University of the Witwatersrand