Medieval Saints to Modern AI: Why the Humanities Will Always Matter

In the fast-moving world of tech innovation, where algorithms and neural networks dominate conversations, my path to AI leadership began in an unexpected place: among the faded manuscripts of sixth-century Latin saints' lives.

🔹 The Unexpected Education of a Data Scientist

My personal edge in AI and analytics leadership did not come from a background in computer science. It began with graduate studies in history, where I immersed myself in the lives of early Christian saints. Only later did I pursue a degree in machine learning, building on a foundation that many might consider unrelated to technology.

🔹 But was it truly unrelated? Not at all.

As I studied hagiographies, the accounts of saints' lives written during the Middle Ages, I was unknowingly learning skills that would later define how I approach artificial intelligence and data science. These texts, often fragmentary and filled with ambiguity, demanded a particular kind of discipline: the ability to find meaning in messy and incomplete data.

🔹 Finding Patterns in Fragments

Medieval historians, especially those focused on the early Middle Ages, work with sources that are sparse and inconsistent. A single sentence in a chronicle might be the only clue to a major event. A passing reference in a letter could reveal a broader social custom.

🔹 This kind of work trained me to:

  • Identify patterns across sources that appear unrelated

  • Understand the significance of what is missing, not just what is recorded

  • Construct coherent narratives while openly acknowledging uncertainty

  • Challenge my assumptions at every step (a habit historians share with anthropologists)

These abilities carried over when I began building models to uncover trends and make predictions. Connecting scattered historical references is surprisingly similar to identifying features in machine learning. Interpreting limited evidence requires the same mindset as working with sparse datasets.
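To make that parallel concrete, here is a minimal, purely illustrative sketch in Python of what learning from sparse evidence can look like in practice. Everything in it is hypothetical: the column names, the synthetic records, and the target are invented for the example, and the model choice (a gradient-boosted classifier from scikit-learn that accepts missing values directly) is just one convenient way to avoid pretending the gaps are not there.

```python
# Purely illustrative sketch: a toy model built from deliberately incomplete
# records, loosely analogous to reasoning from fragmentary sources.
# Column names, data, and target are hypothetical, invented for this example.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical "evidence" about past events.
records = pd.DataFrame({
    "mentions_in_chronicles": rng.poisson(2, n).astype(float),
    "letters_referencing_event": rng.poisson(1, n).astype(float),
    "years_since_event": rng.uniform(0, 300, n),
})

# Knock out roughly 40% of the values to mimic incomplete sources.
records = records.mask(rng.random(records.shape) < 0.4)

# Toy target: whether an event left a lasting trace in later sources.
target = (records["mentions_in_chronicles"].fillna(0)
          + records["letters_referencing_event"].fillna(0) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    records, target, test_size=0.25, random_state=0)

# HistGradientBoostingClassifier handles NaN natively, so the model learns
# from what is present instead of requiring every gap to be filled in.
model = HistGradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The specific model matters far less than the habit it stands in for: letting the analysis acknowledge missing evidence rather than quietly papering over it.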

🔹 The Value of Not Knowing

Perhaps the most important lesson I gained from historical training was how to think carefully about what we do not know. Medieval studies instilled in me a sense of intellectual humility and a healthy skepticism toward easy answers, qualities that strengthen both model performance and ethical decision making.

In historical research, recognizing the limits of what we can conclude is not a weakness. It is a sign of rigor. The best historians explain their assumptions, qualify their conclusions, and remain open to revision when new evidence comes to light.

This mindset has served me well in leading data science teams. Too often in AI development, we see overconfidence in model outputs, an unwillingness to admit limitations, and a lack of reflection on assumptions. The historian's mindset helps counteract these tendencies.
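One small, concrete practice that reflects this mindset is asking whether a model's confidence matches reality. The sketch below is illustrative rather than a record of my own workflow: it uses synthetic data and a simple classifier from scikit-learn to compare predicted probabilities against observed outcomes, one basic way to spot overconfidence.

```python
# Illustrative sketch: a simple reliability check for overconfident predictions.
# Data and model are synthetic stand-ins, not taken from any real project.
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# In a well-calibrated model, a predicted probability of ~0.8 should
# correspond to roughly 80% of those cases actually being positive.
frac_positive, mean_predicted = calibration_curve(y_test, probs, n_bins=10)
for pred, obs in zip(mean_predicted, frac_positive):
    print(f"predicted ~{pred:.2f} -> observed {obs:.2f}")
```

A check like this does not make a model humble on its own, but it gives a team a concrete place to ask what the model does not know.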

🔹 From Theological Debates to Ethical Frameworks

My work with medieval theology also provided lasting value. The scholastic approach, with its clear definitions, systematic engagement with counterarguments, and structured reasoning, offers a strong model for ethical thinking in technology.

Take the concept of right intention from medieval just war theory, one of the conditions Thomas Aquinas set out for waging war justly. Applied to AI, it raises powerful questions: What is the true purpose of this system? Who will benefit from its use? What consequences might arise beyond the intended outcome?

Or consider how Augustine and the medieval thinkers who built on him examined the relationship between divine law, natural law, and human law. Their careful balancing of competing moral frameworks provides insight into how we might weigh algorithmic efficiency against human dignity and collective well-being.

These ethical traditions are not academic artifacts. They provide practical tools for evaluating the real-world impact of technology.

🔹 The Human Questions Behind the Code

As AI systems become more powerful, the most urgent challenges are not technical. They are human. The essential questions are not only about how to improve accuracy or speed. They are about purpose, values, and impact:

  • What problems deserve our attention?

  • Who benefits from the solutions we create?

  • How do we measure success?

  • What tradeoffs are acceptable?

  • How do we ensure these systems help people live better lives?

Answering these questions requires leadership that can think beyond engineering. We need people who understand both the systems we build and the lives they affect.

🔹 The Advantage of a Renaissance Mindset

The most effective leaders in AI will be those who bring a broad and flexible way of thinking. This is not about knowing a little bit about everything. It is about building deep and adaptable approaches to solving problems that draw from many traditions.

My journey from the study of saints to the world of machine learning is not a quirky footnote. It’s an example of the kind of cross-disciplinary thinking that technology increasingly needs. The humanities develop essential skills, including:

  • Understanding historical and cultural context

  • Clarifying values and reasoning through difficult choices

  • Communicating ideas clearly and persuasively

  • Listening to diverse perspectives with empathy

  • Asking better questions by challenging what is taken for granted

These strengths complement technical expertise and become more important as AI becomes more central to decision making across society.

🔹 Nurturing This Way of Thinking

Organizations that want to lead with both innovation and responsibility can take several steps:

  • Recruit people with different academic and professional backgrounds

  • Create teams that blend disciplines to solve complex challenges

  • Support lifelong learning across both technical and human fields

  • Encourage reflection and constructive questioning

  • Bring human-centered insights into design and strategy

  • Make space for ethical thinking in the development process

Individuals can also develop this mindset by:

  • Exploring books and ideas beyond their usual fields

  • Studying philosophy, history, and the arts

  • Seeking out conversations with people who see the world differently

  • Thinking through the ethical implications of their daily work

  • Finding mentors who challenge them to grow in unexpected ways

🔹 Not a Tradeoff, but a Partnership

The connection between the humanities and technology is not a matter of one or the other. It is a relationship of mutual enrichment. We do not need to choose between technical excellence and human understanding. The most powerful and responsible innovations will come from the fusion of both.

As we move forward with ever more capable AI systems, we will need leaders who speak multiple kinds of language: not only programming languages, but also the languages of values, meaning, and human experience.

So I ask: What unexpected part of your education has shaped how you lead? And how can we better bring different ways of thinking together in service of a better future?

The future of AI may depend on how we answer.
