
Hi, I’m Kostakis

I work at the intersection of large scale engineering and governance. My focus is the translation layer between complex systems and executive decision making, where operational signals must become a clear, measured view of risk, performance and accountability.

My work has involved large consumer platforms and multi party ecosystems at scale. In those environments reliability is not an abstract metric. It shows up as user friction, partner escalation and operational cost when systems drift or fail.

My writing on platform governance and AI enabled operating models has appeared in outlets including IEEE Computer Society, Management Today, BLOG@CACM and LSE Business Review. I studied business at Warwick Business School and information technologies at postgraduate level (MSc AIT).

My current focus is platform governance in multi party ecosystems and AI enabled operating models, particularly decision rights, escalation paths, auditability and the assurance signals that senior leaders can rely on.

Work

Much of my work turns on three ideas.

The first is reliability. Platforms are only useful if the people who depend on them can trust how they behave. That means shaping expectations for performance and stability, deciding how to measure them and designing releases and incident practice so that surprises become rarer and learning becomes routine.

The second is governance. Decisions about risk, investment and strategy depend on seeing how systems behave in real life. That means treating automation and AI components as part of a broader control environment, not as something separate. It means deciding where human judgement must remain non delegable and agreeing a small set of signals that deserve the attention of senior leaders and committees.

The third is honesty. The view from a report or a dashboard should match what engineers, operators and partners see. Status information should reveal reality, not cover it. I care about reducing the gap between the story that is told and what is actually happening in the platform, so that difficult truths surface early enough for thoughtful action rather than late as crisis.

In practice this usually involves platforms that already serve many customers, depend on multi party ecosystems and are under pressure to introduce more automation and AI into products and operations. In these settings I work with engineers, product and partner leaders, finance and risk colleagues and the teams that prepare information for senior forums, helping them settle on a shared view of what the system is doing and how to keep it within acceptable boundaries.


Background

I started as a software engineer and spent many years close to the code on systems where reliability, latency and failure patterns were very visible. That experience shaped how I think about risk, because it showed how often small local choices can create large scale behaviour once they are deployed widely.

Later I moved into architecture and technical leadership roles, then into broader mandates that involved aligning engineering, product, operations, partner teams and senior leaders. The work there was as much about language and measurement as it was about technology. It required finding ways to talk about performance and risk that meant something to everyone at the table.

Today I hold a senior technology role in a global consumer organisation, focused on reliability and governance in complex multi party platforms. The details belong to the organisation. I write about the patterns that repeat across different systems and contexts.


Governance and Service

In addition to my main role I contribute to a small number of professional and civic forums concerned with responsible technology and governance.

I hold appointments with the techUK Emerging Tech Leadership Committee, the IET Artificial Intelligence Technical Network, and UKRI’s Digital Research Infrastructure STRIKE pool, and I serve as an expert assessor for ADR UK. In these roles I try to connect the day to day realities of running systems at scale with the governance expectations of leaders, regulators, and the public.

I am also a member of organisations such as Chatham House, the Institute of Risk Management, the IEEE and the Open Data Institute. Participation in these communities helps me keep work on platform governance and AI operating models grounded in wider thinking about risk, standards and the social impact of technology.

I am interested in contestability and evidence when automated systems shape outcomes, and I explore how observability and governance can support fair review when things go wrong.


Writing

Writing is where I try to give shape to these themes in public.

On Breakthrough Pursuit I publish practitioner essays for people who live with platform and governance questions. Recent pieces have looked at AI as part of a control environment, at reliability and release practice in multi party ecosystems and at ways to keep senior decision making connected to real system behaviour rather than to abstract stories.

On Progress Pursuit I explore related questions in a more personal way through essays. Breakthrough Pursuit is mainly about systems and organisations. Progress Pursuit is mainly about the person inside those systems. Posts draw on psychology, philosophy and leadership practice and look at what it means to carry responsibility in settings where AI and automation are always present.

I also publish research notes and working papers on open platforms when a topic needs more structure and references, for example on the links between governance frameworks, platform engineering practice and the responsibilities that senior leaders and boards hold.

Selected pieces include work on AI as a form of control rather than a mere feature, on stability and learning in partner ecosystems, on judgement in an age of copilots and on connecting AI governance frameworks with daily engineering practice. As further essays and external publications appear, they can simply be added as titles and links without changing the overall shape.

Contact

If you would like to stay in touch or start a conversation you can use the contact form on this page. Messages from the form reach me directly.

You can also connect with me on LinkedIn or follow my work through Breakthrough Pursuit and Progress Pursuit.