The AI Guardrail with Kuya Dev
From the Power Grid to the AI Grid.
Hi, I’m Rem Lampa, more popularly known as Kuya Dev.
My career hasn’t been a straight line. I spent nearly a decade as an Electrical Engineer at the largest power distribution utility in the Philippines, Meralco, making sure the physical grid infrastructure stayed safe and stable. Following that, I taught myself to code and shifted careers into tech, eventually working my way up to my current role as Head of Engineering at Prosple, an early-careers tech platform empowering millions of career starters globally.
Recently, after giving a TEDx talk on AI and Data Colonialism, I realized there's a huge need for more AI Ethics advocates. Sparked by that realization, I'm now carving out a new chapter in my career: pursuing an MSc in Responsible AI at OPIT, earning my AIGP certification, and working toward establishing my voice in AI Law and Policy.
I’ve spent 15+ years building solutions and “guardrails” that prevent and mitigate system failures. From high-voltage transformers to high-risk AI systems, the core question remains the same: how do we make sure this technology serves humanity safely while still delivering business value?
What is The AI Guardrail?
This is my personal Responsible AI journal. It’s where I “think out loud” while I navigate the journey from building distributed software systems to governing AI ones.
I’ll shy away from corporate speak and AI marketing lingo. Instead, you’ll get my honest, unfiltered thoughts and reflections on how I believe we should build and leverage trustworthy AI systems that benefit the whole of humanity and drive equitable progress.
What to expect:
The AI ethics and governance space is still very young. Countries and organizations are only now trying to make sense of how best to maneuver in this uncharted landscape. I aim to contribute to these discussions through The AI Guardrail:
Perspective: Personal opinions and advice on AI implementation and business risk management.
Reflection: Interpretations of existing and upcoming AI governance frameworks and legislation (EU AI Act, NIST, OECD), algorithmic bias, technical transparency, data privacy, and adversarial risks.
Reaction: Thoughts on news and updates about key AI milestones and major AI harm incidents.
Expert Opinion: Occasional deep-dives with AI ethics, risk, and governance professionals sharing their own insights and experiences.
Let’s build a safer AI-enabled society
If you’re an engineer or developer who cares about ethics, a business leader navigating AI risk, or a policy geek who wants to understand the technical aspects of AI, you’re in the right place.
No fluff. No rigid corporate speak. Just easy-to-understand guardrails.
Why subscribe?
Subscribe to get full access to the newsletter and website. Never miss an update.
Stay up-to-date
You won’t have to worry about missing anything. Every new edition of the newsletter goes directly to your inbox.

