Transparency & Testing Ethics
All blog experiments and prompting tests are conducted independently.
No client data is ever used in our posts or demonstrations.
When real-world experiences shape our insights, we recreate those experiments using synthetic or publicly available data—never proprietary or sensitive material.
We believe transparency builds trust, and clear boundaries protect it.
Published July 22, 2025 ·
Tags: PromptEngineering, GPT-4o, Experiment
What happens when a language model takes on a logic puzzle like Wordle? This experiment tests GPT-4o with the ReAct framework, showing where its reasoning holds—and where its pattern-based “thinking” leads to contradictions. The results highlight why human oversight and well-structured prompts remain essential.
Read more →
Published July 02, 2025 ·
Tags: PromptEngineering, Ethics, GPT-4o, Experiment
How does persistent user context change the way AI handles moral dilemmas? This experiment with GPT-4o explores how prompting structure and personalization shape ethical reasoning—using the classic trolley problem as the test bed.
Read more →
Published April 28, 2025 ·
Tags: PromptEngineering, LLMs, ReAct
This post tests the ReAct prompting framework in real-world conditions using local models, walking through the experiment setup, key observations, and what the results reveal about building more thoughtful AI workflows.
Read more →
Published April 21, 2025 ·
Tags: PromptEngineering, LLMs, Reflection
A personal reflection on the real-world gaps in AI adoption that inspired the creation of Shinros.
This post explores what it feels like to work with AI without guidance—and why trusted prompting
expertise is the missing link.
Read more →