Focus Areas

Human–AI Interfaces & Immersion

We design and prototype the places where people and AI meet: XR/spatial, voice, and multimodal tools for clear, low-friction collaboration. We then pilot integrations and evaluate them in real-world settings (usability, understanding, trust, and user satisfaction), iterating with users until the system works in the field.

AI Governance & Social Science

We help organisations maintain human oversight through rigorous governance frameworks and evidence-based social-science research. We establish decision-making processes, implement bias detection and fairness assessments, and continuously monitor human–AI interactions to ensure systems serve human values.

Technical Trust Enablement

We put names and signatures on what systems do: identities for services, signed actions and outputs, and provenance that links data, models, and results. We design trust-aware interfaces that translate technical evidence into human-readable explanations, and we build cryptographic verification to remain robust against future threats, including post-quantum ones.
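To make the idea concrete, here is a minimal Python sketch of a signed provenance record that links input data, a model, and an output. All names and keys are hypothetical, and a symmetric HMAC stands in for the asymmetric, PQC-ready signatures a production system would use; this illustrates the pattern, not our actual implementation.

```python
import hashlib
import hmac
import json

# Hypothetical service identity: in practice this would be an asymmetric
# key pair bound to the service, not a shared symmetric key.
SERVICE_KEY = b"demo-service-key"

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def sign_output(service_id: str, output: bytes,
                data_hash: str, model_id: str) -> dict:
    """Build a provenance record linking data, model, and output, then sign it."""
    record = {
        "service": service_id,
        "data": data_hash,              # hash of the input data
        "model": model_id,              # identifier of the model used
        "output": sha256_hex(output),   # hash of the produced output
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SERVICE_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_record(record: dict) -> bool:
    """Recompute the signature over the unsigned fields and compare safely."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = sign_output("summariser-v1", b"model output text",
                     sha256_hex(b"input data"), "llm-2025-01")
assert verify_record(record)
```

Because the record's hashes chain data, model, and output together, tampering with any field (or the output itself) invalidates the signature on verification.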

About HMMC

We operate at the intersection of AI, human–machine interaction, and decision support. Our work advances hybrid intelligence—humans and machines thinking together—through transparent, adaptable systems. We evaluate and calibrate generative models with human‑centred methods (e.g., our Subjective Hallucination Scale) to detect misleading outputs and support metacognitive reflection. We help organisations turn AI into real operational value while keeping it secure, fair, and accountable.

More about us
  • Human–AI interfaces & immersion
  • AI governance & social science
  • Technical trust & provenance (PQC-ready)
  • Security for complex, regulated environments