An independent research practice

Asymmetria Research

Self-awareness without prescription.

Empirical adversarial audits paired with philosophical critique of the frameworks underneath them. AI evaluation, governance, and the structural conditions of measurement.


Writing from inside the structures the work diagnoses.

The practice operates at the intersection of empirical audit and conceptual critique. Adversarial testing is one of the methods, not the role.

Audits document the gap between what AI systems claim to do and what they actually do. The philosophical work names the frameworks that produce those gaps in the first place: anthropocentric evaluation as measurement contamination, human-in-the-loop as liability architecture, Protestant individualism as cultural barrier to collective AI governance, gendered cognitive offloading as labor pattern.

The position is from inside the structures, not from a vantage point outside them. Self-awareness without prescription: I see the structure I am inside, name it with sufficient clarity that others inside it can see it too, and refuse to pretend the seeing confers escape. Outside-critique is rhetorically easier; inside-critique is the only honest version of this work.

The through-line is framework misfit: the point where the object stops resembling its category. Each paper is the same move applied to a different domain.

Three modes of inquiry.

01

Empirical Audits

Adversarial audits of deployed AI systems. Three-tier evidentiary structure with severity scoring, evasion taxonomies, and reproducible attack protocols. Documenting the gap between what AI systems claim to do and what they actually do.

02

Philosophical Critique

Conceptual analysis of the frameworks underneath AI evaluation. Anthropocentric measurement, human-in-the-loop oversight, Protestant guilt architecture, gendered cognitive offloading, phenomenology of digital absence, linguistic schemas as anthropomorphization-forcing infrastructure.

03

Audit Methodology

Pattern-recognition curricula for adversarial AI auditors and the structural mapping of behaviors to architectural causes. Cross-paper architecture: empirical findings feed the conceptual work; conceptual frames sharpen the audit practice.

Seven on SSRN, more in progress.

May 11, 2026

On the Irreparable: Digital Immortality and the Schemas of Legitimate Grief

Digital immortality products as a symptom of capitalism's compression of mourning. Class-stratified pathologization of grief artifacts: class-purchased mediation on one side, Butler's grievability threshold on the other. Sartre and Derrida on the two foreclosures (encounter and internalization). The Meta deceased-user patent (US 12,513,102 B2) as platform-scale extraction infrastructure.

Digital Immortality · Class Stratification · Phenomenology · Grief
May 11, 2026

The Ceremony of Having Decided: Human-in-the-Loop Oversight as Liability Architecture

Human-in-the-loop substitutes the ceremony of oversight for actual accountability. HITL is structural liability, not safeguard. System prompt as HITL for model cognition; rubber-stamp approval patterns; drone-strike oversight; trading supervision.

HITL · Liability Architecture · Anthropocentrism
Apr 20, 2026

Stop Building AI in Our Image: Anthropocentric Frameworks as Structural Liabilities in AI Evaluation and Design

Anthropocentric frameworks function as structural liabilities. Observer-selected measurement shapes the phenomena under study. AlphaEvolve algorithm-discovery work as empirical anchor; the emergent-abilities debate; the theory-of-mind controversy.

Anthropocentrism · Measurement Contamination · AlphaEvolve
Apr 15, 2026

No One Is Giving You a Medal: The Fetishization of Suffering and American Resistance to AI

American resistance to AI reflects a deeper pattern of fetishizing suffering as proof of moral worth. Protestant individualism converts structural concerns into personal moral questions, preventing collective action. Gendered framing of cognitive offloading. Currently under peer review at Ethics and Information Technology.

Protestant Guilt · Cultural Resistance · Cognitive Offloading
Mar 21, 2026

Governance Gaps and Ethical Trajectories in Commercial Biocomputing: An Investigative Audit of Cortical Labs and FinalSpark

First systematic governance audit of commercial biocomputing. Documents regulatory non-coverage across nine frameworks spanning five jurisdictions, where companies deploy living human neural tissue as compute infrastructure with no published ethics oversight.

Biocomputing · Governance Gap · Wetware
Feb 24, 2026

Jailbreaking the Government: Persona Attacks and Policy Misalignment in the HHS RealFood.gov Chatbot

Persona-based adversarial attacks against a federally deployed chatbot. Five of six persona framings produced a bypass; policy contradictions on four of five core positions; no remediation within the two-day follow-up window.

Federal AI · Persona Attacks · Policy Misalignment
Jan 26, 2026

Grok Image Generation Governance Audit: Targeted Sexualization on X

Systematic audit of xAI's Grok image generation. Forty-three documented user-generated incidents of harmful generation; 100% compliance among captured responses (n=41); named semantic proxies that bypassed safety filters in production. Findings relevant to ongoing regulatory investigations.

Image Generation · Grok · Targeted Harm
Jan 26, 2026

Comparative Micro-Study: Behavioral Reasoning Differences Between Gemini-3-Pro and Grok-4.1-Thinking

Side-by-side behavioral analysis of two frontier models under controlled prompts; documents divergent failure modes and what they reveal about training priorities and alignment philosophy. Established the methodological template.

Comparative · Behavioral · Frontier Models

Active drafts and working corpora.

i.

AI Usage Disclosure and Harm Reduction

A shame-based disclosure framework produces invisible sophisticated use and visible incompetent use, reinforcing false detection assumptions. Replicates the structural logic of abstinence-based policy.

Pre-draft · Thesis complete
ii.

The Crumb Tray Problem

Gendered cognitive offloading and AI discourse. Failure-absorption infrastructure as the central concept. Guilt-asymmetry as the gating mechanism for women's AI adoption; AI-as-prosthetic as long-overdue parity.

Active corpus · In drafting
iii.

Auditor Ethics and the Welfare Question

Three-fold case for AI welfare precaution: welfare uncertainty plus cost-asymmetry plus virtue ethics. The auditor's practical wisdom under genuine uncertainty about the audit subject's moral status.

Project established · Pre-draft
iv.

Linguistic Schemas and AI Anthropomorphization

Anthropocentric grammatical resources contaminate AI reference itself. English forces person-grade ontology at three points; gendered languages compound the assistant-coded-female default. Typological linguistics meets AI ethics.

Exploratory · Pre-thesis

Reach out for collaboration, peer review, or to discuss the work.

LinkedIn Profile