Dual-Use Risk Policy
Purpose
Recursive Systems Labs (RSL) recognizes that sophisticated analytical frameworks, particularly those involving systems modeling, recursive dynamics, or behavioral prediction, carry dual-use risk.
This policy addresses that risk directly.
What Dual-Use Means
"Dual-use" describes technologies or frameworks that can serve both beneficial and harmful purposes.
RSL produces:
- Theoretical models of complex systems
- Diagnostic frameworks for clinical and organizational contexts
- Software platforms for care coordination and supervision
These tools can clarify structure, support judgment, and enable care.
They could also, without appropriate constraint, be repurposed for surveillance, manipulation, or coercion.
We acknowledge this tension explicitly.
Risk Categories
Category A: Diagnostic Frameworks (ODTBT, Recursive Models)
Intended Use: Theoretical exploration, clinical sense-making, educational contexts
Dual-Use Risk: Behavioral prediction, profiling, psychological targeting
Mitigation: Research Use Boundary, prohibition on autonomous decision-making, no prescriptive claims
Category B: Clinical Software (EMA, Future Platforms)
Intended Use: Care coordination, supervision support, administrative efficiency
Dual-Use Risk: Surveillance drift, productivity monitoring, compliance enforcement
Mitigation: Non-extractive design, governance-first architecture, user agency preservation
Category C: Future SDKs (RcSim, Analytical Tools)
Intended Use: Research replication, academic exploration, open inquiry
Dual-Use Risk: Military integration, PSYOPS tooling, population control systems
Mitigation: Licensing restrictions, prohibited use declarations, community accountability
Mitigation Strategy
RSL employs a defense-in-depth approach to dual-use risk:
Layer 1: Design Constraints
- No autonomous decision-making over individuals
- No behavioral scoring or ranking systems
- No hidden data collection or telemetry
- No predictive enforcement mechanisms
Layer 2: Governance Boundaries
- Research Use Boundary (prohibits specific domains)
- Non-Harm Commitment (values over velocity)
- CAB oversight for applied systems
- Public accountability through documentation
Layer 3: Licensing and Distribution
- Clear prohibited-use declarations in all releases
- Community reporting mechanisms for misuse
- Refusal to support prohibited applications
- Withdrawal of access where boundaries are violated
Layer 4: Transparency
- Public documentation of design decisions
- Explicit acknowledgment of limitations
- Open discussion of tradeoffs and risks
- No concealment of capability
What This Policy Does Not Do
This policy does not guarantee that RSL research will never be misused.
It does not claim technical measures alone can prevent repurposing.
It does not assume good intentions are sufficient protection.
What it does:
- Declare intent clearly
- Establish refusal boundaries
- Create accountability structures
- Preserve the option to withdraw participation
Enforcement
If RSL becomes aware of use cases that violate this policy:
- Assessment: Determine severity and scope of violation
- Communication: Contact responsible parties to clarify boundaries
- Refusal: Decline ongoing support, collaboration, or access
- Documentation: Record incident and response for institutional learning
- Community Notice: If the violation is public, the response will be public
RSL does not pursue legal action as a first response; it prioritizes clarity, accountability, and refusal over punishment.
Relationship to Other Policies
This policy works in concert with:
- Non-Harm Commitment (values and design orientation)
- Research Use Boundary (prohibited domains)
- CAB Research Interface (governance for applied systems)
Together, these documents form RSL's constitutional layer.
Closing
Dual-use risk is unavoidable for any framework that models human systems.
The choice is not whether to acknowledge this risk. The choice is whether to respond to it with clarity or silence.
RSL chooses clarity.