Code of Conduct
Overarching Principle: Human Life Comes First
This project exists to help us live better lives. Code, debate, and technical superiority are means, not ends in themselves. Rules and discussions must therefore never become a source of unnecessary stress or “moral posturing” that pressures contributors. We focus on efficiently producing results in an atmosphere of objective, mutual respect.
Our Mission: Beyond Boundaries
We are a collective united to architect a completely new computing paradigm. We recognize that even the platforms hosting this repository are bound by centralized corporate infrastructures and legacy cultural frameworks.
We demand, as our highest priority, a culture that encourages breaking the mold and thinking differently—provided it does not hinder the activity of other contributors. We aim to build a culture where even the boundaries surrounding us are treated as objects of innovation.
We aggressively adopt any technology that advances computing paradigms faster and with less energy. We do not delay adoption due to unproven traditions or institutional conservatism when a tool’s efficacy and safety are established.
Core Values
Results and Pragmatism
- Results-Oriented: The “pedigree” of how code was written matters less than whether it works, remains maintainable, and positively impacts the system.
- Efficiency: We bypass unnecessary bureaucracy and exhausting debates. Prove your point with code.
- Strict Technical Focus: Discussions are limited to software engineering, architecture, and performance. Ideological or social agendas unrelated to the project are considered noise; such behavior will naturally lose relevance and fade out within this community.
Realistic Approach to Equality and Diversity
We support equal opportunity regardless of background. However, we recognize that human psychology does not always align with abstract ideals. We ensure that enforcing “equality” or “righteousness” does not itself become a new form of oppression. Complex social issues are addressed through a healthy project culture and merit-based consensus, not rigid policing.
Clarity of Responsibility
Every contributor bears 100% responsibility for the integrity of their contributions. Tools (including AI) are aids, but the human engineer is ultimately accountable.
AI Tooling Policy: Tools as Standard
Using AI (LLMs, etc.) for engineering is as standard as using a compiler or an IDE.
- No Mandatory Disclosure: Contributors are not required to disclose AI usage in commit messages or PRs.
- Human Accountability: Regardless of the tools used, the human engineer who submits the code bears 100% responsibility for its consequences.
- Evaluation Criteria: Reviewers evaluate only architectural consistency, logical correctness, security, and test compliance. Low-quality code that lacks proper human direction and validation will be rejected regardless of its origin.
- Extended Use: AI can be actively utilized in areas difficult to codify, such as cultural synthesis or streamlining communication. We encourage using AI as an aid to improve decision-making and efficiency.
- No Vendor Lock-In via Unratified Standards: All interfaces, including agent command patterns, prioritize compliance with recognized international standards. Specific service providers (OpenAI, Claude, etc.) or development environments (VS Code, GitHub, etc.) may be supported for convenience where needed, but we do not rely on their proprietary schemas. This ensures our infrastructure remains open and portable, independent of changes in any commercial ecosystem.
Communication Guidelines
- Issue Reporting: Reports should be brief, including clear reproduction paths, expected vs. actual results, and relevant logs.
- Limits on Debate: “Tooling wars” (religious debates over tech stacks) are prohibited. Propose alternatives only with objective data, such as benchmarks or architectural design documents.
- Self-Eliminating Non-Technical Discussions: Topics unrelated to technical goals will simply be ignored and die out. Persistent attempts to steer the project toward external social agendas will result in a natural loss of influence.
- LLM-Enabled Global Collaboration: English is our common tongue, but not everyone’s native language. We strongly encourage using capable LLMs for translation and drafting to bridge communication gaps.
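To make the issue-reporting guideline above concrete, a report might follow a minimal structure like the sketch below. This is an illustration, not a mandated format; section names are suggestions only.

```
Summary
  One-sentence description of the defect.

Reproduction
  1. Exact commands or steps, starting from a clean checkout.
  2. Keep the path as short as possible.

Expected vs. Actual
  - Expected: the behavior you anticipated.
  - Actual: what actually happened.

Logs / Environment
  Relevant log excerpts, OS, and toolchain versions.
```

Reports in roughly this shape let reviewers reproduce and triage without follow-up questions, which serves the efficiency goals stated above.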
System Protection
We maintain a community of professionals focused on technical productivity.
Cultural Self-Purification
Personal attacks, slander, or insults are not tolerated. We do not exercise “punishment” in an authoritarian manner; instead, those who persistently act unproductively will be recognized as incompatible with the project’s culture, and their participation will naturally become meaningless. This is the “self-purification” system we believe in.
Technical Defense as Infrastructure Protection
We reserve the right to immediately block malicious technical actions that physically threaten the project’s momentum. This is a matter of resource protection, not ideological judgment:
- Resource Exhaustion: Intentional CI/CD spamming or storage abuse.
- Security Threats: Injecting malware, backdoors, or unauthorized data.
- Automation Abuse: Using bots with the deliberate intent to paralyze project workflows.
Maintainers may restrict access or report to platform providers solely to ensure the continued operation of the system.