
Commit b97ca32

chore: remove non-innersource specific guidance
1 parent 71054d8 commit b97ca32

1 file changed

Lines changed: 0 additions & 14 deletions

File tree

innersource-and-ai/risks-and-guardrails.md

@@ -4,20 +4,6 @@ AI is the ultimate InnerSource contributor. Like any external contributor, AI ag
 
 Adopting AI without these guardrails can deliver short-term gains in speed and productivity, but at the cost of long-term risks to quality, security, and maintainability. The good news: if your organization has built a strong InnerSource culture, you already have the foundations in place.
 
-## Risks of rapid AI deployment
-
-Trusting AI-generated or AI-modified code without review or guidance can lead to subtle bugs, security issues, and technical debt. Mandating AI use without clear best practices or policies can create unintended problems—teams may not know when to rely on AI and when to double-check. A balanced approach is to encourage experimentation while reinforcing code review, testing, and clear ownership. InnerSource’s culture of transparency and review supports that balance.
-
-## The role of code review and systems thinking
-
-When AI generates or modifies code, human review remains essential. Code review catches errors, enforces consistency, and spreads knowledge. InnerSource's existing review and governance practices—trusted committers, contribution guidelines, and transparent decision-making—apply directly to AI-assisted contributions. Systems thinking is also critical: understanding how a change fits into boundaries, interfaces, and dependencies helps avoid local optimizations that cause global problems. Resources like the [InnerSource Patterns](https://patterns.innersourcecommons.org/) and [Trusted Committer guide](https://innersourcecommons.org/learn/learning-path/trusted-committer/) provide guidance on how to frame and review contributions responsibly.
-
 ## Transparency and stakeholder involvement
 
 Involving stakeholders and keeping development transparent supports responsible AI deployment. When decisions about tools, patterns, and policies are visible and discussable, teams can align on what is acceptable and what is not. This aligns with InnerSource principles of openness and collaboration and helps prevent AI from being used in ways that conflict with organizational values or compliance requirements.
-
-## Best practices and guidance
-
-Organizations need practical guidance on when and how to use AI in development. That may include policies on which tools are approved, how to attribute and review AI-assisted contributions, and how to handle sensitive or regulated code. Detailed policies and playbooks may live in your governance model or in the [InnerSource Patterns](https://patterns.innersourcecommons.org/) book. As a program lead, you can work with governance, security, and engineering leadership to develop and communicate these practices so that adoption of AI remains safe and sustainable.
-
-For more on governance in InnerSource, see [Governance](/governance/governance.md).
