Commit 3a15eaa (1 parent: d4c469d)

feat: add InnerSource and AI section

9 files changed: 106 additions & 0 deletions

SUMMARY.md

Lines changed: 5 additions & 0 deletions
@@ -31,6 +31,11 @@
 * [References](measuring/references.md)
 * [Authors and Reviewers](measuring/authors.md)
 * [Governance](governance/governance.md)
+* [InnerSource and AI](innersource-and-ai/innersource-and-ai.md)
+* [Why InnerSource Matters When Adopting AI](innersource-and-ai/why-innersource-matters-with-ai.md)
+* [Shaping Repositories and Practices for AI](innersource-and-ai/shaping-for-ai.md)
+* [Risks and Guardrails](innersource-and-ai/risks-and-guardrails.md)
+* [Authors and Reviewers](innersource-and-ai/authors.md)
 * [Tooling](tooling/innersource-tooling.md)
 * [GitHub Strategy](tooling/github-strategy.md)
 * [GitHub Configuration](tooling/github-configuration.md)

governance/governance.md

Lines changed: 2 additions & 0 deletions
@@ -250,6 +250,8 @@ organization and its context and find other goals InnerSource may
 contribute towards. Then communicate it and get as much air cover
 from your executives as you can.
 
+For how governance and transparency support responsible use of AI in development, see [InnerSource and AI](/innersource-and-ai/innersource-and-ai.md), in particular [Risks and Guardrails](/innersource-and-ai/risks-and-guardrails.md).
+
 [^1]: http://oss-watch.ac.uk/resources/governancemodels
 
 [^2]: https://ospo-alliance.org/ggi/

innersource-and-ai/authors.md

Lines changed: 15 additions & 0 deletions
# Authors and Reviewers

## Authors

Chronological order:

* InnerSource Program Office (ISPO) Working Group, [InnerSource Commons](https://innersourcecommons.org/).

## Reviewers

Chronological order:

* (To be added as the section is reviewed and refined by the community.)

This section was drafted as a discussion starter and is open for contributions. If you would like to be listed as an author or reviewer, please open a pull request or get in touch via [Slack](https://innersourcecommons.org/slack).
innersource-and-ai/innersource-and-ai.md

Lines changed: 13 additions & 0 deletions

# InnerSource and AI

Organizations are increasingly adopting AI in the workplace—from generative AI assistants to agentic coding tools that can write, refactor, and review code. This shift is changing how developers work: less time typing code, more time defining requirements, guiding AI, and making sure systems are reliable and maintainable. For InnerSource program leads, the question is whether InnerSource still matters in this new landscape.

It does. InnerSource is potentially *more* important than ever. Shared repositories, clear boundaries, documentation, and collaborative practices help AI systems—and the people using them—work with the right context, reuse existing components, and keep quality high. This section explains why InnerSource matters when adopting AI, how to shape your repositories and practices for AI-assisted development, and what risks and guardrails to keep in mind.

The following articles in this section go deeper:

- [Why InnerSource Matters When Adopting AI](why-innersource-matters-with-ai.md) — Relevance of InnerSource in an AI-augmented world, reuse, and production readiness.
- [Shaping Repositories and Practices for AI](shaping-for-ai.md) — Repository design, documentation, and workflow integration so both humans and AI can contribute effectively.
- [Risks and Guardrails](risks-and-guardrails.md) — Balancing speed with safety, the role of code review, and organizational best practices for AI use.

AI tooling and practices are evolving quickly. This section will be updated as the community learns more and as survey and research data become available. If you are new to InnerSource, we recommend starting with [Getting Started with InnerSource](http://www.oreilly.com/programming/free/getting-started-with-innersource.csp) and the [Introduction](/introduction/introduction.md) to this book.
innersource-and-ai/risks-and-guardrails.md

Lines changed: 21 additions & 0 deletions

# Risks and Guardrails

Adopting AI in development can deliver short-term gains in speed and productivity. Without guardrails, it can also introduce long-term risks to quality, security, and maintainability. InnerSource practices help organizations balance innovation with responsibility.

## Risks of rapid AI deployment

Trusting AI-generated or AI-modified code without review or guidance can lead to subtle bugs, security issues, and technical debt. Mandating AI use without clear best practices or policies can create unintended problems—teams may not know when to rely on AI and when to double-check. A balanced approach is to encourage experimentation while reinforcing code review, testing, and clear ownership. InnerSource’s culture of transparency and review supports that balance.

## The role of code review and systems thinking

When AI generates or modifies code, human review remains essential. Code review catches errors, enforces consistency, and spreads knowledge. InnerSource’s existing review and governance practices—trusted committers, contribution guidelines, and transparent decision-making—apply directly to AI-assisted contributions. Systems thinking is also critical: understanding how a change fits into boundaries, interfaces, and dependencies helps avoid local optimizations that cause global problems. Program leads can emphasize that AI output is a draft to be reviewed, not a substitute for human judgment.
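One lightweight way to make “AI output is a draft to be reviewed” enforceable is to route every change through named human reviewers. On platforms such as GitHub this can be done with a `CODEOWNERS` file; the team names below are illustrative placeholders, not a prescription from this book:

```
# CODEOWNERS — request review from trusted committers on every change,
# including AI-assisted ones. Team names are illustrative.
*            @example-org/trusted-committers

# Security-sensitive paths get an additional set of eyes.
/auth/       @example-org/security-reviewers
```

Combined with branch protection that requires a code-owner approval, this ensures AI-assisted contributions follow the same trusted-committer path as any other change.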
## Transparency and stakeholder involvement

Involving stakeholders and keeping development transparent supports responsible AI deployment. When decisions about tools, patterns, and policies are visible and discussable, teams can align on what is acceptable and what is not. This aligns with InnerSource principles of openness and collaboration and helps prevent AI from being used in ways that conflict with organizational values or compliance requirements.

## Best practices and guidance

Organizations need practical guidance on when and how to use AI in development. That may include policies on which tools are approved, how to attribute and review AI-assisted contributions, and how to handle sensitive or regulated code. Detailed policies and playbooks may live in your governance model or in the [InnerSource Patterns](https://patterns.innersourcecommons.org/) book. As a program lead, you can work with governance, security, and engineering leadership to develop and communicate these practices so that adoption of AI remains safe and sustainable.
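One concrete attribution practice is to record AI assistance in the commit message itself. The `Co-authored-by:` trailer is an established GitHub convention for crediting people; an `Assisted-by:` trailer for tools is a hypothetical convention an organization might adopt, shown here only as a sketch:

```
Add retry handling to the payments client

Initial implementation drafted with an AI coding assistant, then
reviewed, tested, and adjusted by the author.

Assisted-by: ExampleAI coding assistant
Co-authored-by: Jane Doe <jane.doe@example.com>
```

Because trailers are plain text in the history, reviewers and auditors can later search for AI-assisted changes (for example with `git log --grep`).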
For more on governance in InnerSource, see [Governance](/governance/governance.md).
innersource-and-ai/shaping-for-ai.md

Lines changed: 21 additions & 0 deletions

# Shaping Repositories and Practices for AI

InnerSource practices that make life easier for human contributors also help AI systems and agentic tools. Clear scope, good documentation, and consistent workflows make it easier for both people and AI to discover, understand, and contribute to shared code safely.

## Repository and boundary design

Well-defined repositories with clear scope and interfaces make it easier for humans and AI to contribute without stepping on each other’s toes. When boundaries are explicit—what belongs in this repo, what the APIs are, what the project is *not* responsible for—AI agents and assistants can operate within a manageable context. The community is exploring a pattern sometimes called “InnerSource the AI way,” which emphasizes clear scope and boundaries; as it matures, it may be documented in the [InnerSource Patterns](https://patterns.innersourcecommons.org/) book and linked from here.
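One way to make such boundaries machine-readable is a small metadata file at the repository root that states scope and ownership explicitly. The file name and keys below are illustrative assumptions (in the spirit of the repository-metadata idea behind InnerSource portal tooling), not an established standard:

```json
{
  "name": "payments-client",
  "description": "Shared client library for the internal payments API",
  "scope": "API bindings and retry logic only; UI components and batch jobs are out of scope",
  "interfaces": ["docs/api.md"],
  "owners": ["@example-org/payments-committers"],
  "status": "active"
}
```

Both discovery tooling and AI agents could read a file like this to decide whether a proposed change belongs in this repository or somewhere else.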
## Documentation and discoverability

InnerSource behaviors like solid READMEs, CONTRIBUTING guides, and architecture decision records are increasingly important when AI is in the loop. They help AI and people alike understand how to use and extend shared code correctly. Documentation that explains *why* decisions were made, not just *what* the code does, supports better AI-generated contributions and reduces misuse. Making repositories searchable and well-described also helps teams and tools find the right building blocks instead of reimplementing them.

## Playbooks for people and agents

Playbooks that describe how to contribute—and what to avoid—benefit both human contributors and AI-assisted workflows. The community is starting to develop playbooks that work for both. As these emerge, they will be reflected in the InnerSource Patterns book and linked from this section. The goal is to make it easy for contributors and tools to follow the same rules and expectations.
## Skills, plugins, and workflow integration

InnerSource can be integrated directly into coding workflows through skills, plugins, and tooling. When reuse and contribution are part of the daily environment—for example, by suggesting existing InnerSource components when starting a new feature—both developers and AI-assisted flows are more likely to reuse rather than duplicate. This is an area of active development; program leads can work with their tooling and platform teams to explore how to surface InnerSource projects and contribution paths where developers (and their tools) already work.
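As a sketch of what “suggesting existing InnerSource components” might look like, the following is a minimal, hypothetical reuse check that a scaffolding tool or editor plugin could run before a developer (or agent) creates a new component. The catalog format and scoring are assumptions for illustration, not an existing InnerSource Commons tool:

```python
"""Hypothetical reuse check: suggest existing InnerSource components
before scaffolding a new one. Catalog format and scoring are illustrative."""


def suggest_components(feature_description, catalog, limit=3):
    """Rank catalog entries by keyword overlap with the feature description."""
    words = set(feature_description.lower().split())
    scored = []
    for entry in catalog:
        keywords = {k.lower() for k in entry["keywords"]}
        overlap = len(words & keywords)
        if overlap:
            scored.append((overlap, entry["name"]))
    # Highest overlap first; break ties alphabetically for stable output.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [name for _, name in scored[:limit]]


# Hypothetical catalog, e.g. aggregated from per-repository metadata files.
catalog = [
    {"name": "payments-client", "keywords": ["payments", "billing", "invoices"]},
    {"name": "auth-sdk", "keywords": ["auth", "login", "tokens"]},
    {"name": "report-builder", "keywords": ["reports", "export", "pdf"]},
]

print(suggest_components("add login and tokens support to billing", catalog))
# → ['auth-sdk', 'payments-client']
```

A real integration would query an organization-wide index rather than a hard-coded list, but even this simple shape shows the workflow: describe the feature, see what already exists, and only then create something new.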
For more on infrastructure and tooling in InnerSource, see [Tooling](/tooling/innersource-tooling.md) and [Infrastructure](/infrastructure/infrastructure.md).
innersource-and-ai/why-innersource-matters-with-ai.md

Lines changed: 23 additions & 0 deletions

# Why InnerSource Matters When Adopting AI

AI and agentic coding are changing how development work gets done. Developers spend more time specifying requirements and guiding AI tools than writing every line of code by hand. Yet collaboration, reuse, and clear boundaries remain critical. InnerSource helps organizations move faster with *shared* components and practices instead of scattered, duplicated solutions.

## InnerSource is more relevant than ever

When many teams use AI to generate or modify code, the risk of duplication and inconsistency grows. InnerSource encourages shared building blocks and a single place to contribute improvements. That reduces waste and keeps quality consistent across the organization. The demand for software architecture and orchestration skills is also rising: understanding system boundaries, interfaces, and processes is essential for building valuable, reliable AI-assisted systems. InnerSource’s emphasis on transparency, documentation, and community aligns with this need.

## Reducing context for AI

AI systems and coding agents work best when they have a well-scoped, well-boundaried context. InnerSource projects that are clearly scoped—with explicit interfaces and a clear purpose—give AI a manageable surface area. That improves reliability and reduces the chance of AI “hallucinating” or misusing code from outside the intended scope. Shaping your repositories for both humans and AI is a theme we explore in [Shaping Repositories and Practices for AI](shaping-for-ai.md).

## Reuse and avoiding duplication

Reuse at the service or component level is especially valuable when many teams use AI to generate code. Without shared standards and shared repos, each team may produce similar solutions in isolation. InnerSource fosters reuse and cost sharing across units, which in turn supports sustainability and efficiency. This is the same benefit InnerSource has always offered; in an AI-augmented world, it becomes harder to ignore.

## Enterprise AI and production readiness

This section focuses on large-scale enterprise adoption of AI—internal tools, pipelines, and agentic workflows—rather than consumer-facing AI products. In that context, the difference between prototype AI solutions and production-ready ones matters a lot. InnerSource practices—transparency, code review, documentation, and governance—help teams build robust, secure, and maintainable AI-assisted development. They also help leaders see what is ready for production and what still needs work.

## Evidence and further reading

AI tooling and organizational practices are evolving. This section will be updated with results from the [InnerSource Commons](https://innersourcecommons.org/) survey and from research partnerships (e.g. with universities, [FINOS](https://finos.org/), and other organizations) as data becomes available. If you have case studies or data to share, we encourage you to contribute or get in touch via the [InnerSource Commons Slack](https://innersourcecommons.org/slack).

introduction/framework.md

Lines changed: 4 additions & 0 deletions
@@ -112,3 +112,7 @@ Open measurement gives a lot of benefits for our InnerSource community:
 
 - transparency, as trust generator for third parties and fairness
 for our InnerSource community
+
+## InnerSource and AI
+
+Organizations adopting AI in the workplace—from generative AI assistants to agentic coding tools—can benefit from InnerSource in new ways. Shared repositories, clear boundaries, and collaborative practices help both humans and AI work with the right context and reuse. See the [InnerSource and AI](/innersource-and-ai/innersource-and-ai.md) section for why it matters, how to shape repositories and practices for AI, and what risks and guardrails to consider.

tooling/innersource-tooling.md

Lines changed: 2 additions & 0 deletions
@@ -1,5 +1,7 @@
 # InnerSource with GitHub
 
+For how tooling and workflow integration support AI-assisted development and InnerSource together, see the [InnerSource and AI](/innersource-and-ai/innersource-and-ai.md) section, in particular [Shaping Repositories and Practices for AI](/innersource-and-ai/shaping-for-ai.md).
+
 ## Effective InnerSource Strategies and Configuration for GitHub
 
 This documentation is a compilation of the essential settings and strategies necessary for implementing InnerSource inside an organization. It encompasses both overall strategic elements and specific points relating to SCM configuration.
