innersource-and-ai/risks-and-guardrails.md

AI is the ultimate InnerSource contributor. Like any external contributor, AI agents generate code that must be reviewed, validated, and integrated thoughtfully into your systems. The same InnerSource practices that enable trusted external contributions—code review, clear guidelines, transparent decision-making, and systems thinking—are exactly what you need to safely and sustainably adopt AI in development.

Adopting AI without these guardrails can deliver short-term gains in speed and productivity, but at the cost of long-term risks to quality, security, and maintainability. The good news is that if your organization has built a strong InnerSource culture, you already have the foundations in place.

## Short-term speed vs. long-term risk

AI coding tools can deliver impressive short-term productivity gains. The danger is that teams take on more risk than they realize: releasing AI-generated content with fewer human reviews, skipping tests, or accepting code they do not fully understand. Those gains erode over time as technical debt, security vulnerabilities, and maintenance burden accumulate. InnerSource practices such as mandatory code review, clear ownership, and contribution guidelines act as a natural brake, ensuring that speed does not come at the expense of reliability.

## Mitigating AI slop

"AI slop" refers to low-quality, generic, or incorrect content produced by AI systems without adequate human oversight. In a development context, this can mean boilerplate code that does not fit the project's conventions, misleading documentation, or subtly incorrect implementations. InnerSource's emphasis on transparency—keeping things traceable and open for inspection—directly mitigates this risk. When contributions (whether from humans or AI) go through visible review processes in shared repositories, quality issues are caught earlier, and patterns of slop become visible to the community.

## Defining boundaries for proprietary knowledge


Involving stakeholders and keeping development transparent supports responsible AI deployment. When decisions about tools, patterns, and policies are visible and discussable, teams can align on what is acceptable and what is not. This aligns with InnerSource principles of openness and collaboration and helps prevent AI from being used in ways that conflict with organizational values or compliance requirements.

## Walled gardens and uneven access to AI tooling

Access to AI tools varies across companies. Some teams have broad access, while others face restrictions due to API limits, approval delays, or group-specific entitlements. This risks creating "walled gardens" within InnerSource, where collaboration depends more on access to tools than on the merit of the work or the needs of the community.

An InnerSource Program Office's (ISPO's) role mirrors an Open Source Program Office's (OSPO's): not to deploy tooling, but to track access disparities affecting collaboration. When contribution patterns shift due to faster iteration by some teams, the ISPO should highlight this as a signal. It can then recommend policies to address uneven access: contribution norms without AI dependence, universal review standards, and guidance on equitable collaboration.

## Measuring the cost of AI adoption

Organizations are spending heavily on AI tooling, often while simultaneously being asked to reduce costs elsewhere. Program leads should help their organizations measure the actual costs of AI adoption: licensing, compute, tokens, platform engineering effort, and the opportunity cost of experiments that do not land. InnerSource components that are well-maintained reduce these costs by preventing agents from regenerating solutions that already exist. Making savings visible, along with AI's productivity gains, helps leaders decide where to invest.
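As a toy illustration of making these costs and savings visible, the sketch below tallies the cost categories named above against an estimate of reuse savings. All figures are invented for illustration; they are not benchmarks.

```python
# Invented illustrative numbers: monthly cost, per 100 developers.
costs = {
    "licenses": 100 * 39.0,           # per-seat tooling licenses
    "inference_tokens": 1200.0,       # metered compute/API usage
    "platform_engineering": 2500.0,   # fraction of an engineer's time
    "failed_experiments": 800.0,      # opportunity cost of experiments that do not land
}

# Estimated savings from agents reusing well-maintained InnerSource
# components instead of regenerating solutions that already exist.
reuse_savings = 1500.0

total_cost = sum(costs.values())
net = total_cost - reuse_savings
print(f"total={total_cost:.0f} net={net:.0f}")  # → total=8400 net=6900
```

Even a rough model like this gives leaders a shared baseline: if reuse savings grow while token spend stays flat, that is a signal the InnerSource investment is paying off.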

## Designing for agent accessibility alongside human contributors and consumers

InnerSource offices see two main audiences: contributors submitting code and consumers relying on outputs. Agents now constitute a third category, capable of contributing code and independently consuming interfaces; project standards should be updated accordingly.

An ISPO should expand its stewardship to include agent accessibility by encouraging projects with documented processes, clear contribution guidelines, and discoverable interfaces, making them more approachable. It can prompt teams to consider whether agents can find, understand, and contribute independently, alongside regular questions from contributors and consumers.

## Leading people and agents

As AI agents take on more development tasks, leaders face a new challenge: managing both people and AI agents. This goes beyond tooling decisions into questions of work design, accountability, and organizational structure. Who is responsible when an agent produces incorrect or harmful output? How do you balance workloads between human contributors and automated agents? How do you ensure that institutional knowledge continues to be built by people even as agents handle more of the routine work?

innersource-and-ai/shaping-for-ai.md

## Documentation and discoverability

InnerSource behaviors like solid READMEs, CONTRIBUTING guides, and architecture decision records are increasingly important when AI is in the loop. They help AI and people alike understand how to use and extend shared code correctly. Documentation that explains *why* decisions were made, not just *what* the code does, supports better AI-generated contributions and reduces misuse. Making repositories searchable and well-described also helps teams and tools find the right building blocks rather than reimplementing them.

Discoverability deserves special attention. In large organizations, teams frequently build duplicate solutions because they cannot find existing ones. This problem extends beyond code to data assets, enablement content, and operational knowledge. Program leads should work with platform teams to ensure that shared assets are consistently tagged, well-described, and surfaced through central search and recommendation tools. AI-powered chatbots and assistants can help with discoverability, but they are only as good as the content they can access—investing in publishing and indexing infrastructure pays dividends.
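To make the tagging point concrete, here is a minimal sketch of a central asset index with naive term matching. The asset names, fields, and search logic are invented for illustration; no specific product's API is implied.

```python
from dataclasses import dataclass, field


@dataclass
class SharedAsset:
    """A reusable asset (code, data, docs) published to a central index."""
    name: str
    description: str
    tags: set[str] = field(default_factory=set)


def search(index: list[SharedAsset], query: str) -> list[SharedAsset]:
    """Return assets whose tags or description mention any query term."""
    terms = {t.lower() for t in query.split()}
    hits = []
    for asset in index:
        haystack = {tag.lower() for tag in asset.tags}
        haystack |= set(asset.description.lower().split())
        if terms & haystack:  # any overlap counts as a match
            hits.append(asset)
    return hits


index = [
    SharedAsset("retry-lib", "HTTP retry helpers with backoff", {"http", "resilience"}),
    SharedAsset("auth-docs", "How we do service-to-service auth", {"security", "auth"}),
]

print([a.name for a in search(index, "http backoff")])  # → ['retry-lib']
```

Notice that the untagged, poorly described asset is the one a search (human or AI) will miss; consistent tagging is what makes the index, and any assistant built on it, useful.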

## Playbooks for people and agents

Playbooks that describe how to contribute—and what to avoid—benefit both human contributors and AI-assisted workflows. The community is starting to develop playbooks that work for both. As these emerge, they will be reflected in the InnerSource Patterns book and linked from this section. The goal is to make it easy for contributors and tools to follow the same rules and expectations.

## Beyond portals: integrating InnerSource into the flow of work

Early InnerSource initiatives often relied on a single portal where teams had to find, navigate, and access reusable components. While this can work for motivated contributors, it often fails at larger scales. A better approach is to embed reusable components and documentation within the tools where work already happens, such as IDEs, workflows, or chat platforms. Moving away from a portal-centric model does not eliminate the need for a central index; it shifts the index's role from being the sole point of interaction to powering these embedded experiences.

## Removing barriers at the platform engineering layer

AI adoption raises the stakes for frictionless contribution workflows. A human contributor who encounters a confusing approval gate can ask a colleague for help; an agent that hits the same gate will stall, fail silently, or open a malformed PR. When agentic tools are submitting contributions at scale—running automated fixes, proposing dependency updates, or handling issue triage—every point of unnecessary friction is amplified. Heavyweight review requirements, unclear ownership, or undocumented merge criteria that seasoned engineers navigate by instinct become hard blockers for automated workflows.

Program leads should work with platform engineering teams to ensure contribution paths are explicit and machine-navigable: clear criteria for what makes a PR acceptable, documented ownership so agents know who to route requests to, and lightweight approval flows for low-risk changes. The organizations best positioned to benefit from agentic contribution are those whose InnerSource platforms are already streamlined for human contributors—AI makes it impossible to defer the cost of ignoring that work.
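One way to make contribution paths machine-navigable is a manifest that an agent can read before opening a PR. The sketch below assumes a hypothetical manifest format: the field names (`owners`, `lightweight_paths`, `pr_requirements`) and team handles are invented for illustration, not an existing standard.

```python
import json

# Hypothetical machine-readable contribution manifest an agent could
# fetch from a repository before opening a PR.
MANIFEST = json.loads("""
{
  "owners": {"docs/": "@docs-team", "src/": "@core-maintainers"},
  "lightweight_paths": ["docs/", "deps/"],
  "pr_requirements": ["tests_pass", "changelog_entry"]
}
""")


def route_owner(path: str, manifest: dict) -> str:
    """Find the owner whose path prefix matches the changed file."""
    for prefix, owner in manifest["owners"].items():
        if path.startswith(prefix):
            return owner
    return "@default-maintainers"


def review_track(paths: list[str], manifest: dict) -> str:
    """Changes touching only lightweight paths qualify for the lighter flow."""
    light = manifest["lightweight_paths"]
    if all(any(p.startswith(prefix) for prefix in light) for p in paths):
        return "lightweight"
    return "full-review"


print(route_owner("docs/setup.md", MANIFEST))               # → @docs-team
print(review_track(["docs/a.md", "docs/b.md"], MANIFEST))   # → lightweight
print(review_track(["src/api.py"], MANIFEST))               # → full-review
```

The same information that helps an agent here (explicit ownership, explicit acceptance criteria) also helps a new human contributor, which is the underlying point: friction removed for one audience is removed for both.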

## Skills, plugins, and workflow integration

InnerSource can be integrated directly into coding workflows through skills, plugins, and tooling. When reuse and contribution are part of the daily environment—for example, by suggesting existing InnerSource components when starting a new feature—both developers and AI-assisted flows are more likely to reuse rather than duplicate. This is an area of active development; program leads can work with their tooling and platform teams to explore how to surface InnerSource projects and contribution paths where developers (and their tools) already work.

innersource-and-ai/why-innersource-matters-with-ai.md

## The shifting role of the developer

Agentic coding—sometimes called “vibe coding”—is changing what it means to be a software developer. The role is shifting from writing code to providing natural-language instructions and overseeing the work of automated agents. Teams are beginning to deploy agent teams where specialized agents handle quality engineering, project management, frontend, and backend work, interacting directly with tools like Jira and GitHub.

This makes software architecture and orchestration skills more important than ever. Understanding system boundaries, interfaces, and integration points is essential when you are guiding agents rather than typing code yourself. InnerSource's emphasis on clear ownership, well-documented interfaces, and collaborative governance gives developers and their agents the structure they need to operate effectively across team boundaries.

## Reducing context for AI

AI systems and coding agents work best within a clearly defined, bounded context. Well-scoped InnerSource projects with explicit interfaces limit the context and surface area an agent must reason over, which improves reliability and reduces the likelihood of hallucinations or misuse of code outside the intended scope. Because current models have limits on how much code and conversation history they can process at once, modular, well-boundaried repositories become even more valuable. Shaping your repositories for both humans and AI is a theme we explore in [Shaping Repositories and Practices for AI](shaping-for-ai.md).
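As a sketch of what reducing context can mean in practice, an agent harness might load only the files matching a project's declared public surface into the model's limited context window. The `DECLARED_INTERFACE` manifest entry and file names below are invented for illustration.

```python
from fnmatch import fnmatch

# Hypothetical: a well-scoped project declares its public surface, so an
# agent loads only those files instead of the whole repository.
PROJECT_FILES = [
    "api/client.py", "api/types.py", "internal/cache.py",
    "internal/db/migrations.sql", "README.md",
]
DECLARED_INTERFACE = ["api/*", "README.md"]  # invented manifest entry


def context_files(files: list[str], interface_globs: list[str]) -> list[str]:
    """Keep only files matching the declared interface patterns."""
    return [f for f in files
            if any(fnmatch(f, g) for g in interface_globs)]


print(context_files(PROJECT_FILES, DECLARED_INTERFACE))
# → ['api/client.py', 'api/types.py', 'README.md']
```

The internal implementation files stay out of the prompt, so the agent cannot accidentally couple its output to code outside the project's intended interface.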

## Reuse and avoiding duplication


Software reuse is only part of the picture. Organizations also benefit from capturing and sharing non-code reusable assets: patterns, enablement content, tutorials, architecture decisions, and operational learnings. These assets are valuable both for human contributors and as training or grounding material for AI systems. Rather than requiring individuals to read through long tutorials, AI tools can surface the right knowledge at the right time—but only if that knowledge has been captured, organized, and made accessible in the first place.

Focus especially on capturing the historical knowledge of key people such as domain experts, senior engineers, and architects, whose insights often live only in memory, chat threads, or scattered documents. Making this knowledge discoverable lets agent workflows surface the right expertise at the right time, without humans needing to know who to ask.

InnerSource practices provide a natural framework for this. Open contribution models, visible repositories, and shared publishing tools encourage teams to document what they learn rather than keeping it siloed. Organizations that invest in capturing non-code knowledge will find their AI systems are better grounded in organizational context and more useful to the people they serve.

## Reimagining collaboration over the next five years

Organizations are rethinking how employees collaborate as AI and agentic workflows transform work. InnerSource programs can either lead this change proactively or wait for it to be imposed top-down by CIOs or others. Leading proactively lets programs shape practices that suit their organization, rather than retrofitting around decisions made elsewhere.

A five-year vision emphasizes agentic experiences in which tools anticipate needs and offer seamless access to information and expertise. The focus shifts from raw search to curated experiences and collaboration. InnerSource practices that capture and share expert knowledge form the foundation.

Be realistic about AI adoption. Enthusiasm is concentrated in the technology industry, and broader awareness is still limited. Program leads should plan for a long adoption curve, focus on serving current users, and stay open to future adopters.

## Platforms ready for InnerSource

Platforms and tooling play a crucial role in enabling InnerSource at scale. As organizations adopt AI and agentic workflows, collaboration platforms must support discovery, visibility, and contribution across team boundaries. Platforms that make it easy to find reusable components, understand interfaces, and submit improvements reduce friction and encourage participation. Investment in platform capabilities—search, documentation, governance workflows, and integration with development tools—directly multiplies the effectiveness of InnerSource practices in an AI-augmented environment.

Discoverability is a particular challenge. Without central guidance and effective search capabilities, multiple teams within the same organization may independently build similar platforms or solutions, unaware of one another's work. This duplication is costly and undermines the benefits of InnerSource. Program leads should invest in mechanisms that make shared assets—code, data, documentation, and tooling—easy to find across the organization. AI-powered search and recommendation tools can help, but they work best when the underlying assets are well-described, consistently tagged, and published to a central location.

## The importance of a solid data foundation


## Evidence and further reading

AI tooling and organizational practices are evolving. This section will be updated with results from the [InnerSource Commons](https://innersourcecommons.org/) survey and from research partnerships (e.g., with universities, [FINOS](https://finos.org/), and other organizations) as data becomes available. If you have case studies or data to share, we encourage you to contribute or get in touch via the [InnerSource Commons Slack](https://innersourcecommons.org/slack).