feat: ispo working group risks-shaping-why updates #115

Open
jeffabailey wants to merge 2 commits into main from risks-shaping-why-updates

Conversation

@jeffabailey
Collaborator

No description provided.

Copilot AI review requested due to automatic review settings April 18, 2026 01:31
@jeffabailey jeffabailey requested a review from a team as a code owner April 18, 2026 01:31
@github-actions

github-actions bot commented Apr 18, 2026

PR Preview Action v1.8.1

QR code for preview link

🚀 View preview at
https://InnerSourceCommons.github.io/managing-innersource-projects/pr-preview/pr-115/

Built to branch gh-pages at 2026-04-18 01:41 UTC.
Preview will be ready when the GitHub Pages deployment is complete.

Contributor

Copilot AI left a comment


Pull request overview

Updates InnerSource + AI guidance documents with refined phrasing and additional sections focused on collaboration, discoverability, and AI adoption risks/guardrails.

Changes:

  • Refines and expands narrative in “Why InnerSource Matters When Adopting AI,” including new forward-looking collaboration guidance.
  • Adds guidance in “Shaping Repositories and Practices for AI” on moving beyond portals and reducing platform-engineering friction for agentic contributions.
  • Expands “Risks and Guardrails” with new sections on uneven AI tooling access, cost measurement, and agent accessibility considerations.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| innersource-and-ai/why-innersource-matters-with-ai.md | Rewording for clarity; adds new collaboration section and additional knowledge-capture guidance. |
| innersource-and-ai/shaping-for-ai.md | Adds new sections on embedding InnerSource into flow-of-work and reducing platform contribution friction for agents. |
| innersource-and-ai/risks-and-guardrails.md | Refines intro text; adds new risk/guardrail sections on access disparity, cost, and agent accessibility. |


## Reducing context for AI

AI systems and coding agents work best when they have a well-scoped, well-boundaried context. InnerSource projects that are clearly scoped—with explicit interfaces and a clear purpose—give AI a manageable surface area. That improves reliability and reduces the chance of AI “hallucinating” or misusing code from outside the intended scope. Context size remains a practical impediment: current models have limits on how much code and conversation history they can process at once, which makes well-boundaried, modular repositories even more valuable. Shaping your repositories for both humans and AI is a theme we explore in [Shaping Repositories and Practices for AI](shaping-for-ai.md).
AI systems and coding agents achieve optimal performance when operating within a clearly defined and bounded context. Well-scoped InnerSource projects with explicit interfaces restrict the accessibility of AI, thereby enhancing reliability and reducing the likelihood of hallucinations or misuse beyond their intended scope. Because current models struggle with large codebases and extensive conversations due to context size limits, maintaining modular and well-boundaried repositories becomes even more important. Designing your repositories to accommodate both human developers and AI aligns with the themes discussed in [Shaping Repositories and Practices for AI](shaping-for-ai.md).

Copilot AI Apr 18, 2026


The phrase "restrict the accessibility of AI" reads like an access-control or a11y concept; here the intent seems to be limiting the scope/surface area (context) available to the model. Consider rephrasing to something like limiting the context/surface area the agent must reason over, to avoid ambiguity.

Suggested change
AI systems and coding agents achieve optimal performance when operating within a clearly defined and bounded context. Well-scoped InnerSource projects with explicit interfaces restrict the accessibility of AI, thereby enhancing reliability and reducing the likelihood of hallucinations or misuse beyond their intended scope. Because current models struggle with large codebases and extensive conversations due to context size limits, maintaining modular and well-boundaried repositories becomes even more important. Designing your repositories to accommodate both human developers and AI aligns with the themes discussed in [Shaping Repositories and Practices for AI](shaping-for-ai.md).
AI systems and coding agents achieve optimal performance when operating within a clearly defined and bounded context. Well-scoped InnerSource projects with explicit interfaces limit the context and surface area AI systems must reason over, thereby enhancing reliability and reducing the likelihood of hallucinations or misuse beyond their intended scope. Because current models struggle with large codebases and extensive conversations due to context size limits, maintaining modular and well-boundaried repositories becomes even more important. Designing your repositories to accommodate both human developers and AI aligns with the themes discussed in [Shaping Repositories and Practices for AI](shaping-for-ai.md).

Copilot uses AI. Check for mistakes.

## Reimagining collaboration over the next five years

Organizations are reconsidering employee collaboration as AI and agentic workflows transform work. InnerSource programs can either lead this change proactively or wait for top-down imposition by CIOS or others. Proactive leadership enables shaping practices that suit their organization, rather than retrofitting based on external decisions.

Copilot AI Apr 18, 2026


"CIOS" appears to be a typo; the plural of CIO should be "CIOs".

Suggested change
Organizations are reconsidering employee collaboration as AI and agentic workflows transform work. InnerSource programs can either lead this change proactively or wait for top-down imposition by CIOS or others. Proactive leadership enables shaping practices that suit their organization, rather than retrofitting based on external decisions.
Organizations are reconsidering employee collaboration as AI and agentic workflows transform work. InnerSource programs can either lead this change proactively or wait for top-down imposition by CIOs or others. Proactive leadership enables shaping practices that suit their organization, rather than retrofitting based on external decisions.



A five-year vision emphasizes agentic experiences where tools predict needs and offer seamless access to info and expertise. Future focus shifts from raw search to curated experiences and collaboration. InnerSource practices that capture and share expert knowledge form the foundation.

Copilot AI Apr 18, 2026


"info" is an informal abbreviation in an otherwise formal document. Consider spelling out "information" for consistency and clarity.

Suggested change
A five-year vision emphasizes agentic experiences where tools predict needs and offer seamless access to info and expertise. Future focus shifts from raw search to curated experiences and collaboration. InnerSource practices that capture and share expert knowledge form the foundation.
A five-year vision emphasizes agentic experiences where tools predict needs and offer seamless access to information and expertise. Future focus shifts from raw search to curated experiences and collaboration. InnerSource practices that capture and share expert knowledge form the foundation.


Access to AI tools varies across companies. Some teams have broad access, while others face restrictions due to API limits, approval delays, or group-specific access. This risks creating 'walled gardens' in InnerSource, where collaboration depends more on access to tools than on work merit or community needs.

An ISPO's role mirrors an OSPO's: not to deploy tooling, but to track access disparities affecting collaboration. When contribution patterns shift due to faster iteration by some teams, the ISPO should highlight this as a signal. It can then recommend policies to address uneven access: contribution norms without AI dependence, universal review standards, and guidance on equitable collaboration.

Copilot AI Apr 18, 2026


This line introduces "ISPO" and "OSPO" without expanding the acronyms. Since this may be read out of context, consider defining them on first use (e.g., "InnerSource Program Office (ISPO)" and "Open Source Program Office (OSPO)").

Suggested change
An ISPO's role mirrors an OSPO's: not to deploy tooling, but to track access disparities affecting collaboration. When contribution patterns shift due to faster iteration by some teams, the ISPO should highlight this as a signal. It can then recommend policies to address uneven access: contribution norms without AI dependence, universal review standards, and guidance on equitable collaboration.
An InnerSource Program Office's (ISPO's) role mirrors an Open Source Program Office's (OSPO's): not to deploy tooling, but to track access disparities affecting collaboration. When contribution patterns shift due to faster iteration by some teams, the ISPO should highlight this as a signal. It can then recommend policies to address uneven access: contribution norms without AI dependence, universal review standards, and guidance on equitable collaboration.

Comment thread innersource-and-ai/risks-and-guardrails.md Outdated
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>


2 participants