feat: ispo working group risks-shaping-why updates #115
@@ -2,15 +2,15 @@
AI is the ultimate InnerSource contributor. Like any external contributor, AI agents generate code that must be reviewed, validated, and integrated thoughtfully into your systems. The same InnerSource practices that enable trusted external contributions—code review, clear guidelines, transparent decision-making, and systems thinking—are exactly what you need to safely and sustainably adopt AI in development.
Adopting AI without these guardrails can deliver short-term gains in speed and productivity, but at the cost of long-term risks to quality, security, and maintainability. The good news is that if your organization has built a strong InnerSource culture, you already have the foundations in place.
## Short-term speed vs. long-term risk
AI coding tools can deliver impressive short-term productivity gains, but teams may take on more risk than they realize: releasing AI-generated content with fewer human reviews, skipping tests, or accepting code they do not fully understand. These gains can erode as technical debt, security vulnerabilities, and maintenance burden accumulate. InnerSource practices such as mandatory code review, clear ownership, and contribution guidelines help ensure that speed does not come at the expense of reliability.
## Mitigating AI slop
"AI slop" refers to low-quality, generic, or incorrect content produced by AI systems without adequate human oversight. In a development context, this can mean boilerplate code that does not fit the project's conventions, misleading documentation, or subtly incorrect implementations. InnerSource's emphasis on transparency—keeping things traceable and open for inspection—directly mitigates this risk. When contributions (whether from humans or AI) go through visible review processes in shared repositories, quality issues are caught earlier, and patterns of slop become visible to the community.
## Defining boundaries for proprietary knowledge
@@ -22,6 +22,23 @@ The goal is to separate human creation outcomes (the knowledge and artifacts tha
Involving stakeholders and keeping development transparent supports responsible AI deployment. When decisions about tools, patterns, and policies are visible and discussable, teams can align on what is acceptable and what is not. This aligns with InnerSource principles of openness and collaboration and helps prevent AI from being used in ways that conflict with organizational values or compliance requirements.
## Walled gardens and uneven access to AI tooling
Access to AI tools varies across companies. Some teams have broad access, while others face restrictions due to API limits, approval delays, or access granted only to specific groups. This risks creating "walled gardens" within InnerSource, where collaboration depends more on access to tools than on the merit of the work or the needs of the community.
An InnerSource Program Office's (ISPO's) role mirrors an Open Source Program Office's (OSPO's): not to deploy tooling, but to track access disparities affecting collaboration. When contribution patterns shift due to faster iteration by some teams, the ISPO should highlight this as a signal. It can then recommend policies to address uneven access: contribution norms without AI dependence, universal review standards, and guidance on equitable collaboration.
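The kind of signal an ISPO might watch for can be made concrete with a small sketch. The team names, contribution counts, and 10% threshold below are illustrative assumptions, not a prescribed metric:

```python
# Illustrative sketch (hypothetical data and threshold): flag teams whose
# share of overall contributions shifted sharply between two periods --
# e.g. before and after an AI tooling rollout -- as a possible sign of
# uneven access rather than a change in the merit of the work.

def share(counts: dict[str, int]) -> dict[str, float]:
    """Convert per-team contribution counts into fractions of the total."""
    total = sum(counts.values())
    return {team: n / total for team, n in counts.items()}

def flag_shifts(before: dict[str, int], after: dict[str, int],
                threshold: float = 0.10) -> list[str]:
    """Return teams whose contribution share moved by more than `threshold`."""
    b, a = share(before), share(after)
    return sorted(t for t in b if abs(a.get(t, 0.0) - b[t]) > threshold)

# Example with made-up numbers: one team iterates much faster after rollout.
flagged = flag_shifts({"platform": 50, "payments": 50},
                      {"platform": 80, "payments": 20})
```

A flagged team is only a signal to investigate, not a verdict; the ISPO would still need to ask whether the shift reflects tooling access or legitimate changes in workload.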
@@ -10,13 +10,13 @@ This relevance extends beyond code. Organizations are discovering that non-code
## The shifting role of the developer
Agentic coding—sometimes called “vibe coding”—is changing what it means to be a software developer. The role is shifting from writing code to providing natural-language instructions and overseeing the work of automated agents. Teams are beginning to deploy agent teams where specialized agents handle quality engineering, project management, frontend, and backend work, interacting directly with tools like Jira and GitHub.
This makes software architecture and orchestration skills more important than ever. Understanding system boundaries, interfaces, and integration points is essential when you are guiding agents rather than typing code yourself. InnerSource's emphasis on clear ownership, well-documented interfaces, and collaborative governance gives developers and their agents the structure they need to operate effectively across team boundaries.
## Reducing context for AI
AI systems and coding agents achieve optimal performance when operating within a clearly defined and bounded context. Well-scoped InnerSource projects with explicit interfaces limit the context and surface area AI systems must reason over, thereby enhancing reliability and reducing the likelihood of hallucinations or misuse beyond their intended scope. Because current models struggle with large codebases and extensive conversations due to context size limits, maintaining modular and well-boundaried repositories becomes even more important. Designing your repositories to accommodate both human developers and AI aligns with the themes discussed in [Shaping Repositories and Practices for AI](shaping-for-ai.md).
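The context-size constraint can be made tangible with a minimal sketch that estimates whether a repository subtree plausibly fits in one model context. The ~4-characters-per-token rule of thumb and the 128k-token budget are assumptions for illustration, not properties of any particular model:

```python
# Minimal sketch: approximate the token footprint of a repository subtree
# to gauge whether it fits an assumed model context window. Both constants
# below are assumptions -- adjust them for the model you actually use.
from pathlib import Path

CHARS_PER_TOKEN = 4              # rough heuristic for English text and code
CONTEXT_BUDGET_TOKENS = 128_000  # assumed context window size

def estimate_tokens(root: str, suffixes: tuple[str, ...] = (".py", ".md")) -> int:
    """Sum an approximate token count over the source files under `root`."""
    total_chars = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            total_chars += len(path.read_text(encoding="utf-8", errors="ignore"))
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root: str) -> bool:
    """True if the whole subtree plausibly fits in one model context."""
    return estimate_tokens(root) <= CONTEXT_BUDGET_TOKENS
```

A repository that fails this kind of check is a candidate for the modular, well-boundaried split the paragraph above argues for, so that an agent can work on one scoped module at a time.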
Copilot (AI) commented on Apr 18, 2026:

"CIOS" appears to be a typo; the plural of CIO should be "CIOs".
Organizations are reconsidering employee collaboration as AI and agentic workflows transform work. InnerSource programs can either lead this change proactively or wait for top-down imposition by CIOs or others. Proactive leadership enables shaping practices that suit their organization, rather than retrofitting based on external decisions.
Copilot (AI) commented on Apr 18, 2026:

"info" is an informal abbreviation in an otherwise formal document. Consider spelling out "information" for consistency and clarity.
A five-year vision emphasizes agentic experiences where tools predict needs and offer seamless access to information and expertise. Future focus shifts from raw search to curated experiences and collaboration. InnerSource practices that capture and share expert knowledge form the foundation.