Agile Lab — Responsible AI Usage Policy
Introduction
At Agile Lab, we embrace AI as a tool to enhance execution, accelerate learning, and amplify productivity, never to replace critical thinking, engineering rigor, or our commitment to our clients’ trust.
This policy sets principles, rules, and best practices for using AI tools — including large language models (LLMs) — in a way that aligns with our strategic goals, our high engineering standards, and our obligations under law and client agreements.
This policy exists to ensure that AI is used responsibly, safely, and effectively, protecting our clients, our colleagues, and our reputation.
Cultural Commitment
We believe AI can empower us to work faster and learn more — but it does not replace our engineering discipline, creativity, or professional responsibility.
By following these principles, we protect our clients, our colleagues, and our reputation as trusted engineers.
Remember: If you’re not sure — don’t share. Ask first.
The Risk of Over-Reliance
Recent studies have shown that heavy dependence on AI tools can reduce our cognitive engagement and problem-solving capacity:
- Stanford University (2023) found that participants relying on AI assistants performed worse on novel problem-solving tasks, suggesting a reduction in independent thinking and creativity.
- Nature Human Behaviour (2023) reported that using AI for writing reduced users’ ability to recall information and generate original ideas afterwards.
(Sources: Stanford HAI, Nature Human Behaviour, 2023)
While AI accelerates execution, over-relying on it can impoverish our human capital – our ability to reason deeply, invent solutions, and understand complex systems. For Agile Lab, these capabilities are our core strength and market differentiation.
Our Strategic Focus: Human Capital Development
In an era where tools and frameworks are everywhere, and AI excels at narrow, vertical tasks, what really makes us valuable isn't just knowing one thing really well, but being able to connect the dots across domains, understand how everything fits together in real-world systems, and design solutions that actually work.
At Agile Lab, we aim to grow T-shaped engineers:
- People with a solid core in software engineering, distributed systems, and quality practices
- And with enough breadth to understand architecture, business models, and how their code impacts the bigger system
This means focusing our growth on:
- Seeing the big picture: knowing why we’re building something, how it delivers value, and what the real goals are – beyond just “making it work”
- Understanding system and enterprise architecture: so we can design software that scales, is maintainable, and stays secure in the messy reality of production
- Learning patterns and practices: not just copying solutions, but knowing the design trade-offs, and when to apply each pattern for security, cost, and long-term evolvability
We’re not asking everyone to stop loving code. On the contrary – writing great code becomes way more meaningful when you know how it fits into a bigger system, solves real problems, and supports goals that matter.
Providing Context Responsibly
AI outputs are only as good as the context we provide.
It is the responsibility of each professional to ensure that any prompt or input given to AI systems includes:
- Precise, clear context about the problem, constraints, and objectives
- Systemic constraints and dependencies relevant to the task or domain
- Unique considerations, nuances, and insights that draw from our expertise
- Connections between dots that a general-purpose AI model cannot infer without explicit guidance
Without context rooted in our expertise, AI outputs will be generic, standard, and disconnected from the real problem. This is where we make the difference as professionals: by ensuring AI works with the right information, grounded in reality, and with the proper quality standards in mind.
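As a concrete illustration, here is a minimal sketch of how such context might be assembled into a prompt. The scenario, field names, and wording are hypothetical examples, not a prescribed Agile Lab template:

```python
# Hypothetical example: the scenario and field names are illustrative,
# not a prescribed Agile Lab template.
context = {
    "Problem": "Nightly batch job misses its 02:00 SLA on month-end volumes",
    "Constraints": "Spark 3.x on-premises; no schema changes allowed this quarter",
    "Dependencies": "Downstream reporting jobs read the output table at 04:00",
    "Expert insight": "Month-end volume is ~10x normal and skews onto a single key",
    "Objective": "Suggest targeted optimizations with trade-offs, not a rewrite",
}

# Assemble the structured context into a single prompt.
prompt = "\n".join(f"{field}: {detail}" for field, detail in context.items())
print(prompt)
```

Compare this with a bare prompt like "make my Spark job faster": without the constraints and the insight about key skew, the model can only return generic advice.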
The Power of Human Relationships
Our impact does not come only from technical skill. It is amplified by our human intelligence:
- Building meaningful relationships with colleagues, stakeholders, and clients
- Developing emotional intelligence: awareness, empathy, and the ability to read the room
- Cultivating active listening: understanding what is really needed, beyond what is requested
- Focusing on what truly matters: what the software or data product must achieve, what characteristics it must have, what risks it mitigates, what opportunities it enables
- Anticipating reactions: thinking ahead to how users, clients, and peers will interpret, use, and respond to our solutions
AI as an Amplifier, Not a Replacement
AI can assist us in execution, accelerate our learning, and inspire alternative viewpoints – but it cannot create purpose, provide context, or build relationships. These remain uniquely human capabilities.
Therefore, at Agile Lab, we commit to:
- Using AI to augment our thinking, not to replace it
- Continuing to invest in our human capital, growing our ability to connect dots, understand complexity, and shape impactful solutions
- Protecting the quality of our thinking, our creativity, and our relationships, as they are the foundation of our excellence
- Challenging both the context we provide and the outputs we receive: an effective technique is to always ask "What are the drawbacks of this solution?" or "What problems do you see with this?", whether it's a piece of code, an architectural proposal, or a document draft. This habit deepens our understanding, reveals trade-offs, and prevents us from overfitting to the most probable or default answer, enabling better decisions and true learning.
Remember: If you’re not sure – pause and think first, then ask.
Guiding Principles
AI is for execution and learning — not for thinking or decision-making
Use AI to automate repetitive tasks, generate drafts, speed up documentation, and support continuous learning.
Do not rely on AI to make architectural or strategic decisions, or to replace human judgment.
Human accountability is non-negotiable
Every output produced or assisted by AI must be reviewed, validated, and owned by a responsible team member.
AI cannot be a substitute for engineering due diligence, secure design, or quality assurance. We stand in front of our customers and stakeholders, and we take responsibility.
Security and confidentiality are paramount
Treat all AI interactions as potential disclosures. Never input confidential client or company data unless fully approved and protected.
Any breach of this principle puts client trust — and our reputation — at risk.
Solid foundations come first
AI is powerful only in the hands of professionals with strong fundamentals in software engineering, system design, troubleshooting, and secure coding.
These are the qualities we look for in candidates and develop in our teams.
Transparency and compliance build trust
We use AI openly, responsibly, and always in line with GDPR, NDAs, and our contractual obligations.
When AI contributes significantly to a deliverable, this must be traceable internally.
Best Practices for Safe AI Use
- Mask sensitive data. Remove client names, internal IDs, keys, or URLs, and use generic placeholders (see the sketch after this list).
- Never assume outputs are accurate. Always verify facts, run tests, and check for security issues — especially in AI-generated code.
- Check for licensing issues. Treat AI output like third-party code: review it for originality and compliance.
- Document significant AI assistance. Note in commit messages or docs if AI generated a major section.
- Get approval for unusual uses. If in doubt, escalate to your manager or the designated AI governance contact.
- Stay informed. This policy will evolve as technology and regulations change — keep up to date and ask when unsure.
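As one practical illustration of the masking point, here is a minimal sketch in Python. The patterns and placeholder names are hypothetical examples, not an approved or exhaustive tool; adapt and extend them to whatever identifiers actually appear in your data:

```python
import re

# Hypothetical patterns for illustration only; real masking must cover
# the identifiers that actually appear in your data.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "URL": re.compile(r"https?://[^\s,]+"),
}

def mask_sensitive(text: str) -> str:
    """Replace sensitive tokens with generic placeholders before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(mask_sensitive(
    "Endpoint https://internal.example-client.com/api, "
    "key sk-abcdef1234567890, contact jane.doe@example-client.com"
))
# Endpoint <URL>, key <API_KEY>, contact <EMAIL>
```

The same habit supports the documentation point: a commit message line such as "AI-assisted: initial draft generated with <tool>, reviewed and validated by <author>" (the wording is an example, not a mandated format) keeps the contribution traceable.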
Policy
1. Client Data, Documents, and Code
Prohibited: Uploading any client data, documents, or software — in any form — to external AI tools (including private subscriptions) without explicit, written consent from the client. At this stage there are no exceptions based on assumed security features of an AI vendor.
Allowed: Using AI tools and LLM subscriptions that the client provides, under the client's related usage policies. In that case, take care not to upload Agile Lab code, information, or data into the client's subscription.
2. Agile Lab Internal Data and Code
- Allowed: Public domain materials (e.g. the public Handbook, open knowledge base).
- Restricted: Proprietary IP (e.g. Witboost source code, internal architecture docs, employee data, financial data).
- Upload only if:
  - The AI vendor contractually guarantees no data retention, no training on input, and GDPR compliance (e.g. enterprise agreements with strict Data Processing Agreements).
  - You have explicit, written approval from management.
- The individual user remains personally responsible for misuse.
Use of Personal AI Subscriptions (e.g. ChatGPT Plus, Claude Pro, Gemini Pro)
Use of personal AI subscriptions is permitted temporarily under the following conditions:
- The subscription is configured to disable data retention and training on input data.
- Users are personally responsible for ensuring these settings are active and up to date.
- The AI is used exclusively on non-sensitive, non-confidential, and non-IP-protected information, such as:
  - Public domain knowledge or documentation
  - General brainstorming, summaries, or explanations
  - Internal Agile Lab information classified as public or non-sensitive
- Never use personal AI subscriptions for:
  - Client data, code, or documents
  - Agile Lab proprietary code or strategic documents
By using personal subscriptions for Agile Lab activities, the user accepts full responsibility for:
- Ensuring correct privacy settings
- Obtaining explicit approval from the company
Temporary allowance
This permission is temporary and will remain in place only until Agile Lab adopts an enterprise-grade AI stack that fully addresses security, data privacy, and compliance requirements at organizational level.
At that point, the use of personal AI subscriptions for work activities will be re-evaluated and potentially prohibited.
Guide to AI Tools
AI tool comparison
| Tool | Training on data | Data retention | Contractual guarantees (DPA, GDPR) | Recommended use |
|---|---|---|---|---|
| ChatGPT Free | ❌ Used | Indefinite | ❌ None | ❌ Not allowed with Agile Lab data/info |
| ChatGPT Plus | ⚠️ Opt-out possible | Indefinite (min. ~30 days) | ❌ None | ✅ Allowed with Agile Lab data/info, excluding IP |
| Claude Free | ✅ No (default) | Up to 30 days | ❌ None | ❌ Not allowed with Agile Lab data/info |
| Claude Pro | ✅ No (default) | Up to 30 days | ❌ None | ✅ Allowed with Agile Lab data/info, excluding IP |
| Gemini Free | ❌ Used | Unclear | ❌ None | ❌ Not allowed with Agile Lab data/info |
| Gemini Pro | ❌ Used (default) | Unclear | ❌ None | ❌ Not allowed with Agile Lab data/info |
🔑 Key Takeaways
- Free versions: no contractual guarantees, and data is used for training. Do not use them for work-related tasks that involve sensitive information; they may be used in all other cases.
- Plus/Pro versions:
  - Better privacy settings (Claude Pro is best; ChatGPT Plus requires opt-out), but no legal guarantees.
  - Suitable only for Agile Lab internal data and code, with the user taking responsibility for the settings and only after obtaining explicit, written authorization.
- Enterprise plans (not listed here) are required for:
  - No data training by default
  - Custom retention and compliance (DPA, GDPR, SOC 2)
Do not use tools that are not mentioned here without prior authorization.
Note: When in doubt, always use an enterprise-approved AI tool and ensure you mask or stub any potentially sensitive input.
Code Assistants (e.g. Cursor, VS Code AI plugins, Windsurf)
You may use code assistant tools only if both conditions are met:
1. The underlying LLM provider complies with this policy (e.g. it is an approved subscription with no data training or retention on your inputs).
2. The tool or plugin itself explicitly states, in its documentation or privacy policy, that it does not store, retain, or reuse your prompts or code for any purpose.
⚠️ Note:
If either of these conditions is not met, the tool is prohibited for Agile Lab activities.
Always ensure you have reviewed the tool’s privacy documentation before use, or submit an approval request via the AI Tool Approval Process if in doubt.
AI Tool Approval Form
When to submit
Submit this form before first use if:
- You want to use a new AI tool on internal data/info/code of Agile Lab
Where to submit
The form is located in BigData Sharepoint --> InternalIT --> FormForAIApproval
Form Fields
Please provide:
General purpose of use
Select one:
- Code Assistant (e.g. coding, refactoring, scaffolding)
- Office Automation (e.g. drafting emails, summarising documents)
- Learning & Research (e.g. tutorials, explanations, concept exploration)
AI tool or plugin name
LLM provider (e.g. OpenAI, Anthropic, Google Gemini)
Subscription type
Select one:
- Personal Free
- Personal Plus/Pro
- Enterprise (Agile Lab provided)
- Other (specify)
Type of internal asset/data/info to be uploaded
Describe clearly:
- Agile Lab internal documents (e.g. employee contracts, legal documents)
- Agile Lab proprietary IP (e.g. Witboost code)
Consent & accountability acknowledgement
Checkbox: “I acknowledge that I am personally responsible for ensuring compliance with the AI policy and for verifying outputs generated by this tool.”
What happens next
- Your request will be reviewed by Internal IT
- You will receive approval, further questions, or rejection within 5 working days.
- Approved requests are logged centrally for audit and compliance tracking.
⚠️ Important
Do not use any AI tool or plugin for Agile Lab activities before receiving written approval.
For questions, contact Internal IT.
AI Incident Reporting
If you become aware of any AI-related incident, data breach, misuse, or potential security risk, report it immediately to Internal IT or the CTO.
Timely reporting ensures we can protect clients, mitigate risks, and continuously improve our practices.
Risk Management
Connected Risks
Using AI tools carries risks including but not limited to:
- Data confidentiality breaches (e.g. leaking client or proprietary data to external systems)
- Intellectual property violations (e.g. reusing AI outputs containing copyrighted or licensed material)
- Security vulnerabilities (e.g. insecure code generated by AI assistants)
- Regulatory non-compliance (e.g. GDPR violations, breach of contractual obligations)
- Reputational damage to Agile Lab and clients through improper or unvalidated use
Ultimate Responsibility
- The CTO is the ultimate owner and responsible person for AI usage risks within Agile Lab.
- Day-to-day compliance remains the responsibility of each individual user and their chapter leads.
- Internal IT is responsible for executing the policy.
Policy Maintenance
- This policy is maintained by the CTO, with support from Security and Legal teams.
- It will be reviewed every 6 months or earlier if regulations, client needs, or technology evolution require updates.
Note: This policy was created responsibly with the support of AI tools to accelerate drafting and structuring.
The core concepts and strategic direction were provided as human input; AI supported the market research, validation, and benchmarking against industry best practices that informed its development.
The final version has been thoroughly reviewed, validated, and refined by human experts to ensure accuracy, applicability, and full alignment with our values and goals.