AI Policy
Last updated: 4 May 2026
Introduction
Tech Studio X Ltd ("TechStudioX", "we", "us", or "our") provides software engineering, consulting, and team augmentation services, including for clients in regulated and public-sector environments. We recognise both the substantial advantages that artificial intelligence can offer to software delivery and the heightened need for care, control, and transparency where personal data, client systems, and regulated activities are involved.
This policy is the public counterpart to our internal AI Usage Policy, which sets the operational rules our staff and contractors follow. Where this policy gives an overview, the internal policy provides the detailed procedures. Both are reviewed at least annually, and sooner where our AI tooling materially changes.
Scope
This policy applies to all use of AI in connection with TechStudioX work: on any Company device, in conjunction with any software or service we provide, or otherwise on our behalf. We use the term "AI" broadly. It covers:
- Third-party hosted AI services such as commercial chat assistants and APIs.
- Self-hosted AI software, including locally run open-weight models.
- AI hardware operated by or for the Company.
- Any retrieval, agentic, or tooling layer built around such AI, including Model Context Protocol (MCP) servers.
Genuinely personal AI use, unconnected with Company work and involving no Company data, falls outside the scope of this policy.
Our Principles
The following principles guide every decision we take about AI:
- Responsibility: We are accountable for the AI we use and for the outputs we accept and apply. AI is a tool, not an authority.
- Lawfulness: Our AI use complies with applicable law, including data protection law, intellectual property law, and any sector-specific obligations we or our clients are subject to.
- Confidentiality: We use AI in a way that protects confidential and personal information, both our own and that of our clients and Data Subjects.
- Proportionality: We select and deploy AI proportionately to the task, with the minimum necessary data exposure for the result required.
- Verification: AI outputs, including code, text, and decisions, are reviewed by a human before being relied upon, deployed, or shared externally.
- Transparency: We are open with our clients and Data Subjects about our use of AI, including disclosing AI involvement in deliverables where appropriate.
- Continuous review: The AI landscape changes rapidly. We expect our approved tooling to evolve and treat every change as a controlled change.
AI Tools Register
We maintain an internal AI Tools Register that lists, at any given time, the AI tools, services, models, and hardware approved for Company use, together with the approved purposes, restrictions, deployment model, and any cross-border or data-sensitivity flags.
Tools and configurations not on the Register are not used for Company work. New entries and material changes to existing entries are approved by our Managing Director, and additionally assessed by our Data Protection Officer where personal data is involved. The Register is reviewed at every quarterly access review and any time a new AI tool, model, or significant hardware change is introduced.
Deployment Models
Different ways of running AI carry different risk and compliance characteristics. Each entry in our Register is mapped to one of the following deployment models, and the constraints below apply:
- Commercial cloud AI APIs: Third-party hosted AI services accessed over the internet. Used only under business-tier or higher terms of service that prohibit training on Company input. Cross-border data transfers are assessed under our Data Protection Policy.
- Rented GPU compute: Open-weight or proprietary models run on rented infrastructure where we control the workload but not the underlying hardware. No personal data, no client production data, and no Company secrets are processed on rented compute without prior assessment by our Data Protection Officer.
- Local Company-managed AI hardware: Hardware operated by us at a staff working location, running open-weight or self-hosted models. Because we control the device, the network path, and the data lifecycle end to end, this is our preferred environment for processing confidential information and personal data when AI is involved, provided the underlying processing has a lawful basis under our Data Protection Policy. Hardware is recorded in our asset register, physically secured, encrypted at rest, and not exposed to the public internet. Client production data continues to require explicit written authorisation from the relevant client.
- Local on-device inference: Lightweight models run directly on a Company device, used for assistive tasks such as autocomplete, summarisation, and transcription of recorded calls and meetings. Inputs do not leave the device. Models are loaded only from reputable, attributable sources.
- Custom AI tooling and MCP servers: Internal tooling that connects an AI to data sources. Read-only access to non-production, anonymised, development-only data by default. Production data, production credentials, and personal data of any real Data Subject are not reachable through any such tooling without prior written approval and a Data Protection Impact Assessment.
Data Sensitivity
We classify data and apply corresponding rules on AI use. The summary below sets out what we permit, and what we never permit, at each tier:
- Public: Public documentation, our published policies, and public open-source code. Permitted with any approved tool.
- Internal (non-sensitive): Internal documents that do not contain personal data, client confidential information, or Company secrets, and non-sensitive code. Permitted with any approved tool.
- Confidential: Client business information not in the public domain, non-public commercial information, and non-personal-data analytics. Permitted on local Company-managed AI hardware and on local on-device inference, which we treat as our preferred environments for confidential processing given the level of control we retain. Permitted on commercial cloud AI APIs only under business-tier terms that prohibit training on our input. Not processed on rented GPU compute without prior approval from our Data Protection Officer.
- Personal data: Any data identifying a Data Subject. Permitted only where we have a lawful basis under our Data Protection Policy, the AI tool's terms support the processing (including a Data Processing Agreement and appropriate safeguards for any cross-border transfer), and the Data Protection Officer has assessed and recorded the use. Where AI processing of personal data is required, local Company-managed AI hardware and local on-device inference are our preferred environments because they keep the data within infrastructure we control end to end. A Data Protection Impact Assessment is carried out where processing is likely to result in a high risk to Data Subjects.
- Production secrets and credentials: API keys, deployment keys, database credentials, environment files, certificate private keys, and OAuth tokens. Never submitted to any AI tool under any circumstances. Any inadvertent submission triggers immediate rotation of the affected secret and incident handling.
- Client production data: Live customer records, transaction data, and end-user content held in client production systems where we act as Data Processor. Never submitted to any AI tool except under explicit written instruction from the relevant client, supported by an appropriate Data Processing Agreement that authorises that use.
Where this policy refers to "anonymised" data, we mean data processed such that no living individual can be identified, directly or indirectly, by reasonably likely means. Pseudonymised data, where re-identification remains possible by reference to a key or to other data, is not anonymised and continues to be treated as personal data.
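The distinction above can be made concrete with a short sketch (a hypothetical illustration only, not our production tooling; all names and values are invented). Pseudonymisation replaces identifiers with tokens but retains a key table; whoever holds that key can reverse the mapping, which is why pseudonymised data remains personal data under this policy.

```python
import secrets

def pseudonymise(records, key_table):
    """Replace each name with a random token, recording the mapping in key_table."""
    out = []
    for record in records:
        token = key_table.setdefault(record["name"], "subj-" + secrets.token_hex(4))
        out.append({**record, "name": token})
    return out

# The key table IS the re-identification key: it must be protected as personal data.
key_table = {}
records = [{"name": "Ada Lovelace", "score": 42}]
pseudo = pseudonymise(records, key_table)

# Re-identification remains possible by reference to the key,
# so the tokenised records are pseudonymised, not anonymised.
reverse = {token: name for name, token in key_table.items()}
assert reverse[pseudo[0]["name"]] == "Ada Lovelace"
```

True anonymisation would require that no such key, and no other reasonably likely means, can restore the link between token and individual.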
Verification of Outputs
AI-assisted code, documents, analyses, and other outputs are reviewed by a human before they are deployed, distributed, or relied upon. Information obtained directly from AI must be checked for authenticity and accuracy before it is used in any deliverable. This review covers:
- Generated code and code suggestions.
- Technical documentation and specifications.
- Research outputs and data analysis.
- Security recommendations.
- Regulatory or compliance guidance.
Where AI is used to support a decision that has legal or similarly significant effects on a Data Subject, that decision is not based solely on the AI output, and the affected Data Subject's rights under Article 22 of the UK GDPR are respected.
Where AI is used to verify earlier work, for example an AI review of human or AI-generated code, the verification result is itself confirmed by a human before being acted on. Where AI assists with research or synthesis, cited or referenced facts are checked against authoritative sources before being relied upon in client-facing or regulator-facing work.
Bias and Accuracy
Outputs of AI tools may reflect biases or inaccuracies present in their training data. We treat this as a real risk and check outputs against reliable, up-to-date sources before further use.
Where AI is used in connection with client work in regulated sectors, financial decision support, or other high-stakes contexts, we apply additional care to test for bias and to document mitigation steps. Where we identify a bias or inaccuracy that has already affected a client deliverable, we report it to our management, correct it, and inform the affected client where appropriate.
Intellectual Property
Where the relevant AI tool's terms permit, we own the intellectual property in outputs produced by our staff in the course of Company work. Where a tool's terms require attribution, we apply that attribution.
We remain alert to the risk that AI outputs may incorporate third-party intellectual property from training data without attribution. Where there is reasonable doubt as to the licensability of an output, the output is reworked or discarded.
Source code containing material commercial value, trade secrets, or third-party-licensed components subject to use restrictions is not submitted to AI tools other than those whose terms have been specifically assessed for that purpose.
Transparency with Clients and Data Subjects
Where AI has been used in producing a deliverable, we are transparent with the client about that use. Where AI is used in connection with personal data, the affected Data Subjects are informed in accordance with our Data Protection Policy and the relevant privacy notice.
Security
We apply the following security measures to our AI use:
- AI tools accessed via the internet are authenticated using our standard access controls, including multi-factor authentication where supported.
- Local Company-managed AI hardware is physically secured at the staff working location and recorded in our asset register.
- Network access to local Company-managed AI hardware is limited and is not exposed to the public internet without prior approval from our management.
- Models, weights, and any cached data on local AI hardware are managed under the same access controls and retention rules as any other Company data store. Where personal data is processed on local AI hardware, the processing has a lawful basis recorded by our Data Protection Officer, the device is encrypted at rest, and data is deleted in line with our retention schedule.
- Custom MCP servers and similar tooling apply least-privilege access in line with our Access Control Policy and log access for review.
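The least-privilege pattern in the last point can be sketched as follows (an illustrative example under assumed names, not our actual tooling): every request from the AI layer is checked against an explicit read-only allowlist, and every decision is logged for later review.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("mcp-access")

# Hypothetical allowlist: only non-production, anonymised data sources,
# and only the "read" action, are reachable by default.
ALLOWED = {
    ("read", "dev/anonymised-tickets"),
    ("read", "dev/public-docs"),
}

def authorise(action, resource):
    """Permit a request only if (action, resource) is explicitly allowlisted."""
    permitted = (action, resource) in ALLOWED
    log.info("%s %s %s", "ALLOW" if permitted else "DENY", action, resource)
    return permitted

assert authorise("read", "dev/anonymised-tickets")
assert not authorise("write", "dev/anonymised-tickets")  # writes never allowlisted
assert not authorise("read", "prod/customer-db")         # production data denied
```

Anything not explicitly listed is denied, which is the deny-by-default posture the policy describes.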
Compliance & Standards
TechStudioX is committed to working in line with established standards for the responsible use of artificial intelligence, including the principles set out in BS ISO/IEC 42001, the international standard for artificial intelligence management systems.
We comply with all applicable legal and regulatory obligations concerning AI, including those issued by the Information Commissioner's Office (ICO). We also recognise that many of our clients operate in regulated sectors — including healthcare, the public sector, and financial services — and we work within the sector-specific obligations to which those clients are subject.
Policy Review
We review this policy and our underlying AI practices at least annually, and sooner where required. A review is also triggered by:
- Material changes to our AI Tools Register.
- Any incident involving AI.
- Significant developments in AI technology or in related regulation.
- New AI tools or fresh use cases being proposed.
Contact Us
Should you have any questions about this AI Policy or about the way we use artificial intelligence, please get in touch:
Email: [email protected]
Data Protection Officer: [email protected]
Telephone: 020 3095 1050
Address: Curtis House, 34 Third Avenue, Hove, BN3 2PD
Tech Studio X Ltd is registered in England and Wales under company number 14562679, VAT number 516210140.