Automatrics AI Use Statement

1. Purpose and Scope

1.1 Purpose: This policy establishes the guidelines for the responsible, secure, and ethical use of Artificial Intelligence (AI) tools and technologies at Automatrics. We recognize AI as a powerful driver of innovation and efficiency for internal research and development (R&D), and this policy is designed to maximize these benefits while minimizing potential risks, such as data breaches, bias, and intellectual property (IP) infringement.

1.2 Scope: This policy applies to all Automatrics employees, contractors, and third-party vendors who use, develop, or manage AI systems in connection with company business. It covers both third-party AI tools (e.g., public Large Language Models) and internally developed AI solutions.

1.3 Commercial Restrictions: This policy prohibits the commercial sale of any AI-generated data or outputs to external parties, except where such outputs are incorporated into Automatrics’ final, human-reviewed products or services, which are then sold as part of standard business operations.

2. Definitions

AI Tools/Systems: Any software, algorithm, or system that performs tasks typically associated with human intelligence, such as learning, reasoning, and problem-solving.

Generative AI (GenAI): A type of AI that can produce new content, such as text, code, or images.

Confidential Information: Includes, but is not limited to, trade secrets, proprietary data, unpublished R&D data, and personal data of employees or customers.

Human Oversight: The requirement that a competent human always monitors and validates AI outputs before any significant action or decision is taken.

3. Guiding Principles

Our use of AI is guided by the following principles:

Human-Centric: AI tools should support and enhance human work, not replace human judgment.

Accountability: Humans are accountable for all decisions and actions, even those assisted by AI.

Fairness and Ethics: AI must be used in a manner that avoids bias, discrimination, or any form of unlawful content or outcomes.

Transparency and Explainability: We strive to understand and document how our AI systems work and the data they use.

4. Permitted and Prohibited Uses

4.1 Permitted Uses:

  • Generating code or initial drafts of internal documentation to enhance productivity.
  • Analyzing anonymized internal datasets to identify trends or research gaps.
  • Assisting with literature reviews or translation for internal R&D purposes.
  • Creating internal presentations or reports, provided all AI-generated content is clearly marked and verified by a human.
  • Creating AI-generated images or videos for public-facing content: The use of generative AI tools (e.g., Adobe Firefly, Photoshop generative features) to create or enhance images and videos for Automatrics’ website or marketing materials is permitted, provided that:
    • All AI-generated media is used only for illustrative or aesthetic purposes and never to impersonate real individuals, customers, partners, or events.
    • AI-generated media does not misrepresent product functionality, safety features, or performance.
    • All outputs are human-reviewed prior to publication to ensure accuracy, brand alignment, and compliance with copyright standards.
    • Any third-party tools used for media generation comply with company privacy and IP requirements.

4.2 Prohibited Uses:

  • Entering Automatrics’ confidential or sensitive information into public AI tools or platforms that lack appropriate non-disclosure and data privacy agreements.
  • Using AI for making final decisions related to HR functions (e.g., hiring, performance evaluations) without significant human oversight and a clear contestability mechanism.
  • Generating content that infringes upon third-party intellectual property or copyrights.
  • Using AI to create “deepfakes” or other synthetic content intended to mislead or manipulate.
  • Selling, sharing, or otherwise commercializing raw AI-generated data or outputs externally.

5. Data Privacy and Security

5.1 Data Handling: All data used in AI systems must comply with existing data protection laws (e.g., GDPR, CCPA) and internal security policies.

  • Anonymization: Data should be anonymized whenever possible before being used in AI tools.
  • DPIAs: Any AI initiative involving personal data requires a Data Protection Impact Assessment (DPIA).
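As an illustration of the anonymization principle above, the sketch below shows one minimal pre-processing step: redacting common personal identifiers (email addresses and phone-like numbers) from free text before it is submitted to any external AI tool. The regular expressions and placeholder tokens are illustrative assumptions, not an Automatrics standard; genuine anonymization for GDPR/CCPA purposes requires a broader, reviewed approach.

```python
import re

# Illustrative patterns only; real PII detection needs a reviewed,
# much more comprehensive rule set or dedicated tooling.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace emails and phone-like numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
print(redact(sample))
```

A step like this would sit in front of any approved tool integration; it reduces, but does not eliminate, the risk of confidential information leaving the company.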

5.2 Third-Party Tools: Employees must only use AI tools that have been officially approved by the IT and Legal departments. Unapproved tools may pose security and compliance risks.

6. Accountability and Documentation

6.1 Human Oversight: A competent human must review and validate all AI outputs, especially those used in critical decision-making processes.

6.2 Documentation: The use of AI in R&D projects must be transparently documented, including the AI system used, the data utilized, performance metrics, and any risks identified and mitigated.
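The documentation requirement above could be captured in a simple structured record per project. The field names below are hypothetical and for illustration only; this is not a mandated Automatrics schema.

```python
import json
from datetime import date

# Hypothetical AI-use record for an R&D project; every field name and
# value here is an illustrative example, not company-prescribed.
ai_use_record = {
    "project": "example-rnd-project",            # internal project identifier
    "ai_system": "example internal LLM",         # which AI system was used
    "data_used": "anonymized internal dataset",  # data utilized
    "performance_metrics": {"accuracy": 0.93},   # example metric
    "risks_identified": ["possible sampling bias"],
    "mitigations": ["human review of all outputs"],
    "reviewed_by": "project lead",
    "review_date": date.today().isoformat(),
}

print(json.dumps(ai_use_record, indent=2))
```

Keeping such records in a shared, auditable location would let the governing department verify compliance during the annual policy review.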

6.3 Governance: The [Insert Name/Department] has overall responsibility for monitoring and enforcing this policy.

7. Training and Compliance

7.1 Training: All employees will receive training on this policy, the ethical use of AI, and specific guidance on using approved AI tools safely.

7.2 Compliance: Any violation of this policy may result in disciplinary action, up to and including termination of employment, and potential civil or criminal liability.

8. Policy Review

This policy is a living document and will be reviewed and updated regularly (at least annually) by the [Insert Name/Department] to keep pace with evolving technology, regulations, and best practices.
