AI Transparency
May 9, 2026
In compliance with the EU AI Act, QAWave discloses the following information about our use of AI systems.
AI Systems in Use
QAWave deploys specialized AI agents for software quality assurance. These agents use large language models (primarily Anthropic Claude) to generate test code, analyze test failures, and propose fixes.
Our agents are classified as limited-risk AI systems under the EU AI Act. They assist human engineers; all agent outputs require human review and approval before being merged into production code.
Human Oversight
Every agent output is subject to human review. Generated tests are submitted as pull requests for engineer approval, and proposed fixes are reviewed before merge. Triage suggestions are advisory; engineers decide how to respond.
Customers maintain full control over which agent actions are automated versus requiring approval.
AI-Generated Content on This Website
Portions of this website's content were drafted or refined with AI assistance. All published content is reviewed and approved by a human before publication.
This disclosure is provided pursuant to Article 50 of the EU AI Act, which sets transparency requirements for AI-generated content in commercial communications.
Data and Training
QAWave does not train AI models. We use commercially available models (Anthropic Claude) via their API. Customer code and data are never used to train third-party models. Anthropic's zero data retention policy applies to all API interactions.
Contact
For questions about our AI practices: ai@qawave.ai. For general privacy inquiries: privacy@qawave.ai.