At AIHelp, privacy, security, and compliance are foundational principles built into every stage of product design and development. Our mission is to empower developers with trusted, secure, and intelligent customer service solutions, and our AI-powered features are no exception. We are committed to safeguarding customer data and to transparency in how our AI technologies are designed, deployed, and managed. To that end, AIHelp follows strict security standards and privacy-by-design principles in how we build, train, and operate all AI systems, ensuring that every AIHelp feature meets high standards of protection, reliability, and regulatory compliance.
Service Data processed by AIHelp is subject to all of AIHelp's security standards and commitments, and will never be shared with any other customer.
All models developed by AIHelp are classification models: they are trained to read an input and assign it to one of a fixed set of categories defined by AIHelp. Because these models are not generative, they produce no content, and they cannot reproduce the data they were trained on.
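To illustrate the distinction, here is a minimal, purely hypothetical sketch of what a classification model's interface looks like (the category names and keyword scoring are invented for illustration and are not AIHelp's actual model): the output can only ever be one of the predefined labels, never newly generated text.

```python
# Hypothetical illustration: a classifier maps free-form input to exactly
# one label from a fixed set, and can emit nothing else.
CATEGORIES = ["billing", "bug_report", "account", "other"]  # fixed label set

# Invented keyword lists standing in for learned model weights.
KEYWORDS = {
    "billing": {"refund", "charge", "invoice", "payment"},
    "bug_report": {"crash", "error", "bug", "freeze"},
    "account": {"login", "password", "email", "signup"},
}

def classify(ticket_text: str) -> str:
    """Return exactly one label from CATEGORIES; never generates new text."""
    words = set(ticket_text.lower().split())
    scores = {label: len(words & kw) for label, kw in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

print(classify("I was charged twice, please refund"))  # → billing
```

Because the return value is constrained to the label set, there is no channel through which training data could be emitted, which is the property the paragraph above describes.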
Generative AI features are currently powered by OpenAI and Google. Neither OpenAI nor Google claims ownership of generated content, and their enterprise-level services do not use customer content to improve their services. OpenAI's data security practices are available here. Google's terms of service can be viewed here, and the Gemini API Additional Terms of Service can be viewed here.
Does AIHelp use customer Service Data to train machine learning models?
AIHelp offers three types of machine learning functionality:
1. Account-specific ML functionality: AIHelp creates machine learning models tailored to a customer’s account using only data existing in the account. Account-specific models will not be used by any other customer.
2. Generic ML functionality: AIHelp uses Service Data to train its generic, cross-account machine learning models to be predictive and useful to multiple AIHelp customers. These include global and industry models. These models will never disclose one customer’s Service Data to another customer, because they are not “generative” (i.e., they do not create text).
3. Generative ML functionality supported by OpenAI and Google: these models are pre-trained, and AIHelp customer data will never be used by OpenAI, Google, or any other third party to train their models.
How does AIHelp protect Service Data when used for model training?
Before Service Data is used to train generic ML functionality, AIHelp applies aggregation and sanitization processes as necessary. No fields designed to intake personal data, and no ticket attachments, are used for model training. AIHelp is committed to ensuring that no Service Data can be reproduced by a model, so one customer's data cannot be exposed to another customer through a model's output.
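A sanitization pass of the kind described above might look like the following minimal sketch. The field names, regex, and record shape are assumptions made for illustration, not AIHelp's actual pipeline: personal-data fields are dropped outright, and email-like strings are scrubbed from free text before anything reaches training.

```python
import re

# Hypothetical sketch of a pre-training sanitization step; field names and
# patterns are illustrative only.
PERSONAL_FIELDS = {"email", "phone", "name", "attachments"}  # never used for training

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize_record(record: dict) -> dict:
    """Drop personal-data fields and scrub email addresses from free text."""
    cleaned = {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}
    if "message" in cleaned:
        cleaned["message"] = EMAIL_RE.sub("[REDACTED]", cleaned["message"])
    return cleaned

record = {"message": "Contact me at jane@example.com", "email": "jane@example.com"}
print(sanitize_record(record))
```

The key design point is that exclusion happens by field, before training: a field designated as personal data is removed regardless of its contents, rather than relying on pattern matching alone.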
Hallucinations are an intrinsic risk for generative AI features. AIHelp does two things to mitigate this risk:
● AIHelp utilizes the Retrieval Augmented Generation (RAG) technique to ensure that generated replies or search results are grounded in specific knowledge base content.
● The AIHelp team regularly inspects replies that received negative user feedback for hallucinations, and uses those findings to develop tools that automatically detect and prevent such cases.
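The RAG approach in the first bullet can be sketched as follows. This is a toy illustration under stated assumptions (a word-overlap retriever over an invented two-entry knowledge base, and a prompt template instead of a real model call); AIHelp's production retriever and prompting will differ:

```python
# Minimal Retrieval Augmented Generation (RAG) sketch: retrieve relevant
# knowledge base passages, then constrain the generator to that context.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days.",
    "Password resets are sent to the registered email address.",
]

def retrieve(query: str, top_k: int = 1) -> list:
    """Rank knowledge base passages by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_grounded_prompt(query: str) -> str:
    """Instruct the model to answer only from the retrieved context."""
    context = "\n".join(retrieve(query))
    return ("Answer ONLY from the context below; if the answer is not there, "
            "say you don't know.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

print(build_grounded_prompt("How long do refunds take?"))
```

Grounding the reply in retrieved knowledge base content narrows what the model can assert, which is how this technique reduces hallucinations.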
All AIHelp features are designed with privacy in mind, and the AIHelp AI features are no different.
Subscribers can comply with various privacy laws (including the GDPR and CCPA) when using AIHelp, including its AI features.
For more information on how we handle data, please refer to our Privacy Policy and the AIHelp Subprocessor List.