At Mindtickle, we integrate AI with a strong commitment to security, ethics, and regulatory compliance. Our goal is to enhance customer experience without compromising trust or transparency.
We use enterprise-grade AI models from trusted providers like Microsoft Azure OpenAI and AWS Bedrock. All third parties undergo thorough due diligence, including security certifications and privacy reviews. Data access, processing, and retention are tightly controlled.
Our AI features prioritize safety, fairness, and accountability. Customer data is never used to train public models, retained beyond request processing, or subjected to manual review. With clear AI terms, strict data segregation, and strong governance, customers stay fully in control of their AI experience.
Mindtickle leverages private enterprise AI models provided by Microsoft Azure OpenAI and AWS Bedrock.
Third parties, including AI model providers, are evaluated by Mindtickle and approved by Customers before use.
Third parties used by Mindtickle undergo a mandatory evaluation that reviews compliance certifications and audits, including SOC 2, ISO certifications, penetration test reports, AI controls, and security and privacy policies.
Third parties processing customer data are transparently documented in a public sub-processor repository, sign data processing agreements with Mindtickle, and commit to standard contractual clauses for secure data transfer.
Mindtickle performs GDPR-mandated transfer impact assessments through an independent legal auditor, reviewing every location to which data could be transferred.
Customer data is never transmitted to public AI models.
Customer data is never used to train or improve AI models.
AI models access data only temporarily to process requests. Data is deleted immediately upon request completion and is never stored permanently.
Customer data will never be used to contribute to the AI models’ knowledge base.
We have opted out of human review by our AI model providers, so no provider personnel manually inspect customer prompts or outputs.
A content filtering system prevents AI features from generating inappropriate, harmful, unethical, or copyrighted content.
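As a minimal illustrative sketch only (not Mindtickle's actual filter, which this page does not describe in detail): a gating layer of this kind checks generated text against blocked categories before returning it. Real systems use trained classifiers; the placeholder patterns and function name below are hypothetical.

```python
# Hypothetical placeholder patterns standing in for a real safety classifier.
BLOCKED_PATTERNS = ["<hate>", "<violence>"]

def filter_output(text: str) -> str:
    """Return the text unchanged if it passes the filter, else withhold it."""
    lowered = text.lower()
    # A production filter would score categories with a model; a substring
    # check is enough to illustrate the gate-before-return pattern.
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        return "[content withheld by safety filter]"
    return text
```

The key design point is that the filter sits between the model and the user, so nothing reaches the user without passing it.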
Customers fully own the data they provide as input, including user instructions and additional context, as well as the generated response/output; Mindtickle makes no ownership claims to such data.
Audio/video recordings are never shared with AI models; only a limited subset of the learner's recording, as a text transcript, is shared.
Only the minimum required data is shared with the AI models.
Mindtickle does not use AI to perform facial recognition or biometric analysis.
AI requests are strictly segregated at the tenant and user levels to ensure that customer data remains isolated and there is no cross-tenant access to AI interactions.
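One common way to enforce the kind of tenant- and user-level isolation described above is to scope every AI request to a tenant context, so lookups outside that tenant are structurally impossible. The sketch below is an assumption-laden toy, not Mindtickle's implementation; all class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIRequestContext:
    """Hypothetical per-request context; every AI call carries one."""
    tenant_id: str
    user_id: str

class TenantScopedStore:
    """Toy in-memory store that only serves data belonging to the requesting tenant."""
    def __init__(self):
        self._data = {}  # (tenant_id, key) -> value

    def put(self, ctx: AIRequestContext, key: str, value: str) -> None:
        self._data[(ctx.tenant_id, key)] = value

    def get(self, ctx: AIRequestContext, key: str) -> str:
        # Lookups are keyed by tenant, so a request from tenant B can never
        # resolve tenant A's records, even if the key name collides.
        try:
            return self._data[(ctx.tenant_id, key)]
        except KeyError:
            raise PermissionError("no such record in this tenant's scope")
```

Because the tenant ID is part of the storage key rather than a filter applied afterward, cross-tenant access fails by construction rather than by policy check.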
We maintain dedicated AI terms that outline our commitment to responsible data usage and retention while addressing key topics like data ownership, accuracy, and accountability.
Mindtickle follows responsible AI principles in alignment with the EU AI Act and general industry standards, remaining ethical, transparent, and compliant with regulatory requirements.
System prompts are designed to prevent bias, toxicity, and discrimination by AI models.
Content generated by AI features is saved in draft mode and requires human review and approval before publishing, ensuring accuracy and accountability.
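The draft-then-approve workflow above can be sketched as a simple state machine: AI output starts as a draft and cannot transition to published without a recorded human approval. This is an illustrative sketch with hypothetical names, not the platform's actual code.

```python
from enum import Enum

class Status(Enum):
    DRAFT = "draft"
    PUBLISHED = "published"

class GeneratedContent:
    """AI output starts life as a draft; publishing requires a human approver."""
    def __init__(self, body: str):
        self.body = body
        self.status = Status.DRAFT
        self.approved_by = None  # set only by a human reviewer

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer

    def publish(self) -> None:
        if self.approved_by is None:
            raise RuntimeError("human approval required before publishing")
        self.status = Status.PUBLISHED
```

Recording the approver's identity also gives the accountability trail the statement above refers to.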
We select AI models that have been trained on a broad and diverse range of datasets to ensure fairness, equity, and non-discrimination.
AI features are designed to enhance human capabilities, boost productivity, and provide data-driven insights, not to replace human roles.
AI-powered features and output are clearly labeled with the “Mindtickle Copilot” title and an icon indicating AI use.
All AI features are thoroughly documented to highlight the data fields used, the purpose of each data field, and the generated output/response to help in AI use case evaluation.
Customers decide who can use AI features via the platform’s Role Based Access Control (RBAC) framework.
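A role-based gate of the kind described above can be sketched as a role-to-permission mapping consulted before any AI feature runs. The role names, permission strings, and function below are hypothetical, not the platform's actual RBAC API.

```python
# Hypothetical role -> permission mapping; an administrator grants or revokes
# AI access per role rather than per individual user.
ROLE_PERMISSIONS = {
    "admin":   {"use_ai_features", "manage_roles"},
    "manager": {"use_ai_features"},
    "learner": set(),  # AI features disabled for this role
}

def can_use_ai(role: str) -> bool:
    """Return True if the given role has been granted AI-feature access."""
    return "use_ai_features" in ROLE_PERMISSIONS.get(role, set())
```

Unknown roles fall through to an empty permission set, so access is denied by default.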
Mindtickle logs all requests between users and AI models, capturing relevant data points, including user input, system instructions, and pre-defined constraints for a transparent and secure audit trail.
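An audit trail like the one described above can be sketched by serializing each AI request as a structured log record capturing the input, instructions, and constraints. Field names and the function below are hypothetical illustrations, not Mindtickle's logging schema.

```python
import json
import time

def log_ai_request(user_input: str, system_instructions: str,
                   constraints: list[str]) -> str:
    """Serialize one AI request as a JSON audit record."""
    record = {
        "timestamp": time.time(),                    # when the request was made
        "user_input": user_input,                    # the user's prompt
        "system_instructions": system_instructions,  # instructions sent to the model
        "constraints": constraints,                  # pre-defined guardrails applied
    }
    return json.dumps(record)
```

Structured records make the trail queryable, so a specific interaction can later be reconstructed exactly as it was sent.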
Mindtickle provides customers full control over their AI experience, with the flexibility to enable or disable AI features based on their organization’s use cases.
End users are provided a mechanism to give feedback on AI-generated content to identify and address any problematic content, unintended biases, or ethical concerns.
AI outputs are context-aware, rule-based, and constrained to ensure clear reasoning behind the response and feedback.
Mindtickle has been formally assessed by an external auditor and has implemented all the controls required to comply with the EU AI Act, ensuring ethical and safe AI deployment.
Mindtickle rigorously tests its platform and all AI features through semi-annual VAPT audits covering the OWASP Top 10 risks for LLM applications, including prompt injection, output manipulation, model inversion, and data poisoning, along with responsible AI measures.
Responsible AI principles, AI compliance policy, and AI system security checks are integrated into the AI product development lifecycle.
Mindtickle has been audited by an independent assessor and complies with all the requirements of ISO/IEC 42001, the world's first AI management system standard.
Mindtickle adheres to rigorous industry standards and compliance frameworks, including SOC 2, SOC 3, ISO 27001, ISO 22301, ISO 27701, ISO 27017, ISO 27018, HIPAA, 21 CFR Part 11, GDPR, SCCs, DPF, CCPA, CPRA, UK DPA 2018, and various US state privacy laws.