You are about to paste your company's quarterly financials into a new AI summarization tool. Or maybe you are uploading a client contract to an AI document analyzer. Or feeding your entire codebase into an AI coding assistant for the first time. Before you do any of that, there are exactly ten things you should check.
This is not a theoretical framework. It is a practical, actionable checklist you can run through in under fifteen minutes for any AI tool. Each item is something you can verify right now, with no specialized tools or technical expertise required. We use a similar methodology to grade every tool in the TrustGrade database, and this checklist gives you the manual version of that process.
Print it out, bookmark it, share it with your team. The fifteen minutes you spend on this checklist could save you from a data breach, a compliance violation, or the uncomfortable conversation where you explain to a client that their confidential data ended up in someone else's AI model.
1. Verify the SSL Certificate
This is the single most basic security check, and yet a surprising number of newer AI tools fail it. SSL (Secure Sockets Layer, now technically TLS) encrypts the connection between your browser and the tool's servers. Without it, every piece of data you type or upload travels across the internet in plain text.
How to check
Look at the URL bar in your browser. The address should start with https:// (not http://), and you should see a padlock icon. Click the padlock to verify the certificate is valid and not expired. If the browser shows any certificate warnings, do not proceed.
Why it matters
Without SSL, a malicious actor on the same network (a coffee shop, a hotel, an airport) could intercept everything you send to the tool. This includes your login credentials, the data you input, and the results you receive. SSL is the bare minimum. Any tool that lacks it in 2026 receives an automatic F grade in our system.
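If you want to go one step beyond the browser padlock, the check above can be scripted. The sketch below, using only the Python standard library, connects over TLS (which fails loudly if the certificate is invalid) and reports how many days remain before the certificate expires. The function names are my own; treat this as a starting point, not a complete certificate audit.

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Days remaining before a certificate's notAfter timestamp.

    `not_after` uses the OpenSSL text format returned by
    getpeercert(), e.g. "Jun  1 12:00:00 2027 GMT".
    """
    expires = ssl.cert_time_to_seconds(not_after)
    now = datetime.now(timezone.utc).timestamp()
    return int((expires - now) // 86400)

def fetch_cert_not_after(hostname: str, port: int = 443) -> str:
    """Connect over TLS and return the peer certificate's notAfter field.

    Raises ssl.SSLError if the certificate fails validation, which is
    exactly the failure signal this checklist item looks for.
    """
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["notAfter"]
```

Usage would look like `days_until_expiry(fetch_cert_not_after("example.com"))`; a negative number means the certificate has already expired.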
2. Read the Privacy Policy (Really)
Yes, actually read it. You do not have to read every word, but you need to search for the answers to three specific questions:
- Does the tool use your data to train its AI models? If yes, your proprietary information could influence outputs shown to other users. Some tools (like ChatGPT) allow you to opt out; others do not.
- Who else gets access to your data? Look for sections about third-party sharing, subprocessors, and analytics providers. Each additional party is an additional risk surface.
- Can you delete your data? Look for information about data deletion rights. A good policy will explain exactly how to request deletion and how long it takes.
Red flags to watch for
Vague language like “we may use your data to improve our services” without specifics. Policies that have not been updated in over a year. Policies that grant the company “perpetual, irrevocable” rights to your content. And the biggest red flag of all: no privacy policy at all.
3. Check the Data Retention Period
How long does the tool keep your data after you use it? This is different from the training question. Even if a tool does not use your data for training, it may still store it indefinitely on its servers.
What to look for
The best tools offer clear retention periods: “Data is deleted within 30 days” or “Inputs are processed in memory and not stored.” Some enterprise-tier tools allow you to configure your own retention period. The worst tools either do not disclose their retention period or state that data is retained “as long as necessary,” which could mean forever.
This matters especially for regulated industries. If you are subject to GDPR or HIPAA, indefinite retention of sensitive data in a third-party AI tool could put you out of compliance.
4. Inspect Security Headers
Security headers are invisible instructions that a website sends to your browser, telling it to enable additional protections. They are one of the best indicators of how seriously a tool's engineering team takes security, because they require deliberate effort to implement.
How to check
Open your browser's developer tools (F12 or right-click and select “Inspect”), go to the Network tab, reload the page, and click on the main document request. Look at the Response Headers section. You want to see:
- Strict-Transport-Security - Forces HTTPS connections
- Content-Security-Policy - Prevents cross-site scripting attacks
- X-Content-Type-Options: nosniff - Prevents MIME-type attacks
- X-Frame-Options or CSP frame-ancestors - Prevents clickjacking
- Referrer-Policy - Controls information leakage through referrers
Missing one or two is common and not necessarily a deal-breaker. Missing all of them is a serious concern. Tools that score well on security headers in the TrustGrade database typically implement at least four of the five.
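The developer-tools inspection above can also be automated. Here is a minimal sketch, assuming the five headers listed in this section, that reports which ones are missing from a response header map. The function names are hypothetical; the live fetch uses only the Python standard library.

```python
from urllib.request import urlopen

# Four of the five checklist headers; clickjacking protection is handled
# separately because either X-Frame-Options or a CSP frame-ancestors
# directive satisfies it.
EXPECTED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Referrer-Policy",
]

def missing_security_headers(headers: dict) -> list:
    """Return the checklist headers absent from a response header map."""
    present = {k.lower() for k in headers}
    missing = [h for h in EXPECTED if h.lower() not in present]
    # Accept either clickjacking defense mentioned in the checklist.
    csp = next((v for k, v in headers.items()
                if k.lower() == "content-security-policy"), "")
    if "x-frame-options" not in present and "frame-ancestors" not in csp:
        missing.append("X-Frame-Options (or CSP frame-ancestors)")
    return missing

def check_site(url: str) -> list:
    """Fetch a URL and report which security headers are missing."""
    with urlopen(url, timeout=10) as resp:
        return missing_security_headers(dict(resp.headers))
```

Running `check_site("https://example.com")` returns the list of missing headers; an empty list means all five protections are in place.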
5. Look for Third-Party Trackers
Many AI tools embed third-party tracking scripts, analytics pixels, and advertising SDKs that send data to companies you may not have agreed to share with. While some tracking is benign (analytics to improve the product), excessive tracking is a privacy concern, especially when you are processing sensitive data.
How to check
Browser extensions like uBlock Origin or Privacy Badger will show you which third-party domains a site connects to. You can also check the Network tab in developer tools and filter for third-party requests. A well-designed AI tool should have minimal third-party connections, especially on pages where you input data.
What to watch for
Advertising trackers (Google Ads, Facebook Pixel) on data input pages are a major red flag. Analytics tools (Google Analytics, Mixpanel) are more common and less concerning, but best-in-class tools use privacy-respecting analytics alternatives. The fewer third parties that have access to your browsing behavior on an AI tool, the better.
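If you prefer a scriptable version of the extension-based check above, the sketch below scans a page's HTML for script, image, and iframe sources that point at domains other than the site itself. It is a simplification (the suffix match is naive, and it misses trackers loaded dynamically by JavaScript), but it surfaces the obvious embedded third parties.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyScanner(HTMLParser):
    """Collect external domains referenced by script, img, and iframe tags."""

    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).netloc
        # Naive suffix match: good enough for a quick manual audit,
        # not for an adversarial one.
        if host and not host.endswith(self.first_party):
            self.domains.add(host)

def third_party_domains(html: str, first_party: str) -> set:
    scanner = ThirdPartyScanner(first_party)
    scanner.feed(html)
    return scanner.domains
```

Feed it the page source (view-source in your browser, or a fetched response body) and the tool's own domain; anything it returns is a third party worth investigating.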
6. Verify Security Certifications
Third-party certifications are among the strongest trust signals because they require an independent auditor to verify security practices. They are expensive and time-consuming to obtain, which means tools that have them are making a real investment in security.
The certifications that matter most
- SOC 2 Type II: The gold standard for SaaS. Requires a 6-12 month audit of security controls. If an AI tool has SOC 2 Type II, it means an independent auditor verified their security practices over an extended period.
- ISO 27001: International security management standard. Demonstrates systematic approach to information security.
- GDPR: EU data protection regulation. Tools that invest in GDPR compliance typically have stronger privacy practices globally.
- HIPAA: Required for healthcare data. Must include a Business Associate Agreement (BAA).
Verification tip
Do not just trust the badge. Some tools display certification logos they have not earned. Look for a dedicated security or trust page that provides details about when the certification was achieved and what scope it covers. For SOC 2, ask to see the report (companies commonly share it under NDA). For ISO 27001, the certificate should be verifiable through the certifying body.
7. Check for Encryption at Rest
SSL protects your data while it travels between your browser and the tool's servers. Encryption at rest protects your data while it is stored on those servers. Both are necessary for comprehensive data protection.
What to look for
Check the tool's security page or documentation for statements about encryption at rest. Industry standard is AES-256 encryption for stored data. Some tools go further with envelope encryption, where the encryption keys themselves are encrypted and managed by a dedicated key management service.
This information is often found on a security or trust page rather than in the privacy policy. If the tool does not mention encryption at rest anywhere, it is either not implemented or not considered a priority, neither of which is reassuring.
8. Evaluate Access Controls
How does the tool manage who can access your data, both within your organization and within the tool provider's company? Access controls are especially important for team and enterprise plans where multiple users share an account.
What to look for
- Multi-factor authentication (MFA): The tool should support, and ideally require, MFA for user accounts. This is the single most effective defense against account takeover.
- Role-based access: Team plans should support different permission levels (admin, member, viewer) so you can control who can access what.
- SSO integration: Enterprise tools should support single sign-on through providers like Okta, Azure AD, or Google Workspace, allowing centralized access management.
- Audit logs: The ability to see who accessed what and when. Critical for compliance and incident investigation.
Tools that earn Grade A or Grade B ratings typically offer robust access controls, especially at their enterprise tiers.
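To make the role-based access idea concrete, here is a toy permission model mirroring the admin/member/viewer tiers described above. The role names and permission strings are illustrative assumptions, not any particular tool's API; real products layer this with MFA, SSO, and audit logging.

```python
# Hypothetical permission sets for the three tiers mentioned above.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "member": {"read", "write"},
    "admin": {"read", "write", "manage_users", "view_audit_log"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role grants the requested action.

    Unknown roles get no permissions, which is the safe default.
    """
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design point worth copying is the fail-closed default: an unrecognized role is denied everything rather than granted anything.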
9. Research Breach History
A quick search for “[tool name] data breach” or “[tool name] security incident” can reveal critical information that is not visible from the tool's own marketing materials. Past breaches are not automatically disqualifying; what matters is how the company responded.
Good signs after a breach
- Prompt public disclosure with clear timelines
- Detailed explanation of what happened and what data was affected
- Concrete steps taken to prevent recurrence
- Free credit monitoring or identity protection for affected users
- Third-party security audit commissioned after the incident
Bad signs
- Delayed disclosure (weeks or months after the incident)
- Vague communications that minimize the scope
- No evidence of remediation
- Multiple breaches with similar root causes
Also check whether the tool has a bug bounty or responsible disclosure program. Companies that invite security researchers to find vulnerabilities are generally more secure than those that do not.
10. Review the Terms of Service
The terms of service (ToS) contain legally binding obligations that can significantly affect your rights over the data you share. While privacy policies focus on how data is handled, ToS documents focus on what rights you are granting the company.
Key sections to find
- Intellectual property rights: Do you retain ownership of content you create using the tool? Some tools claim broad licenses over your outputs.
- Liability limitations: What happens if the tool causes a data breach? Most ToS limit the company's liability to the amount you paid in the last 12 months, which may not cover your actual damages.
- Service level agreements: For enterprise users, what uptime guarantees and support commitments are included?
- Modification clauses: Can the company change the terms without notice? Good ToS require advance notice of material changes.
- Data portability: Can you export your data if you decide to leave? Tools that make it easy to export are generally more trustworthy than those that create lock-in.
Putting It All Together
No AI tool will score perfectly on every item in this checklist. The goal is not perfection but informed decision-making. Here is a practical scoring approach:
- 8-10 items satisfied: Strong security posture. Appropriate for sensitive data and enterprise use.
- 5-7 items satisfied: Moderate security. Acceptable for general business use with non-critical data.
- 3-4 items satisfied: Weak security. Use only for non-sensitive, non-proprietary tasks.
- 0-2 items satisfied: Avoid entirely. Do not share any data with this tool.
This maps roughly to TrustGrade's letter grade system: 8-10 is Grade A or B, 5-7 is Grade C, 3-4 is Grade D, and 0-2 is Grade F.
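The band boundaries above translate directly into a few lines of code. This is a sketch of the rough mapping described in this section, not TrustGrade's actual grading algorithm, which weighs individual criteria rather than a raw count.

```python
def checklist_grade(items_satisfied: int) -> str:
    """Map a 0-10 checklist count to the rough letter-grade bands above."""
    if not 0 <= items_satisfied <= 10:
        raise ValueError("score must be between 0 and 10")
    if items_satisfied >= 8:
        return "A/B"   # strong: appropriate for sensitive data
    if items_satisfied >= 5:
        return "C"     # moderate: general business use only
    if items_satisfied >= 3:
        return "D"     # weak: non-sensitive tasks only
    return "F"         # avoid entirely
```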
When to Re-check
Security is not a one-time assessment. Re-run this checklist:
- Every 6 months for tools you use regularly
- After any reported security incident involving the tool
- When the tool changes ownership, pricing, or terms of service
- Before upgrading to a plan that gives the tool access to more sensitive data
- When you are about to use the tool for a new, more sensitive use case
Or let TrustGrade do it for you. Our automated assessments continuously monitor the tools in our database, so you can check a tool's current trust score anytime without running through the checklist manually.
Next Steps
Now that you have the checklist, put it to work. Start with the AI tools you use most frequently, the ones that have access to your most sensitive data. You might be surprised by what you find.
For a deeper understanding of the methodology behind this checklist, read our complete guide to evaluating AI tool trustworthiness. And for a data-driven view of how the AI tool landscape is performing on these criteria right now, check out our State of AI Tool Security in 2026 report.