Security practices for AI visibility workspaces.
prompts-gpt.com is built for teams tracking how AI systems mention, cite, and rank brands. These notes explain the security expectations around accounts, workspaces, public previews, reports, and exported answer evidence.
Authentication and account access
Protected workspaces require authenticated access. Keep sign-in credentials secure, use a unique password for your account, and remove collaborators who no longer need access to AI visibility reports.
Workspace isolation
Projects, prompt monitors, reports, and preferences are scoped to the active workspace so teams can separate brands, clients, and internal visibility programs.
AI visibility data handling
The platform stores operational data needed to run scans, compare answer evidence, prepare reports, and preserve an auditable history of source and prompt outcomes.
Operational safeguards
Rate limits, protected application routes, input validation, and server-side API boundaries reduce abuse of previews, scans, exports, and reporting workflows.
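As an illustration of the rate-limiting safeguard described above, the sketch below implements a simple sliding-window limiter. The class name, limits, and keying by client identifier are assumptions for the example, not the platform's actual implementation.

```python
import time
from collections import defaultdict

# Illustrative sliding-window rate limiter for preview and scan routes.
# Limits and keying here are hypothetical, chosen only for the sketch.
class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._hits = defaultdict(list)  # key (e.g. client IP) -> request timestamps

    def allow(self, key, now=None):
        """Record a request and return True if it is within the limit."""
        now = time.monotonic() if now is None else now
        cutoff = now - self.window_seconds
        recent = [t for t in self._hits[key] if t > cutoff]
        if len(recent) >= self.max_requests:
            self._hits[key] = recent
            return False
        recent.append(now)
        self._hits[key] = recent
        return True
```

A limiter like this would typically sit in middleware in front of the preview and export endpoints, with per-route limits tuned to expected usage.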
Safe operating guidance
AI visibility work often combines public website facts, competitor context, and internal prioritization. Keep sensitive inputs out of public preview tools.
Public previews
Treat free checker inputs as non-confidential. Use only public brand, domain, category, and competitor context.
Prompt design
Avoid adding secrets, private customer names, unreleased campaigns, or regulated personal data to monitored prompts.
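A lightweight pre-flight check can help catch the mistakes described above before a prompt is saved to a monitor. The patterns below are assumptions for illustration, not an exhaustive or official detection list.

```python
import re

# Illustrative patterns for spotting sensitive content in a prompt draft.
# These are example heuristics, not the platform's actual screening rules.
SENSITIVE_PATTERNS = {
    "api key": re.compile(r"\b(?:sk|pk|api|key)[-_][A-Za-z0-9]{16,}\b", re.I),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(prompt):
    """Return labels of sensitive-looking patterns found in a prompt draft."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
```

A check like this is a safety net, not a substitute for judgment: unreleased campaign names or private customer names will not match any pattern and still need a human review.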
Exports
Review CSV exports before sharing them outside your organization: they can include prompts, answer evidence, competitor context, and action notes.
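The review step above can be partly automated by flagging export columns that deserve a manual look. The column names below are assumptions about what a visibility export might contain, not the platform's documented export schema.

```python
import csv
import io

# Hypothetical column names that often carry material worth reviewing
# before an export leaves the organization.
REVIEW_COLUMNS = {"prompt", "answer_evidence", "competitor_context", "action_notes"}

def columns_to_review(csv_text):
    """Return header columns in a CSV export that warrant manual review."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader, [])
    return [col for col in header if col.strip().lower() in REVIEW_COLUMNS]
```

Running this against an export before attaching it to an email gives a quick checklist of fields to open and read first.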
Support
Send suspected security issues, account access concerns, or data handling questions to hello@prompts-gpt.com.