How are teams safely using AI tools with sensitive customer/financial data?

We’ve been exploring AI tools (ChatGPT, copilots, automation tools, etc.) to speed up reporting, support-ticket handling, and ops work.

But working with customer and financial data makes things tricky from a privacy/compliance standpoint.


Before letting teams use any AI tool, I’ve started asking questions like:

• Is our data used to train their models?
• Where is the data stored/processed?
• Do they retain prompts or logs?
• Can we anonymize/redact sensitive info first? (There’s a rough sketch of what I mean after this list.)
• Do they offer private or enterprise environments?
• Are audit logs and access controls available?
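
On the anonymize/redact point, here’s the rough shape of what we’ve been prototyping: strip obvious PII from a prompt before it ever leaves our network. This is a minimal Python sketch with illustrative regexes of my own, not a vetted DLP solution, and the patterns will certainly miss things:

```python
import re

# Illustrative patterns only -- a real deployment would rely on a proper
# DLP/tokenization service rather than hand-rolled regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-like digit runs
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious PII with typed placeholders before a prompt goes out."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    ticket = ("Refund request from jane.doe@example.com, "
              "card 4111 1111 1111 1111, callback 555-123-4567.")
    print(redact(ticket))
    # -> Refund request from [EMAIL], card [CARD], callback [PHONE].
```

Even a gate this crude has been useful as a forcing function: if a workflow can’t run on the redacted text, that’s usually a sign the tool shouldn’t see the raw data at all.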

Without clear answers, it feels risky to allow usage.


Curious how others here handle this:

• Are AI tools allowed, restricted, or blocked in your workflows?
• Do you have a policy or checklist?
• Any safe/enterprise setups that worked well?

Would love to learn what’s working for other Square teams.
