Openai/692c4a6d-a0b8-8002-af18-05564d5876c3



This conversation was exported from ChatGPT[1]

gpt-5-1

User: Can you tell me if there are any red flags in these new policies regarding AI use in the Yuh App

Can you tell me if there are any red flags in these new policies regarding AI use in the Yuh App

Assistant: Below is a clear and direct assessment of red flags specifically related to AI use, based solely on the three documents…

Below is a clear and direct assessment of red flags specifically related to AI use, based solely on the three documents you provided:

  • Yuh App Terms & Conditions terms-and-condititions-yuh-app-…
  • Yuh Account Privacy Policy (Swissquote) yuh-account-privacy-policy-d-en…
  • Yuh App & Website Privacy Policy (Yuh Ltd.) yuh-app-privacy-policy-en-v3

Major AI-Related Red Flags

1. Your chatbot conversations may be reviewed by humans

The Yuh App Terms explicitly state that chatbot conversations may be stored and used for analysis, including manual human review, to improve services. terms-and-condititions-yuh-app-…

Why this matters: Human review is not standard for banking apps, where user conversations can contain sensitive financial intent, account-related details, and other confidential information.

2. Your data, including chatbot inputs, can be transferred outside Switzerland, including to the US

The Terms state that personal data sent to the chatbot can be transferred to jurisdictions without an equivalent level of data protection, explicitly naming the United States. terms-and-condititions-yuh-app-…

Both privacy policies confirm international transfers for many purposes, including analytics, payment processing, marketing, and more. yuh-account-privacy-policy-d-en… yuh-app-privacy-policy-en-v3

Why this matters: US-based providers may be subject to broad federal access rules (e.g., the CLOUD Act). Your banking-related conversations could therefore lose Swiss-level confidentiality.

3. You explicitly waive Swiss banking secrecy in several cases

The Swissquote privacy policy includes a broad banking secrecy waiver covering:

  • AI-powered tools
  • foreign third-party processors
  • card processors (e.g., Marqeta in the US) yuh-account-privacy-policy-d-en…

Why this matters: This is unusually broad for a Swiss financial institution. Your data is no longer protected under strict Swiss secrecy rules once it leaves Switzerland.

4. AI output is disclaimed heavily (“hallucinations,” no reliability, no liability)

The Terms emphasize that the chatbot may produce:

  • inaccurate
  • incomplete
  • misleading information, including hallucinations. terms-and-condititions-yuh-app-…

Why this matters: If you rely on chatbot responses for financial decisions, Yuh disclaims all responsibility for the consequences.

5. Chatbot data is used to train or improve models (not just to answer you)

The Terms state the data may be used to:

  • improve the chatbot
  • develop new products
  • improve the Yuh App terms-and-condititions-yuh-app-…

Why this matters: Your personal queries, habits, and metadata can feed internal AI training pipelines.

6. Profiling is explicitly permitted

Both privacy policies describe profiling, including assessing aspects of your personality, especially for marketing. yuh-account-privacy-policy-d-en… yuh-app-privacy-policy-en-v3

Why this matters: Your interactions—possibly including AI chatbot conversations—may influence personalized marketing, risk scoring, or eligibility assessments.

Moderate Red Flags

7. Data shared with many third parties, including Big Tech analytics providers

Yuh and Swissquote share data with:

  • Google (analytics, cookies)
  • Apple/Google Push services
  • Marqeta (US), Mastercard, payment partners
  • Other analytics tools yuh-app-privacy-policy-en-v3

Push notifications can expose sensitive data and pass through foreign servers. terms-and-condititions-yuh-app-…

8. No promise that AI data will stay encrypted end-to-end

While they say data is encrypted “to some extent,” they clearly acknowledge:

  • data may travel across foreign networks outside their control
  • even encrypted data may expose who is communicating yuh-account-privacy-policy-d-en… yuh-app-privacy-policy-en-v3

9. Exercising your rights may result in loss of service

If you object to certain processing (including AI-related profiling or data transfer), they may:

  • stop providing the service
  • terminate your access yuh-account-privacy-policy-d-en… yuh-app-privacy-policy-en-v3

Why this matters: There is effectively no meaningful opt-out for AI-related tracking or profiling.

Low-Severity (but Notable) AI-Related Concerns

10. The AI chatbot can influence your financial behaviour without accountability

They emphasize the chatbot’s information is:

  • not advice
  • not recommendations
  • not reliable terms-and-condititions-yuh-app-…

But since users may treat it as advice, this offloads all responsibility onto the user.

11. Conversations can be linked to your account identity

Because the chatbot is inside the banking app, conversations are necessarily tied to:

  • device identifiers
  • account data
  • customer profile terms-and-condititions-yuh-app-…

This is substantially different from using an anonymous chatbot.

Summary: The Biggest Red Flags

Highest-risk items:

  1. Human review of your chatbot conversations
  2. Data transfers outside Switzerland (incl. US)
  3. Waiver of Swiss banking secrecy
  4. Use of your data for training/improving the AI
  5. Profiling for marketing and behavioral analysis
  6. No reliable AI output, no liability

If you're concerned about privacy, confidentiality, or regulatory compliance, these are non-trivial red flags.

If you want, I can create a short risk-rating table, suggest a safe way to continue using the app, or draft questions to ask Yuh's Data Protection Officer.

References

  1. "Yuh AI policy red flags". ChatGPT. Retrieved 2025-11-30.