Government Warns Over DeepSeek: Unpacking Data Privacy and Cybersecurity Risks

Governments worldwide are raising alarms about the Chinese AI tool DeepSeek over data privacy and cybersecurity risks. Investigations have revealed extensive data collection and the potential for misuse, and the expected advisory against using DeepSeek underscores concerns about unregulated foreign data practices and their broader implications for national security.

Navigating Concerns Over DeepSeek's Data Tracking

In recent developments, government authorities have been scrutinizing the Chinese AI tool DeepSeek for potential data privacy and cyber-espionage risks. Research and investigations have raised alarms about the app's capacity to harvest extensive user data, prompting several nations, including Italy and Australia, to restrict the software.

Government Advisory and Investigations

Top government officials have indicated that an advisory cautioning users against employing DeepSeek is imminent, citing issues of data privacy and surveillance. The nodal cybersecurity agency, CERT-In, is conducting an in-depth probe to determine the potential risks posed to citizens. The inquiry examines the data the app collects, which falls into three broad categories:

  • User Prompts: This includes images, documents, and chat histories.
  • Automatically Collected Information: Device data, details on battery usage, metadata from other applications, cookie tracking, and keystroke information.
  • Other Sources: Data amassed from crowdsourced and publicly available resources.

A senior government official stated, "There are concerns over the usage of DeepSeek; we can’t use it like we use ChatGPT. We have to be careful," underscoring the need for vigilance when adopting such AI models.

Concerns Over Data Misuse and Cybersecurity

Cybersecurity experts have stressed the importance of awareness, reminding users that acquiring a service for free often means that users become the product. Prashant Mali, a cybersecurity advisor, elaborated on the broader implications of unregulated data practices:

  • Data Accessibility: The information collected may be shared among corporate networks and possibly accessed by Chinese law enforcement, with unknown implications.
  • Misinformation Risks: There is a real threat that AI-generated responses could be manipulated to influence political discourse.

Experts warn that this lack of accountability sets DeepSeek apart from other global AI tools, and that its apparent failure to adhere to international data protection norms poses a significant risk.

Global Reactions

Countries across the globe have taken measures in response to the concerns over DeepSeek. Notable responses include:

  • Australia: Citing privacy risks and malware issues, the country has banned DeepSeek on official devices.
  • Taiwan: Authorities have labeled the app as a significant "security risk."
  • South Korea: Various ministries and the police have raised national security alarms after the company failed to respond to inquiries about its data management practices.
  • Italy: An investigation into DeepSeek's R1 model has resulted in a ban on processing Italian users' data.

Future Implications for Hosting and Global Compliance

The situation remains fluid, with government bodies reassessing operational and hosting requirements for DeepSeek within India. Recent directives from the Ministry of Finance have already restricted the use of certain AI tools, including ChatGPT and DeepSeek, on official devices, highlighting broader wariness towards foreign data collection practices.

As nations grapple with the trade-off between technological advancement and cybersecurity, the cautionary stance towards DeepSeek serves as a pivotal reminder of the intricate balance needed to protect sensitive information in the age of AI.

Published At: Feb. 12, 2025, 7:27 a.m.
Original Source: Government may warn against DeepSeek over China’s data tracking risks (Author: Surabhi Agarwal and Himanshi Lohchab)
Note: This publication was rewritten using AI. The content was based on the original source linked above.