
DeepSeek ‘China-controlled’, can be used to cause harm: OpenAI
In a recent policy proposal, OpenAI, a leading artificial intelligence (AI) research organization, has labeled Chinese AI startup DeepSeek as “state-controlled” and “state-subsidised”. The proposal, submitted in response to US President Donald Trump’s ‘AI Action Plan’ initiative, raises concerns that DeepSeek’s AI models could be misused to cause harm.
OpenAI’s proposal, which aims to promote responsible AI development and deployment, argues that AI systems must be transparent, explainable, and accountable. In DeepSeek’s case, OpenAI contends that the Chinese state’s control over the company creates a significant risk that its models could be manipulated, with potentially serious consequences.
DeepSeek, a Hangzhou-based AI startup, has developed a range of AI-powered tools and services, including natural language processing (NLP) and computer vision models. While the company has gained significant traction in the global AI market, OpenAI’s proposal argues that the Chinese government’s involvement in the company raises serious concerns about the potential misuse of its technology.
According to OpenAI, China’s control over DeepSeek could lead to the manipulation of the company’s AI models to cause harm. This could include using the models to spread disinformation, compromise national security, or even engage in cyber attacks. In its proposal, OpenAI urges the US government to take action to prevent the misuse of DeepSeek’s technology, including imposing a ban on China-made AI models that violate user privacy and create security risks.
The US government has grown increasingly concerned about the risks associated with AI development and deployment. The ‘AI Action Plan’ itself stems from an executive order President Trump signed in January 2025, which directed the development of a national plan to sustain US leadership in AI; the White House subsequently invited public comment on the plan, and OpenAI’s policy proposal was submitted as part of that process.
OpenAI’s proposal builds on these principles, highlighting the need for greater transparency and accountability in the development and deployment of AI models. The proposal also calls for the establishment of a new framework for assessing the risks associated with AI development and deployment, including the potential for manipulation and misuse.
The proposal has sparked a heated debate about the role of China in the global AI market. While some experts argue that China’s involvement in the AI market is inevitable and potentially beneficial, others are concerned about the potential risks associated with Chinese AI companies.
In response to the proposal, DeepSeek has denied the allegations of manipulation or misuse of its technology. In a statement, the company emphasized its commitment to transparency and accountability, saying it has always followed the highest standards of ethics and integrity in developing and deploying its AI.
Despite the controversy surrounding OpenAI’s proposal, its concerns about DeepSeek are unlikely to go away. As AI plays an increasingly critical role in global affairs, the proposal underscores the case for transparency, accountability, and ethical standards in AI development, particularly where state-controlled companies are involved. Whether those concerns translate into policy, such as the ban OpenAI has urged, will depend on how the US government shapes its final AI Action Plan.