DeepSeek has quickly become one of the most talked-about artificial intelligence (AI) models, with its latest releases positioned as open-source rivals to OpenAI's frontier models. As excitement builds, so do concerns over security, privacy and regulatory scrutiny.
DeepSeek
DeepSeek refers to the large language models (LLMs) produced by a Chinese company of the same name, founded in 2023 by Liang Wenfeng. An LLM is a machine-learning model that has been pre-trained on a large corpus of data, which enables it to respond to user inputs with natural, human-like language.
Interest in DeepSeek LLM
DeepSeek published two new LLMs in quick succession: DeepSeek V3 in December 2024 and DeepSeek R1 in January 2025. The interest surrounding these models is two-fold. First, they are open source, meaning anyone can download and run them on a local machine. Second, they were reportedly trained using less powerful hardware, which was seen as a breakthrough in this space because it showed such models could be developed at a lower cost.
Differences between DeepSeek V3 and DeepSeek R1
DeepSeek V3 is an LLM that employs a technique called mixture-of-experts (MoE), which requires less compute power because it activates only the "experts" needed to respond to a given prompt. It also implements a new technique called multi-head latent attention (MLA), which significantly reduces memory usage and improves performance during training and inference (the process of generating a response from user input).
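To illustrate the idea behind MoE, below is a minimal, hypothetical sketch in Python (using PyTorch) of a top-1 routed MoE layer. This is not DeepSeek's actual implementation; the class name, expert count and routing scheme are illustrative only.

    import torch
    import torch.nn as nn

    class MoELayer(nn.Module):
        def __init__(self, dim, num_experts=8):
            super().__init__()
            # Each "expert" is an independent feed-forward network.
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
                for _ in range(num_experts)
            )
            # The router scores each token against every expert.
            self.router = nn.Linear(dim, num_experts)

        def forward(self, x):  # x: (num_tokens, dim)
            chosen = self.router(x).argmax(dim=-1)  # pick one expert per token
            out = torch.zeros_like(x)
            for i, expert in enumerate(self.experts):
                mask = chosen == i
                if mask.any():
                    # Only the selected expert runs for these tokens, which is
                    # why MoE needs less compute than a dense layer of equal size.
                    out[mask] = expert(x[mask])
            return out

Production MoE models typically route each token to several experts and add load-balancing objectives, but the compute saving comes from the same principle: most experts stay idle for any given token.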
In addition to MoE and MLA, DeepSeek R1 implements a multi-token prediction architecture first introduced by Meta. Instead of predicting just the next token each time the model is executed, it predicts the next two tokens in parallel.
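As a rough illustration of that idea (again, a hypothetical sketch rather than DeepSeek's actual design), a model can attach two output heads to the same hidden state, so a single forward pass scores both the next token and the one after it:

    import torch.nn as nn

    class TwoTokenHeads(nn.Module):
        def __init__(self, dim, vocab_size):
            super().__init__()
            self.first_head = nn.Linear(dim, vocab_size)   # scores token t+1
            self.second_head = nn.Linear(dim, vocab_size)  # scores token t+2

        def forward(self, hidden):  # hidden: (num_tokens, dim) from the transformer trunk
            # Both predictions are computed from the same hidden state in one pass.
            return self.first_head(hidden), self.second_head(hidden)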
DeepSeek R1 is an advanced LLM that utilises reasoning, including chain-of-thought (CoT), which reveals to the end user how the model reasons through each prompt. According to DeepSeek, the performance of its R1 model rivals that of OpenAI's o1 model.
Minimum requirements to run DeepSeek models locally
DeepSeek R1 has 671 billion parameters and requires multiple expensive high-end GPUs to run.
There are distilled versions of the model, ranging from 1.5 billion up to 70 billion parameters, that are able to run on consumer-grade hardware: the fewer parameters a model has, the fewer resources it requires.
The number of parameters also influences the quality of the model's responses. Most modern computers, including laptops with 8GB to 16GB of RAM, are capable of running distilled LLMs with seven billion or eight billion parameters.
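As an example, one common way to run a distilled model locally is via the Hugging Face transformers library. The model name below is one of the distilled R1 checkpoints DeepSeek has published on Hugging Face; the prompt and generation settings are illustrative.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" (which requires the accelerate package) places the
    # model on a GPU if one is available, otherwise on the CPU.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    inputs = tokenizer("Explain mixture-of-experts in one paragraph.",
                       return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))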
DeepSeek vs other LLMs
Benchmark testing conducted by DeepSeek showed its DeepSeek R1 model was on par with many of the existing models from OpenAI, Anthropic and Meta at the time of its release. Additionally, many of the companies in this space have not open-sourced their frontier LLMs, which gives DeepSeek a unique advantage.
Finally, its CoT output is verbose, revealing more of the nuances of how LLMs respond to prompts than other reasoning models do. The latest models from OpenAI (o3) and Google (Gemini 2.0 Flash Thinking) also reveal reasoning to the end user, though in a less verbose fashion.
Frontier model
A frontier model refers to the most advanced LLMs available, which include complex reasoning and problem-solving capabilities. Currently, OpenAI's o1 and o3 models, along with DeepSeek R1, are the only frontier models available.
Open source vs website
Deploying the open-source version of DeepSeek on a local system is likely safer than using DeepSeek's website or mobile applications, since a locally run model does not require an internet connection to function.
However, there are genuine privacy and security concerns about using DeepSeek, specifically through its website and mobile applications available on iOS and Android.
Concerns surrounding using DeepSeek’s website and mobile applications
DeepSeek's data collection practices are outlined in its privacy policy, which specifies the types of data collected when using its website or mobile applications. It is important to note that this data is stored on servers located in China, and the retention terms are unclear. Since DeepSeek operates in China, its terms of service are subject to Chinese law, meaning consumer privacy protections such as the European Union's General Data Protection Regulation and similar regulations elsewhere do not apply. If you choose to download DeepSeek models and run them locally instead, you face a lower risk to your data privacy.
DeepSeek’s ban
As of Feb 13, several countries have banned DeepSeek or are investigating it for a potential ban, including Italy, Taiwan, South Korea and Australia. Several US states, including Texas, New York and Virginia, have banned DeepSeek from government devices, as have several entities of the US federal government, including the Department of Defense, the Navy and Congress. This list is likely to grow in the coming weeks and months.
This article is contributed by the Tenable Security Response Team.