How to Install DeepSeek on Windows (Step by Step)

So, you’ve heard about the privacy concerns around DeepSeek’s online service and you want to run it locally instead. Great, because in this article we’ll show you two quick ways to use the DeepSeek R1 model locally, totally disconnected from the internet. By the end, you’ll know how to install DeepSeek on Windows. Let’s dive in.

Guide for Non-Technical Users

If you are a non-technical user, go to the LM Studio website and download LM Studio for your machine. For Mac and Linux there is only one option, but if you’re a Windows user, you need to choose the correct build. If you’re not sure which one, click the Start button on your taskbar, type msinfo32, and press Enter. On the screen that opens, look for the entry labeled System Type.

Now, if the System Type says ARM anywhere, you need the ARM64 version of LM Studio; if it doesn’t, you need the standard version. So, I need to download the ARM64 version. Once the download is done, open the file and install it. The installation is painless, so I’ll see you when it’s done. Make sure to check “Run LM Studio” before you exit the installer, then click Finish.

This is what you’ll see when you open LM Studio. Before you can start chatting, you need to select a model, and before you can select a model, you need to download one. To do that, click the search icon in the left sidebar. This lets you search for models on a site called Hugging Face, named after the warm, friendly hugging-face emoji, but we’re here on a mission. So, click the text box that says “Search for models on Hugging Face,” then type DeepSeek.

Okay, so in the results we have two DeepSeek R1 models: one says it’s distilled into Qwen 7B and the other says it’s distilled into Llama 8B. So, what do these things mean? And more importantly, which one should you choose? To answer those questions, here’s distillation in 30 seconds. You can’t realistically run the full DeepSeek R1 at home; it’s like owning a library of every book ever written. It’s just too big. But what you could do is summarize the most important books into smaller collections. Those collections would still be valuable, but they wouldn’t have all of the details of the original books. Those collections are the distilled models we’re choosing from, and 7B and 8B refer to the number of parameters in the model: 7 or 8 billion. More parameters mean more capability, but also more memory and processing power to run.

Back in LM Studio, the best choice is the one that runs well on your machine. I’d start with the Llama model, unless you need strong multilingual support, in which case I’d try Qwen. Now you can use LM Studio just like any other generative AI chat. I’ll start by asking a variant of one of my favorite AI questions, but before I do, I’m going to disconnect from my local network. Now let’s click send to confirm this is running locally. And just like DeepSeek R1 on the web, you get access to the AI’s chain-of-thought reasoning.

Unfortunately, this simplified model has been tricked by the question into thinking there are only two R’s in “cranberry” (there are three). Later on, we’ll compare this model against DeepSeek on the web using more practical questions.

Learn More: 7 Best AI Tools You Should Try in 2025

Guide for Technical Users

Now, if you’re a technical user, you might want to use Ollama instead. Go to the Ollama website, click the big download button, choose your operating system, and click download once more. After the download finishes, run the installer. This installer is also pretty simple, so I’ll see you when it’s done. Once the installation has finished, make sure the app is running; if it is, you’ll see the Ollama icon in your system tray.
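
You can also confirm the install from a terminal with these standard Ollama commands (ollama list will be empty until you pull your first model):

   ollama --version
   ollama list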

From here, open a Command Prompt on Windows: press the Start button (or Win + R), type cmd, and press Enter. With Ollama, you run models from the command line, but first you need to know which command to run. So, back on the Ollama website, click the “Search models” text box and click on DeepSeek R1, or type it in if it doesn’t show up. Once you scroll down, you can see all the distilled models you can run. We’ll use the 8-billion-parameter Llama model once again. Then go back to the command prompt and run the command from the model page; the download will take a few minutes, so I’ll see you in a bit.
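
For reference, at the time of writing that command looks like the line below; double-check the model page, since tags can change:

   ollama run deepseek-r1:8b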

Once the model has downloaded, you can use the command line as your chat interface. Let’s go back to my favorite question and see how the AI responds. Once again, we get the unfortunate answer of two R’s in “cranberry.” Notice how DeepSeek R1’s reasoning appears between <think> tags. The rest of the answer looks almost normal, but it contains a lot of Markdown formatting. As long as you can read Markdown, this is a pretty quick way to interact with your models.
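
As a rough illustration (not verbatim output; your response will differ), a reply in the terminal looks something like this:

   <think>
   The user asks how many R's are in "cranberry". Spelling it out: c-r-a-n-b-e-r-r-y...
   </think>
   There are **two** R's in "cranberry".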

And as a final point, you don’t need to be connected to Wi-Fi to run these commands either. For this last part of the tutorial, we’re going to compare our local DeepSeek R1 model on the left with the full-sized DeepSeek R1 on the right. Just to show what the full-size model can do: it correctly reasons that there are three R’s in “cranberry.” But let’s move on to some more practical questions.

We’ll start by asking both models to rewrite an email to be firm but friendly. We’ll click send on both, and after a few seconds we get an updated email from each. Both of these emails are good starts, but they could be firmer, so let’s ask the AIs to make the email even more authoritative and see if they succeed. Here’s the full model’s update, and here’s the distilled model’s. The full model escalated things to a new level, but both models provided great results.

Now let’s try a math problem. We can clear the original chat by clicking the comment box with the X on it in the top right. Let’s give them a twist on the classic “two trains leave the same station” problem. This is a fairly simple problem that most middle schoolers should be able to solve, so let’s see what they do. Both models work through the math correctly, but only the larger model lands on the correct final answer. Maybe math isn’t the distilled model’s strong suit, so let’s try a different kind of question: asking for advice. This one is about two younger brothers who are fighting until a vase gets broken; how should the oldest sibling respond? Here we start to see a shift in the quality of the responses. The larger model offers detailed step-by-step advice, with specific suggestions on what to say at each step. The smaller model’s answer, though, is just very high level. The larger model wins once again.

This next one is a logic riddle; think of it as a harder version of the classic three-hats problem. Five people each wear a red or blue hat, there are at most three hats of any one color, and each person can only see the hats of the people in front of them. In this scenario, only the last person can guess their hat correctly. Why is that? This riddle turned out to be an AI buster, because both models got it wrong: in fact, the fifth person can’t know their hat color at all, but that didn’t stop our local model from thinking about it for almost 19 minutes. ChatGPT also failed to solve this riddle, which is ironic, because I had ChatGPT create it.

For our last test, we’ll ask for some music recommendations: “Give me some music to check out that’s like Rise Against.” The full DeepSeek model gave us a great selection of artists and songs to check out, but our distilled model had some major hallucinations. The Killers don’t have a song called “Mr. Perfect,” and “All of the Above” is not a song by Blink-182. The only real song in its list is “Welcome to the Black Parade.”

So, what have we learned? This distillation might not be the right one for you, but if you do have a beefier machine, you can try a bigger model. Instead of just using the staff picks, try selecting one of the options further down; you can find larger models with 14 billion or even 32 billion parameters. And if this guide has taught you nothing else: experiment with different models to find the one that works for you.
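
If you went the Ollama route, the larger distills are pulled the same way. At the time of writing the tags look like this, but check the model page for the current list:

   ollama run deepseek-r1:14b
   ollama run deepseek-r1:32b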

As promised above, here is the complete step-by-step guide. Find below the quick installation guide for DeepSeek on Windows.

Installing DeepSeek on Windows (Step-by-Step)

To install DeepSeek on Windows, you can follow these general steps. Note that the specific installation process may vary depending on the version of DeepSeek and any updates to the software. Here’s a step-by-step guide based on common practices for installing AI models locally:

Step 1: System Requirements

Ensure your Windows system meets the requirements for running DeepSeek. This typically means a supported version of Windows, sufficient RAM, and a capable GPU if you plan to use one. As a rule of thumb, an 8-billion-parameter model in half precision needs roughly 16 GB of memory.
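
A quick way to check your installed RAM from a Command Prompt (a standard Windows command, nothing DeepSeek-specific):

   systeminfo | findstr /C:"Total Physical Memory"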

Step 2: Download DeepSeek

  1. Visit the Official Website: Go to the official DeepSeek website or the repository where the model is hosted (like GitHub or Hugging Face).
  2. Download the Model: Look for the download link for the Windows version of DeepSeek. This may be a zip file or an installer (a command-line alternative is sketched below).
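
For example, if the model is hosted on Hugging Face, one hypothetical way to fetch the distilled weights is with the huggingface_hub CLI. The repo ID below is the real DeepSeek-R1 Llama-8B distill; the local folder name is just an example:

   pip install -U huggingface_hub
   huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Llama-8B --local-dir DeepSeek-R1-8B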

Step 3: Install Dependencies

DeepSeek may require certain dependencies to be installed on your system. Common dependencies include:

  • Python: Make sure you have Python installed (preferably Python 3.9 or later, which recent releases of PyTorch and Transformers require).
  • Pip: Ensure that pip (Python package installer) is installed.
  • Other Libraries: You may need to install additional libraries. This can often be done via pip (a quick verification one-liner follows this list). For example:
   pip install torch transformers
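
After installing, you can verify that PyTorch imports correctly and whether it detects a GPU:

   python -c "import torch; print(torch.__version__, torch.cuda.is_available())"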

Step 4: Install DeepSeek

  1. Extract Files: If you downloaded a zip file, extract it to a folder on your computer.
  2. Run the Installer: If there is an installer, run it and follow the on-screen instructions. If it’s a script, you may need to run it from the command line.

Step 5: Configure Environment Variables (if necessary)

If DeepSeek requires specific environment variables, you may need to set those up:

  1. Right-click on “This PC” or “My Computer” and select “Properties.”
  2. Click on “Advanced system settings.”
  3. Click on “Environment Variables.”
  4. Add any necessary variables as specified in the DeepSeek documentation.
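
For example, if you wanted to move the Hugging Face model cache to another drive, you could set HF_HOME (a real variable read by Hugging Face tools; check the DeepSeek documentation for any variables it actually requires) from a Command Prompt:

   setx HF_HOME "D:\hf-cache"

Note that setx only takes effect in newly opened terminals.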

Step 6: Run DeepSeek

  1. Open Command Prompt: Press Win + R, type cmd, and hit Enter.
  2. Navigate to the DeepSeek Directory: Use the cd command to change to the directory where DeepSeek is installed.
   cd path\to\DeepSeek
  3. Start DeepSeek: Run the command to start DeepSeek. This will vary based on how the software is set up, but it could be something like the line below (a minimal example script is sketched after this list):
   python deepseek.py
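
There is no single official deepseek.py script, but as a hypothetical minimal sketch, here is what one could look like if you downloaded the distilled weights from Hugging Face and use the transformers library. The repo ID is the real Llama-8B distill; the file name and everything else are illustrative:

   # minimal_chat.py - a hypothetical example, not an official DeepSeek script
   import torch
   from transformers import AutoModelForCausalLM, AutoTokenizer

   MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Llama-8B"  # real Hugging Face repo ID

   # Load the tokenizer and model. bfloat16 halves memory vs. float32,
   # but an 8B model still needs roughly 16 GB of RAM or VRAM.
   device = "cuda" if torch.cuda.is_available() else "cpu"
   tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
   model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16).to(device)

   # Wrap one user message in the model's chat template and generate a reply.
   messages = [{"role": "user", "content": "How many R's are in cranberry?"}]
   inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device)
   output = model.generate(inputs, max_new_tokens=512)
   print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

Save it as minimal_chat.py and run python minimal_chat.py from the same folder; the first load can take a while.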

Step 7: Test the Installation

Once DeepSeek is running, you can test it by inputting some queries or commands to ensure it’s functioning correctly.

Additional Notes

  • Documentation: Always refer to the official documentation for DeepSeek for the most accurate and detailed installation instructions.
  • Community Support: If you encounter issues, consider checking forums or community support channels for help.

By following these steps, you should be able to install DeepSeek on your Windows machine successfully. If you have specific questions or run into issues, the documentation and community channels mentioned above are the best places to start.
