#1 2025-02-01 12:27:31

LeilaniPan
Member
Location: Brazil, Itajai
Registered: 2025-02-01
Posts: 31

How To Run DeepSeek Locally

People who want full control over data, security, and performance run LLMs locally.

DeepSeek R1 is an open-source LLM for conversational AI, coding, and analytical tasks that recently outperformed OpenAI's flagship reasoning model, o1, on several benchmarks.

You're in the right place if you want to get this model running locally.


How to run DeepSeek R1 using Ollama


What is Ollama?


Ollama runs AI models on your local machine. It takes the complexity out of AI model deployment by offering:


Pre-packaged model support: It supports many popular AI models, including DeepSeek R1.

Cross-platform compatibility: Works on macOS, Windows, and Linux.

Simplicity and performance: Minimal hassle, straightforward commands, and efficient resource use.


Why Ollama?


1. Easy Installation - Quick setup on several platforms.

2. Local Execution - Everything runs on your machine, ensuring full data privacy.

3. Effortless Model Switching - Pull different AI models as needed.


Download and Install Ollama


Visit Ollama's website for detailed installation instructions, or install directly via Homebrew on macOS:


brew install ollama


For Windows and Linux, follow the platform-specific steps provided on the Ollama website.
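On Linux, for instance, Ollama publishes an install script you can run directly (verify the current command on the official site before piping it to a shell):


curl -fsSL https://ollama.com/install.sh | sh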

Fetch DeepSeek R1


Next, pull the DeepSeek R1 model onto your machine:


ollama pull deepseek-r1


By default, this downloads the main DeepSeek R1 model (which is large). If you're interested in a specific distilled variant (e.g., 1.5B, 7B, 14B), just specify its tag:


ollama pull deepseek-r1:1.5b


Run Ollama serve


Do this in a separate terminal tab or a new terminal window:


ollama serve
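By default, the server listens on http://localhost:11434. As a quick sanity check (assuming the default port), you can ask the API to list the models you have installed:


curl http://localhost:11434/api/tags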


Start using DeepSeek R1


Once the server is running, you can interact with the model right from your terminal:


ollama run deepseek-r1


Or, to run the 1.5B distilled model:


ollama run deepseek-r1:1.5b


Or, to pass a prompt directly on the command line:


ollama run deepseek-r1:1.5 b "What is the current news on Rust programming language patterns?"


Here are a few example prompts to get you started:


Chat


What's the latest news on Rust programming language trends?


Coding


How do I write a regular expression for email validation?


Math


Factor this expression: 3x^2 + 5x - 2.


What is DeepSeek R1?


DeepSeek R1 is a state-of-the-art AI model built for developers. It excels at:


- Conversational AI - Natural, human-like dialogue.

- Code Assistance - Generating and refining code snippets.

- Problem-Solving - Tackling math, algorithmic challenges, and beyond.


Why it matters


Running DeepSeek R1 locally keeps your data private, as no information is sent to external servers.


At the same time, you'll enjoy faster responses and the freedom to integrate this AI model into any workflow without worrying about external dependencies.


For a more in-depth look at the model, its origins, and why it's exciting, check out our explainer post on DeepSeek R1.


A note on distilled models


DeepSeek's team has shown that reasoning patterns learned by large models can be distilled into smaller models.


This process fine-tunes a smaller "student" model using outputs (or "reasoning traces") from the larger "teacher" model, often yielding better performance than training a small model from scratch.


The DeepSeek-R1-Distill variants are smaller (1.5B, 7B, 8B, and so on) and optimized for developers who:


- Want lighter compute requirements, so they can run models on less powerful machines.

- Prefer faster responses, particularly for real-time coding assistance.

- Don't want to sacrifice too much performance or reasoning ability.


Practical usage tips


Command-line automation

Wrap your Ollama commands in shell scripts to automate repetitive tasks. For instance, you might create a script like the one sketched below:
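A minimal sketch (the script name deepseek-prompt.sh and the 1.5b tag are placeholders; adjust to your setup):


#!/usr/bin/env bash
# deepseek-prompt.sh - send all command-line arguments to the local model as a single prompt
ollama run deepseek-r1:1.5b "$*"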


Now you can fire off requests rapidly:
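# illustrative usage of the hypothetical script above
chmod +x deepseek-prompt.sh
./deepseek-prompt.sh "Summarize the differences between TCP and UDP."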


IDE integration and command-line tools


Many IDEs let you configure external tools or run tasks.


You can set up an action that prompts DeepSeek R1 for code generation or refactoring and inserts the returned snippet straight into your editor window, as in the sketch below.
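For instance, an external-tool entry could shell out to Ollama with the current file as context (here $1 stands for the file path your IDE passes in; the exact wiring depends on the IDE):


ollama run deepseek-r1:1.5b "Suggest a refactoring for this code: $(cat "$1")"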


Open-source tools like mods provide excellent interfaces to local and cloud-based LLMs.
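For example, once mods is configured to talk to your local Ollama endpoint (see the mods documentation; flags and configuration vary by version), you can pipe code straight through it:


cat main.go | mods "explain what this code does"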


FAQ


Q: Which version of DeepSeek R1 should I pick?


A: If you have a powerful GPU or CPU and need top-tier performance, use the main DeepSeek R1 model. If you're on limited hardware or prefer faster generation, choose a distilled variant (e.g., 1.5B, 14B).


Q: Can I run DeepSeek R1 in a Docker container or on a remote server?


A: Yes. As long as Ollama can be installed, you can run DeepSeek R1 in Docker, on cloud VMs, or on-prem servers.
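A minimal sketch using the official ollama/ollama image (the volume and container names are illustrative):


docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run deepseek-r1:1.5b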


Q: Is it possible to fine-tune DeepSeek R1 further?


A: Yes. Both the main and distilled models are licensed to permit modifications and derivative works. Be sure to check the license specifics for the Qwen- and Llama-based variants.


Q: Do these models support commercial use?


A: Yes. DeepSeek R1 series models are MIT-licensed, and the Qwen-distilled variants are under Apache 2.0 from their original base. For Llama-based variants, check the Llama license details. All are fairly permissive, but read the exact wording to verify your planned use.

