
How To Run DeepSeek Locally

People who want complete control over their data, security, and performance run LLMs locally.


DeepSeek R1 is an open-source LLM for conversational AI, coding, and problem-solving that recently outperformed OpenAI's flagship reasoning model, o1, on several benchmarks.

If you want to get this model running locally, you're in the right place.


How to run DeepSeek R1 using Ollama


What is Ollama?


Ollama runs AI models on your local machine. It streamlines the complexities of AI model deployment by offering:


Pre-packaged model support: It supports many popular AI models, including DeepSeek R1.

Cross-platform compatibility: Works on macOS, Windows, and Linux.

Simplicity and efficiency: Minimal fuss, simple commands, and efficient resource usage.


Why Ollama?


1. Easy Installation - Quick setup on multiple platforms.

2. Local Execution - Everything runs on your device, ensuring full data privacy.

3. Effortless Model Switching - Pull different AI models as needed.


Download and Install Ollama


Visit Ollama's website for detailed installation instructions, or install directly via Homebrew on macOS:


brew install ollama
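
If the installation succeeded, the ollama binary should now be on your PATH; a quick sanity check:

ollama --version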


For Windows and Linux, follow the platform-specific steps provided on the Ollama website.


Fetch DeepSeek R1


Next, pull the DeepSeek R1 model onto your machine:


ollama pull deepseek-r1


By default, this downloads the main DeepSeek R1 model (which is large). If you're interested in a specific distilled variant (e.g., 1.5B, 7B, 14B), just specify its tag, like:


ollama pull deepseek-r1:1.5b
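
Either way, you can confirm what has been downloaded with Ollama's built-in list command:

ollama list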


Run Ollama serve


Do this in a separate terminal tab or a new terminal window:

ollama serve
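
Besides powering the CLI, the server exposes a local HTTP API (on port 11434 by default), which is useful for scripting. A minimal sketch using curl, assuming you pulled the 1.5b tag (the prompt is just an example):

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'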

Start using DeepSeek R1


Once installed, you can interact with the model right from your terminal:


ollama run deepseek-r1


Or, to run the 1.5B distilled model:


ollama run deepseek-r1:1.5b


Or, to prompt the model directly:


ollama run deepseek-r1:1.5 b "What is the latest news on Rust programs language trends?"


Here are a few example prompts to get you started:


Chat


What's the latest news on Rust programming language trends?


Coding


How do I write a regular expression for email validation?

Math


Simplify this expression: 3x^2 + 5x - 2.


What is DeepSeek R1?

DeepSeek R1 is a state-of-the-art AI model built for developers. It excels at:


- Conversational AI - Natural, human-like dialogue.

- Code Assistance - Generating and refining code snippets.

- Problem-Solving - Tackling math, algorithmic challenges, and beyond.


Why it matters


Running DeepSeek R1 locally keeps your data private, as no information is sent to external servers.


At the same time, you'll enjoy faster responses and the freedom to integrate this AI model into any workflow without worrying about external dependencies.


For a more thorough look at the model, its origins, and why it's remarkable, check out our explainer post on DeepSeek R1.


A note on distilled models


DeepSeek's team has demonstrated that reasoning patterns learned by large models can be distilled into smaller models.


This process fine-tunes a smaller "student" model using outputs (or "reasoning traces") from the larger "teacher" model, often resulting in better performance than training a small model from scratch.


The DeepSeek-R1-Distill variants are smaller (1.5B, 7B, 8B, etc.) and optimized for developers who:


- Want lighter compute requirements, so they can run models on less powerful hardware.

- Prefer faster responses, especially for real-time coding help.

- Don't want to sacrifice too much performance or reasoning capability.


Practical usage tips


Command-line automation


Wrap your Ollama commands in shell scripts to automate repetitive tasks. For example, you might create a script like the one below.
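
A minimal sketch, assuming a file named ask.sh (the name, model tag, and argument handling are all illustrative):

#!/bin/bash
# ask.sh - send a one-shot prompt to a local DeepSeek R1 model via Ollama
# Usage: ./ask.sh "your prompt here"
MODEL="deepseek-r1:1.5b"   # swap in whichever tag you pulled
ollama run "$MODEL" "$*"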


Now you can fire off requests quickly:
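
Continuing the ask.sh sketch above:

chmod +x ask.sh
./ask.sh "Write a regex for email validation"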


IDE integration and command-line tools


Many IDEs let you configure external tools or run tasks.


You can set up an action that prompts DeepSeek R1 for code generation or refactoring and inserts the returned snippet directly into your editor window.
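
As a sketch, such an external-tool command could be as simple as shelling out to Ollama with the file's contents inlined via command substitution (the file name and prompt here are illustrative):

ollama run deepseek-r1:1.5b "Refactor this code and explain the changes: $(cat main.py)"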


Open-source tools like mods provide excellent interfaces to local and cloud-based LLMs.


FAQ


Q: Which version of DeepSeek R1 should I pick?


A: If you have a powerful GPU or CPU and need top-tier performance, use the main DeepSeek R1 model. If you're on limited hardware or prefer faster generation, choose a distilled variant (e.g., 1.5B, 14B).


Q: Can I run DeepSeek R1 in a Docker container or on a remote server?


A: Yes. As long as Ollama can be installed, you can run DeepSeek R1 in Docker, on cloud VMs, or on on-prem servers.
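
For example, with Ollama's official ollama/ollama image, a CPU-only setup looks roughly like this (GPU passthrough requires extra flags, and the 1.5b tag is just an example):

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run deepseek-r1:1.5b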


Q: Is it possible to fine-tune DeepSeek R1 further?


A: Yes. Both the main and distilled models are licensed to permit modifications and derivative works. Be sure to check the license specifics for the Qwen- and Llama-based variants.


Q: Do these models support commercial use?


A: Yes. DeepSeek R1 series models are MIT-licensed, and the Qwen-distilled variants are under Apache 2.0 from their original base. For Llama-based variants, check the Llama license details. All are fairly permissive, but read the exact wording to confirm your planned use.


