pho[to]rum

You are not logged in.

#1 2025-02-01 15:14:19

AnitraY90
New member
Location: Canada, Haney
Registered: 2025-02-01
Messages: 1
Website

Explained: Generative AI

People who want complete control over data, security, and performance run LLMs locally.

DeepSeek R1 is an open-source LLM for conversational AI, coding, and analytical tasks that recently outperformed OpenAI's flagship reasoning model, o1, on numerous benchmarks.

You're in the right place if you'd like to get this model running locally.


How to run DeepSeek R1 using Ollama


What is Ollama?


Ollama runs AI models on your local machine. It simplifies the complexities of AI model deployment by offering:


Pre-packaged model support: It supports many popular AI models, including DeepSeek R1.

Cross-platform compatibility: Works on macOS, Windows, and Linux.

Simplicity and performance: Minimal fuss, uncomplicated commands, and efficient resource use.


Why Ollama?


1. Easy Installation - Quick setup on multiple platforms.

2. Local Execution - Everything runs on your device, ensuring complete data privacy.

3. Effortless Model Switching - Pull different AI models as needed.


Download and Install Ollama


Visit Ollama's website for detailed installation instructions, or install directly through Homebrew on macOS:


brew install ollama


For Windows and Linux, follow the platform-specific steps provided on the Ollama website.


Fetch DeepSeek R1


Next, pull the DeepSeek R1 model onto your machine:


ollama pull deepseek-r1


By default, this downloads the main DeepSeek R1 model (which is large). If you're interested in a particular distilled variant (e.g., 1.5B, 7B, 14B), simply specify its tag, like:


ollama pull deepseek-r1:1.5b


Run Ollama serve


Run this in a separate terminal tab or a new terminal window:


ollama serve
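With the server running, you can sanity-check it by hitting Ollama's local REST API (port 11434 is the default; the `|| echo` keeps the command harmless if the server is down):

```shell
# Lists the locally installed models as JSON via the Ollama REST API;
# prints a fallback message instead of failing if the server is not up.
curl -s http://localhost:11434/api/tags || echo "Ollama server is not running"
```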


Start using DeepSeek R1


Once set up, you can interact with the model right from your terminal:


ollama run deepseek-r1


Or, to run the 1.5B distilled model:


ollama run deepseek-r1:1.5b


Or, to prompt the model directly:


ollama run deepseek-r1:1.5b "What is the latest news on Rust programming language trends?"


Here are a couple of example prompts to get you started:


Chat


What's the latest news on Rust programming language trends?


Coding


How do I write a regular expression for email validation?


Math


Simplify this expression: 3x^2 + 5x - 2.
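For reference, the expression factors cleanly, so a correct answer looks like:

```latex
3x^2 + 5x - 2 = (3x - 1)(x + 2)
```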


What is DeepSeek R1?


DeepSeek R1 is a state-of-the-art AI model built for developers. It excels at:


- Conversational AI - Natural, human-like dialogue.

- Code Assistance - Generating and refining code snippets.

- Problem-Solving - Tackling math, algorithmic challenges, and beyond.


Why it matters


Running DeepSeek R1 locally keeps your data private, as no information is sent to external servers.


At the same time, you'll enjoy faster responses and the freedom to integrate this AI model into any workflow without worrying about external dependencies.


For a more thorough look at the model, its origins, and why it's impressive, check out our explainer post on DeepSeek R1.


A note on distilled models


DeepSeek's team has demonstrated that reasoning patterns learned by large models can be distilled into smaller models.


This process fine-tunes a smaller "student" model using outputs (or "reasoning traces") from the larger "teacher" model, often resulting in better performance than training a small model from scratch.


The DeepSeek-R1-Distill variants are smaller (1.5B, 7B, 8B, etc.) and optimized for developers who:


- Want lighter compute requirements, so they can run models on less-powerful devices.

- Prefer faster responses, especially for real-time coding help.

- Don't want to sacrifice too much performance or reasoning capability.


Practical usage tips


Command-line automation


Wrap your Ollama commands in shell scripts to automate repetitive tasks. For instance, you might create a script like:
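As a minimal sketch, such a script can be a small shell function (the name `ask_deepseek` and the default tag are illustrative; it assumes Ollama is installed and the model has been pulled):

```shell
# ask_deepseek: wrap `ollama run` so a one-off prompt becomes a short command.
# Override the model per call, e.g. MODEL=deepseek-r1 ask_deepseek "..."
ask_deepseek() {
  local model="${MODEL:-deepseek-r1:1.5b}"
  if [ $# -eq 0 ]; then
    echo "usage: ask_deepseek \"prompt\"" >&2
    return 1
  fi
  ollama run "$model" "$@"
}
```

Source it from your shell profile and a query becomes `ask_deepseek "How do I write a regex for email validation?"`.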


Now you can fire off requests quickly:
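One lightweight way to do that is a shell alias (the name `ds` is arbitrary; it assumes the 1.5B tag has been pulled):

```shell
# Short alias for one-off queries against the distilled model.
alias ds='ollama run deepseek-r1:1.5b'
```

After adding it to your shell profile, a request is just `ds "Summarize this error message"`.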


IDE integration and command line tools


Many IDEs allow you to configure external tools or run tasks.


You can set up an action that prompts DeepSeek R1 for code generation or refactoring and inserts the returned snippet directly into your editor window.
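As a sketch, the external-tool side can be a small filter that reads the selected code on stdin and writes the model's answer to stdout, which is the shape most IDE tool hooks expect (the function name and prompt wording here are illustrative):

```shell
# refactor_selection: read code from stdin, ask DeepSeek R1 to refactor it,
# and print the result -- suitable as an IDE "external tool" command.
refactor_selection() {
  ollama run deepseek-r1:1.5b "Refactor this code and return only code: $(cat -)"
}
```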


Open-source tools like mods offer excellent interfaces to local and cloud-based LLMs.


FAQ


Q: Which version of DeepSeek R1 should I choose?


A: If you have a powerful GPU or CPU and need top-tier performance, use the main DeepSeek R1 model. If you're on limited hardware or prefer faster generation, pick a distilled variant (e.g., 1.5B, 14B).


Q: Can I run DeepSeek R1 in a Docker container or on a remote server?


A: Yes. As long as Ollama can be installed, you can run DeepSeek R1 in Docker, on cloud VMs, or on-prem servers.
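For example, with Ollama's official Docker image the setup looks roughly like this (the image name, volume, and port follow Ollama's published Docker instructions; the wrapper function name is illustrative, and wrapping the commands in a function means nothing runs until you call it):

```shell
# start_ollama_docker: start the Ollama server in a container (models are
# persisted in the "ollama" volume), then run DeepSeek R1 inside it.
start_ollama_docker() {
  docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
  docker exec -it ollama ollama run deepseek-r1
}
```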


Q: Is it possible to fine-tune DeepSeek R1 further?


A: Yes. Both the main and distilled models are licensed to allow modifications and derivative works. Be sure to check the license specifics for Qwen- and Llama-based variants.

Q: Do these models support commercial use?


A: Yes. DeepSeek R1 series models are MIT-licensed, and the Qwen-distilled variants are under Apache 2.0 from their original base. For Llama-based variants, check the Llama license details. All are relatively permissive, but read the exact wording to confirm your planned use.



Offline

 


 

Forum footer

Powered by PunBB
© Copyright 2002–2005 Rickard Andersson