From 5d0559ec8cef0ced64f2e7956054440822e9bcf2 Mon Sep 17 00:00:00 2001 From: Adeline Arnett Date: Sat, 31 May 2025 20:42:46 +0800 Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable To OpenAI's O1 Model' --- ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 2 ++ 1 file changed, 2 insertions(+) create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md new file mode 100644 index 0000000..dacb6d6 --- /dev/null +++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md @@ -0,0 +1,2 @@ +
DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning ability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+
DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. This base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several distilled versions of each. \ No newline at end of file