From fbc83ea1ac15a5a228975a530601fbc7495267ce Mon Sep 17 00:00:00 2001
From: Aundrea Whinham
Date: Sun, 9 Feb 2025 22:01:58 +0800
Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance
 Comparable To OpenAI's O1 Model'
---
 ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 2 ++
 1 file changed, 2 insertions(+)
 create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md

diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..d952a05
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,2 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve its reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500.
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.
\ No newline at end of file
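For readers unfamiliar with GRPO, the sketch below illustrates its core idea, the group-relative advantage: each sampled response to a prompt is scored against the mean and standard deviation of its own group of samples, which replaces the learned value function a PPO-style critic would provide. This is a minimal sketch of that one step, not DeepSeek's implementation; all names are illustrative.

```python
# Minimal sketch of GRPO's group-relative advantage computation.
# Assumes scalar rewards for a group of responses sampled for one prompt.
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each response's reward against its group's statistics.

    GRPO avoids a learned critic by using the group mean as the baseline:
    responses that score better than their siblings receive positive
    advantages, and worse ones receive negative advantages.
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    if sigma == 0.0:
        # Identical rewards carry no training signal for this group.
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

# Example: four sampled answers to one prompt, scored by a rule-based
# reward (e.g., 1.0 if the final answer is correct, else 0.0).
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```

These advantages would then weight a clipped policy-gradient update on the sampled tokens, analogous to PPO's objective but without training a separate value network.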