
[Adobe Stock]
A number of users expressed enthusiasm for the DeepSeek-R1 paper, with one calling it “a gem” for its insights on reinforcement learning. “RL, if applied correctly and at scale, can lead to some really powerful and interesting scaling and emergent properties,” the user wrote.
Although a significant portion of users voiced concerns over accuracy and censorship, only 13.9% of the user sentiment was negative, while 38.8% was positive and 47.3% neutral, according to Topview’s analysis of the data.
While the data could point to DeepSeek siphoning users away from ChatGPT, interest in OpenAI’s pioneering large language model family remains strong, reaching a recent peak in Google Trends data during the week of January 26, 2025.
Some users also pointed out that even the distilled versions of the model outperformed GPT-4o and Claude 3.5 Sonnet on math benchmarks.
Affordability was one of DeepSeek’s strongest advantages. The web interface offers up to 50 free messages per day, and the API is roughly 96% less expensive than OpenAI’s comparable o1 model.
After the debut of DeepSeek R1, OpenAI introduced its o3-mini family and, over the weekend, launched Deep Research, an agentic feature for performing complex online research.
A substantial number of the tweets noted that DeepSeek’s reasoning model, DeepSeek‑R1, “performs on par” with OpenAI’s o1 model on tasks such as solving math problems, writing code, and general reasoning, while many comments highlighted that DeepSeek‑R1 is released as open‑source software under an MIT license.
DeepSeek R1 is available in multiple model sizes (1.5B to 70B parameters), making it accessible for different computing environments, including laptops in some configurations. Several users in the data reported successful deployment on Apple M2 Ultra hardware. “DeepSeek R1 671B running on 2 M2 Ultras faster than reading speed,” reported one. Another user described running a DeepSeek RAG agent locally via Ollama. “What a huge moment for open source…Not only that, [their] fine-tuned smaller models like Llama 8B, [achieve] a similar or even better performance than GPT-4o or Claude 3.5 Sonnet! And you can run them locally now,” another wrote.
Several users were captivated by the DeepSeek model showing its chain of thought. One comment described it as a “human‑like internal monologue,” while another called it “fascinating,” adding that it “really reads like a human thinking out loud. Charming and strange.”
The international discourse on Twitter, with Turkish, Spanish, and German posts appearing in the analysis, underscores the model’s global impact.

[Google Trends data as of February 3]