Enterprises are under immense pressure to use Generative AI to deliver innovative solutions, extract insights from massive volumes of data, and stay ahead of the competition. At the same time, expectations and requirements around data privacy and security have never been higher.
To help organizations meet these competing needs using the latest in Generative AI models, Securiti’s Gencore AI has added support for DeepSeek R1 as well as other newer AI reasoning models.
Reasoning models such as DeepSeek R1 excel at generating accurate insights and carrying out complex tasks. Their impressive gains over previous generations of Generative AI come from reasoning: before answering, they devise a carefully planned response to a request and assess the quality of candidate responses. These models now top the charts in benchmark tests for complex tasks.
In this article, we explore the significance of reasoning in AI, the recent evolution of reasoning models, and the crucial steps enterprises should take to harness their capabilities securely.
The Reasoning Revolution Is Underway
Why Reasoning Matters
For AI-driven agents (e.g., tool-enhanced chatbots and AI workflows) to reliably assist in complex tasks, they must reason about tasks and data rather than merely provide direct responses. Reasoning has been recognized to offer key advantages, including:
- enhanced accuracy in carrying out complex tasks,
- better explanations for outcomes and recommendations,
- improved adaptability in dynamic or multi-step workflows.
Reliable Reasoning Was Difficult
Before the emergence of specialized reasoning models such as DeepSeek R1, achieving reliable AI reasoning required ad-hoc techniques or narrow, specially-trained models. For example:
1. Prompt Engineering & Iterative Processing
- Techniques like Chain of Thought, Tree of Thoughts, Best-of-N, and Self-Refine attempt to improve results by having the model “think” before responding as well as iteratively assess and refine its answers. However, these techniques are often brittle, going astray after a single bad decision, or costly, requiring a significant number of tokens to employ.
2. Supervised Fine-Tuning (SFT)
- SFT is effective for steering models toward reliable performance in narrow, specific tasks but can lead to a loss of generalization.
3. Granular Workflows
- Breaking tasks into smaller steps helps generative AI models perform better, but challenging decision points in the workflow still require improved reasoning at the model level.
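As a sketch of the first approach, here is a minimal Best-of-N loop. The `generate` and `score` functions below are deterministic stand-ins for real LLM calls (any API client would do); the point is the sample-then-assess pattern, and the extra calls per request illustrate why token cost grows with N.

```python
import itertools

# Stand-in for a real LLM sampling call; cycles through canned drafts so the
# example is deterministic. A real system would sample with temperature > 0.
_drafts = itertools.cycle([
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "France's capital might be Lyon.",  # a deliberately weaker draft
])

def generate(prompt: str) -> str:
    return next(_drafts)

def score(prompt: str, answer: str) -> float:
    # Stand-in for a self-assessment call (asking the model to grade a draft).
    # Toy heuristic: penalize hedging language.
    return 0.0 if "might" in answer else 1.0

def best_of_n(prompt: str, n: int = 4) -> str:
    """Best-of-N: sample n candidate answers, keep the highest-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

answer = best_of_n("What is the capital of France?", n=6)
```

Each request here costs six generations plus six assessments, which is exactly the token overhead the techniques above trade for quality.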
DeepSeek R1 & Other Reasoning Models Lower the Barrier to High-Quality Results
Reasoning models like DeepSeek R1 are explicitly trained to reason before responding. Because these models generate higher-quality reasoning by design, they require less specialized prompt engineering and may not require SFT, enabling enterprises to tackle larger, more complex tasks efficiently.
2025 Marks a Milestone for Reasoning Models
In 2025 so far, we have seen a surge in reasoning models that have pushed the boundaries of AI reasoning capabilities:
- DeepSeek R1 (January 20)
- Qwen 2.5-Max (January 29)
- o3-mini (January 31)
Among these, DeepSeek R1 has gained significant attention. More than simply being the first of this new wave of reasoning models, it offers:
- Superb Accuracy: With high-quality training data emphasizing reasoning, DeepSeek R1 consistently achieves top-tier performance comparable to OpenAI o1 (released in December 2024).
- Open-Source Weights: DeepSeek R1 is available as open-source weights under the permissive MIT License, which allows organizations to integrate and commercialize DeepSeek R1 with minimal restrictions.
- Variety of Model Sizes: DeepSeek R1 is available in a variety of distilled model sizes (1.5B, 7B, 8B, 14B, 32B, and 70B) to meet different performance-cost tradeoffs.
- Lower Cost of Development: Reports indicate DeepSeek R1 was developed at a fraction of the cost of comparable OpenAI models, leading experts to predict a flurry of highly advanced models with a variety of different licenses and resource requirements.
Strong Data+AI Security Is Critical for DeepSeek Use at Work
Enterprises can harness reasoning models like DeepSeek R1 without compromising security, and it starts with implementing an AI system that enforces strong Data+AI controls. In fact, research at Securiti has demonstrated that sensitive data can be retrieved from an insecure AI system in as few as two interactions. A secure AI system should be paired with a policy prohibiting the use of DeepSeek’s online chat interfaces, since reports have uncovered instances where online chat services (including DeepSeek’s) led to data leaks. Finally, a secure AI system can responsibly unlock the power of your data with safe, pre-trained versions of DeepSeek, allowing your organization to avoid training DeepSeek on sensitive data. This matters because trained models cannot reliably distinguish sensitive from non-sensitive data, and so can leak any data they were exposed to during training, in any context.
Gencore AI uniquely enables the safe use of DeepSeek in the enterprise. Specifically:
- Gencore AI understands and enforces entitlements: Because ingested data crosses data system boundaries, a single provider must handle both ingestion and retrieval to preserve entitlement information and enforce it. Gencore AI seamlessly integrates with Securiti’s Data Command Graph to understand users and their entitlements during ingestion and retrieval, enforcing them at its core.
- Gencore AI masks sensitive data during ingestion: Unlike guardrails that rely solely on retrieval-time logic to discover possible sensitive data, Gencore AI integrates with Securiti’s Sensitive Data Intelligence to discover and mask sensitive data during ingestion, guaranteeing security and privacy even for sensitive data types specific to your organization.
- Gencore AI’s LLM Firewall provides security tailored to your organization: Baked-in LLM guardrails are frequent targets of attack, provide generic safety, and do not understand your enterprise data. Gencore AI’s LLM Firewall integrates with Securiti’s Policy Engine to apply enterprise-grade policies to LLM prompts, LLM responses, and retrieved data that can be further tailored to your organization’s specific needs.
- Gencore AI ensures out-of-the-box ease and flexibility: AI security solutions must be customizable and user-friendly to ensure adoption. Gencore AI’s design philosophy emphasizes an intuitive user experience with options to expose granular controls needed for niche use cases.
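To make the ingestion-time masking idea concrete, here is a deliberately simplified sketch. The regex patterns and labels are illustrative assumptions only; a production system such as Securiti’s Sensitive Data Intelligence performs far richer detection, including organization-specific data types.

```python
import re

# Illustrative patterns only; real sensitive-data discovery goes far beyond
# regexes and covers org-specific data types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_sensitive(text: str) -> str:
    """Mask sensitive values at ingestion time, before text ever reaches the
    vector store, so retrieval can never surface the raw values."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

doc = "Contact Jane at jane.doe@example.com, SSN 123-45-6789."
masked = mask_sensitive(doc)
# masked: "Contact Jane at [EMAIL], SSN [SSN]."
```

The key design point is where the masking happens: applied at ingestion, the raw values never enter the retrieval index, so no retrieval-time guardrail failure can expose them.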
Securiti’s Gencore AI meets these criteria, and with its recent integration of reasoning models like DeepSeek R1, enterprises can expect unprecedented innovation with built-in security.