Gencore AI Customers Can Now Securely Use DeepSeek R1

Author

Michael Rinehart

VP of Artificial Intelligence at Securiti

Enterprises are under immense pressure to use Generative AI to deliver innovative solutions, extract insights from massive volumes of data, and stay ahead of the competition. At the same time, expectations and requirements around data privacy and security have never been higher.

To help organizations meet these competing needs using the latest in Generative AI models, Securiti’s Gencore AI has added support for DeepSeek R1 as well as other recent AI reasoning models.

Reasoning models such as DeepSeek R1 excel at generating accurate insights and carrying out complex tasks. Their impressive gains over previous generations of Generative AI come from reasoning itself, which allows them to devise a carefully planned response to a request and assess the quality of candidate responses. These models now top the charts in benchmark tests for complex tasks.

In this article, we explore the significance of reasoning in AI, the recent evolution of reasoning models, and the crucial steps enterprises should take to harness their capabilities securely.

The Reasoning Revolution Is Underway

Why Reasoning Matters

For AI-driven agents (e.g., tool-enhanced chatbots and AI workflows) to reliably assist in complex tasks, they must reason about tasks and data rather than merely provide direct responses. Reasoning offers key advantages, including:

  • enhanced accuracy in carrying out complex tasks,
  • better explanations for outcomes and recommendations,
  • improved adaptability in dynamic or multi-step workflows.

Reliable Reasoning Was Difficult

Before the emergence of specialized reasoning models such as DeepSeek R1, achieving reliable AI reasoning required ad hoc techniques or narrow, specially trained models. For example:

1. Prompt Engineering & Iterative Processing

  • Techniques like Chain of Thought, Tree of Thoughts, Best-of-N, and Self-Refine attempt to improve results by having the model “think” before responding as well as iteratively assess and refine its answers. However, these techniques are either brittle, going astray after a single bad decision, or costly, requiring a significant number of tokens to employ (a minimal sketch of this approach follows this list).

2. Supervised Fine-Tuning (SFT)

  • SFT is effective for steering models toward reliable performance in narrow, specific tasks but can lead to a loss of generalization.

3. Granular Workflows

  • Breaking tasks into smaller steps helps generative AI models perform better, but challenging decision points in the workflow still require improved reasoning at the model level.
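
To make the tradeoffs above concrete, the sketch below layers Chain of Thought and Best-of-N (a majority vote over sampled answers) on top of an ordinary, non-reasoning model. The generate callable, prompt wording, and "Answer:" convention are illustrative assumptions rather than a fixed recipe; note how every additional sampled path multiplies token usage.

```python
# A minimal sketch of Chain-of-Thought prompting with Best-of-N
# (majority-vote) selection. `generate` stands in for any LLM completion
# call (OpenAI-compatible SDK, local runtime, etc.); the prompt wording and
# "Answer:" convention are illustrative assumptions, not a fixed recipe.
from collections import Counter
from typing import Callable


def chain_of_thought(question: str, generate: Callable[[str], str]) -> str:
    # Ask the model to reason step by step before committing to an answer.
    prompt = (
        f"{question}\n\n"
        "Think through the problem step by step, then give your final answer "
        "on a new line starting with 'Answer:'."
    )
    return generate(prompt)


def best_of_n(question: str, generate: Callable[[str], str], n: int = 5) -> str:
    # Sample several independent reasoning paths and keep the most common
    # final answer. Note the cost: every extra sample multiplies token usage.
    finals = []
    for _ in range(n):
        reply = chain_of_thought(question, generate)
        finals.append(reply.rsplit("Answer:", 1)[-1].strip())
    return Counter(finals).most_common(1)[0][0]
```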

DeepSeek R1 & Other Reasoning Models Lower the Barrier to High-Quality Results

Reasoning models like DeepSeek R1 are explicitly trained to reason before responding. Because these models generate higher-quality reasoning by design, they require less specialized prompt engineering and may not require SFT, enabling enterprises to tackle larger, more complex tasks efficiently.
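
As a rough illustration of how little scaffolding such a model needs, the sketch below queries a DeepSeek R1 distilled model through an OpenAI-compatible endpoint and separates the reasoning trace from the final answer. The base_url, model tag, and the <think> tag convention are assumptions about a typical local deployment; adjust them to match yours.

```python
# A minimal sketch of querying a DeepSeek R1 distilled model through an
# OpenAI-compatible endpoint (here, a local runtime assumed to listen on
# localhost:11434). The base_url, model tag, and <think>...</think>
# convention for the reasoning trace are deployment assumptions.
import re

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="deepseek-r1:8b",  # assumed local model tag
    messages=[{
        "role": "user",
        "content": "List the key risks of exposing sensitive data to an LLM.",
    }],
)
text = response.choices[0].message.content

# Many local runtimes return R1's reasoning inside <think> tags ahead of the
# final answer; split the two so the trace can be logged or audited separately.
reasoning = "\n".join(re.findall(r"<think>(.*?)</think>", text, flags=re.DOTALL))
answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
print(answer)
```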

2025 Marked a Milestone for Reasoning Models

In 2025 so far, we have seen a surge of reasoning models that push the boundaries of AI capabilities:

  1. DeepSeek R1 (January 20)
  2. Qwen 2.5-Max (January 29)
  3. o3-mini (January 31)

Among these, DeepSeek R1 has gained significant attention. More than simply being the first of this new wave of reasoning models, it offers:

  1. Superb Accuracy: With high-quality training data emphasizing reasoning, DeepSeek R1 consistently achieves top-tier performance comparable to OpenAI o1 (released in December 2024).
  2. Open-Source Weights: DeepSeek R1 is available as open-source weights under the permissive MIT License, which allows organizations to integrate and commercialize DeepSeek R1 with minimal restrictions.
  3. Variety of Model Sizes: DeepSeek R1 is available in a variety of distilled model sizes (1.5B, 7B, 8B, 14B, 32B, and 70B) to meet different performance-cost tradeoffs (see the loading sketch after this list).
  4. Lower Cost of Development: Reports indicate DeepSeek R1 was developed at a fraction of the cost of comparable OpenAI models, leading experts to predict a flurry of highly advanced models with a variety of licenses and resource requirements.
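
As a sketch of how the size tradeoff plays out in practice, the snippet below loads one of the published distilled checkpoints with Hugging Face transformers. The repository ID and generation settings are assumptions to verify against your own hardware budget and deployment.

```python
# A minimal sketch of loading a distilled R1 checkpoint with Hugging Face
# transformers and picking the size that fits your hardware. The repo ID
# reflects a published DeepSeek-R1-Distill checkpoint at the time of writing;
# verify it (and your GPU memory budget) before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # smallest distilled size

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; places weights on GPU/CPU
)

messages = [{"role": "user", "content": "Explain why masking data before indexing matters."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```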

Strong Data+AI Security Is Critical for DeepSeek Use at Work

Enterprises can harness reasoning models like DeepSeek R1 without compromising security, and it starts with implementing an AI system that has strong Data+AI controls. In fact, research at Securiti has demonstrated that sensitive data can be retrieved from an insecure AI system in as few as two interactions. A secure AI system should also be coupled with a policy that disallows the use of DeepSeek’s online chat interfaces, since reports have uncovered instances where online chat services (including DeepSeek’s) led to data leaks. Finally, a secure AI system can responsibly unlock the power of your data with safe, pre-trained versions of DeepSeek, allowing your organization to avoid training DeepSeek on sensitive data. Trained models cannot reliably distinguish sensitive from non-sensitive data, and so can leak any data they were exposed to during training, in any context.

Gencore AI uniquely enables the safe use of DeepSeek in the enterprise. Specifically:

  • Gencore AI understands and enforces entitlements: Because ingested data crosses data system boundaries, a single provider for ingestion and retrieval is needed to preserve and enforce entitlement information. Gencore AI seamlessly integrates with Securiti’s Data Command Graph to understand users and their entitlements during both ingestion and retrieval, enforcing them at its core.
  • Gencore AI masks sensitive data during ingestion: Unlike guardrails that rely solely on retrieval-time logic to discover possible sensitive data, Gencore AI integrates with Securiti’s Sensitive Data Intelligence to discover and mask sensitive data during ingestion, including sensitive data types specific to your organization, to guarantee security and privacy (a simplified masking illustration follows this list).
  • Gencore AI’s LLM Firewall provides security tailored to your organization: Baked-in LLM guardrails are frequent targets of attack, provide generic safety, and do not understand your enterprise data. Gencore AI’s LLM Firewall integrates with Securiti’s Policy Engine to apply enterprise-grade policies to LLM prompts, LLM responses, and retrieved data that can be further tailored to your organization’s specific needs.
  • Gencore AI ensures out-of-the-box ease and flexibility: AI security solutions must be customizable and user-friendly to ensure adoption. Gencore AI’s design philosophy emphasizes an intuitive user experience with options to expose granular controls needed for niche use cases.
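
For intuition only, the sketch below shows the general idea of ingestion-time masking: obvious sensitive values are redacted before documents are chunked, embedded, or indexed, so raw values never reach the vector store or the LLM. The regex patterns are a crude stand-in for, not an implementation of, Securiti’s Sensitive Data Intelligence classification.

```python
# Illustrative sketch only: masking obvious sensitive values before documents
# are chunked and indexed, so the raw values never reach the vector store or
# the LLM. The patterns below are a simple stand-in for a real sensitive-data
# classification engine.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def mask_sensitive(text: str) -> str:
    # Replace each detected value with a labeled redaction token.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text


def ingest(documents: list[str]) -> list[str]:
    # Mask at ingestion time, before embedding or retrieval ever sees raw values.
    return [mask_sensitive(doc) for doc in documents]
```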

Securiti’s Gencore AI meets these criteria, and with its recent integration of reasoning models like DeepSeek R1, enterprises can expect unprecedented innovation with built-in security.
