Gencore AI Customers Can Now Securely Use DeepSeek R1

Author

Michael Rinehart

VP of Artificial Intelligence at Securiti


Enterprises are under immense pressure to use Generative AI to deliver innovative solutions, extract insights from massive volumes of data, and stay ahead of the competition. At the same time, expectations and requirements around data privacy and security have never been higher.

To help organizations meet these competing needs using the latest in Generative AI models, Securiti’s Gencore AI has added support for DeepSeek R1 as well as other newer AI reasoning models.

Reasoning models such as DeepSeek R1 excel at generating accurate insights and carrying out complex tasks. Their impressive gains over previous generations of Generative AI come from their use of reasoning, which lets them devise a carefully planned response to a request and assess the quality of candidate responses before answering. These models now top the charts in benchmark tests for complex tasks.

In this article, we explore the significance of reasoning in AI, the recent evolution of reasoning models, and the crucial steps enterprises should take to harness their capabilities securely.

The Reasoning Revolution Is Underway

Why Reasoning Matters

For AI-driven agents (e.g., tool-enhanced chatbots and AI workflows) to reliably assist in complex tasks, they must reason about tasks and data rather than merely provide direct responses. Reasoning has been recognized to offer key advantages, including:

  • enhanced accuracy in carrying out complex tasks,
  • better explanations for outcomes and recommendations,
  • improved adaptability in dynamic or multi-step workflows.

Reliable Reasoning Was Difficult

Before the emergence of specialized reasoning models such as DeepSeek R1, achieving reliable AI reasoning required ad-hoc techniques or narrow, specially-trained models. For example:

1. Prompt Engineering & Iterative Processing

  • Techniques like Chain of Thought, Tree of Thoughts, Best-of-N, and Self-Refine attempt to improve results by having the model “think” before responding and by iteratively assessing and refining its answers. However, these techniques are either brittle, going astray after a single bad decision, or expensive, requiring a significant number of tokens to employ (a minimal sketch of this pattern appears after this list).

2. Supervised Fine-Tuning (SFT)

  • SFT is effective for steering models toward reliable performance in narrow, specific tasks but can lead to a loss of generalization.

3. Granular Workflows

  • Breaking tasks into smaller steps helps generative AI models perform better, but challenging decision points in the workflow still require improved reasoning at the model level.
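
To make the cost and brittleness tradeoffs concrete, here is a minimal sketch of the Best-of-N pattern referenced above. The call_llm helper is a hypothetical placeholder for whatever completion API you use; it is not part of Gencore AI or DeepSeek.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real completion call to your own model or API."""
    raise NotImplementedError("Wire this up to your LLM endpoint.")


def best_of_n(task: str, n: int = 4) -> str:
    """Generate n candidate answers, have the model rate each one, and keep the best."""
    candidates = [call_llm(f"Think step by step, then answer:\n{task}") for _ in range(n)]
    scored = []
    for answer in candidates:
        rating = call_llm(
            "Rate the following answer from 1 (poor) to 10 (excellent). "
            f"Reply with only a number.\nTask: {task}\nAnswer: {answer}"
        )
        try:
            score = float(rating.strip())
        except ValueError:
            score = 0.0  # an unparseable rating is treated as a poor answer
        scored.append((score, answer))
    # The highest-rated candidate wins; every extra candidate and rating call costs tokens,
    # which is exactly the cost overhead noted above.
    return max(scored, key=lambda pair: pair[0])[1]
```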

DeepSeek R1 & Other Reasoning Models Lower the Barrier to High-Quality Results

Reasoning models like DeepSeek R1 are explicitly trained to reason before responding. Because these models generate higher-quality reasoning by design, they require less specialized prompt engineering and may not require SFT, enabling enterprises to tackle larger, more complex tasks efficiently.

2025 Marked a Milestone for Reasoning Models

So far in 2025, we have seen a surge of reasoning models pushing the boundaries of AI capabilities:

  1. DeepSeek R1 (January 20)
  2. Qwen 2.5-Max (January 29)
  3. o3-mini (January 31)

Among these, DeepSeek R1 has gained significant attention. More than simply being the first of this new wave of reasoning models, it offers:

  1. Superb Accuracy: With high-quality training data emphasizing reasoning, DeepSeek R1 consistently achieves top-tier performance comparable to OpenAI o1 (released in December 2024).
  2. Open-Source Weights: DeepSeek R1 is available as open-source weights under the permissive MIT License, which allows organizations to integrate and commercialize DeepSeek R1 with minimal restrictions.
  3. Variety of Model Sizes: DeepSeek R1 is available in a range of distilled model sizes (1.5B, 7B, 8B, 14B, 32B, and 70B) to meet different performance-cost tradeoffs (see the local-inference sketch after this list).
  4. Lower Cost of Development: Reports indicate DeepSeek R1 was developed at a fraction of the cost of comparable OpenAI models, leading experts to predict a flurry of highly advanced models with a variety of different licenses and resource requirements.
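
The distilled checkpoints are small enough to run locally. Below is a minimal sketch using Hugging Face transformers; the model id is an assumption based on the publicly released distilled weights, so verify the exact name (and hardware requirements) before use.

```python
from transformers import pipeline

# Assumed model id for the smallest distilled checkpoint; confirm the exact name on Hugging Face.
generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    device_map="auto",  # place weights on a GPU if one is available (requires accelerate)
)

messages = [
    {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}
]

# Reasoning models typically emit their chain of thought (often wrapped in <think> tags)
# before the final answer, so leave room for both.
output = generator(messages, max_new_tokens=512)
print(output[0]["generated_text"][-1]["content"])
```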

Strong Data+AI Security is Critical for DeepSeek Use at Work

Enterprises can harness reasoning models like DeepSeek R1 without compromising security, but doing so starts with an AI system built on strong Data+AI controls. In fact, research at Securiti has demonstrated that sensitive data can be retrieved from an insecure AI system in as few as two interactions. A secure AI system should also be paired with a policy prohibiting the use of DeepSeek’s online chat interfaces, since reports have uncovered instances where online chat services (including DeepSeek’s) led to data leaks. Finally, a secure AI system can responsibly unlock the power of your data with safe, pre-trained versions of DeepSeek, allowing your organization to avoid training DeepSeek on sensitive data. Trained models cannot reliably distinguish sensitive from non-sensitive data, so they can leak any data they were exposed to during training in any context.

Gencore AI uniquely enables the safe use of DeepSeek in the enterprise. Specifically:

  • Gencore AI understands and enforces entitlements: Because ingested data crosses data-system boundaries, a single provider for both ingestion and retrieval is needed to preserve and enforce entitlement information. Gencore AI seamlessly integrates with Securiti’s Data Command Graph to understand users and their entitlements during ingestion and retrieval, enforcing them at its core.
  • Gencore AI masks sensitive data during ingestion: Unlike guardrails that rely solely on retrieval-time logic to discover possible sensitive data, Gencore AI integrates with Securiti’s Sensitive Data Intelligence to discover and mask sensitive data during ingestion, safeguarding security and privacy even for sensitive data types specific to your organization (a generic sketch of ingestion-time masking follows this list).
  • Gencore AI’s LLM Firewall provides security tailored to your organization: Baked-in LLM guardrails are frequent targets of attack, provide generic safety, and do not understand your enterprise data. Gencore AI’s LLM Firewall integrates with Securiti’s Policy Engine to apply enterprise-grade policies to LLM prompts, LLM responses, and retrieved data that can be further tailored to your organization’s specific needs.
  • Gencore AI ensures out-of-the-box ease and flexibility: AI security solutions must be customizable and user-friendly to ensure adoption. Gencore AI’s design philosophy emphasizes an intuitive user experience with options to expose granular controls needed for niche use cases.
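
For illustration only, the sketch below shows the general idea of ingestion-time masking: sensitive values are replaced before documents ever reach an index or a model. It is a deliberately simple regex-based example, not a representation of Securiti’s Sensitive Data Intelligence, which uses far richer detection.

```python
import re

# Simple illustrative patterns; real deployments use much broader and more accurate detectors.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace detected sensitive values with typed placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def ingest(documents: list[str]) -> list[str]:
    """Mask every document before it is chunked, embedded, or indexed."""
    return [mask_sensitive(doc) for doc in documents]

print(ingest(["Contact jane.doe@example.com, SSN 123-45-6789."]))
# -> ['Contact [EMAIL], SSN [SSN].']
```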

Securiti’s Gencore AI meets these criteria, and with its recent integration of reasoning models like DeepSeek R1, enterprises can expect unprecedented innovation with built-in security.
