SQL Server 2025 AI Features: What's New for Data Engineers

March 21, 2026

SQL Server 2025 brings AI capabilities directly into the database engine without sacrificing the performance characteristics that make it the backbone of enterprise systems. Here's a look at what's genuinely useful for day-to-day data engineering work.

Native Vector Search

The headline feature: store, index, and query vector embeddings directly in relational tables. No separate vector database needed.

CREATE TABLE Products (
    ProductId INT PRIMARY KEY IDENTITY,
    Name NVARCHAR(200) NOT NULL,
    Description NVARCHAR(MAX),
    Price DECIMAL(10, 2),
    Category NVARCHAR(100),
    DescriptionEmbedding VECTOR(1536),
    CreatedAt DATETIME2 DEFAULT GETUTCDATE()
);

CREATE VECTOR INDEX IX_Products_Embedding
ON Products(DescriptionEmbedding)
WITH (METRIC = 'COSINE', LISTS = 100);
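Before anything can be searched, embeddings have to land in the VECTOR column. A minimal sketch of the conversion, assuming the JSON-array-string-to-VECTOR cast works as in the preview documentation (the 3-dimensional vector is shortened for readability; against the Products table above you would cast to VECTOR(1536) with 1,536 floats from the embedding model):

-- Embeddings typically arrive from the application as a JSON array string;
-- CAST converts that string into the native VECTOR type.
DECLARE @v VECTOR(3) = CAST('[0.10, -0.25, 0.93]' AS VECTOR(3));
SELECT @v AS EmbeddingSample;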

Querying combines vector similarity with standard SQL filters:

DECLARE @searchVector VECTOR(1536) = (
    SELECT DescriptionEmbedding FROM Products WHERE ProductId = 42
);

SELECT TOP 10 ProductId, Name, Price,
    VECTOR_DISTANCE('cosine', DescriptionEmbedding, @searchVector) AS Distance
FROM Products
WHERE Category = 'Electronics'
ORDER BY Distance; -- cosine distance: lower = more similar

Performance is solid for datasets up to roughly 5 million vectors. A similarity search over an 800K-product catalog returns results in under 50 ms with the vector index, comparable to dedicated vector databases.

In-Database AI Model Invocation

sp_invoke_external_model lets you call Azure OpenAI directly from T-SQL. Best suited for batch processing, not real-time paths:

-- Generate embeddings for new products in batches
WITH NewProducts AS (
    SELECT TOP 100 ProductId, Description
    FROM Products WHERE DescriptionEmbedding IS NULL
    ORDER BY CreatedAt DESC
)
UPDATE p
SET p.DescriptionEmbedding = ai.embedding
FROM Products p
INNER JOIN NewProducts np ON p.ProductId = np.ProductId
CROSS APPLY sp_invoke_external_model(
    @model_name = 'text-embedding-ada-002',
    @input = np.Description,
    @provider = 'AzureOpenAI',
    @endpoint = 'https://myinstance.openai.azure.com/'
) AS ai(embedding VECTOR(1536));

Use this in scheduled batch jobs, not row-by-row triggers.
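One way to schedule that batch is a SQL Server Agent job. A hedged sketch, assuming the batched UPDATE above has been wrapped in a hypothetical stored procedure named dbo.RefreshProductEmbeddings (all job, step, and schedule names here are made up for illustration):

-- Create a nightly Agent job that runs the embedding batch.
EXEC msdb.dbo.sp_add_job
    @job_name = N'RefreshProductEmbeddings';

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'RefreshProductEmbeddings',
    @step_name = N'Generate missing embeddings',
    @subsystem = N'TSQL',
    @database_name = N'YourDatabase',
    @command = N'EXEC dbo.RefreshProductEmbeddings;';

EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'RefreshProductEmbeddings',
    @name = N'Nightly',
    @freq_type = 4,               -- daily
    @freq_interval = 1,           -- every 1 day
    @active_start_time = 020000;  -- 02:00

EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'RefreshProductEmbeddings';

Running the loop off-hours keeps the external model calls out of user-facing latency paths, which is the whole point of batching.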

Performance Improvements

Beyond AI features, several improvements matter for daily workloads:

  • Optimized Locking — lock-free reads for read-committed isolation; 25% throughput improvement on write-heavy tables with zero code changes
  • Parameter-Sensitive Plan Optimization — the optimizer maintains multiple plans for the same query based on parameter distribution, reducing parameter sniffing issues
  • DOP Feedback — the engine learns optimal degree of parallelism for recurring queries automatically
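Optimized locking and DOP feedback are opt-in on upgraded databases. A sketch of the enablement steps, assuming the SQL Server 2022-era switches carry forward unchanged:

-- Optimized locking requires accelerated database recovery (ADR).
ALTER DATABASE YourDatabase SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE YourDatabase SET OPTIMIZED_LOCKING = ON;

-- DOP feedback is a database-scoped configuration.
USE YourDatabase;
ALTER DATABASE SCOPED CONFIGURATION SET DOP_FEEDBACK = ON;

Parameter-sensitive plan optimization, by contrast, activates automatically once the database runs at a recent compatibility level.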

Migration Path

Set compatibility level 170 to enable new features, but test in staging first using Query Store:

ALTER DATABASE YourDatabase SET QUERY_STORE = ON;
-- Capture baseline, then upgrade:
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 170;
-- Compare plans for regressions
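To make that comparison concrete, Query Store's catalog views expose per-plan runtime stats. A sketch of a regression hunt (the ordering and columns chosen here are illustrative, not a prescribed methodology):

-- Surface the slowest plans by average duration after the upgrade;
-- compare against the pre-upgrade baseline captured earlier.
SELECT
    q.query_id,
    qt.query_sql_text,
    p.plan_id,
    rs.avg_duration,
    rs.last_execution_time
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;

If a query regressed, you can pin its pre-upgrade plan with sp_query_store_force_plan while you investigate.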

Licensing note: Vector search and AI features are included in Enterprise and Developer editions. Standard edition has vector index size limitations.

Key Takeaways

  • Vector search is production-ready for datasets under 5M vectors — eliminates the need for a separate vector database in most enterprise scenarios.
  • In-database AI invocation works best for batch embedding generation and enrichment jobs.
  • Performance improvements in locking, parameter sensitivity, and DOP feedback deliver measurable gains without code changes.
  • Upgrade carefully — use Query Store as your safety net when changing compatibility levels.
Ajit Gangurde

Software Engineer II at Microsoft | 15+ years in .NET & Azure