Deploying an Azure OpenAI Model and Connecting It to Progress Agentic RAG

Summary Flow

  1. Create Azure account and subscription
  2. Create Azure OpenAI resource
  3. Open Microsoft Foundry from the resource overview
  4. Deploy a model via the Model Catalog
  5. Retrieve API key and endpoint from Foundry UI
  6. Configure Azure OpenAI in Progress Agentic RAG

1. Azure Account, Subscription, and Access (Prerequisite)

Before deploying a model, you must have:

  • An active Azure account
  • An Azure subscription with billing enabled
  • Access to Azure OpenAI Service

Depending on your tenant and region, Azure OpenAI access may require approval.

Outcome

  • A subscription capable of hosting Azure OpenAI resources

2. Create an Azure OpenAI Resource

This step is completed in the Azure Portal.

Steps

  1. Create a new Azure OpenAI resource
  2. Specify:
    • Subscription
    • Resource Group
    • Region
    • Pricing tier
  3. Complete resource creation

The pricing tier determines billing and usage limits and must be selected at creation time.

Outcome

  • A named Azure OpenAI resource that will host your model deployments
  • This resource will later supply the endpoint URL
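If you prefer working from a terminal, the same resource can be created with the Azure CLI. This is a sketch only: the resource name, resource group, and region are placeholders, and it assumes your subscription has already been approved for Azure OpenAI access.

```shell
# Requires a signed-in Azure CLI session (az login).
# All names and the region below are placeholders -- substitute your own values.
az cognitiveservices account create \
  --name my-openai-resource \
  --resource-group my-rg \
  --location eastus \
  --kind OpenAI \
  --sku S0
```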

3. Access Microsoft Foundry from the Azure OpenAI Resource

Once the Azure OpenAI resource is created, model deployment and configuration occur in Microsoft Foundry.

Steps

  1. Open the Azure OpenAI resource in the Azure Portal
  2. From the Overview page:
    • Scroll to Explore and deploy
    • Select the option to open the Foundry experience

This routes you to the Foundry instance associated with your specific Azure OpenAI resource.

Outcome

  • You are now operating inside Foundry, scoped to your Azure OpenAI resource

4. Deploy an LLM Using the Foundry Model Catalog

All model deployment configuration happens in Foundry.

Steps

  1. Open the Model Catalog from the left-hand navigation
  2. Select the desired model (for example: gpt-4o-mini)
  3. Configure the deployment:
    • Deployment name
    • Deployment type
    • Model version
    • Model upgrade policy
    • Any additional deployment options
  4. Deploy the model

Verification

  • Open the Deployments tab in the left-hand navigation
  • Confirm the deployment appears and is active

Important distinctions

  • Model name: the actual model selected (e.g. gpt-4o-mini)
  • Deployment name: a custom identifier you define and must reference exactly later
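This distinction matters because Azure OpenAI routes REST requests by deployment name: the deployment name appears in the request URL, while the model name does not. A minimal sketch, where the endpoint, deployment name, and api-version value are illustrative placeholders:

```python
# Build the chat-completions URL for an Azure OpenAI deployment.
# Note: the *deployment name* appears in the path; the model name
# (e.g. gpt-4o-mini) never does.
def chat_completions_url(endpoint: str, deployment_name: str, api_version: str) -> str:
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/"
        f"{deployment_name}/chat/completions?api-version={api_version}"
    )

url = chat_completions_url(
    "https://my-resource.openai.azure.com",  # placeholder endpoint
    "my-gpt4o-mini-prod",                    # placeholder deployment name you defined
    "2024-10-21",                            # example api-version
)
print(url)
# → https://my-resource.openai.azure.com/openai/deployments/my-gpt4o-mini-prod/chat/completions?api-version=2024-10-21
```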

Outcome

  • A live Azure OpenAI model deployment ready to receive requests

5. Retrieve API Key and Endpoint URL from Foundry

Authentication details are retrieved directly from Foundry.

Steps

  1. Navigate to the Home page in Foundry
  2. Copy:
    • API key
    • Azure OpenAI endpoint URL

Outcome

  • Credentials required to authenticate external clients
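A common pattern is to export the copied values as environment variables so client code never hardcodes them. The variable names below are a convention, not anything required by Azure, and the values are placeholders for what you copied from the Foundry Home page:

```python
import os

# Placeholder values standing in for the key and endpoint copied from Foundry.
os.environ.setdefault("AZURE_OPENAI_API_KEY", "<your-api-key>")
os.environ.setdefault("AZURE_OPENAI_ENDPOINT", "https://my-resource.openai.azure.com")

def load_azure_openai_config() -> dict:
    """Read Azure OpenAI credentials from the environment, failing fast if any are missing."""
    config = {
        "api_key": os.environ.get("AZURE_OPENAI_API_KEY", ""),
        "endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT", ""),
    }
    missing = [name for name, value in config.items() if not value]
    if missing:
        raise RuntimeError(f"Missing Azure OpenAI settings: {missing}")
    return config

cfg = load_azure_openai_config()
print(cfg["endpoint"])
```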

6. Configure Azure OpenAI in Progress Agentic RAG

This step connects your deployed model to Progress Agentic RAG.

Required Configuration Values

Field             Source
API Key           Foundry (Home page)
Endpoint URL      Foundry (Home page)
Deployment Name   Foundry (defined in step 4)
Model Name        Foundry (selected in step 4)

Steps

  1. Select Azure OpenAI as the LLM provider
  2. Enable Use your own Azure OpenAI key
  3. Enter:
    • API Key
    • Endpoint URL
    • Deployment Name
    • Model Name
  4. Save the configuration
  5. Test answer generation

Outcome

  • Progress Agentic RAG uses your Azure-hosted LLM for answer generation
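To sanity-check the same four values outside Progress Agentic RAG, you can assemble a single chat-completions request directly against the deployment. This is a standard-library sketch; the endpoint, API key, deployment name, and api-version are placeholders you must replace with your own values:

```python
import json
import urllib.request

def build_chat_request(endpoint: str, api_key: str, deployment: str, question: str) -> urllib.request.Request:
    """Assemble an Azure OpenAI chat-completions request, authenticated via the api-key header."""
    url = (
        f"{endpoint.rstrip('/')}/openai/deployments/"
        f"{deployment}/chat/completions?api-version=2024-10-21"
    )
    body = json.dumps({"messages": [{"role": "user", "content": question}]}).encode("utf-8")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    return urllib.request.Request(url, data=body, headers=headers, method="POST")

req = build_chat_request(
    "https://my-resource.openai.azure.com",  # placeholder endpoint from Foundry
    "<your-api-key>",                        # placeholder key from Foundry
    "my-gpt4o-mini-prod",                    # placeholder deployment name
    "Reply with the word OK.",
)
print(req.full_url)

# Actually sending the request needs real credentials, so it is left commented out:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

A successful response confirms the key, endpoint, and deployment name match what you entered in Progress Agentic RAG.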