File Name: GenAI for .NET: Build LLM Apps with OpenAI and Ollama
Content Source: https://www.udemy.com/course/genai-for-net-build-llm-apps-with-openai-and-ollama
Genre / Category: Other Tutorials
File Size: 2.4 GB
Publisher: Mehmet Ozkaya
Updated and Published: September 13, 2025
In this hands-on course, you’ll learn to integrate OpenAI, Ollama, and .NET’s new Microsoft.Extensions.AI (MEAI) abstraction libraries to build a wide range of GenAI applications—from chatbots and semantic search to Retrieval-Augmented Generation (RAG) and image analysis.
Throughout the course, you’ll learn:
- .NET + AI Ecosystem: You’ll learn about Microsoft’s new abstraction libraries, such as Microsoft.Extensions.AI, which make it easy to integrate and switch between different LLM providers like OpenAI, Azure AI, Ollama, and even self-hosted models.
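To give a flavor of what that abstraction buys you, here is a minimal sketch (not taken from the course materials) of swapping providers behind a single `IChatClient`. It assumes the `Microsoft.Extensions.AI`, `Microsoft.Extensions.AI.OpenAI`, and `Microsoft.Extensions.AI.Ollama` NuGet packages, a local Ollama instance on its default port, and a key in the `OPENAI_API_KEY` environment variable; method names such as `GetResponseAsync` and `AsIChatClient` have shifted across MEAI preview releases, so check them against your installed version.

```csharp
using Microsoft.Extensions.AI;

bool useOllama = args.Length > 0 && args[0] == "--ollama";
string apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY") ?? "";

// The app only ever talks to IChatClient; the provider is a one-line swap.
IChatClient client = useOllama
    ? new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.2")
    : new OpenAI.Chat.ChatClient("gpt-5-mini", apiKey).AsIChatClient();

var response = await client.GetResponseAsync("Say hello in one short sentence.");
Console.WriteLine(response.Text);
```

The rest of the application code stays identical whichever provider is behind the interface, which is exactly the point of the MEAI abstraction.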
- Setting Up LLM Providers: Configure the LLM providers—such as GitHub Models, Ollama, and Azure AI Foundry—so you can choose the best fit for your use case.
- Text Completion LLMs w/ GitHub Models OpenAI gpt-5-mini and the Ollama llama3.2 model: You’ll learn how to use .NET to integrate LLMs and perform Classification, Summarization, Data Extraction, Anomaly Detection, Translation, and Sentiment Analysis use cases.
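Those use cases typically reduce to a single prompt-and-complete call. A hedged sketch of sentiment classification, assuming an already-configured MEAI `IChatClient` (as in the provider example above) and the `GetResponseAsync` extension from `Microsoft.Extensions.AI`:

```csharp
using Microsoft.Extensions.AI;

// `client` is any configured IChatClient (OpenAI, Ollama, etc.).
async Task<string> ClassifySentimentAsync(IChatClient client, string review)
{
    // Constrain the output so the result is easy to parse programmatically.
    var prompt = $"""
        Classify the sentiment of the following review as Positive, Negative, or Neutral.
        Reply with a single word only.

        Review: "{review}"
        """;
    var response = await client.GetResponseAsync(prompt);
    return response.Text.Trim();
}
```

Summarization, translation, and extraction follow the same shape; only the prompt changes.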
- Build an AI Chat App with .NET and the gpt-5-mini model: You’ll develop back-and-forth conversational messaging between the user and the LLM, where the AI maintains context across multiple turns. We will use chat streaming features when developing the AI Chat Application.
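Maintaining context means resending the accumulated message history on every turn. A minimal console chat loop with streaming, again as a sketch against the MEAI API surface (`ChatMessage`, `ChatRole`, `GetStreamingResponseAsync`), which may differ slightly in your package version:

```csharp
using System.Text;
using Microsoft.Extensions.AI;

// `client` is any configured IChatClient.
var history = new List<ChatMessage>
{
    new(ChatRole.System, "You are a helpful assistant."),
};

while (true)
{
    Console.Write("You: ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input)) break;
    history.Add(new(ChatRole.User, input));

    // Stream tokens to the console as they arrive, while also
    // accumulating the full reply so it can be added to the history.
    var reply = new StringBuilder();
    await foreach (var update in client.GetStreamingResponseAsync(history))
    {
        Console.Write(update.Text);
        reply.Append(update.Text);
    }
    Console.WriteLine();
    history.Add(new(ChatRole.Assistant, reply.ToString()));
}
```

Because the whole `history` list is sent each turn, the model "remembers" earlier messages without any server-side session state.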
- Function Calling with .NET and the gpt-5-mini model: Develop a function that OpenAI’s GPT-5-mini can trigger. The model returns structured JSON specifying which .NET function to invoke, along with the arguments, for retrieving real-time data.
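With MEAI, the JSON round-trip (model emits a tool call, your code runs it, the result goes back to the model) can be automated by the `UseFunctionInvocation` middleware. A hedged sketch, assuming the same packages as above; the weather function here is a stub standing in for a real data source:

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;

// Stub standing in for a real-time data lookup; [Description] and the
// parameter names are what the model sees in the tool schema.
[Description("Gets the current weather for a city.")]
static string GetWeather(string city) => $"18 °C and cloudy in {city}";

IChatClient client = new OpenAI.Chat.ChatClient(
        "gpt-5-mini", Environment.GetEnvironmentVariable("OPENAI_API_KEY") ?? "")
    .AsIChatClient()
    .AsBuilder()
    .UseFunctionInvocation() // auto-executes tool calls the model requests
    .Build();

var options = new ChatOptions { Tools = [AIFunctionFactory.Create(GetWeather)] };
var response = await client.GetResponseAsync("What's the weather in Oslo?", options);
Console.WriteLine(response.Text);
```

The middleware loops until the model stops requesting tools, so the final `response.Text` is the natural-language answer that incorporates the function result.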
- .NET AI Vector Search using Vector Embeddings and a Vector Store: We’ll also cover vector search, a powerful feature that enables semantic search based on meaning—not keywords.
You’ll learn how to:
- Generate embeddings using OpenAI’s text-embedding-3-small or Ollama’s all-MiniLM embedding model,
- Store these in a vector database like Qdrant
- Query the vector store with the user’s query embedding to find the top matches by similarity
- Retrieve relevant data based on similarity searches—all in our .NET applications.
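The steps above can be sketched end-to-end with an in-memory "store" instead of Qdrant, to show the core idea. This assumes the `Microsoft.Extensions.AI.Ollama` package for the embedding generator, the `System.Numerics.Tensors` package for cosine similarity, and a local Ollama with the all-minilm model pulled; the MEAI embedding API (`GenerateAsync` over an `IEnumerable<string>`) may vary by version:

```csharp
using Microsoft.Extensions.AI;
using System.Numerics.Tensors;

IEmbeddingGenerator<string, Embedding<float>> generator =
    new OllamaEmbeddingGenerator(new Uri("http://localhost:11434"), "all-minilm");

string[] docs =
[
    "Returns are accepted within 30 days of delivery.",
    "Standard shipping takes 3-5 business days.",
    "Contact support via the in-app chat.",
];

// Embed the documents once; a real app would persist these in Qdrant.
var docEmbeddings = await generator.GenerateAsync(docs);

// Embed the query, then rank documents by cosine similarity of vectors.
var query = await generator.GenerateAsync(["How long does delivery take?"]);
var best = docs
    .Select((text, i) => (text, score: TensorPrimitives.CosineSimilarity(
        query[0].Vector.Span, docEmbeddings[i].Vector.Span)))
    .MaxBy(x => x.score);

Console.WriteLine($"{best.text} (similarity {best.score:F2})");
```

Swapping the in-memory array for a vector database like Qdrant changes where the vectors live, not the search logic.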
RAG – Retrieval-Augmented Generation with .NET
You’ll learn how to combine vector search results with LLM responses to:
- Retrieve relevant data from your own sources
- Break documents into chunks → embed them → store in vector DB
- At query time, embed the question → retrieve relevant chunks → pass them along with the user’s query to the LLM
- Get accurate, context-specific answers from the LLM, grounded in your own internal data
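The query-time half of that pipeline can be sketched as follows, assuming an `IEmbeddingGenerator` and `IChatClient` configured as in the earlier examples and a pre-built list of embedded chunks (the `Chunk` record here is illustrative, not a library type):

```csharp
using Microsoft.Extensions.AI;
using System.Numerics.Tensors;

// Illustrative shape for a pre-embedded document chunk.
record Chunk(string Text, ReadOnlyMemory<float> Vector);

async Task<string> AskAsync(
    IChatClient chat,
    IEmbeddingGenerator<string, Embedding<float>> embedder,
    IReadOnlyList<Chunk> chunks,
    string question)
{
    // 1. Embed the question.
    var queryVec = (await embedder.GenerateAsync([question]))[0].Vector;

    // 2. Retrieve the most similar chunks (top 3 here, by cosine similarity).
    var context = chunks
        .OrderByDescending(c => TensorPrimitives.CosineSimilarity(
            queryVec.Span, c.Vector.Span))
        .Take(3)
        .Select(c => c.Text);

    // 3. Pass the retrieved context plus the question to the LLM.
    var prompt = $"""
        Answer using only the context below. If the answer is not in the
        context, say you don't know.

        Context:
        {string.Join("\n---\n", context)}

        Question: {question}
        """;
    return (await chat.GetResponseAsync(prompt)).Text;
}
```

Constraining the model to the retrieved context is what makes the answers specific to your internal data rather than the model’s general training.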
DOWNLOAD LINK: GenAI for .NET: Build LLM Apps with OpenAI and Ollama
GenAI_for_.NET_Build_LLM_Apps_with_OpenAI_and_Ollama.part1.rar – 1000.0 MB
GenAI_for_.NET_Build_LLM_Apps_with_OpenAI_and_Ollama.part2.rar – 1000.0 MB
GenAI_for_.NET_Build_LLM_Apps_with_OpenAI_and_Ollama.part3.rar – 476.2 MB
FILEAXA.COM is our main file storage service; we host all files there. You can join the FILEAXA.COM premium service to access all our files without any limitation and with fast download speeds.