
Generative AI–Powered FinOps Chatbot for Cloud Cost Optimization

 

End-to-end AI product built on AWS to enable intelligent cloud spend analysis, governance, and decision-making using RAG and LLMs


The Problem: Lack of Intelligent, Actionable Visibility into Cloud Spend

Enterprises struggle to understand and control cloud spend due to fragmented cost data, limited visibility, and manual analysis workflows. Existing tools lack natural language access, contextual insights, and governance-aware recommendations, making FinOps decisions slow, reactive, and inefficient.

My Role

  • Defined the AI product strategy by identifying FinOps pain points and translating them into an LLM-driven conversational experience for cloud cost analysis.

 

  • Designed the end-to-end system architecture for a secure, scalable, AWS-native AI solution using Amazon Lex, Amazon Bedrock, AWS CloudFormation, Amazon S3, Amazon Titan embeddings, Amazon OpenSearch Service, Amazon Q, and AWS Lambda.


  • Cleaned, normalized, and structured raw AWS cloud-spend data to create reliable, analysis-ready datasets aligned with FinOps use cases.
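That cleaning step can be sketched with pandas. The column names mirror the AWS Cost and Usage Report (CUR) schema, but the sample rows and the exact transformations are illustrative assumptions, not the project's actual pipeline:

```python
import io

import pandas as pd

# Hypothetical extract of a CUR export; real exports have hundreds of
# columns and are delivered to S3 rather than embedded inline.
raw_csv = """\
lineItem/UsageStartDate,product/ProductName,lineItem/UnblendedCost
2024-01-01T00:00:00Z,Amazon Elastic Compute Cloud,12.50
2024-01-01T00:00:00Z,Amazon Simple Storage Service,0.75
2024-01-02T00:00:00Z,Amazon Elastic Compute Cloud,12.50
2024-01-02T00:00:00Z,,0.00
"""

def normalize_cur(csv_text: str) -> pd.DataFrame:
    """Turn raw CUR line items into a daily cost-per-service table."""
    df = pd.read_csv(io.StringIO(csv_text))
    # Flatten the CUR's slash-delimited headers.
    df.columns = [c.split("/")[-1] for c in df.columns]
    df["UsageStartDate"] = pd.to_datetime(df["UsageStartDate"])
    # Drop line items with no service name; they are unusable for FinOps queries.
    df = df.dropna(subset=["ProductName"])
    # Aggregate to one row per day and service — the shape cost queries need.
    daily = (
        df.groupby([df["UsageStartDate"].dt.date, "ProductName"])["UnblendedCost"]
        .sum()
        .reset_index()
    )
    daily.columns = ["date", "service", "cost"]
    return daily
```

The output is a small, analysis-ready table that downstream retrieval can index per day and per service.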

 

  • Vectorized the curated datasets using embedding models to enable semantic search and retrieval for RAG-based LLM responses.
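A minimal sketch of that vectorization step, assuming Amazon Titan text embeddings via the Bedrock runtime API; the chunking heuristic and the sample record format are my own illustrative choices, not the project's exact code:

```python
import json

def chunk_records(records, max_chars=800):
    """Pack cost-summary lines into chunks small enough to embed well."""
    chunks, current = [], ""
    for line in records:
        if current and len(current) + len(line) + 1 > max_chars:
            chunks.append(current)
            current = ""
        current = f"{current}\n{line}" if current else line
    if current:
        chunks.append(current)
    return chunks

def embed_with_titan(text, client):
    # Amazon Titan text embeddings via the Bedrock runtime InvokeModel API.
    resp = client.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

if __name__ == "__main__":
    # Requires AWS credentials and Bedrock model access, so it is
    # guarded here rather than run unconditionally.
    import boto3
    client = boto3.client("bedrock-runtime")
    lines = ["2024-01-01 Amazon EC2 $12.50", "2024-01-01 Amazon S3 $0.75"]
    vectors = [embed_with_titan(c, client) for c in chunk_records(lines)]
```

The resulting vectors are what get written to the OpenSearch index for k-NN retrieval.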

 

  • Evaluated and tested multiple LLMs, optimizing for response quality, latency, and cost efficiency.
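One way to make that three-way trade-off explicit is a weighted score over measured metrics. The model names, numbers, and weights below are placeholders, not the project's actual evaluation results:

```python
# Illustrative-only metrics; a real evaluation would use held-out FinOps
# questions scored for quality, plus measured latency and per-token price.
candidates = {
    "model-a": {"quality": 0.86, "p95_latency_s": 2.4, "usd_per_1k_tokens": 0.008},
    "model-b": {"quality": 0.81, "p95_latency_s": 1.1, "usd_per_1k_tokens": 0.002},
    "model-c": {"quality": 0.74, "p95_latency_s": 0.6, "usd_per_1k_tokens": 0.0006},
}

def score(metrics, w_quality=0.6, w_latency=0.2, w_cost=0.2):
    # Higher is better; latency and cost enter as normalized penalties.
    return (
        w_quality * metrics["quality"]
        - w_latency * metrics["p95_latency_s"] / 10
        - w_cost * metrics["usd_per_1k_tokens"] * 100
    )

best = max(candidates, key=lambda name: score(candidates[name]))
```

With these placeholder weights the mid-tier model wins: it gives up a little quality for much lower latency and cost, which is the shape of decision the evaluation was meant to surface.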

 

  • Built a Lambda-based orchestration layer to connect the chatbot, vector store, and LLM inference workflows.
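That orchestration layer can be sketched as a Lambda handler wiring a Lex V2 event to retrieval and generation. The `retrieve` and `generate` callables stand in for the real OpenSearch and Bedrock clients, and the prompt wording is an assumption:

```python
def build_prompt(question, context_chunks):
    """Ground the model's answer in retrieved cost data (RAG)."""
    context = "\n---\n".join(context_chunks)
    return (
        "You are a FinOps assistant. Answer using ONLY the cost data below.\n"
        f"Cost data:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def close_response(intent_name, message):
    # Minimal Lex V2 "Close" response envelope.
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent_name, "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": message}],
    }

def lambda_handler(event, context, retrieve=None, generate=None):
    # retrieve/generate are injected so the handler can be exercised
    # without live OpenSearch or Bedrock calls; in production they wrap
    # the real clients.
    question = event["inputTranscript"]
    chunks = retrieve(question)                        # k-NN search over the vector store
    answer = generate(build_prompt(question, chunks))  # LLM inference
    return close_response(event["sessionState"]["intent"]["name"], answer)
```

Injecting the clients also keeps the handler unit-testable, which matters when the surrounding services are paid, rate-limited APIs.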

 

  • Deployed the solution using CloudFormation, enabling secure access, repeatable infrastructure provisioning, and a production-ready web UI.
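A trimmed CloudFormation fragment in the spirit of that deployment; the resource names and exact properties are placeholders rather than the project's real template:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Sketch of the chatbot's core resources (names are placeholders)
Resources:
  CostDataBucket:
    # Private bucket holding the curated cost datasets.
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
  OrchestratorRole:
    # Execution role for the Lambda orchestration layer.
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: {Service: lambda.amazonaws.com}
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
```

Keeping the stack in a template is what makes provisioning repeatable across accounts and environments.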


Impact & Metrics

  • Reduced manual analysis effort through natural-language cost queries.

  • Increased financial-governance visibility with explainable insights.

  • Projected to deliver a 30% reduction in cloud costs.

Key Learnings

  • Translating FinOps business problems into AI product requirements is as critical as model selection.

 

  • Learned to design RAG systems that balance accuracy, cost, and latency in enterprise environments.

 

  • Gained hands-on experience with LLM orchestration, embeddings, and prompt grounding using Amazon Bedrock.

 

  • Understood how AI system design, cloud infrastructure, and governance constraints must align for enterprise adoption.

 

  • Strengthened skills across AI product thinking, system design, and cloud architecture.
