Data Caching Strategies in High-Volume Applications Using Azure Redis and Serverless Computing
Abstract
Managing concurrent requests, guaranteeing low-latency responses, and preserving cost-effective scalability are persistent challenges for high-volume applications. This study examined the performance of four caching strategies (cache-aside, read-through, write-through, and write-behind) when used with Azure Redis Cache and serverless computing. As part of an exploratory study design, serverless applications were deployed on Azure Functions and subjected to simulated workloads of 10,000–100,000 requests per second. Key performance indicators, including response time, throughput, cache hit ratio, and operational cost, were monitored and analyzed. The findings showed that the read-through strategy offered the best trade-off between scalability and performance under high traffic, whereas cache-aside was the most economical option for moderate workloads. Write-through guaranteed strong data consistency but added latency and cost overhead, while write-behind achieved higher cache hit rates at the expense of delayed synchronization. The study concluded that, for high-volume applications, a hybrid strategy combining cache-aside and read-through could provide the best balance of cost-effectiveness, performance, and scalability.
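To make the cache-aside strategy discussed above concrete, the following is a minimal sketch of an Azure Functions HTTP handler in Python that checks Azure Redis Cache before falling back to a backing store. The host settings, key naming, TTL value, and the fetch_from_database helper are illustrative assumptions for this sketch, not details taken from the study's implementation.

```python
import json
import os

import azure.functions as func
import redis

# Hypothetical connection settings; a real deployment would point these
# at the provisioned Azure Redis Cache instance (SSL port 6380).
cache = redis.StrictRedis(
    host=os.environ.get("REDIS_HOST", "localhost"),
    port=6380,
    password=os.environ.get("REDIS_KEY"),
    ssl=True,
    decode_responses=True,
)

app = func.FunctionApp()


def fetch_from_database(item_id: str) -> dict:
    """Placeholder for the backing data store (an assumption, not from the paper)."""
    return {"id": item_id, "value": "loaded-from-db"}


@app.route(route="items/{item_id}", auth_level=func.AuthLevel.ANONYMOUS)
def get_item(req: func.HttpRequest) -> func.HttpResponse:
    item_id = req.route_params["item_id"]
    key = f"item:{item_id}"

    # Cache-aside: the application, not the cache, owns the lookup logic.
    cached = cache.get(key)
    if cached is not None:
        return func.HttpResponse(cached, mimetype="application/json")

    # Cache miss: load from the source of truth, then populate the cache
    # with a TTL so stale entries eventually expire.
    item = fetch_from_database(item_id)
    body = json.dumps(item)
    cache.set(key, body, ex=300)
    return func.HttpResponse(body, mimetype="application/json")
```

A read-through variant would move the miss-handling out of the function and into a caching layer that loads from the database on the caller's behalf; the hybrid approach the study recommends would, under this reading, route hot read paths through such a layer while cost-sensitive paths retain the application-managed pattern above.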