Transparent Peer Review By Scholar9

A Comparative Study on AI/ML Optimization Strategies within DevOps Pipelines Deployed on Serverless Architectures in AWS Cloud Platforms

Abstract

The application of Artificial Intelligence (AI) and Machine Learning (ML) in modern DevOps pipelines is a rapidly growing trend, with organizations seeking efficient, scalable, and cost-effective ways to integrate AI/ML models into production environments. AWS's serverless architecture, with cloud-native services such as AWS Lambda, Step Functions, and SageMaker, provides a flexible platform for deploying AI/ML workloads at scale. However, while the serverless paradigm offers considerable benefits in scalability and resource management, it also presents unique challenges, including cold start latency, constrained resource allocation, and limits on computational efficiency. This research presents a comparative analysis of AI/ML optimization strategies deployed within DevOps pipelines on AWS's serverless architectures. The aim is to identify and evaluate optimization strategies that enhance the performance of AI/ML models, mitigate these challenges, and improve the efficiency and cost-effectiveness of cloud-based DevOps workflows. This paper reviews optimization techniques such as hyperparameter tuning, model compression, pruning, batch inference, and parallel processing, and their impact on the performance of ML models deployed within AWS Lambda and SageMaker environments. The study involves the empirical evaluation of real-world use cases, providing insights into the trade-offs between model accuracy, resource consumption, and execution time. Key findings suggest that while AWS serverless platforms provide excellent scalability and ease of use, careful management of resources and optimization of workflows are essential to maximize their potential. Finally, this paper contributes to the field by proposing best-practice recommendations for optimizing AI/ML workflows in serverless environments and by outlining directions for future research.
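
For readers who want a concrete picture of one technique the abstract names, the following is a minimal sketch of launching a SageMaker hyperparameter tuning job with boto3. The job name, role ARN, container image, and S3 paths are illustrative placeholders, not values from the study.

    # Minimal sketch: a SageMaker hyperparameter tuning job via boto3.
    # All names, ARNs, images, and S3 URIs are hypothetical placeholders.
    import boto3

    sm = boto3.client("sagemaker")

    sm.create_hyper_parameter_tuning_job(
        HyperParameterTuningJobName="example-tuning-job",  # hypothetical
        HyperParameterTuningJobConfig={
            "Strategy": "Bayesian",
            "HyperParameterTuningJobObjective": {
                "Type": "Minimize",
                "MetricName": "validation:loss",
            },
            "ResourceLimits": {
                "MaxNumberOfTrainingJobs": 20,
                "MaxParallelTrainingJobs": 4,
            },
            "ParameterRanges": {
                "ContinuousParameterRanges": [
                    {"Name": "learning_rate",
                     "MinValue": "0.00001", "MaxValue": "0.01"}
                ]
            },
        },
        TrainingJobDefinition={
            "AlgorithmSpecification": {
                "TrainingImage": "<account>.dkr.ecr.<region>.amazonaws.com/example:latest",
                "TrainingInputMode": "File",
                "MetricDefinitions": [
                    {"Name": "validation:loss", "Regex": "val_loss=([0-9.]+)"}
                ],
            },
            "RoleArn": "arn:aws:iam::<account>:role/ExampleSageMakerRole",
            "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/output/"},
            "ResourceConfig": {"InstanceType": "ml.m5.xlarge",
                               "InstanceCount": 1, "VolumeSizeInGB": 30},
            "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
        },
    )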

Reviewer: Ramesh Krishna Mahimalur
Status: Review Request Accepted; Approved
Reviewed on: 21 Apr 2025, 09:53 AM

Relevance and Originality

The study tackles a highly relevant subject within contemporary cloud computing—specifically, the integration and optimization of AI/ML workloads in DevOps pipelines using AWS serverless infrastructure. It responds to a real and growing need in industry for scalable, cost-efficient, and high-performance AI/ML deployments. The focus on performance trade-offs and optimization strategies adds a novel layer to existing literature, especially within the context of AWS Lambda and SageMaker. By addressing operational efficiency and real-world deployment scenarios, the article makes a meaningful contribution to the evolving landscape of serverless AI, cloud-native DevOps, and ML operations.

Methodology

The research adopts an empirical approach, reviewing real-world use cases to evaluate multiple optimization strategies such as model pruning, batch inference, and hyperparameter tuning. This practical framework strengthens the study’s relevance and credibility. However, further clarity around the experimental design—such as dataset selection, benchmarking criteria, and performance baselines—would enhance the transparency and repeatability of the analysis. Still, the methodology is well-aligned with the study's objectives and offers a pragmatic lens through which readers can assess various ML performance tuning techniques in serverless environments. Keywords: empirical analysis, DevOps evaluation, optimization benchmarking.
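
To make the reviewed techniques tangible, here is a minimal sketch of magnitude-based weight pruning followed by dynamic quantization in PyTorch. The two-layer model is a stand-in for illustration; the study's actual models and datasets are not reproduced here.

    # Minimal sketch: L1-magnitude pruning plus dynamic quantization in PyTorch.
    # The toy model is illustrative only, not the paper's workload.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 10),
    )

    # Zero out the 30% smallest-magnitude weights in each Linear layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")  # make the pruning permanent

    # Dynamic quantization shrinks the artifact a Lambda function must load,
    # which can reduce both cold-start time and memory footprint.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )
    torch.save(quantized.state_dict(), "model_quantized.pt")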

Validity & Reliability

The use of real-world scenarios provides robustness to the findings, grounding them in operational realities rather than controlled lab settings. The comparative structure across various optimization techniques lends itself to clear performance validation. While the conclusions appear well-supported, the generalizability of the results may be constrained by platform-specific dependencies on AWS. More explicit reporting of statistical measures or performance variability could further reinforce reliability. Keywords: AWS Lambda benchmarking, real-world ML validation, performance metrics.
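
One way to provide the statistical reporting suggested here is to sample invocation latency directly and summarize its distribution. The sketch below assumes a deployed function named "example-inference-fn" (hypothetical) and reports mean, median, p95, and standard deviation over 50 invocations.

    # Minimal sketch: sampling AWS Lambda invocation latency and reporting
    # summary statistics. "example-inference-fn" is a hypothetical name.
    import json
    import statistics
    import time

    import boto3

    client = boto3.client("lambda")
    latencies_ms = []

    for _ in range(50):
        start = time.perf_counter()
        client.invoke(
            FunctionName="example-inference-fn",
            Payload=json.dumps({"input": [0.1, 0.2, 0.3]}),
        )
        latencies_ms.append((time.perf_counter() - start) * 1000)

    qs = statistics.quantiles(latencies_ms, n=20)  # 19 cut points
    print(f"mean={statistics.mean(latencies_ms):.1f} ms  "
          f"p50={qs[9]:.1f} ms  p95={qs[18]:.1f} ms  "
          f"stdev={statistics.stdev(latencies_ms):.1f} ms")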

Clarity and Structure

The article is logically organized, clearly distinguishing between challenges, strategies, and results. It balances technical depth with accessibility, making it suitable for both practitioners and researchers. Terminology is domain-appropriate without being overly dense, and the discussion flows coherently from problem identification to actionable recommendations. Minor improvements in section segmentation—especially highlighting each optimization strategy separately—could further enhance readability. Keywords: technical clarity, workflow structure, readability.

Results and Analysis

The analysis effectively outlines trade-offs between accuracy, resource utilization, and execution speed, providing a nuanced understanding of how various optimization strategies perform under serverless constraints. Conclusions are well-grounded in the data and offer valuable, actionable recommendations for practitioners.
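
As background for the batch-inference trade-off discussed above, the following is a minimal sketch of an offline SageMaker batch transform job via boto3, which trades per-request latency for throughput and cost efficiency. All names and S3 URIs are placeholders, not values from the study.

    # Minimal sketch: offline batch inference via a SageMaker batch transform job.
    # Job name, model name, and S3 URIs are illustrative placeholders.
    import boto3

    sm = boto3.client("sagemaker")

    sm.create_transform_job(
        TransformJobName="example-batch-inference",  # hypothetical
        ModelName="example-model",  # a model already registered in SageMaker
        TransformInput={
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/inference-input/",
            }},
            "ContentType": "application/jsonlines",
            "SplitType": "Line",  # one record per line
        },
        TransformOutput={"S3OutputPath": "s3://example-bucket/inference-output/"},
        TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
    )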

IJ Publication (Publisher)

Respected Sir,

Thank you for your insightful feedback. We are pleased to know that you found the study’s relevance in integrating AI/ML workloads into DevOps pipelines using AWS serverless infrastructure impactful, particularly with regard to optimization strategies like model pruning and batch inference. Your recognition of the paper’s contribution to the evolving landscape of serverless AI and cloud-native DevOps is highly appreciated.

We also acknowledge your point about the need for additional clarity on experimental design, benchmarking criteria, and performance baselines. We will work on incorporating these details to enhance the transparency and reproducibility of our methodology. Furthermore, we understand the concern about the generalizability of results due to AWS-specific dependencies and will aim to address this by providing more statistical reporting in future iterations.

Thank you once again for your valuable and constructive comments.

Publisher: IJ Publication

Reviewer: Ramesh Krishna Mahimalur

Paper Category: Cloud Computing

Journal Name: TIJER - Technix International Journal for Engineering Research

p-ISSN:

e-ISSN: 2349-9249
