
    Transparent Peer Review By Scholar9

    A Comparative Study on AI/ML Optimization Strategies within DevOps Pipelines Deployed on Serverless Architectures in AWS Cloud Platforms

    Abstract

The application of Artificial Intelligence (AI) and Machine Learning (ML) in modern DevOps pipelines is a rapidly growing trend, with organizations seeking efficient, scalable, and cost-effective solutions to integrate AI/ML models into production environments. AWS's serverless architecture, with its powerful cloud-native services such as AWS Lambda, Step Functions, and SageMaker, provides a flexible platform for deploying AI/ML workloads at scale. However, while the serverless paradigm offers considerable benefits in terms of scalability and resource management, it also presents unique challenges, including cold start latency, resource allocation, and computational efficiency. This research focuses on a comparative analysis of AI/ML optimization strategies deployed within DevOps pipelines on AWS's serverless architectures. The aim is to identify and evaluate the optimization strategies available to enhance the performance of AI/ML models, mitigate existing challenges, and improve the efficiency and cost-effectiveness of cloud-based DevOps workflows. This paper reviews optimization techniques such as hyperparameter tuning, model compression, pruning, batch inference, and parallel processing, and their impact on the performance of ML models deployed within AWS Lambda and SageMaker environments. The study involves an empirical evaluation of real-world use cases, providing insights into the trade-offs between model accuracy, resource consumption, and execution time. Key findings suggest that while AWS serverless platforms provide excellent scalability and ease of use, careful management of resources and optimization of workflows are essential to maximize their potential. Furthermore, this paper contributes to the field by proposing recommendations for best practices in optimizing AI/ML workflows in serverless environments and offering insights into future research directions.
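Of the optimization techniques the abstract lists, model pruning is the most self-contained to illustrate. The sketch below shows generic magnitude-based weight pruning in NumPy, which reduces model size (and thus serverless deployment-package size and memory footprint) by zeroing the smallest-magnitude weights. It is a minimal, framework-agnostic sketch, not the paper's own implementation; the function name and the `sparsity` parameter are illustrative assumptions.

```python
import numpy as np

def prune_weights(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Magnitude-based pruning: zero out the smallest-magnitude weights.

    sparsity is the fraction of weights to set to zero (0.0 keeps all,
    1.0 would remove everything). Returns a new array; the input is
    left untouched.
    """
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: pruning half the weights keeps only the two largest by magnitude
w = np.array([0.1, -0.5, 0.3, -0.05])
print(prune_weights(w, sparsity=0.5))  # [ 0.  -0.5  0.3  0. ]
```

In a serverless setting, a pruned (and typically re-fine-tuned) model can be stored as a sparse matrix, shrinking the artifact that Lambda must load at cold start, which is one of the trade-offs between accuracy and execution time the study examines.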

    Publisher

    IJ Publication


    Reviewer


    Shubhita Tripathi



    Paper Category

    Cloud Computing


    Journal Name

TIJER - Technix International Journal for Engineering Research


    p-ISSN


    e-ISSN

    2349-9249
