
About

Solution Lead/Architect with 15+ years of experience in Information Technology, with a core skill set in the Informatica platform. Helped well-known organizations with my data warehousing skill set and executed several high-profile implementations on the data platform. Led data governance and data management practices. Served as a judge for a handful of award programs over the years, and I am an active technology enthusiast. Authored a book on data warehousing, to be published in 2025.


Skills

Experience

Sr Software Engineer

RBC Wealth Management

Apr-2019 to Present
Software Engineer

Namitus Technologies Inc

Mar-2014 to Mar-2019
Data Analyst

Associate Systems LLC

Aug-2013 to Mar-2014

Education

Minot State University

M.Tech in Information Systems

Graduation Year: 2022
Lamar University

M.Tech in Chemistry

Graduation Year: 2013

Publications

  • September 2024

An Effective Graph Database-Centric Patient Healthcare Data Management Using A Robust HGCRN and LIM2DCE

Effective data management in the healthcare domain is fundamentally difficult due to the complex relationships between patients and the healthcare sector. However, none of the prevai...

A Proficient Hospital Ratings Aware Patient Churn Prediction And Prevention System Using Abg-Fuzzy And Ner-Gfjdkmeans

Patient churn in healthcare denotes the rate at which patients stop visiting or seeking care from a hospital. High churn represents dissatisfaction, better alternatives, or accessibility...

Peer-Reviewed Articles

Data Management Strategies and Machine Learning Applications in the Indian Financial Industry: A Comprehensive Study

The effective management of data has emerged as a critical requirement in the modern financial industry, particularly in the Indian context where the sector experiences exponential data growth, regulatory complexities, and a rapidly evolving technological landscape. This paper aims to explore comprehensive data management strategies within Indian financial institutions, including banks, insurance companies, stock exchanges, and fintech startups, while integrating modern data science, machine learning (ML), and artificial intelligence (AI) techniques. By combining traditional data governance principles with contemporary analytical methodologies, this research presents an integrative framework that enhances decision-making, risk management, customer profiling, and regulatory compliance. Our methodology employs a mixed-method approach comprising quantitative data analysis from financial transactions, customer databases, and regulatory reports, alongside qualitative insights drawn from expert interviews across financial hubs such as Mumbai, Bengaluru, and Kolkata. Data is sourced from publicly available financial databases, institutional archives, and primary research involving structured interviews with senior data managers. Sampling combines purposive and stratified techniques to ensure representation across public, private, and fintech sectors. Analytical techniques range from statistical modeling and regression analysis to machine learning classification models for fraud detection and predictive analytics for credit scoring. Findings reveal that Indian financial institutions struggle with legacy system integration, data silos, and fragmented governance frameworks. However, organizations that have adopted advanced data pipelines, real-time analytics platforms, and AI-driven risk models exhibit superior agility, compliance adherence, and customer satisfaction. Furthermore, we identify significant variance in data maturity across different financial segments, with fintech companies showcasing more innovative data strategies compared to traditional banking entities. Three comprehensive tables capture industry-wise data practices, comparative data management strategies, and machine learning adoption levels. This study contributes to the literature by proposing a data governance-maturity model tailored to the Indian financial landscape, integrating regulatory alignment, technological advancement, and organizational culture. The research underscores the importance of aligning data management strategies with evolving regulatory norms such as those set by RBI, SEBI, and IRDAI, ensuring data privacy, customer-centric innovation, and operational resilience. In conclusion, the research advocates for a cross-sector collaborative approach, wherein regulatory bodies, financial institutions, and technology providers co-create dynamic data ecosystems that foster innovation while ensuring systemic stability. This research offers practical insights for data managers, policymakers, and technologists navigating the intersection of finance, data science, and machine learning in India’s evolving financial ecosystem.
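
The abstract names machine learning classification models for fraud detection among its analytical techniques. As a minimal, purely illustrative sketch of that idea (the paper's actual features, data, and model are not published here; the features and labelling rule below are invented placeholders), a scikit-learn classifier on synthetic transaction data might look like:

```python
# Illustrative fraud-detection classification sketch; synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 5000
# Hypothetical transaction features: amount, hour of day, merchant risk score
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),   # transaction amount (placeholder units)
    rng.integers(0, 24, n),       # hour of day
    rng.uniform(0, 1, n),         # merchant risk score (invented)
])
# Synthetic label: high-amount, high-risk, late-night transactions are "fraud"
y = ((X[:, 0] > 60) & (X[:, 2] > 0.7)
     & ((X[:, 1] < 6) | (X[:, 1] > 22))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000, class_weight="balanced")
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Class weighting is used here because fraud labels are heavily imbalanced, a design concern any real fraud model in this setting would share.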

Advanced Data Management and Analytics in the Pharmaceutical Industry: Leveraging Machine Learning and Big Data for Enhanced Decision-Making

The pharmaceutical industry stands at the intersection of healthcare innovation and technological advancement, making efficient data management an imperative for accelerating drug discovery, regulatory compliance, supply chain optimization, and patient safety. This research paper, titled "Advanced Data Management and Analytics in the Pharmaceutical Industry: Leveraging Machine Learning and Big Data for Enhanced Decision-Making," presents a comprehensive exploration of how modern data management frameworks and advanced analytics, particularly machine learning (ML) and big data analytics, are transforming pharmaceutical operations. The purpose of the research is to investigate and develop a multi-dimensional data management framework, integrating structured and unstructured data across research and development, clinical trials, manufacturing, and post-market surveillance. A mixed-method approach was adopted, combining quantitative data analysis from clinical databases, real-world evidence (RWE) repositories, and pharmaceutical manufacturing logs with qualitative insights from expert interviews across major Indian pharmaceutical firms such as Dr. Reddy’s Laboratories, Sun Pharmaceutical Industries, and Lupin Limited. Data collection leveraged electronic health records (EHRs), laboratory information management systems (LIMS), supply chain systems, and regulatory compliance databases. Sampling was conducted using purposive stratified techniques to ensure representation across diverse pharmaceutical functions, from drug discovery to distribution. Analytical techniques included descriptive statistics, supervised machine learning algorithms such as Random Forest and Gradient Boosting for predictive modeling, and unsupervised clustering for pattern discovery within clinical trial and supply chain data. Key findings reveal that machine learning models significantly enhance predictive accuracy in clinical trial outcomes and supply chain disruptions. Real-time data ingestion pipelines, coupled with natural language processing (NLP) algorithms applied to regulatory documents, streamline regulatory submissions and compliance monitoring. Ethical considerations included data anonymization, informed consent in patient data usage, and strict adherence to Good Clinical Practice (GCP) and General Data Protection Regulation (GDPR). The research contributes to the field by proposing a novel Pharmaceutical Data Management (PDM) Framework, which harmonizes real-time analytics, secure data sharing, and predictive modeling capabilities. This framework supports adaptive clinical trials, real-time pharmacovigilance, and personalized medicine initiatives. The study concludes with a discussion on the integration challenges, including data silos, legacy system interoperability, and evolving regulatory requirements. Practical implications include improved R&D productivity, reduced time-to-market for new therapies, enhanced supply chain resilience, and more effective post-market surveillance. The proposed framework, validated through expert reviews and pilot testing, offers a scalable and customizable model for pharmaceutical enterprises globally. In summary, this paper bridges the gap between data science and pharmaceutical operations, demonstrating how data-driven decision-making powered by advanced analytics can transform the industry’s operational efficiency, innovation capacity, and regulatory compliance.
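
The methodology names Random Forest for predictive modeling of clinical-trial outcomes. A minimal sketch of that technique on synthetic stand-in data (the study's real features and records are not reproduced here; age, dosage, and biomarker columns are invented) could be:

```python
# Illustrative Random Forest outcome model on synthetic clinical-trial data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 2000
# Hypothetical trial features: patient age, dosage (mg), biomarker level
X = np.column_stack([
    rng.normal(55, 12, n),
    rng.choice([50, 100, 200], n),
    rng.normal(0, 1, n),
])
# Synthetic outcome loosely tied to dosage and biomarker, plus noise
y = (0.01 * X[:, 1] + X[:, 2] + rng.normal(0, 1, n) > 1.5).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```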

Enhancing Data Reporting Efficiency Using Machine Learning Techniques in Real-Time Analytics

The modern data-driven economy relies heavily on real-time analytics and seamless data reporting processes, which have become pivotal across sectors including finance, healthcare, e-commerce, and manufacturing. Efficient data reporting not only facilitates timely decision-making but also enhances the accuracy and relevance of organizational intelligence. This paper explores the intersection of advanced data reporting practices and machine learning techniques, focusing on how real-time data pipelines can be optimized for efficiency, accuracy, and scalability. With the exponential growth of data, traditional methods often fall short in processing and analyzing streaming data in real time. Our research investigates the integration of machine learning algorithms into automated data reporting systems to improve data validation, anomaly detection, and reporting accuracy. We designed a hybrid research approach comprising both quantitative and qualitative methods, including analysis of operational data from industry leaders in retail, banking, and manufacturing sectors, as well as structured interviews with data engineers and analysts. Sampling covered large organizations with diverse data infrastructures, and analysis incorporated techniques such as regression analysis, clustering, and natural language processing (NLP) for real-time text summarization. Ethical considerations focused on data privacy, consent, and algorithmic fairness. Results show that integrating machine learning with real-time data reporting can reduce data processing errors by 37%, enhance anomaly detection accuracy by 42%, and accelerate report generation time by 63%. Our tables highlight comparisons across industries, system architectures, and error reduction techniques. These findings bridge key gaps in existing literature, which either focus on static data reporting or siloed machine learning implementations. This study’s implications extend to data governance policies, system design best practices, and future advancements in predictive analytics for proactive reporting enhancements. The paper also outlines limitations such as computational overhead, interpretability challenges, and data privacy concerns, all of which open avenues for further research into federated learning, edge analytics, and explainable AI in real-time reporting contexts. By advancing methodologies for data reporting, this research contributes directly to improving operational efficiency and analytical agility in data-intensive environments, particularly for data science teams tasked with balancing speed, accuracy, and compliance.
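
One way to picture the machine-learning-assisted data validation and anomaly detection the paper describes is an Isolation Forest gate in front of the reporting step. This is a sketch under invented assumptions (two numeric metrics, a hand-picked contamination rate), not the paper's actual pipeline:

```python
# Sketch: an Isolation Forest flags anomalous records before report generation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Baseline "clean" reporting metrics: e.g., row count (x100) and error ratio
baseline = rng.normal(loc=[100.0, 0.5], scale=[10.0, 0.05], size=(1000, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def validate_batch(batch: np.ndarray) -> np.ndarray:
    """Return a boolean mask of records safe to include in the report."""
    return detector.predict(batch) == 1  # predict() returns -1 for anomalies

incoming = np.vstack([rng.normal([100, 0.5], [10, 0.05], (50, 2)),
                      [[400.0, 0.9]]])  # one injected outlier
mask = validate_batch(incoming)
print(f"{(~mask).sum()} of {len(incoming)} records flagged for review")
```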

Cloud Data Warehousing: Transforming Scalable Data Management and Analytics for Modern Enterprises

Cloud data warehousing has emerged as a revolutionary solution addressing the ever-increasing needs of data management, real-time analytics, and scalable storage for businesses across industries. This research comprehensively investigates the paradigm shift from traditional on-premises data warehouses to cloud-based solutions, emphasizing their role in data science, machine learning workflows, and real-time decision-making. The objective of this paper is to assess the technical, operational, and economic benefits of cloud data warehouses and their direct impact on data-intensive applications in fields like e-commerce, finance, healthcare, and logistics. Through a mixed-methods approach involving primary data collection from enterprises using AWS Redshift, Google BigQuery, Snowflake, and Azure Synapse, supplemented with secondary literature, the study captures insights into deployment strategies, performance optimization techniques, and governance practices. Quantitative data is derived from performance benchmarks, while qualitative data reflects the perceptions of IT managers, data scientists, and infrastructure architects. Statistical methods including regression analysis, ANOVA, and clustering techniques provide insights into cost-performance trade-offs, latency patterns, and scalability factors. Ethical considerations such as data privacy, regulatory compliance, and responsible AI integration are also explored. Findings indicate that cloud data warehousing reduces infrastructure costs by up to 50%, enhances query performance by leveraging distributed architectures, and accelerates machine learning model training pipelines through seamless data access. The research contributes to the evolving discourse on hybrid and multi-cloud data strategies, emphasizing the importance of data integration, workload portability, and vendor lock-in mitigation. By presenting empirical data, case studies, and expert opinions, this paper provides a comprehensive understanding of how cloud data warehousing serves as a foundational pillar in modern data ecosystems, supporting both operational analytics and advanced data science initiatives. The study concludes with recommendations for optimizing data warehouse performance, improving data governance frameworks, and aligning cloud data strategies with business goals to maximize return on investment and competitive advantage.
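
As an illustration of the ANOVA-style comparison the study mentions for latency and cost-performance analysis, a toy one-way test on synthetic latency samples from three hypothetical platforms might look like this (all numbers are placeholders, not the paper's benchmarks):

```python
# Toy one-way ANOVA on synthetic query-latency samples from three platforms.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)
latency_a = rng.normal(2.0, 0.4, 30)   # platform A query latency (seconds)
latency_b = rng.normal(1.6, 0.3, 30)   # platform B
latency_c = rng.normal(2.3, 0.5, 30)   # platform C

stat, p = f_oneway(latency_a, latency_b, latency_c)
print(f"F = {stat:.2f}, p = {p:.4f}")  # small p: mean latencies differ
```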

Examining The Study Habit of Single Parent Children and Their Academic Performance

Background: Single-parent households are increasingly common, and their impact on children's development has been a subject of extensive research. While single parents often face unique challenges, including financial strain and increased household responsibilities, their children's academic outcomes are not universally negative. This study aims to investigate the relationship between study habits and academic performance in children from single-parent families. Method: A sample of 150 class X students from single-parent households was recruited from various schools in the Garo Hills region. Data were collected using a study-habit scale, and academic report cards were used to generate performance data; the researchers employed a survey method as the research design. Results: No significant differences were found in overall study habits between children from single-parent and two-parent households. However, some specific study habits, such as time management and organizational skills, showed a trend towards being slightly lower in the single-parent group. No significant differences were found in overall academic performance between the two groups. Relationship between Study Habits and Academic Performance: Strong positive correlations were found between study habits and academic performance in both groups, indicating that effective study habits are crucial for academic success regardless of family structure.

Exploring the Integration of DevSecOps Practices in AI/ML-Driven Cloud Infrastructures Using AWS for Enhanced Security Automation

The convergence of DevSecOps, artificial intelligence/machine learning (AI/ML), and cloud technologies represents a transformative shift in software development and infrastructure management. This paper investigates the integration of DevSecOps principles into AI/ML-driven cloud infrastructures hosted on Amazon Web Services (AWS), aiming to enhance security automation in real-time deployments. As cloud-native applications scale with increasing complexity, security vulnerabilities also multiply, requiring a proactive, automated, and intelligent approach to detection, mitigation, and response. DevSecOps embeds security at every stage of the development lifecycle, while AI/ML introduces adaptability and pattern recognition capabilities that enable predictive threat management. AWS provides a flexible and scalable environment supporting multiple DevSecOps tools and AI/ML frameworks such as SageMaker, GuardDuty, CodePipeline, and Amazon Inspector. The study adopts a mixed-methods approach involving both qualitative and quantitative analyses, including case studies, structured interviews with cloud security professionals, and experimental testing using simulated threat scenarios. By leveraging real-world deployments and analyzing telemetry data, the research reveals how integrating AI-driven anomaly detection with continuous integration/continuous deployment (CI/CD) pipelines automates incident response and enhances compliance with industry standards such as ISO 27001 and SOC 2. Key findings indicate a 47% reduction in mean time to detect (MTTD) and a 63% improvement in mean time to respond (MTTR) to security breaches when DevSecOps practices are effectively implemented with machine learning-enhanced automation on AWS infrastructure. The research also explores ethical and organizational implications, such as the potential for algorithmic bias in security tools and the necessity of cross-functional training for developers, data scientists, and security teams. Limitations include varying levels of maturity in adopting DevSecOps frameworks across organizations and dependence on vendor-specific APIs. The conclusions emphasize the critical need for adaptive security measures in cloud-native AI systems and recommend a structured framework for DevSecOps adoption in AWS environments, ensuring resilience, scalability, and trust. The study contributes to the growing field of intelligent cybersecurity automation, proposing actionable methodologies for academic, industrial, and regulatory stakeholders seeking to secure AI/ML workloads in modern cloud ecosystems.
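
The abstract cites GuardDuty among the AWS services supporting security automation. A minimal sketch of pulling high-severity GuardDuty findings for automated triage, assuming configured AWS credentials/region and an existing detector (this is not the study's actual pipeline), might be:

```python
# Sketch: automated triage of high-severity Amazon GuardDuty findings.
# Assumes configured AWS credentials and at least one GuardDuty detector.
import boto3

guardduty = boto3.client("guardduty")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    # Fetch only high-severity findings (GuardDuty severity >= 7)
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
    )["FindingIds"]
    if not finding_ids:
        continue
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]
    for f in findings:
        # Hand off to downstream response automation (ticketing, isolation, ...)
        print(f["Type"], f["Severity"], f["Title"])
```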

Evaluating the Impact of AWS-Based Cloud Technology on DevOps Efficiency and Scalability in AI-Powered Software Development Lifecycles

In the realm of software engineering, the convergence of cloud technologies with artificial intelligence (AI) and DevOps methodologies has emerged as a transformative force in redefining software development lifecycles. This research investigates the impact of AWS-based cloud infrastructures on enhancing the efficiency and scalability of DevOps practices in AI-powered application development. The paper begins by addressing the inherent challenges of integrating continuous integration and deployment (CI/CD) with intelligent workflows, particularly in managing dynamic, data-driven software ecosystems. By focusing on AWS's capabilities—such as Elastic Beanstalk, CodePipeline, and SageMaker—the study demonstrates how these tools streamline the deployment of AI models, foster collaboration between development and operations teams, and ensure resilient, scalable architectures. The methodology employed involves both qualitative and quantitative approaches, including case studies from mid- to large-scale enterprises, developer surveys, and performance metrics analysis from real-world deployment pipelines. The key findings reveal that AWS accelerates iteration cycles by over 40%, reduces system downtime through proactive monitoring tools like CloudWatch, and facilitates scalable training of machine learning models via distributed computing resources. Furthermore, the research highlights the role of AWS Lambda in enabling event-driven automation, significantly optimizing time-to-deployment. An in-depth comparison of traditional DevOps pipelines versus AWS-integrated DevOps workflows underscores a marked improvement in model governance, compliance adherence, and rollback capabilities in AI-centric projects. The conclusions drawn suggest a direct correlation between AWS cloud adoption and enhanced software development efficiency, especially in contexts where machine learning is integral. This paper contributes to the body of knowledge by offering an actionable framework for leveraging AWS to elevate DevOps maturity in AI environments. Future research directions include the exploration of hybrid cloud strategies, cost optimization models, and AI-driven anomaly detection in DevOps workflows.
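
To make the point about AWS Lambda enabling event-driven automation concrete, here is a hypothetical handler wired to CodePipeline state-change events via EventBridge; the SNS topic ARN and the wiring are invented placeholders, not the deployments the paper studied:

```python
# Hypothetical Lambda handler: notify on CodePipeline stage failure.
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:deploy-alerts"  # placeholder

def handler(event, context):
    """Triggered by an EventBridge rule on CodePipeline state changes."""
    detail = event.get("detail", {})
    if detail.get("state") == "FAILED":
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject=f"Pipeline {detail.get('pipeline')} failed",
            Message=json.dumps(detail, indent=2),
        )
    return {"status": "ok"}
```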

A Comparative Study on AI/ML Optimization Strategies within DevOps Pipelines Deployed on Serverless Architectures in AWS Cloud Platforms

The application of Artificial Intelligence (AI) and Machine Learning (ML) in modern DevOps pipelines is a rapidly growing trend, with organizations seeking efficient, scalable, and cost-effective solutions to integrate AI/ML models into production environments. AWS's serverless architecture, with its powerful cloud-native services such as AWS Lambda, Step Functions, and SageMaker, provides a flexible platform for deploying AI/ML workloads at scale. However, while the serverless paradigm offers considerable benefits in terms of scalability and resource management, it also presents unique challenges, including cold start latency, resource allocation, and computational efficiency. This research focuses on a comparative analysis of AI/ML optimization strategies deployed within DevOps pipelines on AWS's serverless architectures. The aim is to identify and evaluate the various optimization strategies available to enhance the performance of AI/ML models, mitigate existing challenges, and improve the efficiency and cost-effectiveness of cloud-based DevOps workflows. This paper reviews optimization techniques such as hyperparameter tuning, model compression, pruning, batch inference, and parallel processing, and their impact on the performance of ML models deployed within AWS Lambda and SageMaker environments. The study involves the empirical evaluation of real-world use cases, providing insights into the trade-offs between model accuracy, resource consumption, and execution time. Key findings suggest that while AWS serverless platforms provide excellent scalability and ease of use, careful management of resources and optimization of workflows is essential to maximize their potential. Furthermore, this paper contributes to the field by proposing recommendations for best practices in optimizing AI/ML workflows in serverless environments, while offering insights into future research directions.
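
Batch inference, one of the optimization techniques reviewed, amortizes serverless cold-start and per-invocation overhead by scoring many records per call and loading the model once per container. A self-contained toy sketch, where the model and payload shape are placeholders:

```python
# Sketch: batch inference in a serverless handler to amortize cold starts.
import numpy as np

class TinyModel:
    """Stand-in for a model loaded once per container, outside the handler."""
    weights = np.array([0.4, -0.2, 0.1])
    def predict(self, X: np.ndarray) -> np.ndarray:
        return X @ self.weights

MODEL = TinyModel()  # initialized at cold start, reused on warm invocations

def handler(event, context):
    """Score a whole batch of records in one invocation instead of one each."""
    records = np.asarray(event["records"], dtype=float)  # shape (n, 3)
    return {"scores": MODEL.predict(records).tolist()}

# Local usage example:
print(handler({"records": [[1, 2, 3], [0.5, 0.1, 0.2]]}, None))
```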

Integrating DevSecOps into Large-Scale Cloud Migration Projects: Challenges, Strategies, and Emerging Best Practices for 2025

In the rapidly evolving digital landscape, large-scale cloud migration projects have become pivotal for organizations aiming to enhance scalability, agility, and cost-efficiency. However, these migrations introduce complex security challenges that necessitate the integration of DevSecOps practices. This research delves into the intricacies of embedding DevSecOps into extensive cloud migration endeavors, focusing on the challenges faced, strategies employed, and best practices emerging in 2025. The study employs a mixed-methods approach, combining qualitative interviews with industry experts and quantitative analysis of migration case studies across various sectors. Key findings reveal that organizations integrating DevSecOps from the inception of migration projects experience a 40% reduction in security incidents and a 30% improvement in deployment speed. The research highlights the significance of continuous security integration, automated compliance checks, and cross-functional collaboration. Additionally, the study underscores the role of emerging technologies like AI and machine learning in enhancing threat detection and response. The paper contributes to the field by providing a comprehensive framework for organizations to effectively integrate DevSecOps into their cloud migration strategies, ensuring robust security postures while maintaining operational efficiency.

Enhancing CI/CD Automation in Containerized Environments through Intelligent Monitoring, Predictive Analytics, and Policy-Driven Deployment Frameworks

Continuous Integration and Continuous Deployment (CI/CD) automation has become central to modern software engineering, particularly in microservices and containerized environments, which emphasize agility, scalability, and consistency. However, ensuring that CI/CD pipelines remain resilient, intelligent, and policy-compliant in dynamically scaling container environments presents multiple challenges. This research proposes a next-generation CI/CD automation architecture that integrates intelligent monitoring, predictive analytics, and policy-driven frameworks to optimize deployment decisions and failure recovery. Using a mixed-methods approach—combining qualitative interviews with DevOps teams across Asia and Europe, and quantitative analysis of pipeline performance metrics—this study demonstrates how predictive models can pre-empt pipeline failures, reduce rollback rates, and optimize resource usage during deployments. The intelligent monitoring system leverages real-time container metrics (CPU, memory, network, logs) and anomaly detection algorithms such as Isolation Forest and DBSCAN. Predictive analytics models, built using Gradient Boosting and Random Forest algorithms, provide pipeline health forecasts and failure likelihood scores. Meanwhile, the policy engine enforces deployment standards based on SLAs, security checks, and resource thresholds using custom YAML-based schemas and OPA (Open Policy Agent). Results indicate a 42% reduction in deployment failures and a 30% decrease in mean time to resolution (MTTR) across production workloads. Our architecture also enhanced compliance traceability and reduced manual interventions by 55%. This paper contributes a robust CI/CD intelligence model and a decision-making policy layer that can be extended to hybrid and multi-cloud DevOps platforms. Ultimately, this research promotes sustainable, self-healing, and compliant CI/CD operations, which are vital in modern DevSecOps-driven digital transformation initiatives.
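
A minimal sketch of the failure-likelihood scoring the paper attributes to Gradient Boosting models, trained here on synthetic container metrics rather than the study's telemetry (the features and labelling rule are invented):

```python
# Sketch: Gradient Boosting failure-likelihood score for a pipeline run.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
n = 3000
# Hypothetical per-run metrics: CPU %, memory %, error-log rate per minute
X = np.column_stack([
    rng.uniform(5, 95, n),
    rng.uniform(10, 90, n),
    rng.exponential(0.5, n),
])
# Synthetic label: runs with high resource pressure or noisy logs fail more
y = ((X[:, 0] > 80) | (X[:, 1] > 85) | (X[:, 2] > 2.0)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
new_run = np.array([[88.0, 70.0, 0.3]])
print("failure likelihood:", model.predict_proba(new_run)[0, 1].round(3))
```

A score like this is what a policy layer (such as the OPA-based engine the paper describes) could consume to gate or delay a risky deployment.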

Role in Research Journals

Conference/Seminar/STTP/FDP/Symposium/Workshop

Conference
  • May 2023

Informatica World 2023

Hosted By: Informatica
Las Vegas, Nevada, United States

Informatica World conference

Membership

Senior Member

IEEE - Institute of Electrical and Electronics Engineers

From 2025 to Present

https://www.ieee.org/

Scholar9 Profile ID

S9-022025-2209758

Publications (3)

Articles Reviewed (13)

Citations (0)

Network (1)

Conferences/Seminars (1)

Academic Identity