About

Balaji Govindarajan is an accomplished software quality assurance professional with over 21 years of experience in the insurance domain, particularly Property & Casualty Insurance. He has a strong background in functional and automation testing across a broad range of insurance applications, including PolicyPro, Duck Creek, Policy Administration, Underwriting, Policy Documents, and Billing systems. Proficient in C++, Python, C#, .NET, and VC++ within Visual Studio, he has been actively involved in unit testing using C# and has a deep understanding of testing methodologies, requirements analysis, defect management, configuration management, and project management. Balaji holds a Master of Business Administration from the University of Madras and a Bachelor of Engineering from Annamalai University, and has earned several certifications, including PMP, ACP, and CSM, reflecting his commitment to continuous learning and leadership in project management and Agile practices. As a Manager at Capgemini, he oversaw testing initiatives, demonstrating strong skills in test planning, requirements analysis, and documentation; his time management and organizational abilities have allowed him to manage diverse tasks and priorities effectively. Proficient in a range of defect management tools and methodologies, Balaji is dedicated to delivering high-quality software solutions and enhancing user accessibility in technology.
His competencies span configuration and release management, test planning, project management, and business reporting. As a Consultant at Capgemini, he worked extensively with clients such as Progressive Insurance and Chubb Insurance, leading testing efforts for major insurance applications. At Progressive Insurance, he played a crucial role in ensuring business and legal compliance of document management systems by reviewing test cases, conducting business meetings, performing API testing with Postman, and executing parallel testing between Duck Creek and legacy systems. His expertise in automation testing is evident in his experience developing test cases in C# using Visual Studio and xUnit while ensuring accessibility compliance with WCAG 2.0 standards. His role at Chubb Insurance focused on leading end-to-end testing efforts for Duck Creek Policy underwriting and premium rating systems, ensuring smooth integration with third-party data providers through BizTalk and web services. He has worked on onboarding integration processes, conducted SQL-based test data mining, and executed automation scripts in Python to validate functionality through regression testing. With certifications including PMP, Agile Certified Practitioner (ACP), Certified Business Architect (PCBA), Certified Accessibility Tester, SAFe Scrum Master (CSM), and Microsoft Azure and AI certifications, he brings a well-rounded approach to software testing and quality assurance. His technical proficiency extends to database technologies such as SQL Server and Oracle, defect management tools such as ALM, BugTracker, and JIRA, and testing methodologies in both Agile and Waterfall environments. Balaji’s experience in accessibility testing using tools like NVDA, PAC3, and Color Contrast Analyzer further highlights his ability to ensure compliance with accessibility standards.
With prior experience at DXC Technology and A & S Software Technologies, he has honed his ability to manage large-scale insurance testing projects, develop automation frameworks, execute end-to-end integration tests, and work collaboratively with business analysts, product managers, and development teams. He has also played a pivotal role in process improvement initiatives, defect prevention strategies, and continuous evaluation of testing best practices. His leadership in testing strategy development, peer reviews, and estimation of testing efforts using industry-standard tools like COSMIC Function Points and Story Points has been instrumental in delivering high-quality software solutions. His ability to effectively coordinate between stakeholders, manage project risks, and drive continuous process improvements has established him as a valuable asset in the field of software quality assurance and testing.


Skills

Experience

Software Test Engineer

Progressive

Jul-2024 to Present
Manager

Capgemini

Sep-2019 to Jun-2024
Professional Programmer Analyst

DXC Technology

Oct-2012 to Sep-2019
Associate Manager

DXC Technology India

Oct-2006 to Oct-2012
Test Engineer

A & S Software Technologies

May-2006 to Oct-2006
Software Test Engineer

Seismi Technologies

Jul-2003 to May-2006

Education

University of Madras

MBA in Business Administration

Passout Year: 2009
Annamalai University

BE in Computer Engineering

Passout Year: 2000
SRM Institute of Science & Technology

MCA in Computer Application

Pursuing

Publication

Enhancing ERP System Efficiency through Integration of Cloud Technologies

In the rapidly evolving business landscape, organizations increasingly rely on Enterprise Resource Planning (ERP) systems to streamline operations and improve decision-making processes. Howe...

Peer-Reviewed Articles

COMPARATIVE ANALYSIS OF REVERSE IMAGE SEARCH ENGINES USING DIVERSE IMAGE SETS

Eight well-known reverse image search engines—Google, Bing, TinEye, Yandex, Baidu, Getty Images, Shutterstock, and Alamy—are compared in this study across several criteria: language support, speed, accuracy, facial recognition, geographic coverage, cropping features, number of images retrieved, ease of use, mobile app availability, privacy measures, input options, supported file formats, search methods, and additional features. The study outlines each engine's advantages and disadvantages. Both Google and Bing are very user-friendly, fast, and support multiple languages; however, Google is more accurate and offers features like facial recognition and SafeSearch. Yandex offers comparable functionality but targets the Russian market. TinEye prioritizes privacy and collects very little data, but it has trouble with unusual photos and lacks many sophisticated capabilities. Baidu offers little privacy or transparency and caters mostly to the Chinese market. Although Shutterstock and Getty Images have extensive privacy policies, their accuracy is not as high. Alamy has lower retrieval precision but complies with data standards. According to the analysis, each engine serves a particular purpose: users seeking smart image detection and ease of use may prefer Google or Bing, while privacy-conscious users may find TinEye suitable. In the end, the decision depends on personal preferences and search objectives.

DESIGN AND IMPLEMENTATION OF Wi-Fi DEAUTHENTICATION SYSTEM USING NODEMCU ESP8266

Network security is seriously threatened by Wi-Fi de-authentication attacks, which frequently lead to data interception, illegal access, and service interruption. The mechanics and ramifications of these assaults are explored in detail in this research study, which highlights how they could jeopardize network availability, secrecy, and integrity. To bridge theoretical understanding with actual experimentation, the paper presents a practical implementation of a Wi-Fi deauther utilizing the NodeMCU ESP8266 microcontroller platform. With the use of programs like the Arduino IDE and NodeMCU Flasher, the Wi-Fi deauther was created and put through testing to identify and stop de-authentication threats instantly. The system's high detection accuracy, quick response times, and little effect on network performance as a whole are demonstrated by the experimental findings. The NodeMCU ESP8266 platform demonstrated good resource management by managing the detection and countermeasures while keeping CPU use below 70% and guaranteeing less than 5% reduction in network performance and latency. This study advances wireless network security by demonstrating a scalable, affordable method of thwarting de-authentication attacks and by suggesting further improvements that would include machine learning integration and wider assault coverage. For network managers, cybersecurity experts, and researchers looking to strengthen wireless network defenses, the findings offer insightful information and useful recommendations.

Blockchain using Virtual TRY-ON

In today’s dynamic retail environment, the shift towards online shopping necessitates innovative solutions that enhance customer engagement and satisfaction. This project introduces a virtual try-on clothing platform designed to revolutionize the online shopping experience by merging cutting-edge augmented reality (AR) and machine learning technologies. The platform enables users to visualize how garments will fit and appear on their unique body shapes without the need to visit a physical store. By offering a user-friendly interface, the website allows customers to upload personal images or utilize real-time video features, facilitating an interactive and personalized shopping experience. Key functionalities include accurate size recommendations tailored to individual measurements, as well as curated fashion suggestions that align with users' personal styles. These enhancements aim to minimize return rates—a significant challenge in e-commerce—while simultaneously boosting customer satisfaction and driving sales. Additionally, the platform fosters social interaction through built-in sharing capabilities, allowing users to solicit feedback from friends and family, thus enriching the decision-making process. This aspect not only enhances the shopping experience but also builds a sense of community around fashion choices. By integrating advanced technology with a seamless and engaging user experience, this virtual try-on website represents a substantial advancement in online fashion retail. It sets the stage for a more personalized and interactive shopping journey, ultimately redefining how consumers engage with fashion in the digital age. As we look to the future, this platform aims to become a cornerstone of online retail, reflecting the evolving needs and preferences of today’s consumers.

The Power of AI and Machine Learning in Cybersecurity: Innovations and Challenges

Networks and sensitive data are no longer adequately protected by traditional security methods due to the ongoing evolution and sophistication of cyber-attacks. Cybersecurity can be strengthened through the application of machine learning and artificial intelligence techniques, which make threat detection more effective and efficient. This article, while giving an outline of the field's present position, discusses the difficulties of adapting machine learning and artificial intelligence (AI/ML) to cybersecurity. The research discusses machine learning methods applied to tasks such as malware classification, anomaly detection, and network intrusion detection. Lastly, the need for sizable labeled datasets, adversarial attacks on machine learning models, and the difficulty of interpreting black-box ML models are among the limitations and challenges covered.
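As a concrete illustration of one task the article surveys, the following is a minimal, hypothetical sketch of anomaly detection on network-style features using scikit-learn's IsolationForest. The feature choices, values, and thresholds here are illustrative stand-ins, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" traffic: two features, e.g. packet size and inter-arrival time.
normal = rng.normal(loc=[500.0, 0.1], scale=[50.0, 0.02], size=(500, 2))

# A few synthetic outliers standing in for anomalous flows.
anomalies = np.array([[1500.0, 0.9], [20.0, 0.0005], [1200.0, 0.7]])

# contamination is the expected fraction of anomalies in the training data.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns +1 for inliers and -1 for outliers.
print(model.predict(anomalies))   # the extreme points are flagged as -1
```

The same fit/predict pattern applies whether the features come from packet captures, system logs, or malware telemetry; only the feature engineering changes.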

Design of a 4-Bit ALU for Low-Power and High-Speed Applications

This paper presents a novel design and optimization of a 4-bit Arithmetic Logic Unit (ALU) utilizing 90nm CMOS technology, specifically addressing the longstanding carry-out issue prevalent in existing architectures. Notably, our proposed 4-bit ALU architecture successfully minimizes delay and power consumption by incorporating an optimized carry-out design employing AND gates. A comprehensive comparison of three logic styles - Pass Transistor Logic (PTL), Complementary Metal-Oxide-Semiconductor (CMOS), and Transmission Gate Logic (TGL) - is conducted, yielding significant improvements in power-delay tradeoffs. Simulation results validate the efficacy of our design in resolving the carry-out issue, making it an attractive solution for low-power, high-speed digital applications.

Deep Learning for Polymer Classification: Automating Categorization of Peptides, Plastics, and Oligosaccharides

Polymers represent a diverse and vital class of materials across numerous industries, each with unique structural characteristics and functional properties. Traditional methods of polymer classification rely heavily on labor-intensive techniques prone to subjectivity and human error. The emergence of deep learning has significantly transformed material science by enabling automated analysis and classification of complex polymers. In this study, we focus on leveraging deep learning models to classify three distinct classes of polymers: peptides, plastics, and oligosaccharides, which represent significant subsets of the polymer family, each with distinct structural features and applications. Our research explores the effectiveness of various architectures for this task, achieving perfect accuracy with neural networks, K-Nearest Neighbors, and Random Forest classifiers. Principal Component Analysis enabled visualization of the sample distribution, demonstrating deep learning's potential to automate and enhance polymer classification and reduce reliance on traditional, labor-intensive methods.

Detecting Fake Reviews in E-Commerce: A Deep Learning-Based Review

E-commerce platforms are increasingly vulnerable to fake reviews, which can distort product ratings and mislead consumers. Detecting these fraudulent reviews is critical to maintaining trust and transparency in online marketplaces. This review provides a comprehensive analysis of deep learning techniques used for fake review detection. Key models such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Transformer-based models like BERT are explored for their ability to analyze textual data and detect linguistic anomalies. Additionally, behavioral analysis using Convolutional Neural Networks (CNNs) and hybrid models combining textual and behavioral features are discussed. The review also highlights the role of Graph Neural Networks (GNNs) for network analysis and unsupervised learning methods like autoencoders for anomaly detection. Despite advances, challenges such as evolving fake review tactics, data imbalance, and cross-platform adaptability remain. The paper concludes by discussing future research directions, including enhancing model interpretability and combining deep learning with blockchain for more secure and verified review systems.

Harnessing Deep Learning for Precision Cotton Disease Detection: A Comprehensive Review

Cotton cultivation plays a critical role in global agriculture, yet its productivity is significantly hindered by various plant diseases that impact yield and quality. Conventional disease detection methods often fall short due to their reliance on manual inspection and limited accuracy. This comprehensive review explores the application of deep learning techniques beyond Convolutional Neural Networks (CNNs) in enhancing cotton disease detection. The paper covers a range of deep learning methodologies, including CNNs, Recurrent Neural Networks (RNNs), and hybrid models that combine different neural network architectures. It examines how these techniques can improve the precision and efficiency of disease diagnosis for common cotton ailments such as boll rot, leaf spot, cotton wilt, and bacterial blight. By reviewing current research and case studies, the paper provides insights into the effectiveness of various deep learning approaches and their integration into practical agricultural systems. It also addresses the challenges faced in implementing these technologies and suggests future directions for advancing disease management strategies through deep learning. This review aims to offer a holistic perspective on the potential of deep learning to transform cotton disease detection and contribute to more sustainable agricultural practices.

Face Recognition: Diversified

This paper presents a novel lightweight hybrid architecture for face recognition, combining the strengths of MobileNet and attention mechanisms to enhance performance under challenging conditions such as facial occlusions (e.g., masks), varied illumination, and diverse expressions. The proposed model is evaluated against popular baseline models, including MobileNetV2, EfficientNetB2, and VGG16, on the Yale Face Dataset and a Simulated Masked Yale Dataset. On the Yale Dataset, the hybrid model achieved superior results with an accuracy of 93.78%, precision of 94.45%, recall of 93.33%, and F1-score of 93.89%, outperforming the baseline models in all key metrics. Additionally, when tested on the Simulated Masked Yale Dataset, the hybrid model exhibited increased resilience to occlusion with an accuracy of 63.45% and F1-score of 64.22%, significantly surpassing the other architectures.

Blockchain in Cybersecurity: Enhancing Trust and Resilience in the Digital Age

Blockchain technology has emerged as a transformative tool in the field of cybersecurity, offering a decentralized, immutable, and transparent framework to enhance trust and resilience in digital systems. This review explores the various applications of blockchain in cybersecurity, focusing on its ability to mitigate key security challenges such as data tampering, unauthorized access, and identity fraud. By analyzing the integration of blockchain in areas like secure data sharing, IoT security, and identity management, this paper highlights the strengths and limitations of blockchain-based solutions. Furthermore, it examines consensus mechanisms and cryptographic techniques that ensure the integrity and confidentiality of information. Despite its potential, blockchain faces challenges such as scalability, regulatory hurdles, and susceptibility to attacks like 51% and Sybil attacks. This review aims to provide a comprehensive understanding of blockchain's role in enhancing cybersecurity, while also identifying future research directions to overcome current limitations.

Deep Fakes and Deep Learning: An Overview of Generation Techniques and Detection Approaches

The rapid evolution of deep learning has fueled the rise of deep fakes, artificially generated media that can convincingly mimic real human faces, voices, and actions. These fabricated images, videos, and audio clips are created using sophisticated neural networks, posing significant threats to privacy, security, and public trust in digital content. This paper presents a comprehensive review of the key deep learning techniques driving both the creation and detection of deep fakes. On the generation side, methods such as Generative Adversarial Networks (GANs), autoencoders, and Recurrent Neural Networks (RNNs) are examined for their role in producing realistic manipulated media. GANs, particularly, have revolutionized deep fake creation by enabling the development of highly convincing facial expressions and motion sequences. Autoencoders are widely employed for face swapping and video manipulation, while RNNs, including Long Short-Term Memory (LSTM) networks, are critical in voice cloning and generating realistic speech patterns. In response to the escalating concerns over deep fakes, substantial research has focused on detection methodologies. This paper reviews the latest advancements in detection, particularly the use of Convolutional Neural Networks (CNNs) for image and video analysis, as well as hybrid models that combine CNNs with RNNs for more effective detection of spatial and temporal inconsistencies. Moreover, the paper explores emerging strategies such as adversarial training, transfer learning, and blockchain-based solutions that aim to strengthen detection robustness against increasingly sophisticated deep fakes. Finally, the paper addresses the broader ethical and societal challenges posed by deep fakes, including their use in disinformation campaigns, identity theft, and other malicious activities. The need for transparent, interpretable detection models and the importance of interdisciplinary collaboration to mitigate these risks are emphasized.
By providing an in-depth analysis of both creation and detection techniques, this review aims to contribute to the development of more secure and reliable digital ecosystems in the face of this growing threat.

Voice Assistant System with Object Detection Technology for Visually Impaired

Navigation, object recognition, obstacle avoidance, and reading present substantial obstacles for visually impaired people, impeding their independence and day-to-day functioning. Current solutions, including standard voice assistants and white canes, either offer limited help or raise privacy concerns because they rely too heavily on the cloud. To address these issues, we propose an Android-powered object detection and voice assistant system to help blind users with the problems they face daily. The system incorporates an Arduino Uno, YOLOv7 for object detection, and an Android device as the camera module. The complete apparatus is small and light, making it easy to mount anywhere. The assessments are carried out in controlled settings that replicate situations a blind person could face in the real world. According to the results, the proposed device allows visually impaired people to navigate more easily, comfortably, and accessibly than with a white cane. People with visual impairments often struggle to traverse complicated environments effectively, and helping them become perceptive navigators is no easy assignment. The system detects potential obstructions in the user's path, calculates the user's trajectory, and provides navigational data. Two experimental scenarios were used to assess the solution. While the data is currently insufficient to draw firm conclusions, it does show that the technology can effectively assist visually impaired individuals in navigating an unfamiliar built environment.

Quantum-Enhanced Machine Learning for Real-Time Ad Serving

This paper presents a groundbreaking approach to addressing the growing computational challenges in real-time ad serving by leveraging quantum computing to accelerate machine learning (ML) algorithms. We propose a hybrid framework, the Quantum AdServer, which utilizes quantum algorithms alongside classical computing to reduce the time complexity of critical ML tasks in programmatic advertising. We explore both Variational Quantum Circuits (VQC) for near-term implementation on noisy intermediate-scale quantum (NISQ) devices and the Harrow-Hassidim-Lloyd (HHL) algorithm for future scenarios where more advanced quantum hardware is available. Our approach demonstrates significant improvements in both speed and scalability of personalized ad delivery, potentially revolutionizing the field of computational advertising. Through comprehensive theoretical analysis, simulations, and a detailed comparison of quantum methods, we showcase the potential of quantum-enhanced ML in ad tech while discussing practical challenges, including current hardware limitations and integration with existing ad-serving systems.

VIDEO TO VIDEO TRANSLATION USING MBART MODEL

Many languages are spoken in India across its diverse regions, so it can be difficult for speakers to follow global languages such as English, Spanish, French, and German. This paper therefore aims to translate one global language, English, into a regional language such as Tamil. Our system takes a YouTube URL as input, where the video is in English, saves the video, and applies machine learning libraries and models such as gTTS, the Whisper model, and the mBART-50 model. Through these, we perform audio extraction, speech-to-text conversion, text translation, and text-to-speech synthesis. By integrating language translation and audio synthesis, the system helps break down linguistic barriers.
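The four-stage pipeline the abstract describes can be sketched as a simple chain of functions. In this hypothetical skeleton, the stub bodies stand in for the real Whisper, mBART-50, and gTTS calls; the function names, signatures, and return values are illustrative, not the project's actual API.

```python
# Pipeline stages: audio extraction -> speech-to-text -> translation -> text-to-speech.
# Each stub below is a stand-in for a real model or library call.

def extract_audio(video_path: str) -> str:
    """Stand-in for pulling the audio track out of the downloaded video."""
    return video_path.replace(".mp4", ".wav")

def speech_to_text(audio_path: str) -> str:
    """Stand-in for Whisper transcription of the English audio."""
    return "hello world"

def translate_text(text: str, target_lang: str = "ta_IN") -> str:
    """Stand-in for mBART-50 English-to-Tamil translation."""
    return f"[{target_lang}] {text}"

def text_to_speech(text: str, out_path: str) -> str:
    """Stand-in for gTTS synthesis of the translated text."""
    return out_path

def translate_video(video_path: str) -> str:
    """Run the full chain and return the path of the synthesized audio."""
    audio = extract_audio(video_path)
    transcript = speech_to_text(audio)
    translated = translate_text(transcript)
    return text_to_speech(translated, video_path.replace(".mp4", "_ta.mp3"))

print(translate_video("lecture.mp4"))  # -> lecture_ta.mp3
```

Keeping each stage behind its own function makes it straightforward to swap a stub for the real model call without touching the rest of the chain.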

INTEGRATING ARTIFICIAL INTELLIGENCE INTO CYBERCRIME INVESTIGATION: CHALLENGES AND FUTURE DIRECTIONS

Computer and social networking crime, whereby criminals use the Internet to propagate criminal activities, is among the major challenges facing existing policing strategies. Modern-day crimes include hacking into computer systems, stealing money from consumers, ransomware, and identity theft, many of which exploit the dark web and encryption. In this regard, artificial intelligence (AI) offers an efficient means of improving cybercrime investigation. This paper analyses how AI technologies such as machine learning, natural language processing, and deep learning can be incorporated into cybercrime investigations and how they can help address difficulties concerning data volume, complexity, and encryption. The advantages of utilizing AI are numerous, from pattern recognition to automating repetitive tasks and cutting down investigation time. However, the paper recognizes that applying AI brings legal, technical, and ethical concerns, including privacy, bias, and legal constraints. This research analyses existing legal frameworks in India, the EU, and the United States while examining how AI could be incorporated into cybercrime investigations without violating citizens' rights. Further, it discusses infringement, possible bias, and unlawful use, and recounts drawbacks related to the lack of resources and expertise that police departments confront. In the final section of the paper, directions for future research on the use of AI in the fight against cybercrime are given; in addition, cooperation between countries, legal regulation of such activities, protection of ethical interests, and training of personnel are described. These measures help ensure that the benefits of artificial intelligence are realized fully without compromising security or the basic rights of individuals.

Advanced Machine Learning Techniques for Water Quality Prediction and Management: A Comprehensive Review

The incorporation of IoT, machine learning, and geospatial technologies has rapidly accelerated data-driven approaches to water quality monitoring. Such approaches make water quality assessment not only accurate but also cost-efficient amid growing environmental challenges. IoT sensors allow for real-time data generation, and machine learning models such as support vector machines, neural networks, and regression techniques have transformed water quality index prediction and analysis. GIS applications provide spatial visualization and management of water resources. This collection of papers covers the constraints of classical measuring techniques, advanced solutions based on sensor-driven automation, and hybrid algorithms. The integration of these technologies addresses the complexities of water quality measurement while also providing a basis for sustainable water management and actionable insights for decision-makers. Hence, this review underlines the potential of future integration of IoT, AI, and GIS technologies to revolutionize water quality monitoring, helping ensure clean water amid global environmental change.

A Comparative Study of Fuzzy Goal Programming And Chance Constrained Fuzzy Goal Programming

This work is a comparative study of the traditional fuzzy goal programming (FGP) model and the chance constrained fuzzy goal programming (CCFGP) model. The right-hand side coefficient of the constraint matrix is assumed to be a right-sided fuzzy number and a random variable following the Gumbel distribution, while the coefficients of the constraint matrix are triangular fuzzy numbers. The chance constrained problem is converted to its deterministic equivalent and the fuzzy constraints are defuzzified. The bounds of the kth objectives are determined and used to obtain the membership function of the fuzzy goals. Lastly, the weighted sum goal programming technique is employed to obtain the optimal solution to the decision maker's goal target using the attained membership function. Moreover, the CCFGP model proved a more satisficing approach to optimizing the decision maker's goals, as it yielded an under-achievement of the decision maker's goal target. A numerical illustration demonstrated the superiority of the technique.

Real-Time Object Detection in Low-Light Environments using YOLOv8: A Case Study with a Custom Dataset

Object detection in low-light conditions presents significant challenges due to the reduced visibility and poor illumination, particularly in real-time applications. This paper proposes a novel approach using the YOLOv8 model for real-time object detection in night-time conditions. A custom dataset comprising various objects captured in low-light environments was utilized to train and evaluate the model. The results demonstrate superior performance in terms of speed and accuracy compared to previous models, particularly YOLOv3. We also include an analysis of the model's real-time performance using a custom video feed. Our findings show that YOLOv8 outperforms earlier YOLO versions in detecting objects accurately and quickly in low-light, real-time scenarios, making it a promising solution for night-time surveillance and other security-related applications.

AI Anthropomorphism: Effects on AI-Human and Human-Human Interactions

Objective: Anthropomorphism is the act of assigning distinctive human-like traits, feelings, and behavioral characteristics to non-human entities. The phenomenon known as artificial intelligence (AI) anthropomorphism involves imputing human-like behavioral characteristics onto generative artificial intelligence systems. This phenomenon holds significant implications for the future of human-human social interactions in society. This review paper examines the concept of AI anthropomorphism and its influence on human behavior, with a particular emphasis on how interaction between AI and humans can affect societal dynamics and social relationships among humans. Methodology: This paper examines the comprehensive understanding of AI anthropomorphism and the impact of AI-human interactions on human-human social interactions through the examination of several theoretical frameworks and empirical studies. The paper synthesizes information from the research literature on AI anthropomorphism. The paper incorporates insights from theoretical frameworks such as social presence theory, media equation theory, attachment theory, and uncanny valley theory. The paper entails an in-depth study of scholarly publications, case studies, and observational studies that highlights the implications for human relationships with anthropomorphized AI. Findings: The findings indicate that attributing human-like characteristics to AI can greatly increase user engagement, inclusivity, and understanding of AI, potentially enhancing human-human relationships by facilitating similar positive social behaviors. Excessive dependence on AI for social interaction can potentially diminish the quality of human communications and cause the erosion of social skills, thereby emphasising the importance of incorporating AI in a balanced manner. In conclusion, AI has the potential to enhance empathy, compassion, and teamwork in human communication. 
It is essential to strike a balance to avoid becoming overly reliant on generative AI and sacrificing authentic human connections. Subsequent investigations should prioritize the refinement of AI design and social chatbots to bolster and amplify human-human connections, rather than supplanting them.

Predicting Titanic Survivors Using Random Forest Machine Learning Algorithm

The shipwreck of the RMS Titanic is still remembered as a well-known tragedy that took many lives. Using passenger data to predict who would survive this disaster presents an intriguing challenge for machine learning. This research utilizes the Random Forest algorithm, an effective ensemble learning technique, to examine and forecast survival outcomes based on factors such as age, gender, ticket class, and fare. Through thorough data preprocessing, which includes addressing missing values and creating new features, the constructed model delivers precise survival predictions. Important factors like passenger class and gender emerge as the most influential elements affecting the results. The model achieves an accuracy of over 82%, surpassing conventional machine learning methods like Logistic Regression and Decision Trees. By prioritizing feature significance and ensuring the model's broad applicability, this study not only emphasizes the predictive capabilities of machine learning but also provides insights into the societal and structural dynamics at play during the tragedy. Our results illustrate the effectiveness of Random Forest for binary classification tasks and its potential for wider use in predictive analytics.
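The pipeline the abstract describes can be sketched as follows. This is a minimal illustration with a tiny inline sample standing in for the Titanic dataset; the column choices (class, sex, age, fare) mirror the abstract, but the rows, split, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: Random Forest on Titanic-style features (illustrative data).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy rows: [pclass, sex (0=male, 1=female), age, fare]; label = survived.
X = [
    [3, 0, 22.0, 7.25],
    [1, 1, 38.0, 71.28],
    [3, 1, 26.0, 7.92],
    [1, 1, 35.0, 53.10],
    [3, 0, 35.0, 8.05],
    [2, 0, 54.0, 26.00],
    [3, 1, 27.0, 11.13],
    [1, 0, 40.0, 27.72],
]
y = [0, 1, 1, 1, 0, 0, 1, 0]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)

# Feature importances indicate which inputs drive the predictions,
# mirroring the abstract's finding that class and gender dominate.
importances = clf.feature_importances_
```

On the real dataset, the same `feature_importances_` attribute is how passenger class and gender would be identified as the most influential factors.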

Improving Brain Cancer Detection with a CNN-RNN Hybrid Model: A Spatial-Temporal Approach

Accurate and early detection of brain cancer is critical for improving treatment outcomes and patient survival. However, traditional diagnostic methods relying on radiological interpretation often lead to variable accuracy and delayed diagnoses due to the complex nature of brain tumors. This paper presents a novel hybrid deep learning model that combines Convolutional Neural Networks (CNNs) for spatial feature extraction with Recurrent Neural Networks (RNNs) for temporal analysis, specifically designed to improve brain cancer detection from MRI and CT scans. By leveraging the strengths of both CNNs and RNNs, the model captures intricate spatial and temporal patterns in medical images, leading to significant improvements in detection accuracy, sensitivity, and specificity. Comparative evaluations show that the proposed hybrid model outperforms conventional diagnostic techniques and existing deep learning approaches. The results highlight the potential of this method for earlier and more reliable brain cancer diagnoses, ultimately contributing to more personalized and effective treatment plans. Furthermore, the paper suggests that this hybrid approach could be adapted for the detection of other complex medical conditions.
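The division of labor between the two network types can be sketched in plain NumPy: a convolution summarizes each image slice spatially (the CNN role), then a simple recurrent cell aggregates the per-slice features across the scan sequence (the RNN role). All sizes, kernels, and the final sigmoid score are illustrative assumptions, not the paper's architecture.

```python
# Sketch of the CNN -> RNN hybrid idea on a toy multi-slice "scan".
import numpy as np

rng = np.random.default_rng(0)

def conv2d_feature(img, kernel):
    """Valid 2-D convolution followed by global average pooling."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out.mean()  # one pooled spatial feature per kernel

def rnn_aggregate(features, w_h=0.5, w_x=1.0):
    """Elman-style recurrence over the slice sequence."""
    h = 0.0
    for x in features:
        h = np.tanh(w_h * h + w_x * x)
    return h

scan = rng.standard_normal((5, 16, 16))          # 5 slices of a toy 16x16 scan
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]

# Spatial step (CNN role): 4 pooled features per slice.
per_slice = np.array([[conv2d_feature(s, k) for k in kernels] for s in scan])
# Temporal step (RNN role): aggregate each feature channel across slices.
state = np.array([rnn_aggregate(per_slice[:, c]) for c in range(4)])
score = 1.0 / (1.0 + np.exp(-state.sum()))       # sigmoid "tumor" score
```

A production model would replace the hand-rolled loops with learned convolutional and recurrent layers, but the data flow — spatial features per slice, then recurrence over the slice sequence — is the same.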

A Comparative Study of Classification Algorithms for Enhanced Lung Cancer Prediction Using Deep Learning and SOM-Based Microscopic Image Analysis

Lung cancer is one of the top causes of cancer-related fatalities worldwide, necessitating the development of efficient early detection techniques. This study explores a hybrid approach combining deep learning and a Self-Organizing Map (SOM) for the classification of three lung cancer subtypes: adenocarcinoma, squamous cell carcinoma, and neuroendocrine tumors, using microscopic images. A pre-trained MobileNet model is employed for feature extraction, while the SOM is used for dimensionality reduction and visualization of high-dimensional data. The extracted features are then classified using various machine learning algorithms, including Random Forest, LightGBM, and Decision Tree. A comparative analysis of these classifiers is conducted to assess their performance in predicting cancer types. Additionally, thresholding is applied to highlight cancerous regions in the images, enhancing the visual detection of malignant cells. Results indicate that the hybrid model provides competitive classification accuracy, with the Random Forest and Decision Tree classifiers showing particular promise. This research demonstrates the potential of combining deep learning with traditional machine learning techniques for lung cancer detection, offering a pathway toward more accurate and efficient diagnostic tools.
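The SOM step the abstract describes can be sketched as follows: a tiny Self-Organizing Map projects high-dimensional feature vectors (standing in here for MobileNet features) onto a 2-D grid. The grid size, learning rate, neighborhood width, and random data are illustrative assumptions, not the study's configuration.

```python
# Minimal Self-Organizing Map sketch for dimensionality reduction/visualization.
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, grid=(4, 4), epochs=20, lr=0.5, sigma=1.0):
    n_units = grid[0] * grid[1]
    weights = rng.standard_normal((n_units, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: grid cell whose weight vector is nearest x.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            influence = np.exp(-d2 / (2 * sigma ** 2))[:, None]
            weights += lr * influence * (x - weights)  # pull BMU neighborhood
    return weights, coords

def project(x, weights, coords):
    """Map a feature vector to its 2-D grid cell."""
    return coords[np.argmin(np.linalg.norm(weights - x, axis=1))]

features = rng.standard_normal((30, 8))  # 30 samples of 8-D "deep" features
weights, coords = train_som(features)
cell = project(features[0], weights, coords)
```

In the hybrid pipeline, each image's MobileNet feature vector would be projected to a grid cell this way before, or alongside, the Random Forest/LightGBM/Decision Tree classification step.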

Advances in Tomato Disease Detection: A Comprehensive Survey of Machine Learning and Deep Learning Approaches for Leaves and Fruits

Tomatoes contributed about 232 billion Indian rupees to the Indian economy in the financial year 2020, second only to potatoes in vegetable production across South Asian countries. Tomatoes are among the most familiar vegetable crops and are extensively cultivated across India. India's tropical climate favors their growth, but specific weather conditions and several other factors affect the normal development of tomato plants. Beyond weather conditions and natural disasters, plant disease is a major crisis in crop production and plays a vital role in financial loss. Conventional disease detection approaches for tomato crops cannot produce a reliable solution, and their recognition period for diseases is slow; early recognition of disease yields better outcomes than the existing detection methods. Recently, technologies such as AI, IoT, pattern recognition, computer vision (CV), and image processing have developed rapidly and been applied to agriculture, specifically to automate disease and pest detection procedures. CV-based deep learning (DL) approaches have been applied to early disease detection. This study presents a wide-ranging investigation of the disease detection and classification approaches proposed for tomato leaf disease detection. This work also reviews the advantages and disadvantages of the methods presented. Additionally, the advancements, challenges, and opportunities in this field are discussed, providing insights into recent methods. This survey is a valuable resource for practitioners, researchers, and stakeholders involved in tomato cultivation and agricultural technology.

Detection of Kidney Disease using Machine Learning & Data Science

Kidney disease identification with machine learning and data science is transforming patient care and early diagnosis by using predictive models to identify important risk factors and biomarkers. Several organs in the human body perform vital functions; the kidney is among the most important, removing toxic substances from the body by filtering the blood. To maintain the health of the body, the kidneys should be safeguarded. Whether a kidney is affected by illness depends on a number of factors, and the causes of renal illness differ across individuals. In this investigation, machine learning has been applied to a renal disease dataset (obtained via Kaggle) to identify indicators of kidney illness. The primary goal of the data study has been to identify the core sources of the data, which has allowed any negative consequences to be distinguished. Correlation analysis has been used to select the fundamental attributes of the data. Models have then been built on those core attributes, and machine learning classifiers have been applied to diagnose kidney disease.
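The correlation-based attribute selection mentioned above can be sketched as ranking features by absolute Pearson correlation with the target and keeping the strongest. The toy matrix, coefficients, and threshold below are illustrative assumptions, not the Kaggle dataset or the study's exact procedure.

```python
# Sketch: rank candidate attributes by |correlation with the target|.
import numpy as np

rng = np.random.default_rng(2)
n = 200
X = rng.standard_normal((n, 5))  # 5 candidate clinical attributes (toy data)
# Toy target driven mainly by attributes 0 and 2, plus a little noise.
y = (2 * X[:, 0] - X[:, 2] + 0.1 * rng.standard_normal(n) > 0).astype(float)

def select_by_correlation(X, y, k=2):
    """Return indices of the k attributes most correlated with y."""
    corrs = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    )
    return np.argsort(corrs)[::-1][:k], corrs

top, corrs = select_by_correlation(X, y, k=2)
```

Classifiers are then trained only on the selected columns, which is the "core attributes" step the abstract describes.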

Review of AI-Driven Intrusion Detection Systems for Network-Based Attacks

This review paper explores the integration of Artificial Intelligence (AI) in Intrusion Detection Systems (IDS), highlighting how AI enhances the effectiveness and efficiency of these systems. It covers the evolution of IDS, from traditional methods to advanced AI-based techniques, including machine learning and deep learning. The paper compares these methods, assessing their strengths and weaknesses in various cybersecurity contexts. The focus is on the transformative impact of AI on IDS, offering insights into future research directions and the potential of AI to revolutionize cybersecurity defenses.

Leveraging Artificial Intelligence Algorithms for Enhanced Malware Analysis: A Comprehensive Study

The escalation of sophisticated malware threats necessitates innovative solutions for their detection and neutralization. This paper discusses the role of Artificial Intelligence (AI) algorithms in the field of malware analysis, examining various AI methodologies, and scrutinizing their efficiencies and drawbacks. We further discuss the key AI algorithms utilized, their applicability, and future potential. This study provides a valuable resource for researchers and practitioners seeking to utilize AI for improved malware detection and mitigation.

Automated Evaluation of Speaker Performance Using Machine Learning: A Multi-Modal Approach to Analyzing Audio and Video Features

In this paper, we propose a novel framework for evaluating the speaking quality of educators using machine learning techniques. Our approach integrates both audio and video data, leveraging key features such as facial expressions, gestures, speech pitch, volume, and pace to assess the overall effectiveness of a speaker. We collect and process data from a set of recorded teaching sessions, where we extract a variety of features using advanced tools such as Amazon Rekognition for video analysis and AWS S3 for speech-to-text conversion. The framework then utilizes a variety of machine learning models, including Logistic Regression, K-Nearest Neighbors, Naive Bayes, Decision Trees, and Support Vector Machines, to classify speakers as either "Good" or "Bad" based on predefined quality indicators. The classification is further refined through feature extraction, where key metrics such as eye contact, emotional states, speech patterns, and question engagement are quantified. After a thorough analysis of the dataset, we apply hyperparameter optimization and evaluate the models using ROC-AUC scores to determine the most accurate predictor of speaker quality. The results demonstrate that Random Forest and Support Vector Machines offer the highest classification accuracy, achieving an ROC-AUC score of 0.89. This research provides a comprehensive methodology for automated speaker evaluation, which could be utilized in various educational and training environments to improve speaker performance.
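The ROC-AUC metric used above for model comparison has a simple ranking interpretation: AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties counting half). The labels and scores below are illustrative "speaker quality" outputs, not the paper's data.

```python
# Sketch: ROC-AUC computed directly from its rank-statistic definition.
def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count positive-vs-negative pairs the positive wins (ties count 0.5).
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.8, 0.6, 0.65, 0.3, 0.7, 0.5]
auc = roc_auc(labels, scores)  # → 0.9375
```

A score of 0.89, as reported for the Random Forest and SVM models, means a randomly chosen "Good" speaker outranks a randomly chosen "Bad" one 89% of the time.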

A Model-Driven Application for Streamlining Organizational Processes Using Microsoft Power Apps: Workflow Automation and Data Flow Integration

This research looks into the design and realization of a model-driven application in Microsoft Power Apps for streamlining organizational processes. It brings together leave management, approval workflows, productivity tracking, task management, and time logging into a single system. The software includes automated workflows that improve operational efficiency, transparency, and employee performance. The study covers the workflow and data-flow structures needed to make the system efficient, including detailed Data Flow Diagrams (DFDs) showing how the app transforms traditional manual processes through the integration of business rules and workflow automation. The app can also integrate with an employee's calendar, giving easy access to all of their responsibilities in one place. Notifications and alerts ensure that important activity is not lost, while the reporting functionality gives managers a detailed understanding of what is going on. Automating these processes, along with the rule-based business logic the app embeds, greatly reduces manual labor and enables a standard, consistent process for all employees throughout the organization. Ultimately, this project addresses the root problems organizations face in improving operational efficiency and time management, delivering tools for better time-data collection and alerting while supporting decision-making within the organization. Before concluding, the paper assesses the app against business and organizational goals.

Projects

Mar-2020 to May-2024

Special Lines Rate Revision Document Management

External Communication is the center of Progressive's outbound customer and agent communications and is used for creating, delivering, and archiving customized print documents, e-mails, faxes, and online forms. The objective of this project is to ensure that forms issuance and content match the business and legal requirements. Policies are created using different applications such as Policypro, S2, PLACQ, and SLIQ.

Conference/Seminar/STTP/FDP/Symposium/Workshop

Conference
  • Feb 2025

Digital Transformation in Information Technology: A Journey Towards Innovation and Excellence

Hosted By:

SRM Institute of Science and Technology (SRMIST), Vadapalani, Chennai, Tamil Nadu, India

Certificates

Issued : Oct 2024
  • By: PMI
  • Event: Certified Project Management Professional

Scholar9 Profile ID

S9-102024-0406199

Publication (1)
Articles Reviewed (52)
Citations (0)
Network (4)
Conferences/Seminar (1)