
About

Kendyala Srinivasulu Harshavardhan is an innovative, results-driven technical leader with over 13 years of experience in Identity & Access Management within the financial services sector. He has deep experience leading Customer Identity and Access Management (CIAM) authentication platform teams, with a skill set spanning Authentication, Orchestration, Single Sign-On (SSO), and Multi-Factor Authentication (MFA); industry-standard protocols including SAML 2.0, OAuth 2.0, OpenID Connect, and WS-Federation; identity federation and provisioning; and the Ping Identity product suite (PingFederate, PingID, PingAccess). Srinivasulu holds a Master's degree in Computer Science from the University of Illinois Springfield, underpinning his strong technical foundation. He brings a blend of technical expertise and strategic vision to his work, consistently driving innovative solutions that meet business objectives. With a keen focus on leveraging modern technologies and methodologies, he is adept at navigating complex challenges and delivering impactful outcomes in dynamic, fast-paced financial services environments.


Skills

Experience

Vice President of Software Engineering

JPMorganChase Institute

May-2019 to Present
Senior Technical Lead

CapitalOne

Jul-2015 to May-2019

Education

University of Illinois System

Master's Degree in Computer Science & Engineering

Graduation Year: 2015
Jawaharlal Nehru Technological University, Hyderabad (JNTUH)

Bachelor's Degree in Computer Science & Engineering

Graduation Year: 2010

Peer-Reviewed Articles

Deep Learning for Polymer Classification: Automating Categorization of Peptides, Plastics, and Oligosaccharides

Polymers are a diverse and vital class of materials across numerous industries, each with unique structural characteristics and functional properties. Traditional methods of polymer classification rely heavily on labor-intensive techniques prone to subjectivity and human error. Deep learning has significantly transformed materials science by enabling automated analysis and classification of complex polymers. In this study, we classify three distinct subsets of the polymer family, each with its own structural features and applications: peptides, plastics, and oligosaccharides. Our research explores the effectiveness of several deep learning architectures for this task, achieving perfect accuracy with neural networks, K-Nearest Neighbors, and Random Forest classifiers. Principal Component Analysis enabled visualization of the sample distribution, demonstrating deep learning's potential to automate and enhance polymer classification and reduce reliance on traditional, labor-intensive methods.
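As a hedged illustration of the workflow this abstract describes (fixed-length feature vectors, PCA for visualization, and a nearest-neighbour classifier), the sketch below uses synthetic descriptor vectors in place of real polymer features; the cluster centers, dimensions, and sample counts are all assumptions, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed-length descriptor vectors for three polymer classes
# (stand-ins for real peptide / plastic / oligosaccharide features).
n_per_class, n_features = 30, 16
centers = np.array([-10.0, 0.0, 10.0])
X = np.vstack([c + rng.normal(0, 1, (n_per_class, n_features)) for c in centers])
y = np.repeat([0, 1, 2], n_per_class)

# PCA via SVD, projecting onto the first two principal components
# for the kind of 2-D sample-distribution plot the abstract mentions.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T

# Minimal 1-nearest-neighbour classifier, evaluated leave-one-out.
def knn1_loo(X, y):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)       # never match a sample to itself
    return y[d.argmin(axis=1)]

accuracy = (knn1_loo(X, y) == y).mean()
print(f"leave-one-out 1-NN accuracy: {accuracy:.2f}")
```

On well-separated synthetic clusters like these, the 1-NN classifier reaches perfect accuracy, mirroring the abstract's reported result on its own feature set.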

Detecting Fake Reviews in E-Commerce: A Deep Learning-Based Review

E-commerce platforms are increasingly vulnerable to fake reviews, which can distort product ratings and mislead consumers. Detecting these fraudulent reviews is critical to maintaining trust and transparency in online marketplaces. This review provides a comprehensive analysis of deep learning techniques used for fake review detection. Key models such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Transformer-based models like BERT are explored for their ability to analyze textual data and detect linguistic anomalies. Additionally, behavioral analysis using Convolutional Neural Networks (CNNs) and hybrid models combining textual and behavioral features are discussed. The review also highlights the role of Graph Neural Networks (GNNs) for network analysis and unsupervised learning methods like autoencoders for anomaly detection. Despite advances, challenges such as evolving fake review tactics, data imbalance, and cross-platform adaptability remain. The paper concludes by discussing future research directions, including enhancing model interpretability and combining deep learning with blockchain for more secure and verified review systems.

Harnessing Deep Learning for Precision Cotton Disease Detection: A Comprehensive Review

Cotton cultivation plays a critical role in global agriculture, yet its productivity is significantly hindered by various plant diseases that impact yield and quality. Conventional disease detection methods often fall short due to their reliance on manual inspection and limited accuracy. This comprehensive review explores the application of deep learning techniques beyond Convolutional Neural Networks (CNNs) in enhancing cotton disease detection. The paper covers a range of deep learning methodologies, including CNNs, Recurrent Neural Networks (RNNs), and hybrid models that combine different neural network architectures. It examines how these techniques can improve the precision and efficiency of disease diagnosis for common cotton ailments such as boll rot, leaf spot, cotton wilt, and bacterial blight. By reviewing current research and case studies, the paper provides insights into the effectiveness of various deep learning approaches and their integration into practical agricultural systems. It also addresses the challenges faced in implementing these technologies and suggests future directions for advancing disease management strategies through deep learning. This review aims to offer a holistic perspective on the potential of deep learning to transform cotton disease detection and contribute to more sustainable agricultural practices.

Face Recognition: Diversified

This paper presents a novel lightweight hybrid architecture for face recognition, combining the strengths of MobileNet and attention mechanisms to enhance performance under challenging conditions such as facial occlusions (e.g., masks), varied illumination, and diverse expressions. The proposed model is evaluated against popular baseline models, including MobileNetV2, EfficientNetB2, and VGG16, on the Yale Face Dataset and a Simulated Masked Yale Dataset. On the Yale Dataset, the hybrid model achieved superior results with an accuracy of 93.78%, precision of 94.45%, recall of 93.33%, and F1-score of 93.89%, outperforming the baseline models in all key metrics. Additionally, when tested on the Simulated Masked Yale Dataset, the hybrid model exhibited increased resilience to occlusion with an accuracy of 63.45% and F1-score of 64.22%, significantly surpassing the other architectures.

Blockchain in Cybersecurity: Enhancing Trust and Resilience in the Digital Age

Blockchain technology has emerged as a transformative tool in the field of cybersecurity, offering a decentralized, immutable, and transparent framework to enhance trust and resilience in digital systems. This review explores the various applications of blockchain in cybersecurity, focusing on its ability to mitigate key security challenges such as data tampering, unauthorized access, and identity fraud. By analyzing the integration of blockchain in areas like secure data sharing, IoT security, and identity management, this paper highlights the strengths and limitations of blockchain-based solutions. Furthermore, it examines consensus mechanisms and cryptographic techniques that ensure the integrity and confidentiality of information. Despite its potential, blockchain faces challenges such as scalability, regulatory hurdles, and susceptibility to attacks like 51% and Sybil attacks. This review aims to provide a comprehensive understanding of blockchain's role in enhancing cybersecurity, while also identifying future research directions to overcome current limitations.

Deep Fakes and Deep Learning: An Overview of Generation Techniques and Detection Approaches

The rapid evolution of deep learning has fueled the rise of deep fakes, artificially generated media that can convincingly mimic real human faces, voices, and actions. These fabricated images, videos, and audio clips are created using sophisticated neural networks, posing significant threats to privacy, security, and public trust in digital content. This paper presents a comprehensive review of the key deep learning techniques driving both the creation and detection of deep fakes. On the generation side, methods such as Generative Adversarial Networks (GANs), autoencoders, and Recurrent Neural Networks (RNNs) are examined for their role in producing realistic manipulated media. GANs, particularly, have revolutionized deep fake creation by enabling the development of highly convincing facial expressions and motion sequences. Autoencoders are widely employed for face swapping and video manipulation, while RNNs, including Long Short-Term Memory (LSTM) networks, are critical in voice cloning and generating realistic speech patterns. In response to the escalating concerns over deep fakes, substantial research has focused on detection methodologies. This paper reviews the latest advancements in detection, particularly the use of Convolutional Neural Networks (CNNs) for image and video analysis, as well as hybrid models that combine CNNs with RNNs for more effective detection of spatial and temporal inconsistencies. Moreover, the paper explores emerging strategies such as adversarial training, transfer learning, and blockchain-based solutions that aim to strengthen detection robustness against increasingly sophisticated deep fakes. Finally, the paper addresses the broader ethical and societal challenges posed by deep fakes, including their use in disinformation campaigns, identity theft, and other malicious activities. The need for transparent, interpretable detection models and the importance of interdisciplinary collaboration to mitigate these risks are emphasized. By providing an in-depth analysis of both creation and detection techniques, this review aims to contribute to the development of more secure and reliable digital ecosystems in the face of this growing threat.

Voice Assistant System with Object Detection Technology for Visually Impaired

Navigation, object recognition, obstacle avoidance, and reading present substantial obstacles for visually impaired people, impeding their independence and day-to-day functioning. Current solutions, including standard voice assistants and white canes, either offer limited help or raise privacy issues by relying too heavily on the cloud. To address these issues, we propose an Android-powered object detection system and voice assistant to help blind users with the problems they face daily. The system incorporates an Arduino Uno, YOLOv7 for object detection, and an Android device as the camera module. The complete apparatus is small and light, making it easy to mount anywhere. Assessments were carried out in controlled settings that replicate situations a blind person could face in the real world. Sighted individuals build cognitive maps of their surroundings from visual cues, a resource visually impaired people lack, so helping them become perceptive navigators is no easy task. The system detects potential obstructions in the user's path, calculates the user's trajectory, and provides navigational guidance. Two experimental scenarios were used to assess the solution. The results suggest the proposed device makes navigation more accessible, comfortable, and easier than the white cane; while the data is currently insufficient to draw firm conclusions, it does show that the technology can effectively assist visually impaired individuals in navigating an unfamiliar built environment.

Quantum-Enhanced Machine Learning for Real-Time Ad Serving

This paper presents a groundbreaking approach to addressing the growing computational challenges in real-time ad serving by leveraging quantum computing to accelerate machine learning (ML) algorithms. We propose a hybrid framework, the Quantum AdServer, which utilizes quantum algorithms alongside classical computing to reduce the time complexity of critical ML tasks in programmatic advertising. We explore both Variational Quantum Circuits (VQC) for near-term implementation on noisy intermediate-scale quantum (NISQ) devices and the Harrow-Hassidim-Lloyd (HHL) algorithm for future scenarios where more advanced quantum hardware is available. Our approach demonstrates significant improvements in both speed and scalability of personalized ad delivery, potentially revolutionizing the field of computational advertising. Through comprehensive theoretical analysis, simulations, and a detailed comparison of quantum methods, we showcase the potential of quantum-enhanced ML in ad tech while discussing practical challenges, including current hardware limitations and integration with existing ad-serving systems.

Video-to-Video Translation Using the mBART Model

Many languages are spoken across India's diverse regions, which can make global languages such as English, Spanish, French, and German difficult to follow. This paper therefore aims to translate one global language, English, into a regional language such as Tamil. Our system takes a YouTube URL as input (the video must be in English), saves the video, and applies machine learning tools including gTTS, the Whisper model, and the mBART-50 model. The pipeline performs audio extraction, speech-to-text conversion, text translation, and text-to-speech synthesis. By integrating language translation and audio synthesis in this way, we help break down linguistic barriers.
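The four pipeline stages named in the abstract (audio extraction, speech-to-text, translation, speech synthesis) can be sketched as pluggable callables. The wiring below is an assumption about the overall structure, not the paper's code; in a real run, the stubs would be replaced with yt-dlp/FFmpeg for download, Whisper for transcription, mBART-50 for English-to-Tamil translation, and gTTS for synthesis:

```python
from typing import Callable

def build_pipeline(download: Callable[[str], str],
                   transcribe: Callable[[str], str],
                   translate: Callable[[str], str],
                   synthesize: Callable[[str], bytes]) -> Callable[[str], bytes]:
    """Compose the four stages into a single URL -> audio function."""
    def run(url: str) -> bytes:
        audio_path = download(url)             # fetch video, extract audio track
        english_text = transcribe(audio_path)  # Whisper-style ASR
        tamil_text = translate(english_text)   # mBART-50 en -> ta
        return synthesize(tamil_text)          # gTTS-style TTS audio
    return run

# Smoke-test the wiring with trivial stub stages.
pipeline = build_pipeline(
    download=lambda url: "audio.wav",
    transcribe=lambda path: "hello world",
    translate=lambda text: f"[ta] {text}",
    synthesize=lambda text: text.encode(),
)
print(pipeline("https://youtube.com/watch?v=example"))  # b'[ta] hello world'
```

Keeping each stage behind a plain callable makes it easy to swap a cloud ASR service for Whisper, or a different translation model for mBART-50, without touching the rest of the pipeline.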

Integrating Artificial Intelligence into Cybercrime Investigation: Challenges and Future Directions

Crimes in which offenders use computers and social networks to propagate criminal activity over the Internet pose major challenges for existing policing strategies. Modern-day offences include hacking into computer systems, stealing money from consumers, ransomware, and identity theft, often facilitated by the dark web and encryption. Artificial intelligence (AI) offers an efficient way to improve how cybercrime is investigated. This paper analyses how AI technologies such as machine learning, natural language processing, and deep learning can be incorporated into cybercrime investigations and how they can help with difficulties concerning data volume, complexity, and encryption. The advantages of AI are numerous, from pattern recognition to automating repetitive tasks, cutting down investigation time. However, the paper recognizes that applying AI raises legal, technical, and ethical concerns, including privacy, bias, and legal constraints. This research analyses the existing legal frameworks of India, the EU, and the United States, examining how AI could be incorporated into cybercrime investigations without violating citizens' rights. It further discusses infringement, possible bias, and unlawful use, and recounts the resource and expertise constraints that police departments confront. The final section gives directions for future research on the use of AI in the fight against cybercrime, describing international cooperation, legal regulation, protection of ethical interests, and personnel training. These measures help ensure that the benefits of artificial intelligence are fully realized without compromising security or the basic rights of the individual.

Advanced Machine Learning Techniques for Water Quality Prediction and Management: A Comprehensive Review

The incorporation of IoT, machine learning, and geospatial technologies has rapidly accelerated data-driven approaches to water quality monitoring. These approaches make water quality assessment not only accurate but also cost-efficient amid growing environmental challenges. IoT sensors enable real-time data generation, while machine learning models such as support vector machines, neural networks, and regression techniques have transformed water quality index prediction and analysis. GIS applications provide spatial visualization and management of water resources. This review surveys the constraints of classical measuring techniques, advanced solutions based on sensor-driven automation, and hybrid algorithms. The integration of these technologies addresses the complexities of water quality measurement while providing a basis for sustainable water management recommendations and actionable insights for decision-makers. Hence, this review underlines the potential of future integration of IoT, AI, and GIS technologies to revolutionize water quality monitoring and help ensure clean water amid global environmental change.

A Comparative Study of Fuzzy Goal Programming And Chance Constrained Fuzzy Goal Programming

This work is a comparative study of the traditional fuzzy goal programming (FGP) model and the chance-constrained fuzzy goal programming (CCFGP) model. The right-hand-side coefficient of the constraint matrix is assumed to be a right-sided fuzzy number and a random variable following the Gumbel distribution, while the coefficients of the constraint matrix are triangular fuzzy numbers. The chance-constrained problem is converted to its deterministic equivalent and the fuzzy constraints are defuzzified. The bounds of the kth objectives are determined and used to obtain the membership functions of the fuzzy goals. Lastly, the weighted-sum goal programming technique is employed to obtain the optimal solution to the decision maker's goal targets using the attained membership functions. The CCFGP model proved the more satisficing approach to optimizing the decision maker's goals, as it yielded an under-achievement of the goal target. A numerical illustration demonstrated the superiority of the technique.
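For readers unfamiliar with the notation, the standard triangular-fuzzy-number membership function and a common centroid defuzzification can be sketched as follows. These are textbook definitions, not the specific defuzzification used in this paper's model:

```python
def triangular_membership(x: float, a: float, b: float, c: float) -> float:
    """Membership of x in the triangular fuzzy number (a, b, c), a <= b <= c:
    0 outside [a, c], rising linearly to 1 at the peak b, then falling."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def centroid_defuzzify(a: float, b: float, c: float) -> float:
    """Centroid (center of gravity) of a triangular fuzzy number: (a+b+c)/3."""
    return (a + b + c) / 3.0

# A triangular fuzzy coefficient (2, 4, 6): full membership at its peak.
print(triangular_membership(4, 2, 4, 6))   # 1.0
print(triangular_membership(3, 2, 4, 6))   # 0.5
print(centroid_defuzzify(2, 4, 6))         # 4.0
```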

Real-Time Object Detection in Low-Light Environments using YOLOv8: A Case Study with a Custom Dataset

Object detection in low-light conditions presents significant challenges due to the reduced visibility and poor illumination, particularly in real-time applications. This paper proposes a novel approach using the YOLOv8 model for real-time object detection in night-time conditions. A custom dataset comprising various objects captured in low-light environments was utilized to train and evaluate the model. The results demonstrate superior performance in terms of speed and accuracy compared to previous models, particularly YOLOv3. We also include an analysis of the model's real-time performance using a custom video feed. Our findings show that YOLOv8 outperforms earlier YOLO versions in detecting objects accurately and quickly in low-light, real-time scenarios, making it a promising solution for night-time surveillance and other security-related applications.
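Low-light detection pipelines are often paired with a brightness-normalization step before inference. As one hedged example (an assumption for illustration, not a preprocessing method claimed by the paper), a simple gamma-correction lookup table in NumPy could look like this:

```python
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Brighten a low-light uint8 image via out = 255 * (in/255)**(1/gamma),
    implemented as a 256-entry lookup table for speed."""
    lut = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return lut[img]

dark = np.full((4, 4), 30, dtype=np.uint8)   # a uniformly dark patch
bright = gamma_correct(dark)
print(int(dark[0, 0]), "->", int(bright[0, 0]))
```

In practice such a step would run on each frame before it is handed to the detector, trading a small amount of per-frame latency for better visibility of dim objects.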

AI Anthropomorphism: Effects on AI-Human and Human-Human Interactions

Objective: Anthropomorphism is the act of assigning distinctive human-like traits, feelings, and behavioral characteristics to non-human entities. The phenomenon known as artificial intelligence (AI) anthropomorphism involves imputing human-like behavioral characteristics onto generative artificial intelligence systems. This phenomenon holds significant implications for the future of human-human social interactions in society. This review paper examines the concept of AI anthropomorphism and its influence on human behavior, with a particular emphasis on how interaction between AI and humans can affect societal dynamics and social relationships among humans. Methodology: This paper examines the comprehensive understanding of AI anthropomorphism and the impact of AI-human interactions on human-human social interactions through the examination of several theoretical frameworks and empirical studies. The paper synthesizes information from the research literature on AI anthropomorphism. The paper incorporates insights from theoretical frameworks such as social presence theory, media equation theory, attachment theory, and uncanny valley theory. The paper entails an in-depth study of scholarly publications, case studies, and observational studies that highlights the implications for human relationships with anthropomorphized AI. Findings: The findings indicate that attributing human-like characteristics to AI can greatly increase user engagement, inclusivity, and understanding of AI, potentially enhancing human-human relationships by facilitating similar positive social behaviors. Excessive dependence on AI for social interaction can potentially diminish the quality of human communications and cause the erosion of social skills, thereby emphasising the importance of incorporating AI in a balanced manner. In conclusion, AI has the potential to enhance empathy, compassion, and teamwork in human communication. 
It is essential to strike a balance to avoid becoming overly reliant on generative AI and sacrificing authentic human connections. Subsequent investigations should prioritize the refinement of AI design and social chatbots to bolster and amplify human-human connections, rather than supplanting them.

Predicting Titanic Survivors Using Random Forest Machine Learning Algorithm

The wreck of the RMS Titanic is still remembered as a tragedy that took many lives. Using passenger data to predict who would survive the disaster presents an intriguing challenge for machine learning. This research utilizes the Random Forest algorithm, an effective ensemble learning technique, to examine and forecast survival outcomes based on factors such as age, gender, ticket class, and fare. Through thorough data preprocessing, including addressing missing values and engineering new features, the constructed model delivers precise survival predictions. Passenger class and gender emerge as the most influential factors affecting the results. The model achieves an accuracy of over 82%, surpassing conventional machine learning methods such as Logistic Regression and Decision Trees. By prioritizing feature significance and ensuring the model's broad applicability, this study not only emphasizes the predictive capabilities of machine learning but also provides insights into the societal and structural dynamics at play during the tragedy. Our results illustrate the effectiveness of Random Forest for binary classification tasks and its potential for wider use in predictive analytics.
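A minimal sketch of the approach this abstract describes (impute missing ages, then fit a Random Forest over class, sex, age, and fare) might look like the following. The tiny hand-made table is purely illustrative, not the actual Kaggle Titanic data, and the reported 82% figure comes from the real dataset, not this toy:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: pclass, sex (0=male, 1=female), age, fare; np.nan marks a missing age.
X = np.array([
    [1, 1, 29.0,  90.0],
    [3, 0, 25.0,   7.9],
    [2, 1, 35.0,  26.0],
    [3, 0, np.nan, 8.1],
    [1, 0, 54.0,  52.0],
    [3, 1,  4.0,  16.7],
    [2, 0, 30.0,  13.0],
    [1, 1, 58.0, 110.0],
])
y = np.array([1, 0, 1, 0, 0, 1, 0, 1])   # survived?

# Preprocessing step from the abstract: impute missing ages with the median.
ages = X[:, 2]
ages[np.isnan(ages)] = np.nanmedian(ages)

model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
print("training accuracy:", model.score(X, y))
print("feature importances:",
      dict(zip(["pclass", "sex", "age", "fare"],
               model.feature_importances_.round(2))))
```

The `feature_importances_` attribute is what supports statements like "passenger class and gender emerge as the most influential factors": it ranks how much each column contributed to the forest's splits.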

Improving Brain Cancer Detection with a CNN-RNN Hybrid Model: A Spatial-Temporal Approach

Accurate and early detection of brain cancer is critical for improving treatment outcomes and patient survival. However, traditional diagnostic methods relying on radiological interpretation often lead to variable accuracy and delayed diagnoses due to the complex nature of brain tumors. This paper presents a novel hybrid deep learning model that combines Convolutional Neural Networks (CNNs) for spatial feature extraction with Recurrent Neural Networks (RNNs) for temporal analysis, specifically designed to improve brain cancer detection from MRI and CT scans. By leveraging the strengths of both CNNs and RNNs, the model captures intricate spatial and temporal patterns in medical images, leading to significant improvements in detection accuracy, sensitivity, and specificity. Comparative evaluations show that the proposed hybrid model outperforms conventional diagnostic techniques and existing deep learning approaches. The results highlight the potential of this method for earlier and more reliable brain cancer diagnoses, ultimately contributing to more personalized and effective treatment plans. Furthermore, the paper suggests that this hybrid approach could be adapted for the detection of other complex medical conditions.

A Comparative Study of Classification Algorithms for Enhanced Lung Cancer Prediction Using Deep Learning and SOM-Based Microscopic Image Analysis

Lung cancer is one of the top causes of cancer-related fatalities worldwide, necessitating the development of efficient early detection techniques. This study explores a hybrid approach combining deep learning and a Self-Organizing Map (SOM) for the classification of three lung cancer subtypes: adenocarcinoma, squamous cell carcinoma, and neuroendocrine tumors, using microscopic images. A pre-trained MobileNet model is employed for feature extraction, while the SOM is used for dimensionality reduction and visualization of high-dimensional data. The extracted features are then classified using various machine learning algorithms, including Random Forest, LightGBM and Decision Tree. A comparative analysis of these classifiers is conducted to assess their performance in predicting cancer types. Additionally, thresholding is applied to highlight cancerous regions in the images, enhancing the visual detection of malignant cells. Results indicate that the hybrid model provides competitive classification accuracy, with the Random Forest and Decision Tree classifiers showing particular promise. This research demonstrates the potential of combining deep learning with traditional machine learning techniques for lung cancer detection, offering a pathway toward more accurate and efficient diagnostic tools.
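The Self-Organizing Map component can be illustrated with a minimal NumPy implementation. The synthetic clusters below stand in for MobileNet feature vectors, and the grid size, learning rate, and neighborhood schedule are all assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(X, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5):
    """Minimal SOM: a (rows, cols, features) weight grid trained by pulling
    each sample's best-matching unit (BMU) and its neighbours toward it."""
    rows, cols = grid
    W = rng.normal(0, 0.1, (rows, cols, X.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)   # node positions
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.3    # shrinking neighbourhood
        for x in X[rng.permutation(len(X))]:
            d = np.linalg.norm(W - x, axis=-1)
            bmu = np.unravel_index(d.argmin(), d.shape)
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            W += lr * g[..., None] * (x - W)
    return W

def bmu_of(x, W):
    return np.unravel_index(np.linalg.norm(W - x, axis=-1).argmin(), W.shape[:2])

# Two synthetic "feature" clusters standing in for MobileNet embeddings.
A = rng.normal(-2, 0.2, (20, 8))
B = rng.normal(+2, 0.2, (20, 8))
W = train_som(np.vstack([A, B]))
print("BMU of an A sample:", bmu_of(A[0], W))
print("BMU of a B sample:", bmu_of(B[0], W))
```

After training, samples from different classes land on different map nodes, which is what makes the SOM useful for the dimensionality reduction and visualization role the abstract assigns it.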

Advances in Tomato Disease Detection: A Comprehensive Survey of Machine Learning and Deep Learning Approaches for Leaves and Fruits

Tomatoes contributed about 232 billion Indian rupees to the Indian economy in the financial year 2020, second only to potatoes in vegetable production across South Asian countries. The tomato is the most familiar vegetable crop, extensively cultivated on farmland in India. India's tropical weather is favorable for its growth, but specific weather conditions and several other factors affect the normal development of tomato plants. Beyond weather conditions and natural disasters, plant disease is a major crisis in crop production and plays a significant role in financial loss. Typical disease detection approaches for tomato crops cannot produce a dependable solution, and their recognition period is slow; early recognition of disease provides better outcomes than existing detection methods. Recently, technologies such as AI, IoT, pattern recognition, computer vision (CV), and image processing have developed quickly and been deployed in agriculture, specifically to automate disease and pest detection. CV-based deep learning (DL) approaches have been applied to early disease detection. This study presents a wide-ranging investigation of the disease detection and classification approaches proposed for tomato leaf detection, and reviews the advantages and disadvantages of the methods presented. Additionally, the advancements, challenges, and opportunities in this field are discussed, providing insights into recent methods. This survey is a valuable resource for practitioners, researchers, and stakeholders involved in tomato cultivation and agricultural technology.

Detection of Kidney Disease using Machine Learning & Data Science

Kidney disease identification with machine learning and data science is transforming patient care and early diagnosis by using predictive models to identify important risk factors and biomarkers. The kidney is a vital organ that filters the blood and removes toxic substances from the body, so safeguarding kidney health is essential to maintaining overall health. Renal illness can arise from a number of factors, and its causes appear to differ between individuals. In this investigation, machine learning has been applied to a renal disease dataset (obtained via Kaggle) to identify indicators of kidney illness. The primary goal of the data study has been to identify the core attributes of the data, allowing harmful effects to be distinguished. Correlation analysis has been used to choose the fundamental attributes of the data, conclusions have been drawn from those foundational attributes, and machine learning classifiers have been applied to kidney disease diagnosis.
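The correlation-based attribute selection mentioned in the abstract can be sketched as follows; the synthetic table and feature names below are purely illustrative stand-ins for the Kaggle dataset, chosen so that two columns are informative and two are noise:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for a kidney-disease table: binary label + 4 candidates.
n = 200
target = rng.integers(0, 2, n).astype(float)
features = {
    "serum_creatinine": target * 2.0 + rng.normal(0, 0.3, n),  # informative
    "haemoglobin":      -target + rng.normal(0, 0.3, n),       # informative
    "age":              rng.normal(50, 10, n),                 # noise
    "random_lab_1":     rng.normal(0, 1, n),                   # noise
}

# Correlation-based selection: keep features whose absolute Pearson
# correlation with the label exceeds a chosen threshold.
selected = {name: float(np.corrcoef(vals, target)[0, 1])
            for name, vals in features.items()
            if abs(np.corrcoef(vals, target)[0, 1]) > 0.5}
print(sorted(selected))   # ['haemoglobin', 'serum_creatinine']
```

The threshold (0.5 here) is a tunable assumption; in a real study it would be chosen by validation rather than fixed in advance.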

Review of AI-Driven Intrusion Detection Systems for Network-Based Attacks

This review paper explores the integration of Artificial Intelligence (AI) in Intrusion Detection Systems (IDS), highlighting how AI enhances the effectiveness and efficiency of these systems. It covers the evolution of IDS, from traditional methods to advanced AI-based techniques, including machine learning and deep learning. The paper compares these methods, assessing their strengths and weaknesses in various cybersecurity contexts. The focus is on the transformative impact of AI on IDS, offering insights into future research directions and the potential of AI to revolutionize cybersecurity defenses.

Leveraging Artificial Intelligence Algorithms for Enhanced Malware Analysis: A Comprehensive Study

The escalation of sophisticated malware threats necessitates innovative solutions for their detection and neutralization. This paper discusses the role of Artificial Intelligence (AI) algorithms in the field of malware analysis, examining various AI methodologies, and scrutinizing their efficiencies and drawbacks. We further discuss the key AI algorithms utilized, their applicability, and future potential. This study provides a valuable resource for researchers and practitioners seeking to utilize AI for improved malware detection and mitigation.

Automated Evaluation of Speaker Performance Using Machine Learning: A Multi-Modal Approach to Analyzing Audio and Video Features

In this paper, we propose a novel framework for evaluating the speaking quality of educators using machine learning techniques. Our approach integrates both audio and video data, leveraging key features such as facial expressions, gestures, speech pitch, volume, and pace to assess the overall effectiveness of a speaker. We collect and process data from a set of recorded teaching sessions, where we extract a variety of features using advanced tools such as Amazon Rekognition for video analysis and AWS S3 for speech-to-text conversion. The framework then utilizes a variety of machine learning models, including Logistic Regression, K-Nearest Neighbors, Naive Bayes, Decision Trees, and Support Vector Machines, to classify speakers as either "Good" or "Bad" based on predefined quality indicators. The classification is further refined through feature extraction, where key metrics such as eye contact, emotional states, speech patterns, and question engagement are quantified. After a thorough analysis of the dataset, we apply hyperparameter optimization and evaluate the models using ROC-AUC scores to determine the most accurate predictor of speaker quality. The results demonstrate that Random Forest and Support Vector Machines offer the highest classification accuracy, achieving an ROC-AUC score of 0.89. This research provides a comprehensive methodology for automated speaker evaluation, which could be utilized in various educational and training environments to improve speaker performance.
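The ROC-AUC metric used above to compare the classifiers can be computed directly from its rank-based definition: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. The scores below are made-up examples, not the paper's data:

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC-AUC as the fraction of (positive, negative) pairs where the
    positive outranks the negative; ties count as half."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Two hypothetical classifiers scored on the same held-out labels.
labels = [0, 0, 1, 1]
print(roc_auc(labels, [0.1, 0.4, 0.35, 0.8]))   # 0.75
print(roc_auc(labels, [0.1, 0.2, 0.6, 0.8]))    # 1.0
```

An AUC of 0.89, as reported for the Random Forest and SVM models, means the model ranks a random "Good" speaker above a random "Bad" one 89% of the time, which is why it is a robust criterion for picking the best predictor.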

A Model-Driven Application for Streamlining Organizational Processes Using Microsoft Power Apps: Workflow Automation and Data Flow Integration

This research examines the design and realization of a model-driven application in Microsoft Power Apps for streamlining organizational processes. It brings together leave management, approval workflows, productivity tracking, task management, and time logging in a single system. The software includes automated workflows that improve operational efficiency, transparency, and employee performance. The study covers the workflow and data-flow structures needed to make the system efficient, including detailed Data Flow Diagrams (DFDs) showing how traditional manual processes are transformed through the integration of business rules and workflow automation. The app also integrates with the user's calendar, giving easy access to all of an employee's responsibilities in one place. Notifications and alerts ensure that important activity is not missed, while the reporting functionality gives managers a detailed understanding of what is going on. Automating these processes, together with the rule-based business logic the app embeds, greatly reduces manual labor and enables a consistent, standard process for all employees throughout the organization. Ultimately, this project addresses the root problems organizations face when trying to improve operational efficiency and time management, delivering tools for better time data collection and alerting while supporting decision-making within the organization. The app is assessed against business and organizational goals before the conclusion.

Scholar9 Profile ID

S9-102024-0406201

Publications (0)

Articles Reviewed (54)

Citations (0)

Network (0)

Conferences/Seminars (0)