DIRECTV is a streaming service with multiple vendors supplying advertisements to be cast onto TV programs in multiple ways. The project migrates the existing legacy video content process from standalone applications to cloud-based applications.
Well versed in individual-contributor roles such as Functional, Performance, AWS, and Big Data Quality Assurance Engineer for E2E testing.
Worked as an SME on end-to-end testing for Big Data platform migration projects.
Developed a Python test framework using boto3, Pytest, pandas, NumPy, and other libraries for creating, managing, and testing the pipeline.
Configured the AWS CLI for authentication, accessed AWS services through boto3, and set up the project directory structure.
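A minimal sketch of how the framework could wire boto3 into Pytest; the profile, region, and bucket names are placeholders, not the project's actual values:

```python
# Pytest fixture that builds a boto3 session from the AWS CLI configuration.
import boto3
import pytest

@pytest.fixture(scope="session")
def aws_session():
    # Credentials come from ~/.aws/credentials set up via `aws configure`.
    return boto3.Session(profile_name="lab", region_name="us-east-1")

@pytest.fixture(scope="session")
def s3_client(aws_session):
    return aws_session.client("s3")

def test_pipeline_bucket_exists(s3_client):
    # head_bucket raises ClientError if the bucket is missing or inaccessible.
    s3_client.head_bucket(Bucket="example-pipeline-bucket")
```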
Created modules for managing EMR clusters and developed test cases to automate data pipeline testing.
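An illustrative EMR helper of the kind such a module might contain; cluster IDs, region, and step states shown are assumptions for the sketch:

```python
# Helpers for checking EMR cluster and step state during pipeline tests.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

def cluster_is_ready(cluster_id: str) -> bool:
    # A cluster accepting work is either RUNNING or WAITING.
    state = emr.describe_cluster(ClusterId=cluster_id)["Cluster"]["Status"]["State"]
    return state in ("RUNNING", "WAITING")

def failed_steps(cluster_id: str) -> list[str]:
    # Return the names of any steps that failed on the cluster.
    steps = emr.list_steps(ClusterId=cluster_id, StepStates=["FAILED"])
    return [step["Name"] for step in steps["Steps"]]
```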
Created Python scripts to extract test data URLs from production Elasticsearch indexes and used validators and other Python libraries along with boto3 to ingest the URLs into the LAB environment.
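A hedged sketch of that extraction-and-validation flow; the Elasticsearch endpoint, index name, source field, and S3 drop location are all placeholders:

```python
# Pull candidate URLs from a production index, validate them, stage them for LAB.
import boto3
import requests
import validators

ES_ENDPOINT = "https://prod-es.example.com"  # hypothetical endpoint
resp = requests.get(f"{ES_ENDPOINT}/ad-events/_search",
                    params={"size": 100}, timeout=30)
hits = resp.json()["hits"]["hits"]

urls = [h["_source"]["content_url"] for h in hits if "content_url" in h["_source"]]
valid_urls = [u for u in urls if validators.url(u)]

# Stage the validated URLs where the LAB ingestion job picks them up.
s3 = boto3.client("s3")
s3.put_object(Bucket="lab-ingest-bucket",
              Key="testdata/urls.txt",
              Body="\n".join(valid_urls).encode("utf-8"))
```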
Simulated user interactions with data pipeline interfaces to test the end-to-end workflow, including triggering jobs, monitoring progress, and verifying outputs.
Leveraged Selenium together with Python libraries in the test framework for end-to-end big data pipeline testing where web-based UIs and dashboards are part of the validation.
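A minimal Selenium check of the kind used to validate a pipeline dashboard; the URL, element locator, and expected status text are illustrative:

```python
# Open the pipeline dashboard and assert the latest job shows as succeeded.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://dashboard.example.com/pipeline")
    status = WebDriverWait(driver, 30).until(
        EC.visibility_of_element_located((By.ID, "job-status"))
    )
    assert status.text == "SUCCEEDED"
finally:
    driver.quit()
```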
Experience using JMeter for performance testing, including load, stress, and endurance tests.
Experience with JVM performance tuning and heap and thread dump analysis.
Developed automated performance scripts and integrated them into CI/CD using Jenkins.
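One way such a performance script can gate a Jenkins stage is a thin Python wrapper around JMeter's non-GUI mode; the plan file, results file, and 1% error-rate threshold here are assumptions:

```python
# Run the JMeter plan headless and fail the build if the error rate breaches the SLA.
import csv
import subprocess
import sys

subprocess.run(
    ["jmeter", "-n", "-t", "load_test.jmx", "-l", "results.jtl"],
    check=True,
)

with open("results.jtl", newline="") as f:
    rows = list(csv.DictReader(f))

errors = sum(1 for row in rows if row["success"] != "true")
error_rate = errors / len(rows) if rows else 0.0
print(f"samples={len(rows)} error_rate={error_rate:.2%}")

# Non-zero exit status fails the Jenkins stage.
sys.exit(1 if error_rate > 0.01 else 0)
```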
Hands-on with Kinesis Data Streams and Flink services in AWS as part of the data pipeline; performed data validation in AWS and troubleshot any data loss.
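A hedged sketch of a record-count check on a Kinesis stream, the sort of probe used to spot loss between the producer and the Flink consumer; the stream name and shard selection are placeholders:

```python
# Read records from the first shard and report the count for comparison.
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
stream = "ad-events-stream"

shard_id = kinesis.describe_stream(StreamName=stream)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=stream, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]

records = kinesis.get_records(ShardIterator=iterator, Limit=1000)["Records"]
# Compare this count against the number of events pushed by the producer.
print(f"records read from {shard_id}: {len(records)}")
```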
Worked on AWS Lambda functions that aggregate data for ad processing and store results in DynamoDB.
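A minimal sketch of such a handler, assuming a simple batch event shape and a table named ad_metrics; both the event fields and the table name are hypothetical:

```python
# Aggregate impressions per ad from the incoming batch and persist a summary item.
from collections import Counter

import boto3

table = boto3.resource("dynamodb").Table("ad_metrics")

def handler(event, context):
    counts = Counter(record["ad_id"] for record in event.get("records", []))
    for ad_id, impressions in counts.items():
        table.put_item(Item={
            "ad_id": ad_id,
            "event_time": event.get("window_end", "unknown"),
            "impressions": impressions,
        })
    return {"aggregated_ads": len(counts)}
```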
Dropped and created DynamoDB tables and indexes using sort keys.
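An illustrative DynamoDB table definition with a partition key and sort key; the table name, attribute names, and billing mode are examples, not the production schema:

```python
# Create a table keyed by ad_id (partition) and event_time (sort).
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="ad_metrics",
    KeySchema=[
        {"AttributeName": "ad_id", "KeyType": "HASH"},        # partition key
        {"AttributeName": "event_time", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "ad_id", "AttributeType": "S"},
        {"AttributeName": "event_time", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```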
Expertise in working with EMR, running Spark Streaming jobs, and writing Spark SQL queries.
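A short PySpark sketch of the kind of Spark SQL validation query run on EMR; the S3 path and column names are hypothetical:

```python
# Load curated ad events and check daily counts with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ad-events-validation").getOrCreate()

events = spark.read.parquet("s3://example-bucket/curated/ad_events/")
events.createOrReplaceTempView("ad_events")

counts = spark.sql("""
    SELECT event_date, COUNT(*) AS event_count
    FROM ad_events
    GROUP BY event_date
    ORDER BY event_date
""")
counts.show()
```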
Hands-on creating AWS Glue crawler jobs to populate tables from raw data events.
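A sketch of defining and starting such a crawler with boto3; the crawler name, IAM role, database, and S3 path are placeholders:

```python
# Define a crawler over the raw event landing zone and kick it off.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

glue.create_crawler(
    Name="raw-ad-events-crawler",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",
    DatabaseName="raw_events",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/ad_events/"}]},
)
glue.start_crawler(Name="raw-ad-events-crawler")
```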
Experience working with microservices as part of Hadoop projects, using tools such as Kubernetes, Docker, Logstash, Elasticsearch, ECS, and Flink.
Tuned Lambda functions and Step Functions in the AWS environment and triggered application modules on demand.
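A minimal example of triggering a Step Functions state machine on demand; the state machine ARN and input payload are placeholders:

```python
# Start an execution of the ad-processing state machine with a run-date payload.
import json

import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:ad-processing",
    input=json.dumps({"run_date": "2023-01-01"}),
)
print(execution["executionArn"])
```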
Solid experience running performance tests for applications such as web services, ETL, and batch jobs hosted on-premises, on Hadoop platforms, and in the AWS cloud.
Experience with STB and OTT platform testing, deploying and validating builds on STBs and other devices for E2E video and audio testing.
Developed BDD tests using Cucumber, creating feature files and writing behavior scenarios and step definitions for automation.
Created a Cucumber BDD framework for end-to-end web testing and added the test cases to CI tools.
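Since the rest of the framework is Python, a pytest-bdd style equivalent of the Cucumber feature/step-definition layout is sketched below; the feature text, file path, and step bodies are illustrative, not the project's actual scenarios:

```python
# features/pipeline.feature (assumed path):
#   Feature: Ad event pipeline
#     Scenario: Events land in the curated table
#       Given the ingestion job has completed
#       When I query the curated table
#       Then the record count is greater than 0

from pytest_bdd import given, scenarios, then, when

scenarios("features/pipeline.feature")

@given("the ingestion job has completed")
def ingestion_done():
    pass  # poll the job status API or EMR step state here

@when("I query the curated table", target_fixture="record_count")
def query_table():
    return 42  # replace with the real Athena/DynamoDB query

@then("the record count is greater than 0")
def check_count(record_count):
    assert record_count > 0
```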
Coordinated with systems analysts, business partners, and other stakeholders to derive SLAs for load tests, providing requirements for critical factors such as response time, throughput, transactions/sec, user threads, error rate, network utilization, and connection pool hit ratios.
Created Grafana, Splunk, and Prometheus dashboards to monitor calls on individual APIs.
Proficient in using the Azure Portal for server monitoring, App Insights, and DB performance metrics.
Worked with AppDynamics and New Relic dashboards to monitor performance test results and analyze system health metrics.
Performed a POC for a Go dependency project that integrated with advanced Amazon cloud services.
Was part of setting up a Go pipeline that combines CI/CD stages encapsulating directives to build, analyze, and perform tasks; the pipeline checks out the Go Lambda function code into a predefined workspace.