Tuesday, October 14, 2025

BrowserStack Testing

 Test any app: Web, mobile & enterprise

Any type of testing:  Functional |  Cross-browser & real device | Accessibility | Visual | Performance

Manual Testing -  Live testing, Testing toolkit

Automation without coding -  Low-code automation, Website scanner

Test automation - Automate testing, Visual testing (Percy), Load testing, Salesforce test automation, AI evals testing, API testing (Requestly)

Accessibility -  Accessibility testing (Design + code + Test + Monitor)

Manage & Optimize testing - Test management, Test reporting & analytics, Quality engineering insights

BrowserStack AI Agents for test planning, creation, execution & validation

Platform services - Real device cloud, Unified reporting, Failed test analysis, Analytics, Private devices

CI/CD - Collaboration, Issue tracking tools


AI Self-heal: www.browserstack.com/docs/automate/selenium/self-health

AI Agents in Testing Life Cycle

 BrowserStack AI Agents at every step of testing life cycle

Improving coverage & accuracy while boosting productivity by up to 50%

1. Test Planning
Requirement analysis
Identify test scenarios
Create test plans
Test insights
Setup test environments

2. Test Creation
Write test cases
Maintain test cases
Create automated tests
Maintain automated tests
Create low-code tests
Maintain low-code tests
Test data management


3. Test Execution

Manual or Automated
Identify relevant tests
Functional Testing
Cross-browser testing
Cross-device testing
Accessibility testing
Visual testing
Performance Testing
Exploratory Testing


4. Test Validation

Test reporting
Failed test analysis
Monitor test analytics
Build verification
Flaky test management
Log defects


BrowserStack AI Agents

 BrowserStack AI Agents across the platform

1. Automate & App Automate

Self-healing agent
Failure analysis agent
Test selection agent
Test authoring agent


2. Percy
Visual review agent
Visual setup agent

3. Accessibility Testing
A11y issue detection agent
A11y remediation agent

4. Low-Code Automation
Low-code authoring agent
Self-healing agent

5. Test Reporting & Analytics
Test failure analysis agent

6. Test Management
Test case generator agent
Test selection agent
Low-code authoring agent
Test failure analysis agent
Test deduplication agent
Test case maintenance agent
Test data generation agent




Tuesday, July 8, 2025

Performance Testing

 

Fundamentals of Performance Testing

Agenda

Ø  Basics of Performance Testing

Ø  Performance testing tool – LoadRunner Overview

Ø  Key Concepts – Transactions, Think Time, Correlation

Ø  Demo – Recording a Script

What is Performance Testing?

Ø  Ensures system stability, scalability and speed

Performance testing helps identify how systems behave under expected and peak load. Each type targets a specific concern.

Ø  Types of tests:

ü  Load

ü  Stress

ü  Spike

ü  Endurance

ü  Soak

Notes: Under load, how is the application behaving, and what is the response time?
How quickly does the end user get the web page, and what is the throughput?
How many end users can access the application without any performance degradation or other issues? For the end user there are two key performance aspects: 1. how quickly you get the page (response time), and 2. how many users the system can support (volume).

A test case simulates the load: concurrent users performing the same functionality, so we can observe how the application behaves and how it responds.

Types: Load: An online application where 200 end users upload a document within one hour. Create 200 virtual users, have them upload the document, and check the response time. This is my load, the expected load.

Stress: More load than expected on the system; simulate with 300 or 400 end users.

Spike: Ramp up the users suddenly, e.g., within 5 minutes.

Endurance: Run the test for a long time.

Soak: Similar to a load test, but run over a long duration.

It comes under non-functional testing.

Response time, throughput, and security are non-functional aspects that go beyond functional correctness.
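The two end-user aspects above (speed and volume) can be sketched with a toy concurrent-load simulation. This is a minimal sketch, not a real load test: `upload_document` is a hypothetical stand-in that sleeps instead of calling the application under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the real operation under test (e.g., a document
# upload); a real load test would hit the application instead of sleeping.
def upload_document(user_id):
    start = time.perf_counter()
    time.sleep(0.01)                  # simulated server processing time
    return time.perf_counter() - start

NUM_USERS = 50                        # concurrent virtual users (the load)

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=NUM_USERS) as pool:
    response_times = list(pool.map(upload_document, range(NUM_USERS)))
wall_elapsed = time.perf_counter() - wall_start

avg_response = sum(response_times) / len(response_times)  # aspect 1: speed
throughput = NUM_USERS / wall_elapsed                     # aspect 2: volume (req/s)
print(f"avg response: {avg_response:.3f}s, throughput: {throughput:.1f} req/s")
```

Raising `NUM_USERS` past the expected load turns this into a stress-test shape; running the loop for hours would approximate endurance.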

Performance Testing Lifecycle

Analysis and Planning -> Test Design -> Test Execution -> Result Analysis & Reporting

Ø  Analysis and Planning

ü  Requirement gathering

Ø  Test Design

ü  Scripting

ü  Scenario creation

Ø  Test Execution

Ø  Result Analysis

Performance Testing Process

Test Phases: Initiate -> Estimate -> Strategize -> Design -> Execute -> Analyze -> Final Report

Automated testing using the LoadRunner tool.

Open-Source & Licensed Tools

| Feature/Aspect | Open-Source Tools (JMeter, k6, Gatling) | Licensed Tools (LoadRunner, Silk Performer) |
| Cost | Free | Expensive (based on protocol, Vusers, etc.) |
| Protocol Support | Mostly HTTP, Web APIs | Extensive (SAP, Citrix, Siebel, Oracle, TruClient, etc.) |
| Ease of Use | Varies (some CLI, some GUI) | Full-featured GUI, user-friendly |
| Community Support | Large (for JMeter, k6) | Vendor support (OpenText) |
| Customization | High (open code base) | Moderate (through APIs or custom scripting) |
| Scalability | Depends on scripting and infrastructure | Highly scalable with built-in controllers and load generators |
| Reporting | Good (especially Gatling, k6) | Advanced analytics and built-in reporting |
| Monitoring | Needs integration (AppDynamics, InfluxDB, etc.) | Built-in monitoring and third-party integration |
| CI/CD Integration | Strong (JMeter, k6, Gatling) | Possible but needs ALM integration |

 

LoadRunner

LoadRunner is a suite of tools. VuGen is where scripts are built. The Controller manages the scenario. Load Generators simulate users. Analysis gives insights into performance.

Ø  VuGen: Script creation

Ø  Controller: Test Scenario creation

Ø  Load Generators: Virtual users

Ø  Analysis: Post-run reports

LoadRunner is a thread-based tool, like Java; it can run test scripts in parallel.

If the Controller runs 100 users, that means 100 threads running in parallel.

The Load Generator runs those threads as virtual users.

The Controller starts a timer and stops it when the user logs off; this gives you the response time.

All the logs are collected in the Controller, and the Analysis tool presents them in a nice, readable format.

One Controller machine acts as a master and the other Controller machines are slaves.
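The Controller's ramp-up of virtual users can be sketched as a small scenario plan. The field names and numbers below are illustrative assumptions, not LoadRunner's actual configuration format:

```python
# Hypothetical scenario plan: ramp up batches of virtual users, as a Controller
# distributing threads across load generators might.
scenario = {
    "total_vusers": 100,       # total concurrent users at steady state
    "ramp_up_batch": 10,       # start 10 vusers at a time
    "ramp_interval_s": 30,     # every 30 seconds
    "steady_state_s": 600,     # hold full load for 10 minutes
}

batches = scenario["total_vusers"] // scenario["ramp_up_batch"]
ramp_time = batches * scenario["ramp_interval_s"]
total_duration = ramp_time + scenario["steady_state_s"]
print(f"{batches} batches, ramp {ramp_time}s, total {total_duration}s")
# → 10 batches, ramp 300s, total 900s
```

Shrinking the ramp interval models a spike test; stretching the steady-state duration models endurance.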

At present, everyone is using Performance Center. PC is a web-based application.

The PC application is hosted on a web server; it connects to the Controller and the Load Generators.

Going forward, LRE (LoadRunner Enterprise) is in the cloud.

Performance Centre

PC is an enterprise-grade, web-based performance testing platform from OpenText that enables centralized test execution, monitoring and analysis.

Ø  PC Host – Controller & Load Generators

Ø  PC Server – Test scheduling, results analysis and user management

Ø  ALM Server – Script & file storage and project tracking

Load Runner Enterprise (LRE) Cloud

LRE is a cloud-based performance testing platform by OpenText that enables teams to plan, run and manage large-scale load tests across distributed environments.

Key Concepts

Ø  Transaction: Measures response time between checkpoints

Ø  Think Time: Simulates user wait

Ø  Pacing: Time between iterations

Ø  Correlation: Handles dynamic values

Ø  Parameterization: Replaces hardcoded values
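The five key concepts above can be sketched in a toy Python iteration loop. The mock server, the token format, and all names here are illustrative assumptions, not VuGen's actual API:

```python
import re
import random
import time

usernames = ["user1", "user2", "user3"]   # parameterization: no hardcoded user
PACING = 0.05                             # seconds between iteration starts

def mock_server_login(user):
    # Hypothetical server response containing a dynamic value that changes
    # on every call, as a real session token would.
    return f"<input name='session_id' value='{random.randint(1000, 9999)}'>"

results = []
for user in usernames:
    iter_start = time.perf_counter()

    # Transaction: measure response time between two checkpoints.
    t0 = time.perf_counter()
    page = mock_server_login(user)
    login_time = time.perf_counter() - t0

    # Correlation: capture the dynamic value so later requests can reuse it.
    session_id = re.search(r"value='(\d+)'", page).group(1)

    time.sleep(0.01)                      # think time: simulated user wait

    results.append((user, session_id, login_time))

    # Pacing: wait so iterations start a fixed interval apart.
    remaining = PACING - (time.perf_counter() - iter_start)
    if remaining > 0:
        time.sleep(remaining)

print(results)
```

In VuGen the same ideas are expressed with built-in functions (transactions, think time, correlation rules) rather than hand-written loops; this sketch only shows why each concept exists.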

 

OpenText provides the framework:

1.       Virtual User Generator (install on your machine)

2.       Analysis (install on your machine)

Agenda

Ø  VuGen Recording Traffic

Ø  LoadRunner Test Execution Architecture

Ø  Performance Center ALM

Ø  Monitoring Agent

Saturday, May 3, 2025

QA Leads Checklist

 QA Leads Checklist:

Activities:

Design/Planning

  1. QA Services and Estimation deck
  2. Test Strategy document downloaded from Project Access Library
  3. Test Plan document downloaded from Project Access Library
  4. Raise SNOW request (as applicable)
  5. Test plan peer review
  6. QA project plan update
  7. Lead Assessment Checklist
  8. Test plan review with Customer Solution managers
  9. Test Plan walkthrough with IT stakeholders
  10. Test plan walkthrough with Business
  11. Test plan Approval/Sign off

Execution

  1. Send out daily status report to the project team
  2. Update project status in WSR
  3. Update KPI monthly matrix

Closure

  1. Test Summary document downloaded from Project Access Library
  2. Update production readiness QA checklist
  3. Review test summary with Solution managers
  4. Test Summary walkthrough with IT Stakeholders
  5. Test Summary walkthrough with Business
  6. Test Summary Approval/Sign off
  7. Upload test artifacts to project SharePoint

Knowledge Transfer

  1. Provide KT session to the team

QA Expectations and Evaluation Criteria

 QA Expectations and Evaluation Criteria:

  1. Weekly Status Report
  2. Daily Execution Status Report
  3. Financial Reports

QA Project Artifacts - SM & Peer Review Feedback

 QA Project Artifacts:

  1. QA Estimates
  2. QA Project Planner
  3. Test Strategy and Test Plan
  4. Test Risk Management
  5. QA Reporting
  6. QA Escalation Management
  7. Test Closure
Review tracker columns: No. | Project Name | Feedback | Provided By | Status (Open/Closed) | QA Lead Comments (in case of open items) | Review Date
