Hey there, Community fam!
Big news! Our Katalon Community team just hosted the very first episode of Roundtable Connect: Designing Scalable and Maintainable Automation Projects.
While the meeting was a bit limited in size, it turned out to be an intimate, lively, and incredibly insightful session!
We covered it all: spotting the red flags of poor automation projects, figuring out how to design frameworks that can handle everything from small apps to mega-projects, and even sharing some big-time fails and facepalm-worthy mistakes.
But don’t worry if you missed it! We’ve rounded up some of the juiciest highlights in the comments below 👇. Dive in and enjoy the wisdom (and laughs)!
You can also find the recording of the session below.
We’ll keep the fun going by updating this thread with more highlights from the discussion, so stay tuned!
A massive thank you to everyone who joined us for the first-ever Roundtable Connect! We can’t wait to send out your well-deserved thank-you gifts—a huge swag set!
Here’s to seeing you at the next episode—it’s going to be even bigger and better!
Point 1: What are the signs of a poorly designed automation framework or project?
- Lack of code reusability, leading to increased effort in updating and maintaining scripts.
- Limited scalability, such as failing to plan for headless execution or CI/CD integration from the beginning.
- Code duplication, requiring refactoring and deleting unused objects.
- Overuse of hardcoded values instead of dynamic, database-driven data, causing failures whenever the data or environment changes.
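To make that last point concrete, here is a minimal Katalon Groovy sketch contrasting the two styles. The object path, `GlobalVariable.baseUrl`, and the `Data Files/Accounts` data file are hypothetical names for illustration:

```groovy
// Minimal sketch; 'Page_Login/input_Username', GlobalVariable.baseUrl, and
// 'Data Files/Accounts' are hypothetical names, not part of any real project.
import static com.kms.katalon.core.testdata.TestDataFactory.findTestData
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI
import internal.GlobalVariable

// Anti-pattern: hardcoded URL and username break as soon as anything changes
WebUI.openBrowser('https://staging.example.com/login')
WebUI.setText(findTestObject('Page_Login/input_Username'), 'test_user_01')

// Better: the URL comes from the execution profile, the data from a data file
WebUI.openBrowser(GlobalVariable.baseUrl)
String username = findTestData('Data Files/Accounts').getValue('username', 1)
WebUI.setText(findTestObject('Page_Login/input_Username'), username)
```

With the second style, switching environments or updating test accounts means editing a profile or data file, not hunting through scripts.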
Point 2: Why do we need scalability in automation testing?
- Existing test scripts may stop working with newer versions of the product as features or components are added.
- Testing pipelines must be rebuilt when transitioning to new versions, adding significant effort.
- Scalability is therefore a central concern in keeping automation frameworks efficient over time.
Point 3: How does framework design impact scalability and maintainability?
- Naming standards: Adopting consistent naming conventions significantly enhances project maintainability by keeping the framework well-organized and easy to navigate.
- Separation of concerns: Modular framework design ensures separation of:
  - UI components
  - Business logic
  - Test data
  This structure makes frameworks easier to maintain and update, as each concern lives in its own location.
- Proper folder structure: Following Katalon’s default folder structure or implementing best practices (e.g., the Page Object Model in Selenium) ensures clear and consistent organization (see the sketch after this list).
- Cross-team usability: Well-organized frameworks are more accessible for both developers and testers, ensuring consistency in usage and implementation.
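As a rough illustration of that layered separation, a page object in Katalon's Groovy can keep a page's locators and actions in one class, so test cases call intent-level methods instead of raw selectors. All names below are hypothetical:

```groovy
// Hypothetical page object: locators and page actions live together, and
// test cases depend only on searchFor(), never on individual selectors.
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

class SearchPage {
    // UI components: every locator for this page is referenced in one place
    private static final String SEARCH_BOX = 'Page_Search/input_Query'
    private static final String SEARCH_BTN = 'Page_Search/btn_Search'

    // Business logic: one reusable action instead of steps copied into each test
    static void searchFor(String term) {
        WebUI.setText(findTestObject(SEARCH_BOX), term)
        WebUI.click(findTestObject(SEARCH_BTN))
    }
}
```

If the page's markup changes, only this class and its test objects need updating, while test data can still come from separate data files, keeping all three concerns apart.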
Point 4: What are some characteristics of good test automation?
- Reusability, maintainability, and modularity are key.
- Integration with other applications is essential.
- Performance should be optimized to avoid long execution times.
- Following good practices for handling waiting times and locators prevents scaling issues.
- Having clear guidelines ensures consistent approaches among team members.
Point 5: Is running time a characteristic of good test automation, or just a goal to achieve?
- Running time is a goal, but it also directly affects how effective the test automation is in practice.
- Strategies to optimize execution time:
- Include only high-severity test cases in the CI/CD process.
- Reduce total execution time to avoid delays in deployment.
- After deployment, run less critical test cases separately.
Point 6: How can we ensure that the automation framework promotes modularity and code reusability?
- Use design patterns to enhance modularity and maintainability.
- Follow clear guidelines and best practices, including naming conventions and coding standards.
- Build common components (keywords, libraries) for shared functionalities like data preparation.
- Use custom keywords for repetitive steps like login/logout (see the sketch after this list).
- Leverage data files for parameterizing test cases to handle large datasets efficiently.
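As one possible shape for such a keyword (the package, class, and object names are hypothetical), a login flow in Katalon might be wrapped once and reused everywhere:

```groovy
// Hypothetical custom keyword saved under Keywords/; callable from any test case.
package com.example.keywords

import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.annotation.Keyword
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

class AuthKeywords {
    @Keyword
    void login(String username, String password) {
        WebUI.setText(findTestObject('Page_Login/input_Username'), username)
        WebUI.setText(findTestObject('Page_Login/input_Password'), password)
        WebUI.click(findTestObject('Page_Login/btn_Submit'))
    }
}
```

In a test case the whole flow then collapses to a single call, e.g. `CustomKeywords.'com.example.keywords.AuthKeywords.login'('user', 'pass')`, so a change to the login page is fixed in exactly one place.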
Best Practices Highlighted:
- Optimize waiting times using smart waits (e.g., `waitForElementPresent`) instead of hard-coded waits (see the sketch after this list).
- Categorize test cases by severity to prioritize execution.
- Document best practices and share guidelines for uniformity within teams.
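For instance, here is a small Katalon Groovy sketch of the waiting-time point (the test object name is hypothetical):

```groovy
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

// Anti-pattern: always burns 10 seconds, yet still fails on an unusually slow run
WebUI.delay(10)
WebUI.click(findTestObject('Page_Dashboard/table_Results'))

// Better: returns as soon as the element appears, waiting at most 10 seconds
WebUI.waitForElementPresent(findTestObject('Page_Dashboard/table_Results'), 10)
WebUI.click(findTestObject('Page_Dashboard/table_Results'))
```

Across a large suite, replacing fixed delays with smart waits both shortens runs and makes them less flaky.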
Point 7: How can we design a framework that scales effectively for project size and complexity?
- Decide on a good structure and framework at the beginning (e.g., folder structure, file organization).
- Involve senior members in framework design for experience-based decision-making.
- Apply best practices from other successful projects when no expert is available.
- Ensure logical modularization to keep components separated and organized (e.g., by feature or module).
- Implement parallel execution to reduce test execution time as the project grows.
Point 8: What if a junior member has a better idea for the framework than a senior? How to convince them?
- Build a process or prototype to demonstrate the proposed idea’s effectiveness.
- Collaborate and brainstorm with the team to evaluate the idea.
- Consider a small-scale safe space to test the idea without risking the entire framework.
- Maintain open communication and mutual respect, enabling junior members to share their ideas freely.
Point 9: How to handle framework design conflicts in a team, especially in large teams?
- Host team discussions to gather inputs from all members before finalizing decisions.
- Allocate time for each member to present their ideas, ensuring everyone’s perspective is heard.
- Involve team leaders or directors to consolidate inputs and guide the process.
Point 10: How to deal with resistance to changes in the framework, especially as a newcomer?
- Avoid making drastic changes immediately; first, understand the existing framework and workflow.
- Gradually introduce small improvements over time.
- Focus on understanding team dynamics and gaining trust before suggesting significant changes.
- Acknowledge that resistance to change often stems from people’s reluctance, not necessarily the framework’s flaws.
- Prioritize aligning changes with team readiness and fostering adaptability.
Point 11: Have participants experienced any big failures or concerns in their projects?
- One participant forgot to mark the “else” branch of a custom keyword as a failure, so all test cases were reported as passed despite errors (see the sketch below).
=> This shows the importance of explicit failure handling to ensure accurate reporting in automation frameworks.
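A hedged reconstruction of that bug in Katalon Groovy (class and object names are hypothetical): without the `KeywordUtil.markFailed` call in the else branch, the keyword returns normally and the test case is reported as passed even though the check did not succeed.

```groovy
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.annotation.Keyword
import com.kms.katalon.core.model.FailureHandling
import com.kms.katalon.core.util.KeywordUtil
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI

class OrderKeywords {
    @Keyword
    void verifyOrderConfirmed() {
        // OPTIONAL failure handling makes the check return false instead of stopping
        if (WebUI.verifyElementPresent(findTestObject('Page_Order/lbl_Confirmed'),
                                       10, FailureHandling.OPTIONAL)) {
            KeywordUtil.logInfo('Order confirmation displayed')
        } else {
            // The step that was originally forgotten: explicitly fail the test case
            KeywordUtil.markFailed('Order confirmation was not displayed')
        }
    }
}
```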
- Choosing the wrong tool at the beginning of the project:
  - Tools were selected based on experience rather than long-term project needs.
  - Some tools worked only for specific platforms (e.g., web or API), while the project required compatibility with multiple platforms.
- Manual testing mistakes, such as forgetting certain test cases.
  These caused additional work for testers and developers due to differing perspectives on features, though the cost of the mistakes was minimal.