TMMi with AI: How to Assess Your Squad’s Maturity with StackSpot AI

Cover of TMMi article with AI. It shows two young white men standing, while a black woman is sitting. They are in an office and appear to be talking.
Learn how to measure your team's maturity with the TMMi framework and the StackSpot AI Agent, and get actionable suggestions to boost software quality.

Curious about using TMMi with AI? In today’s agile development landscape, understanding a squad’s maturity is key to driving software quality and efficiency. The TMMi (Test Maturity Model Integration) is a globally recognized framework for evaluating and enhancing testing processes.

In this article, you’ll discover how the StackSpot AI Agent can support teams in measuring maturity based on TMMi levels—and how that can transform your squad’s development game.

What is an Agent?

The Agent is a powerful feature within StackSpot AI, built to enhance communication by leveraging contextual information. This allows it to generate smarter, more relevant responses and automate specific tasks effectively.

You can configure Agents to act as domain experts within a particular context. Once set up, they help achieve specific goals and increase the efficiency and quality of tasks, especially within software development workflows.

To make this happen, Agents can be configured to:

  • Define their own set of Instructions
  • Set up a customized Knowledge Source base (including rules and parameters)
  • Act as Conversation Agents or Systematic Agents
  • Operate through Quick Commands
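The capabilities above can be pictured as a simple configuration record. This is an illustrative sketch only; the field names are assumptions for the example and not StackSpot AI's actual schema (Agents are configured through the product interface):

```python
# Hypothetical Agent configuration (illustrative only; not StackSpot AI's
# actual schema -- Agents are configured in the product UI).
agent_config = {
    "name": "tmmi-maturity-assessor",
    "instructions": [
        "Act as a software testing maturity expert.",
        "Assess squads against the five TMMi levels.",
    ],
    # Knowledge Sources holding rules, parameters, and approved tools
    "knowledge_sources": ["approved-testing-tools", "tmmi-level-criteria"],
    "mode": "conversation",            # or "systematic"
    "quick_commands": ["assess-maturity"],
}

def summarize(config: dict) -> str:
    """Return a one-line summary of the agent setup."""
    sources = len(config["knowledge_sources"])
    return f"{config['name']}: {config['mode']} agent with {sources} knowledge sources"

print(summarize(agent_config))
```

The point of the sketch is that instructions, knowledge sources, and interaction mode are independent knobs you can tune per Agent.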

What is TMMi?

The Test Maturity Model Integration (TMMi) is a structured framework that helps organizations evaluate the effectiveness of their software testing processes. It features five levels of maturity, each with a clear set of criteria that teams must meet to move on to the next stage.

Level 1: Initial

At this starting point, teams typically lack a structured process. Testing is manual and ad hoc, with little or no automation.

Quick Commands can help teams at this stage by identifying improvement areas, boosting test coverage, and starting with automation.

Level 2: Managed

Teams at the managed level have a defined testing strategy, though it often relies heavily on manual functional tests.

Here, Quick Commands can guide code review practices and documentation, helping to ease the path toward automation.

Level 3: Defined

Teams at this stage have established various types of tests and begin incorporating non-functional testing.

Quick Commands support this evolution by helping define and monitor quality metrics while pushing for automated functional tests.

Level 4: Mature

These squads have achieved consistent, continuous test automation.

Quick Commands ensure the proper implementation and execution of unit, integration, and end-to-end (E2E) tests.

Level 5: Optimized

At this highest maturity level, automation is comprehensive and integrated with service virtualization.

Quick Commands assist in ongoing test optimization and monitoring, making sure validations are fully embedded in the CI/CD pipeline.
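The five levels described above can be condensed into a small lookup table, which is also a convenient shape to feed an Agent's Knowledge Source. The wording is paraphrased from this article, not from the official TMMi specification:

```python
# The five TMMi levels as summarized in this article (paraphrased).
TMMI_LEVELS = {
    1: ("Initial", "Ad hoc, mostly manual testing with little or no automation"),
    2: ("Managed", "Defined testing strategy, still heavy on manual functional tests"),
    3: ("Defined", "Multiple test types established; non-functional testing begins"),
    4: ("Mature", "Consistent, continuous test automation"),
    5: ("Optimized", "Comprehensive automation with service virtualization and CI/CD"),
}

for level, (name, focus) in TMMI_LEVELS.items():
    print(f"Level {level} ({name}): {focus}")
```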

TMMi with AI: Building an Agent for Maturity Assessment

Creating an Agent to assess a squad’s test quality maturity involves gathering detailed information and applying a robust diagnostic logic.

Here’s a breakdown of how to set up and configure such an Agent:

1. Define the Agent’s purpose: the goal is for it to assess the maturity of a squad’s testing quality. This starts with identifying the right metrics and criteria.

2. Collect key inputs, such as: 

  • The squad’s process structure
  • Overall test coverage
  • Availability of functional tests
  • Presence of E2E automation
  • Use of structured mutation and performance testing
  • Observability practices
  • Unit test coverage
  • Clarity and completeness of documentation
  • The squad’s specific development context

StackSpot AI interface. It contains the criteria presented in the text of the article before the image, such as squad name, structured process, test coverage, and others. This is an image from the article on TMMi with AI.
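The inputs listed above can be gathered into a single record per squad before running the diagnosis. This is a minimal sketch; the field names and values are assumptions for illustration, not a StackSpot AI API:

```python
from dataclasses import dataclass

# Illustrative record of the assessment inputs listed above.
# Field names are assumptions, not a StackSpot AI API.
@dataclass
class SquadProfile:
    name: str
    has_structured_process: bool
    test_coverage_pct: float        # overall automated test coverage
    has_functional_tests: bool
    has_e2e_automation: bool
    has_mutation_testing: bool
    has_performance_testing: bool
    has_observability: bool
    unit_test_coverage_pct: float
    documentation_quality: str      # e.g. "none", "partial", "complete"
    context: str                    # e.g. "back-end", "front-end", "mobile"

# Hypothetical squad resembling the example shown later in the article.
squad = SquadProfile(
    name="payments",
    has_structured_process=True,
    test_coverage_pct=70.0,
    has_functional_tests=True,
    has_e2e_automation=True,
    has_mutation_testing=False,
    has_performance_testing=False,
    has_observability=True,
    unit_test_coverage_pct=70.0,
    documentation_quality="partial",
    context="back-end",
)
print(squad.name, squad.test_coverage_pct)
```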

Important Considerations

Understanding the squad’s context is critical. For instance, some squads may not work on back-end, front-end, or mobile development, so the assessment must be context-aware to avoid misjudgment.

Once the Agent is properly configured, it should be able to interact with users clearly and helpfully. To that end, we must feed it as many instructions as possible. For example:

StackSpot AI interface with the text: Guidelines: Always write the answer in Brazilian Portuguese. If the user doesn't understand the question, explain it in a more simplified way. Evaluate the context of each squad; at the end of the explanation, ask the user if they want to add any more information; if not, ask them if the questions make sense for their context; if not, based on the user's context, measure the quality of the applications with the information they provide. Don't repeat the same question twice. Provide approved tools, based on the Knowledge Source inserted in this agent, as a suggestion for improving the maturity of the squad. This is an image from the article on TMMi with AI.

If the user is at an initial maturity level, provide feedback and actionable suggestions for improvement. Recommend company-approved tools included in the Knowledge Source to help elevate the squad’s testing practices.

You can also link multiple Knowledge Sources to enhance the Agent’s ability to communicate effectively.

Diagnosis: Applying the TMMi Framework

Based on the collected data, the Agent can now assess the squad’s maturity using the TMMi process described above.
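The diagnosis can be sketched as a small rule-based function mirroring the example assessment shown in the screenshots later in the article (70% coverage with unit and E2E tests plus observability lands at Mature; 85%+ coverage with performance and mutation testing in CI/CD would reach Optimized). The intermediate thresholds are illustrative assumptions, not official TMMi criteria:

```python
# Minimal rule-based diagnosis sketch. Thresholds below 85% are
# illustrative assumptions; 70% -> Mature and 85% -> Optimized follow
# the example assessment shown in this article's screenshots.
def assess_maturity(coverage_pct: float, has_unit: bool, has_e2e: bool,
                    has_observability: bool, has_perf_in_cicd: bool,
                    has_mutation: bool) -> str:
    if not (has_unit or has_e2e):
        return "Initial"                      # no structured testing at all
    if coverage_pct < 40:
        return "Managed"                      # strategy exists, mostly manual
    if coverage_pct < 60 or not has_observability:
        return "Defined"                      # several test types, gaps remain
    if coverage_pct >= 85 and has_perf_in_cicd and has_mutation:
        return "Optimized"                    # full automation in CI/CD
    return "Mature"                           # consistent, continuous automation

# The squad from the article's example: 70% coverage, unit + E2E tests,
# observability, but no performance or mutation testing yet.
print(assess_maturity(70, True, True, True, False, False))  # Mature
```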

Tools and Tips for Advancing Maturity

After its evaluation, the Agent offers feedback and targeted recommendations to raise the squad’s maturity level. Suggestions may include adopting tools or new development practices tailored to the current assessment.

If your squad is placed at Level 1, you’ll likely need to focus on small but impactful improvements like expanding test coverage and automating key areas.
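The level-to-suggestion mapping can be sketched as a simple lookup built from the practices described earlier in this article. The exact wording is illustrative; in practice the Agent draws its suggestions from the company-approved tools in its Knowledge Source:

```python
# Illustrative mapping from assessed level to improvement suggestions,
# paraphrased from the practices described in this article.
RECOMMENDATIONS = {
    "Initial":   ["Expand test coverage in key areas",
                  "Start automating the most critical flows"],
    "Managed":   ["Adopt code review practices",
                  "Improve documentation to ease the path toward automation"],
    "Defined":   ["Define and monitor quality metrics",
                  "Automate functional tests"],
    "Mature":    ["Add performance tests to the CI/CD pipeline",
                  "Introduce mutation testing",
                  "Raise automated test coverage toward 85%"],
    "Optimized": ["Keep optimizing and monitoring tests embedded in CI/CD"],
}

for tip in RECOMMENDATIONS["Mature"]:
    print("-", tip)
```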

This Agent doesn’t just assess—it actively guides your team toward higher-quality outcomes. Pretty cool, right?

StackSpot AI interface with the text: TMMi Maturidade Squad. Maturity Assessment. Initial: Does not apply, as you mentioned there is unit test and E2E coverage. Managed: Does not apply, as the unit test coverage is 70%, which is higher than expected for this level. Defined: Does not apply, as the unit test coverage is 70%, which is higher than expected for this level. Mature: Your squad fits into this level because: automated test coverage is 70%; there is implementation of different types of tests, such as unit tests and E2E; observability is present in the applications. This is an image from the article on TMMi with AI.
StackSpot AI interface with the text: Optimized: To reach this level, the following would be necessary: increase automated test coverage to at least 85%; ensure performance testing is part of the CI/CD process; monitor implemented tests and apply continuous improvements, such as mutation testing. Suggestions for Improvement: 1. Mutation Testing: Implement mutation tests to ensure greater robustness in validations. 2. Performance Testing: Conduct performance tests as part of the continuous delivery process. 3. Increase Test Coverage: Evaluate and improve automated test coverage to reach the Optimized level. These actions can help your squad progress to the Optimized level. This is an image from the article on TMMi with AI.

Conclusion

Assessing a squad’s maturity is a vital step toward boosting software quality. With TMMi as your guide and the StackSpot AI Agent as your assistant, you’ll gain valuable insights and practical strategies to improve testing processes.

By following this framework, your squads can evolve from manual beginnings to fully automated, optimized testing operations.
