Category: testing

  • Will MBSE Benefit from Textual Notation?

    Model-Based Systems Engineering (MBSE) is evolving, and a key question arises: Will MBSE benefit from textual notation? The answer is a resounding yes, but it is not the whole story. Graphical representations and much more are still needed. Here’s why.

    Textual Notation as a Complementary Tool

    Textual notation is already a fundamental part of many basic modeling tools. Languages like PlantUML and Mermaid derive graphical representations from textual descriptions, making it easier for engineers to visualize their models. Similarly, domain-specific languages (DSLs) such as Franca IDL have been integrated into modeling tools like MagicDraw to good effect. These tools allow users to either code or draw, integrate seamlessly with other model content, and provide flexibility and ease of use.
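
    The core idea behind text-to-diagram tools like PlantUML can be sketched in a few lines: a textual description is parsed into a graph structure from which a diagram is then rendered. The simplified arrow syntax and element names below are assumptions for illustration, not actual PlantUML grammar.

    ```python
    # Minimal sketch: derive a graph structure from a textual description,
    # as text-to-diagram tools do before rendering. Syntax is simplified.

    def parse_edges(text: str) -> list[tuple[str, str]]:
        """Parse lines of the form 'A --> B' into (source, target) pairs."""
        edges = []
        for line in text.strip().splitlines():
            if "-->" in line:
                source, target = (part.strip() for part in line.split("-->", 1))
                edges.append((source, target))
        return edges

    model_text = """
    Engineer --> Model
    Model --> Diagram
    """

    print(parse_edges(model_text))  # [('Engineer', 'Model'), ('Model', 'Diagram')]
    ```

    A real tool would feed such an edge list into a layout engine; the point here is only that the textual form is the single source from which the picture is derived.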

    The Emergence of Two-Way Solutions

    Advanced tools are taking this integration further. For instance, sophisticated tools like MagicDraw now offer two-way translation between SysML v2 textual notation and graphical representations. This functionality allows editing on both sides, much like Markdown plugins in VSCode. Such advancements are critical as they cater to both textual and graphical preferences, ensuring broader acceptance and usability.

    Bridging the Gap Between Coders and Modelers

    Textual notations are particularly appealing to those who are closer to coding. For coders, the familiarity of textual input can significantly lower the barrier to entry into the modeling world. Integrating textual notations into Integrated Development Environments (IDEs) where coding happens can streamline workflows and enhance productivity.

    However, for managers, architects, designers, and analysts, graphical representations, output management, and comprehensive lists consistent with models are crucial. Therefore, to effectively engage both technical and non-technical stakeholders, providing a combination of textual notations and corresponding graphical representations is essential.

    Addressing Broader Needs

    Reflecting on over a decade of experience in automotive projects, it is clear that MBSE must address a range of needs to be truly successful and widely accepted. Models must encompass various aspects such as cross-product, cross-solution, product solutions, functional modeling, system architecture, software architecture, and boardnet modeling.

    Additionally, the existing model content in UML and SysML v1 predominantly features graphical representations. Transitioning to textual notations won’t happen overnight. The need for tool integration is paramount. The enterprise environment is a complex network of tools for planning, requirements, system modeling, simulation, test management, and implementation. Tool content is often replicated between neighboring tools or linked, necessitating seamless integration.

    Machine-Readable Models and Analytics

    Models that are not machine-readable are essentially ineffective, serving only as “marketing” diagrams. Ensuring that models are machine-readable and providing model analytics, such as making model content available to tools like Tableau, is highly valued by business users.
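
    Making model content available to an analytics tool often amounts to flattening model elements into a tabular format. The sketch below shows this under assumed element names and fields; a CSV file like this is one of several formats a BI tool such as Tableau can consume.

    ```python
    # Hedged sketch: flatten model content into CSV rows for a BI tool.
    # Element names, fields, and statuses are illustrative assumptions.
    import csv
    import io

    elements = [
        {"id": "UC-1", "type": "UseCase", "name": "Create Lead", "status": "approved"},
        {"id": "UC-2", "type": "UseCase", "name": "Qualify Lead", "status": "draft"},
    ]

    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["id", "type", "name", "status"])
    writer.writeheader()
    writer.writerows(elements)

    print(buffer.getvalue())
    ```

    Once the model is machine-readable, such exports can be regenerated on every model change, keeping the analytics in sync with the model.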

    Conclusion

    In conclusion, textual representation like SysML v2 is foundational and will significantly benefit MBSE. However, it must complement diagramming and address other critical needs to be truly effective. The standards are in place, and now it’s time for the tool business to catch up. By embracing both textual and graphical representations, and addressing the diverse needs of stakeholders, MBSE can achieve greater acceptance and success.

  • How to use OCL

    #ocl #expression #modeling #language

    OCL, the Object Constraint Language, is a powerful expression language for UML, SysML, and other UML-based languages. Unfortunately, there is little good documentation on using OCL inside tools like MagicDraw. Still, there is enough material available from which to extract the most important aspects.
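
    To give a feel for what OCL expresses: an invariant constrains all instances of a context classifier. The Python paraphrase below is only an illustration of the semantics; the Account class, its fields, and the invariant are invented for this sketch, not taken from any real model.

    ```python
    # Illustration only: what an OCL invariant expresses, paraphrased in Python.
    # OCL:  context Account inv NonNegative: self.balance >= 0
    # The Account class and its instances are assumptions for this sketch.
    from dataclasses import dataclass

    @dataclass
    class Account:
        owner: str
        balance: float

    def check_non_negative(accounts: list[Account]) -> list[str]:
        """Return the owners of accounts that violate the invariant."""
        return [a.owner for a in accounts if not a.balance >= 0]

    accounts = [Account("alice", 120.0), Account("bob", -30.0)]
    print(check_non_negative(accounts))  # ['bob']
    ```

    Inside a tool, such an expression is typically attached to the model element and evaluated by the tool's OCL engine rather than hand-coded.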

    Here is my link collection on OCL. The bad news is that this material is fairly old and varies in its statements. Either nobody is really working on OCL as a core module anymore, or fresh insights are simply hard to find. Feel free to write me if you find better resources.

    Definition

    Quick Reference

    Tutorial

    Standard

    API Reference

    MagicDraw

  • Implementing Lean Quality Management

    #modeling #testing #generation #integration

    (Source: Drei Stossrichtungen für die Umsetzung von Lean Quality Management, Stefan Jobst, German Testing Magazin, issue 02/2020)

    At a time when accelerating development and release cycles has become a decisive competitive factor in many areas of business, the well-known dilemma between speed and quality is being pushed to the extreme. With holistic approaches to implementing Continuous Delivery, a lot has already happened in the software development process over the past decade through the use of DevOps approaches.

    Automated Creation of Test Cases

    Part of the article deals with the automated creation of test cases and references my article Model-Driven Test Generation with VelociRator.

    The following figure illustrates the round trip with its phases around the involved tools, covering the disciplines of Requirement Management, Requirement Engineering, Output Management, and Test Management.

    Round trip from epic to test

    Continue to the detailed description.

  • Model-Driven Test Generation with VelociRator

    #modeling #testing #generation #integration

    (excerpt in German available as part of article on Lean Quality Management in German Testing Magazin)

    Abstract

    Concept, code, and tests tend to drift apart in larger or longer-running projects. A lot of time and money is wasted keeping them aligned, and yet only to a poor degree. With agile working models and more frequent deployment cycles, this gets even worse.

    Two main reasons are identified:

    1. hands-on working style with insufficiently connected tools
    2. missing refactoring capability for concepts and tests (compared to code, where we have learned to love powerful IDEs)

    In order to keep concept and code aligned, I presented model-driven system documentation in 2018; see Ni2018 in Publications.

    Today I will present model-driven test generation extending that successful alignment even further.

    After motivating model-driven test generation, I will present a little journey that many of us take, probably on a daily basis. I will start with an epic, manually created by a Product Owner in the well-known tracking tool Jira. Via an interface, this epic is pushed to a system model in MagicDraw. There, a business analyst describes the impact of the epic on already existing use cases. Then VelociRator, the model-driven test generator, produces tests from the modeled behavior. These tests are finally pushed back to Jira and linked to the initial epic, closing the loop with a neat test coverage on the epic.
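
    The loop above can be sketched in memory in a few lines. All names here (the epic, the use cases, the per-use-case test) are hypothetical placeholders; the real setup connects Jira, MagicDraw, and VelociRator via interfaces and generates far richer tests from modeled behavior.

    ```python
    # In-memory sketch of the epic-to-test round trip; names are hypothetical.

    # 1. Epic created in the tracking tool, pushed into the model.
    epic = {"key": "EPIC-1", "summary": "Improve CDM and Lead Management", "tests": []}

    # 2. Analyst marks the impacted use cases in the model.
    impacted_use_cases = ["Create Lead", "Qualify Lead"]

    # 3. Generator derives one test per impacted use case (greatly simplified).
    for use_case in impacted_use_cases:
        test = {"name": f"Test {use_case}", "covers": epic["key"]}
        # 4. Pushed back to the tracking tool and linked to the epic.
        epic["tests"].append(test)

    print([t["name"] for t in epic["tests"]])
    ```

    The value of closing the loop is that coverage on the epic falls out of the links for free, instead of being maintained by hand.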

    An outlook on model-driven test generation for automation will hint at how to combine test generation and test automation. Finally, I will conclude with why model-driven test generation saved our backs during COVID-19.

    How does it work?

    The following figure shows the journey in phases around involved tools in the middle.

    Now, let’s repeat the quick journey taken above in more detail.

    Define

    As a PO, I create a new issue, such as an epic, in Jira. Let’s name it “Improve CDM and Lead Management”.

    Example epic

    Initially, the test coverage is empty. After assigning the issue to the team, or to some analyst on the team, the epic is pulled into a model.

    Analyse

    The model is our digital twin of the system under design. It contains system use cases describing the logical interaction of users with the system for a set of scenarios.

    Impact analysis, minimal version

    The analyst identifies which of our use cases are impacted by the new requirement by drawing dependencies from the use cases to the issue.

    Optionally, it is also possible to use change elements documenting what has to be changed and why.

    Design

    The next step is to dive into the impacted use cases and change them accordingly.

    Example for a workflow given by an activity diagram

    The designer checks the steps in the workflow along with their associated rules, documentation, and expected results. Since testers, designers, and developers do their work based on those workflows, all relevant information is documented here. To the upper left we have highlighted a few steps so that we can easily find them in the generated tests later on.

    In addition, but not technically necessary, it is also possible to document the external view as some kind of micro architecture per use case.

    Example for external view – use case architecture

    Configure Tests

    Next, the tester configures the tests inside the model by providing values to test parameters of selected test paths.

    Example test configurations

    Each row is a test configuration and has a name and documentation. All other columns represent test parameters like Account, CustomerType, and so on. When the test generator is switched on, it picks each selected test path and generates a test for each row.
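
    The generation step can be sketched as a loop over configuration rows for each selected test path. The use case name, the alternative label, and the column names follow the examples in this article, but the data values and the naming scheme shown here are otherwise assumptions.

    ```python
    # Sketch: one generated test per (selected test path, configuration row).
    # Values and the exact naming scheme are illustrative assumptions.

    test_path = {"use_case": "Qualify Lead", "alternative": "ALT no fit"}

    configurations = [
        {"name": "person1", "Account": "ACME", "CustomerType": "B2B"},
        {"name": "person2", "Account": "Miller", "CustomerType": "B2C"},
    ]

    tests = []
    for config in configurations:
        # Test name built from use case, chosen alternative, and configuration.
        test_name = f'{test_path["use_case"]} - {test_path["alternative"]} - {config["name"]}'
        tests.append({"name": test_name, "parameters": config})

    print([t["name"] for t in tests])
    ```

    Two configuration rows on one selected path thus yield two tests, each carrying its own parameter values.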

    Test

    Back in Jira, the tester selects a generated test like the following.

    Example for test in Jira, extract, details omitted

    The name of the test is constructed from the name of the use case, the chosen alternative (ALT no fit), and the selected test configuration (person1). As part of its description you get a compact use case scenario. You also get full test details including test data and expected results per step.

    Closing the Loop

    Since the generated tests can be automatically linked to the originating epic, you also get a predefined test coverage.

    Example for test coverage

    If the tester now chooses to execute one or all of the tests, the results are immediately reflected in the test coverage. The PO can therefore easily follow progress, too.
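
    Conceptually, the coverage shown to the PO is just an aggregation over the execution results of the linked tests. The statuses and structure below are illustrative assumptions, not the actual Jira representation.

    ```python
    # Sketch: aggregate coverage on the epic from the linked tests' results.
    # None means the test has not been executed yet; statuses are assumed.

    results = {"test1": "passed", "test2": "failed", "test3": "passed", "test4": None}

    executed = [r for r in results.values() if r is not None]
    passed = [r for r in executed if r == "passed"]

    coverage = {
        "executed": f"{len(executed)}/{len(results)}",
        "passed": f"{len(passed)}/{len(results)}",
    }
    print(coverage)  # {'executed': '3/4', 'passed': '2/4'}
    ```

    Because the tests are linked to the epic automatically, this aggregation needs no manual bookkeeping.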

    Presentation

    See the presentation below.