Category: ai

  • Classifying Language Models along Autonomy & Trust Levels

    The Problem

    Language models are everywhere now. People praise them but also complain about their responses: unreliable, prone to hallucination, not safe to leave working alone, and so on. These systems, capable of understanding and generating human-like text, are often called copilots—a term borrowed from aviation and car racing. The term indicates that their main expected role is supporting the pilot.

    But how do we actually classify what these models can do? And more importantly, how much can we trust them?

    A Hybrid Classification Framework

    Drawing inspiration from the SAE levels of driving automation and grounded in human-computer interaction research on trust in automation, we propose a two-dimensional framework for classifying language models:

    1. Operational Autonomy – adapted from SAE Levels (0–5): What can the model do on its own?
    2. Cognitive Trust and Delegation – how much mental effort does the user expend, and how much responsibility is delegated?

    Each level in the chart below reflects both dimensions.

    | Level | Autonomy Description | Trust/Delegation Role |
    | --- | --- | --- |
    | 0 – Basic Support | Passive tools like spellcheckers; no real autonomy | No Trust: User must fully control and interpret everything |
    | 1 – Assisted Generation | Suggests words or phrases (autocomplete); constant oversight needed | Suggestive Aid: User supervises and approves each suggestion |
    | 2 – Semi-Autonomous Text Production | Generates coherent content from prompts (emails, outlines); needs close supervision | Co-Creator: User relies on it in low-stakes tasks but reviews all outputs |
    | 3 – Context-Aware Assistance | Can handle structured tasks (e.g., medical summaries); users remain alert | Delegate: User lets go during routine tasks but monitors for failure |
    | 4 – Fully Autonomous Within Domains | Works independently in narrow contexts (e.g., customer service bot) | Advisor: Trusted within scope; user rarely intervenes |
    | 5 – General Language Agent | Hypothetical general-purpose assistant capable across domains without oversight | Agent: Fully trusted to operate independently and responsibly |
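
    One lightweight way to make the two-dimensional framework concrete is to encode it as data. The following Python sketch uses the level names and roles from the table above; the class and function names themselves are illustrative, not part of any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelLevel:
    """One level of the two-dimensional autonomy/trust framework."""
    level: int
    autonomy: str     # Operational Autonomy (SAE-inspired)
    trust_role: str   # Cognitive Trust and Delegation

# Level names and trust roles taken from the table above.
LEVELS = [
    ModelLevel(0, "Basic Support", "No Trust"),
    ModelLevel(1, "Assisted Generation", "Suggestive Aid"),
    ModelLevel(2, "Semi-Autonomous Text Production", "Co-Creator"),
    ModelLevel(3, "Context-Aware Assistance", "Delegate"),
    ModelLevel(4, "Fully Autonomous Within Domains", "Advisor"),
    ModelLevel(5, "General Language Agent", "Agent"),
]

def classify(level: int) -> ModelLevel:
    """Look up a framework level; raises ValueError outside 0-5."""
    if not 0 <= level <= 5:
        raise ValueError(f"level must be 0-5, got {level}")
    return LEVELS[level]
```

    Such an encoding makes the framework machine-checkable, for example when tagging model deployments in an architecture inventory.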

    Why SAE Levels Make Sense

    Although language models do not act in the physical world, it makes sense to compare them to autonomous vehicles in terms of their capabilities and limitations. The SAE classification helps clarify expectations, safety considerations, and technological milestones.

    Let’s first briefly revisit what each SAE level entails for automobiles:

    • Level 0 (No Automation): The human driver does everything; no automation features assist with driving beyond basic warnings.
    • Level 1 (Driver Assistance): The vehicle offers assistance with either steering or acceleration/deceleration but requires constant oversight.
    • Level 2 (Partial Automation): The system can manage both steering and acceleration but still requires the human to monitor closely.
    • Level 3 (Conditional Automation): The vehicle handles all aspects of driving under specific conditions; the human must be ready to intervene if necessary.
    • Level 4 (High Automation): The car can operate independently within designated areas or conditions without human input.
    • Level 5 (Full Automation): Complete autonomy in all environments—no human intervention needed.

    Adapting Levels to Language Models

    Level 0: Basic Support

    At this foundational level, language models serve as simple tools—spell checkers or basic chatbots—that provide minimal assistance without any real understanding or autonomy. They do not generate original content on their own but act as aids for humans who make all decisions.

    Example: Elementary grammar correction programs that flag mistakes but don’t suggest nuanced rewrites.

    Level 1: Assisted Generation

    Moving up one step, some language models begin offering suggestions based on partial input. For example, autocomplete functions in email clients that predict next words or phrases fall into this category—they assist but require constant supervision from users who must review outputs before accepting them.

    Example: Gmail’s smart compose feature.

    Level 2: Semi-Autonomous Text Production

    At this stage, models can generate longer stretches of coherent text when given prompts—think about AI tools that draft emails or outline articles—but they still demand continuous oversight. Users need to supervise outputs actively because errors such as factual inaccuracies or inappropriate tone remain common pitfalls.

    Example: ChatGPT generating email drafts or article outlines.

    Level 3: Context-Aware Assistance

    Now we reach an intriguing analogy with conditional automation—where AI systems can handle complex tasks within certain constraints yet require humans to step back temporarily while remaining alert for potential issues. Large language models operating at this level might manage summarization tasks under specific domains (e.g., medical summaries) but could falter outside their trained scope.

    Example: Medical AI assistants that can summarize patient records but require doctor oversight.

    Level 4: Fully Autonomous Within Domains

    Imagine an AI-powered assistant capable of managing conversations entirely within predefined contexts—say, customer service bots handling standard inquiries autonomously within specified industries—but unable to operate beyond those limits without retraining or manual intervention.

    Example: Customer service chatbots for specific industries like banking or retail.

    Level 5: Fully Autonomous General Language Understanding

    Envisioning true “full autonomy” for language models means creating systems that understand context deeply across countless topics and produce accurate responses seamlessly everywhere—all without prompting from humans if desired. While such systems remain theoretical today, research aims toward developing general-purpose AI assistants capable not only of conversing fluently across domains but doing so responsibly without oversight.

    Example: Theoretical future AI systems that could operate across all domains without human oversight.

    Current State and Implications

    Now that we have a clear classification framework, let’s examine where we stand today and what this means for practical applications.

    What does this classification tell us about our current standing? Most contemporary large-scale language models sit somewhere around Level 2 or early Level 3—they generate impressive content when given prompts yet still struggle with consistency outside narrow contexts and require vigilant supervision by humans who evaluate accuracy critically.

    However, there’s an important limitation to the SAE analogy that we need to address.

    The Trust Dimension

    While the SAE levels offer a useful metaphor for understanding increasing autonomy, they aren’t a perfect fit for language models because:

    • Language models don’t act in the physical world themselves—humans interpret and act on their outputs
    • Risk and impact in NLP are mediated by human cognition and behavior, unlike the immediate physical risks of self-driving cars
    • Autonomy in NLP often deals more with semantic understanding, trustworthiness, context handling, and ethical alignment than sensor-actuator loops

    Therefore, I also propose a mapping of the SAE levels to trust levels taking into account cognitive load and responsibility:

    • Level 0: No trust: tool offers isolated corrections, requires full user oversight (spellcheck)
    • Level 1: Suggestive aid: user must review and approve every suggestion (autocomplete)
    • Level 2: Co-creator: user maintains active oversight, only defers in low-stakes contexts (drafting emails)
    • Level 3: Delegate: user maintains regular oversight with frequent spot checks and validation (10-20% review)
    • Level 4: Advisor: user maintains strategic oversight with periodic reviews (5-10% audit), especially for high-stakes outputs
    • Level 5: Agent: user maintains governance oversight with systematic audits (1-5% review) despite autonomous operation
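
    The suggested review rates can be sketched as a simple lookup. The percentage ranges come from the list above; the midpoint heuristic and the function names are my own illustrative assumptions:

```python
# Recommended human review rate per trust level, as (low, high) fractions.
# Levels 0-2 imply full review; the ranges for 3-5 follow the list above.
REVIEW_RATES = {
    0: (1.00, 1.00),  # No trust: full oversight
    1: (1.00, 1.00),  # Suggestive aid: approve every suggestion
    2: (1.00, 1.00),  # Co-creator: review all outputs
    3: (0.10, 0.20),  # Delegate: frequent spot checks
    4: (0.05, 0.10),  # Advisor: periodic reviews
    5: (0.01, 0.05),  # Agent: systematic audits

}

def sample_size(level: int, n_outputs: int) -> int:
    """How many of n_outputs to review, using the midpoint of the range."""
    low, high = REVIEW_RATES[level]
    return round(n_outputs * (low + high) / 2)
```

    For example, delegating 1,000 routine summaries at Level 3 would still mean spot-checking roughly 150 of them.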

    Practical Implications

    Classifying language models along SAE-like levels provides practical benefits:

    1. Common vocabulary for developers, researchers, policymakers, and end-users
    2. Realistic expectations about capabilities—the difference between tools assisting writing versus fully automating complex decision-making processes
    3. Regulatory guidance for ensuring safe deployment at each stage
    4. A reality check on effort: the work required to reach each successive level increases, probably exponentially

    Design Priorities

    It’s vital not simply to categorize these technologies for academic interest but also because such clarity informs design priorities:

    • Should future efforts focus on improving reliability before granting more independence?
    • How do safety concerns evolve as we move up each level?
    • What ethical considerations arise when deploying increasingly autonomous NLP systems?

    Each incremental step toward higher levels demands careful consideration regarding:

    • Transparency: Can users understand when they’re interacting with an assistant versus an agent?
    • Accountability: Who bears responsibility if an AI-generated statement causes harm?

    Conclusion

    Applying SAE-level classifications offers more than just terminology—it provides a roadmap illustrating how far we’ve come and how much further we need to go in developing intelligent language systems capable not only of mimicking human conversation but doing so responsibly across diverse environments.

    Recognizing where current technology resides on this spectrum enables us all—from engineers designing smarter assistants to regulators crafting informed policies—to make conscious choices grounded in realistic assessments rather than hype or fear.

    As artificial intelligence continues its ascent along these levels—from rudimentary support towards full autonomy—the journey will demand ongoing collaboration among technologists, ethicists, policymakers, and ultimately society itself to ensure these powerful tools serve humanity’s best interests every step along the way.

    References

    SAE J3016™. “Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles.” First published: 2014. Most recent version (as of 2024): SAE J3016_202104 (April 2021). 🔗 https://www.sae.org/standards/content/j3016_202104/

    Hoffman, R. R., Johnson, M., Bradshaw, J. M., & Underbrink, A. (2013). “Trust in automation.” IEEE Intelligent Systems, 28(1), 84–88. DOI: 10.1109/MIS.2013.24

  • Using Cursor AI as Architect and Modeler

    Cursor AI for dev-aware Architects and Modelers, produced using DALL-E, 2024-10-03

    Cursor AI with GitHub significantly improves my personal productivity and service portfolio. In-place coding and writing/blogging in one tool, awesome.

    As a freelancer I am focusing on enterprise and IT architecture, customizing methods and modeling languages, and implementing integration of various tools like LeanIX, ARIS, MagicDraw, Confluence, Jira, Xray, ALM, and so on.

    That said, programming can only be part of my job, and it easily drains my overall availability. So I am really happy about any booster, be it other freelancers or better tooling. Moreover, since SysML v2 and its code-centric modeling approach are slowly entering the stage, I will be able to extend that productive approach even more.

    Cursor AI is so cool already, yet I would appreciate some improvements regarding different access scenarios:

    • IDE: Cursor AI on Windows and Linux Desktops (UI, great)
    • Code: store all in GitHub repositories (storage layer)
    • Notes / Knowledge Base: store all as markdown files in one separate repository in GitHub (storage layer)
    • IDE anywhere: VSCode app for Android on Tablet accessing GitHub until Cursor AI becomes available (UI, mobile)
    • Git anywhere: GitHub for Android (read, search, mobile)

    It would also be nice to be able to add your own local LLM to the list improving data privacy even more.

    Google IDX beta also looks quite promising, based on VSCode for Web as well, but it sucks in all of your prompts and data…

  • Bridging Non-Technical and Technical Teams with Custom Modeling Languages: An E-Commerce Case Study

    From Product to Solution, produced using DALL-E, 2024-09-22

    In today’s fast-paced business environment, the synergy between non-technical (product) and technical teams is more crucial than ever. Yet, these teams often find themselves speaking different languages, leading to misunderstandings and project delays. How can organizations bridge this gap to foster better communication and collaboration?

    One effective approach is the use of custom modeling languages that incorporate concepts understood by both parties. By focusing on shared language elements and tracing ideas from rough concepts to detailed designs, teams can work more cohesively. This article explores how custom modeling languages centered around system objects, structure, and behavior can unite product and technical teams, using an e-commerce system as an example.

    The Power of Custom Modeling Languages

    Custom modeling languages serve as a common platform where both non-technical and technical teams can articulate and visualize system requirements and designs. These languages use intersecting concepts that are familiar to all stakeholders, facilitating clearer communication and reducing the risk of misunderstandings.

    Key Concepts:

    • System Objects: Fundamental elements that represent real-world entities within the system.
    • Structure: How system objects are organized, represented by product and solution blocks.
    • Behavior: How system objects act and interact, depicted through product and solution use cases.

    Intersecting Language Concepts

    System Objects with Structure and Behavior

    At the core of any system are the system objects, which possess both structure and behavior. By defining these objects, teams create a foundation that both sides understand.

    • Structure: Represents the static aspects—how components are organized.
    • Behavior: Represents the dynamic aspects—how components interact over time.

    Structure: Product and Solution Blocks

    • Product Blocks: High-level components that define what the system should do from a business perspective. For example, in an e-commerce system, this could be the “Shopping Cart” or “Product Catalog.”
    • Solution Blocks: Technical components that detail how the system will achieve the product requirements. This includes databases, servers, and application layers.

    Behavior: Product and Solution Use Cases

    • Product Use Cases: Scenarios that describe user interactions with the system, such as “Place an Order” or “Search for a Product.”
    • Solution Use Cases: Technical workflows that support product use cases, like “Process Payment Transaction” or “Update Inventory Database.”

    Product Level Modeling in an E-Commerce System

    At the product level, modeling focuses on capturing the business requirements and user interactions.

    Example: Customer Journey

    1. Browse Products: The customer explores the product catalog.
    2. Add to Cart: The customer selects items to purchase.
    3. Checkout: The customer provides payment and shipping information.
    4. Order Confirmation: The system confirms the order and provides tracking details.

    By mapping out these product use cases, non-technical teams can convey their needs clearly to technical teams.

    Solution Level Modeling

    At the solution level, the modeling becomes more detailed, incorporating components, classes, and methods that technical teams use to build the system.

    Example: Processing an Order

    1. Order Component: Manages order data and interactions.
      • Classes: Order, OrderItem, PaymentDetails
      • Methods: validateOrder(), processPayment(), updateInventory()
    2. User Component: Handles user authentication and profiles.
      • Classes: User, Address, Authentication
      • Methods: login(), logout(), updateProfile()

    By aligning these solution blocks with the product blocks, technical teams can ensure they are meeting the business requirements.
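
    To make the solution level tangible, here is a minimal Python sketch of the Order component. The class and method names follow the lists above; the method bodies are placeholder assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OrderItem:
    sku: str
    quantity: int
    unit_price: float

@dataclass
class PaymentDetails:
    method: str
    amount: float

@dataclass
class Order:
    items: List[OrderItem] = field(default_factory=list)
    payment: Optional[PaymentDetails] = None

    def validateOrder(self) -> bool:
        """An order is valid if it has items with positive quantities."""
        return bool(self.items) and all(i.quantity > 0 for i in self.items)

    def processPayment(self) -> bool:
        """Placeholder: record a payment covering the order total."""
        total = sum(i.quantity * i.unit_price for i in self.items)
        self.payment = PaymentDetails(method="card", amount=total)
        return True

    def updateInventory(self) -> dict:
        """Placeholder: per-SKU quantities to deduct from stock."""
        return {i.sku: i.quantity for i in self.items}
```

    The point is not the code itself but that every class and method traces back to a solution block agreed with the product side.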

    Tracing Ideas from Concept to Design

    The intersecting concepts allow for seamless tracing of requirements from initial ideas to detailed technical designs.

    • From Product Use Cases to Solution Use Cases: Each product scenario is linked to technical workflows.
    • From Product Blocks to Solution Components: Business components are mapped to their technical counterparts.
    • From System Objects to Classes and Methods: Objects defined at the product level are translated into classes and methods in the codebase.

    This traceability ensures that both teams are aligned throughout the project lifecycle.
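
    Trace links themselves can be modeled as plain data. The following hypothetical sketch uses names from the e-commerce example; the mapping and helper functions are illustrative assumptions:

```python
# Trace links from product-level elements to solution-level elements.
TRACE = {
    # product use case -> solution use cases
    "Place an Order": ["Process Payment Transaction", "Update Inventory Database"],
    # product block -> solution blocks
    "Shopping Cart": ["Order Component", "User Component"],
}

def downstream(element: str) -> list:
    """Solution-level elements traced from a product-level element."""
    return TRACE.get(element, [])

def orphans(product_elements: list) -> list:
    """Product elements without any solution-level trace (coverage gaps)."""
    return [e for e in product_elements if not downstream(e)]
```

    A check like orphans() is exactly the kind of consistency rule a modeling tool can enforce automatically once the trace links are explicit.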

    Applying the Toolkit Strategies

    To further enhance collaboration, organizations can implement several strategies:

    Bridge Gaps

    Use the shared modeling language to facilitate communication. Regular meetings to discuss models can help both teams stay aligned.

    Discussion with business stakeholders should be done based on the product modeling language while discussion with technical stakeholders should be done based on the solution modeling language. Intersecting discussions may focus on the transition from product to solution modeling to ensure that the business requirements are translated into technical requirements.

    Empathize More

    Encourage team members to understand each other’s perspectives. Non-technical staff can attend technical walkthroughs, while technical staff can participate in business requirement sessions.

    Define Roles

    Clearly outline who is responsible for each part of the modeling and solution process. This clarity prevents overlaps and confusion.

    For example, a product owner is responsible for the product aspects of the system, while a solution owner is responsible for the solution modeling. That means the former is also responsible for product use cases and product blocks, while the latter is also responsible for solution use cases and solution blocks.

    Foster Respect

    Acknowledge the expertise each team brings. Celebrate successes jointly to build mutual respect.

    Create Liaisons

    Appoint team members who are fluent in both product and technical aspects. These liaisons can translate and mediate between teams.

    Continuous Learning

    Promote ongoing education. Workshops and cross-training sessions can help team members appreciate the challenges and workflows of their counterparts.

    Conclusion

    Bridging the gap between non-technical and technical teams is not just about better communication — it’s about creating a cohesive environment where ideas flow seamlessly from concept to implementation. Custom modeling languages that use shared concepts like system objects, structure, and behavior can play a pivotal role in this process.

    By applying these principles and fostering a culture of empathy and continuous learning, organizations can enhance collaboration, drive innovation, and adapt more quickly to market demands. The e-commerce example illustrates how these concepts can be practically applied, but the approach is versatile enough to benefit projects across various industries.

    Future Enhancements and Sophistication

    While the toolkit provides a solid foundation for bridging product and technical teams, there’s room for further sophistication.

    At the product level, more detailed requirements modeling could be introduced, incorporating user stories, acceptance criteria, and business rules. Similarly, at the solution level, technical modeling could be expanded to include architectural patterns, data models, and API specifications.

    Underneath the solution level, additional layers of abstraction could be added, such as infrastructure modeling for cloud deployments or performance modeling for optimization.

    Moreover, extending the language concepts to include roles and interactions would provide a richer context for system behavior. Roles could represent different user types or system actors, while interactions could model the complex relationships between system components.

    These enhancements would create an even more comprehensive toolkit, enabling teams to model and communicate increasingly complex systems with greater precision and clarity.

    Final Annotations on how to use AI to Answer Questions

    My personal goal for this article was to answer a call for a LinkedIn advice question (https://www.linkedin.com/advice/0/what-do-you-technical-non-technical-teams-clash-obaif) in no time by involving AI. It turned out not to be that easy this time. Most of the time was spent checking the result and tweaking the prompt half a dozen times to improve it. The reason seems to lie in the complexity of the request and the need to adapt to the limitations of the AI by playing with two abstraction levels, while also taking into account my personal experience from various projects. The situation should improve as the personal knowledge base accessible to the AI grows.

    Used approach

    • Copy the LinkedIn article into Cursor AI (https://www.cursor.com/) as a separate markdown file (this could be improved by avoiding copying every single section)
    • Create a prompt in Cursor AI in yet another file
    • Use chat in Cursor AI (well, half a dozen iterations) and save result in yet another markdown file
    • Manually improve the result with selective changes directly in Cursor AI (select text, Ctrl-K, command).

    A beautiful side effect is that the sections are answered in a connected way and not separated from each other. I hope this can be helpful for you as well. Thank you for reading and forgive me not having spent more time on a perfectly generated image.

  • Staying Afloat in Software Development: A Comprehensive Toolkit for Managing Deadlines and Tasks

    ToolkitSoftwareDevelopment-eye-catcher_chatgpt4o_20240921

    In the fast-paced world of Software Development, it’s easy to feel overwhelmed by looming deadlines, complex projects, and the constant need to learn new technologies. The good news is that there are tools and techniques designed to help you navigate these challenges efficiently. By integrating time management strategies like time boxing with Clockodo, leveraging task and issue management in Confluence and Jira, utilizing code management with GitHub, enhancing communication through Microsoft Teams and email, expanding knowledge via online learning platforms, and automating workflows with tool integrations and Jenkins, you can transform chaos into productivity.


    Time Management with Time Boxing and Clockodo

    Effective time management is the cornerstone of productivity. One proven technique is time boxing, where you allocate fixed time periods to tasks, helping you maintain focus and prevent scope creep. Clockodo complements this approach by providing a time-tracking solution that allows you to monitor how you spend each time box. With Clockodo, you can:

    • Set Time Blocks: Define specific periods for tasks and track the actual time spent.
    • Analyze Productivity: Generate reports to identify patterns and optimize your schedule.
    • Do Accounting: Use reports to list your billable time boxes.

    By combining time boxing with Clockodo, you create a disciplined environment that enhances focus and productivity.
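
    Independent of the concrete tool, the mechanics of time boxing are simple enough to sketch. This is a generic illustration, not Clockodo's API; all names are my own:

```python
from datetime import datetime, timedelta

class TimeBox:
    """A fixed-length work period dedicated to one task."""

    def __init__(self, task: str, minutes: int):
        self.task = task
        self.budget = timedelta(minutes=minutes)
        self.start = datetime.now()

    def remaining(self) -> timedelta:
        """Time left in the box; negative means the box has overrun."""
        return self.budget - (datetime.now() - self.start)

    def overran(self) -> bool:
        return self.remaining() < timedelta(0)
```

    A tracker like Clockodo adds the missing pieces on top of this idea: persistence, reporting, and the mapping of boxes to billable projects.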


    Project and Task Management with Confluence and Jira

    Managing complex projects requires robust tools that provide clarity and collaboration. Confluence serves as a centralized platform for documentation and knowledge sharing, while Jira excels in issue and task management.

    In Confluence:

    • Document Everything: Create project pages, meeting notes, and technical documentation.
    • Collaborate Seamlessly: Share insights with team members and gather feedback in real-time.
    • Organize Knowledge: Use hierarchical structures to keep information accessible and organized.

    In Jira:

    • Track Issues: Log bugs, feature requests, and tasks with detailed descriptions.
    • Visualize Progress: Use Kanban boards and dashboards to monitor the status of tasks.
    • Document per Scope: Use links to Confluence pages for extensive documentation, both temporary and long-lived, that is relevant in the context of the issue.

    Integrating Confluence and Jira ensures that your project management is both comprehensive and cohesive, facilitating better team coordination and project visibility.


    Code Management with GitHub

    In Software Development, code is your craft, and managing it efficiently is vital. GitHub offers a powerful platform for version control and collaboration.

    • Version Control: Keep track of code changes over time, enabling you to revert to previous states if needed.
    • Collaboration: Work with others through branches and pull requests, streamlining code reviews and integrations.
    • Issue-oriented Progress: Link Jira issues to dedicated branches, focusing on code changes relevant for such issues.

    By leveraging GitHub, you maintain code integrity, encourage collaborative development, and align coding efforts with project objectives.

    Confluence only supports page history, though, not branches. If you need to branch your documentation, you might look for alternatives such as markdown in GitHub or publishing content from a modeling tool like MagicDraw to Confluence.


    Communication with Microsoft Teams and Email

    Clear and timely communication is essential, especially when collaborating remotely or across different teams. Microsoft Teams and traditional email remain indispensable tools.

    With Microsoft Teams:

    • Real-Time Communication: Chat with team members, initiate video calls, and conduct meetings.
    • File Sharing: Share documents and collaborate on files within the platform.
    • Integration: Connect with other tools like Jira and Confluence for streamlined workflows.

    With Email:

    • Formal Communication: Send detailed updates, reports, and official correspondence.
    • Documentation: Maintain records of communications for future reference.

    Utilizing both Teams and email ensures that you can communicate effectively in various contexts, keeping everyone informed and engaged.


    Automation with Tool Integration and Jenkins

    Automation is a game-changer in managing repetitive tasks and ensuring consistency. Jenkins, a leading automation server, enables continuous integration and continuous delivery (CI/CD) pipelines.

    • Automate Builds: Compile and test code automatically upon changes.
    • Integrate Tools: Connect Jenkins with GitHub, Jira, and other tools to create seamless workflows.
    • Monitor Processes: Receive notifications on build statuses and deployments.

    By automating tasks with Jenkins and integrating your toolset, you free up time to focus on more complex problem-solving and innovation.

    The more integration functions you have, the more powerful automation becomes. For example, you could imagine an implementation that not only integrates Jira into a modeling tool like MagicDraw but also publishes model content to Confluence, establishing a journey from issue via modeling to documentation. Then imagine extending this to generate Jira tests from model content, or even to execute those tests.


    Continuous Learning with Online Platforms and Sharpening Your Knives

    The field of Software Development is ever-evolving. Staying current requires continuous learning, which is facilitated by online platforms like Coursera, Udemy, and LinkedIn Learning.

    • Flexible Learning: Access courses that fit your schedule and learning pace.
    • Diverse Topics: Explore subjects ranging from programming languages to project management.
    • Community Support: Engage with peers and instructors to enhance understanding.

    By dedicating time to learning, you not only keep your skills sharp but also open doors to innovative solutions and ideas.

    You should also keep your knives sharp, meaning that you should regularly improve your tool set, including the typically many little productivity helpers and integration snippets between tools. Over time you will build a powerful integrated toolkit.


    Conclusion

    The synergy of these tools and techniques creates a powerful ecosystem for managing your Software Development projects:

    1. Plan and Track Time: Use time boxing and Clockodo to allocate and monitor your time effectively.
    2. Manage Projects and Tasks: Leverage Confluence and Jira for comprehensive project oversight.
    3. Control and Collaborate on Code: Utilize GitHub for robust version control and collaborative coding.
    4. Communicate Effectively: Keep everyone aligned through Microsoft Teams and email.
    5. Automate Workflows: Implement Jenkins to automate processes and integrate tools.
    6. Continue Learning: Expand your knowledge and skills via online learning platforms.

    By adopting this integrated approach, you transform overwhelming workloads into manageable, efficient processes. Not only do you stay afloat amid deadlines and tasks, but you also set the stage for excellence and innovation in your work, not to mention the boost in the quality of your deliverables.

    Navigating the complexities of Software Development doesn’t have to be a solitary struggle against time and tasks. By harnessing the power of these specialized tools and strategies, you equip yourself with a robust framework that promotes productivity, collaboration, and continuous improvement. Embrace this comprehensive toolkit, and you’ll find yourself not just staying afloat, but confidently steering towards success.

    Finally, I would like to add that my personal goal for this article was to answer a call for a LinkedIn advice question in no time (https://www.linkedin.com/advice/3/youre-drowning-deadlines-tasks-computer-science-mfa1f). Such a question typically contains half a dozen sections with bullet points. I found that it is much easier for me to write an article like this using ChatGPT-o1 and manually improving the result with selective changes using Cursor AI (https://www.cursor.com/). A beautiful side effect is that the sections are answered in a connected way and not separated from each other. I hope this can be helpful for you as well. Thank you for reading, and forgive me for not having spent more time on a perfectly generated image.

  • Exploring AI’s Impact on Jobs: Task-Based Analysis

    AI Integration in Systems Engineering, produced using DALL-E, 2024-05-31

    In the landscape of rapid technological advancement, the intersection of Artificial Intelligence (AI) and job roles is a topic of intense discussion. As Salman Khan insightfully points out in his new book “Brave New Words,” people aren’t replaced by AI but by individuals who leverage AI more productively. This perspective, while intriguing, calls for a deeper exploration of how AI impacts various job roles in today’s market.

    Understanding AI’s Impact on Systems Engineering

    One fascinating approach to gauging AI's influence on jobs is proposed by There’s an AI for That. The website offers an ‘Impact Index’ for various job roles, including Software Engineer, Systems Engineer, Mechanical Engineer, and thousands of others. This index is calculated based on the number of AI applications that support tasks related to these roles. While specific and limited, this approach provides a more concrete perspective than speculative opinions like ‘I believe AI will never …’ or ‘Everybody will lose their jobs’, which are vague beliefs at best and sometimes even cause unnecessary alarm.

    Breaking Down Job Roles into Tasks

    To truly understand AI’s impact on jobs, it’s essential to break them down into their constituent tasks. For each task, we must examine whether AI can already support it effectively, whether AI needs refinement or regulation, or whether AI is currently incapable of supporting the task.

    Sal Khan’s book serves as a real eye-opener in this context, illuminating what can be achieved in one year if you go all in. For example, the one-on-one AI tutor ‘Khanmigo’, which initially went live alongside GPT-4, is designed not to give the final answers right away but to guide the student step by step (a so-called Socratic tutor). It can summarize the learning process and give valuable feedback to the teacher, including recommendations. It can even identify weak spots the teacher might wish to focus on. And it assists the teacher in creating lesson plans, saving a lot of time and freeing resources.

    Drawing inspiration from this work, one could investigate systems engineering in a similar vein and develop an ‘MBSEamigo’ analogous to ‘Khanmigo’. The core idea here is to redefine roles based on tasks. In situations where role definitions vary, the strategy would be to identify common denominators and refine roles down to the tasks.

    For instance, consider the role of a systems engineer. This job can be broken down into approximately 1,000 tasks. Each task can then be assessed for AI impact. Here’s a glimpse into some typical tasks and their AI impact:

    • Running Coach: AI Impact: 100% | AIs: 12
    • Codebase Q&A: AI Impact: 95% | AIs: 9
    • Problem Solving: AI Impact: 90% | AIs: 15
    • Diagrams: AI Impact: 50% | AIs: 12
    • Data Protection: AI Impact: 50% | AIs: 4
    • Conversations with Clients: AI Impact: 50% | AIs: 1
    • Audits: AI Impact: 5% | AIs: 1
    • 3D Images: AI Impact: 5% | AIs: 22
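
    As a minimal sketch, the task-level breakdown above can be aggregated into a role-level index. The task names and impact percentages are taken from the list above; the `Task` structure and the plain-average aggregation are illustrative assumptions, since the actual index calculation of There’s an AI for That is not public.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        ai_impact: float  # fraction of the task AI can support today (0.0-1.0)
        num_ais: int      # number of AI applications supporting the task

    # Subset of the systems-engineer tasks listed above
    tasks = [
        Task("Running Coach", 1.00, 12),
        Task("Codebase Q&A", 0.95, 9),
        Task("Problem Solving", 0.90, 15),
        Task("Diagrams", 0.50, 12),
        Task("Data Protection", 0.50, 4),
        Task("Conversations with Clients", 0.50, 1),
        Task("Audits", 0.05, 1),
        Task("3D Images", 0.05, 22),
    ]

    def role_impact_index(tasks):
        """Aggregate task-level impact into a role-level index.
        A plain average here; a real index would weight tasks by
        time spent or importance."""
        return sum(t.ai_impact for t in tasks) / len(tasks)

    print(f"Role-level AI impact: {role_impact_index(tasks):.0%}")
    ```

    Extending this with per-task weights (e.g. hours per week spent on each task) would turn the rough average into a personalized impact estimate for a specific engineer’s role.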

    A Closer Look at Key Tasks

    Running Coach (AI Impact: 100%)

    In systems engineering, optimizing processes and workflows is crucial. AI tools can act as running coaches, continuously analyzing and improving these processes with precision and speed. For example, AI can automate routine checks and suggest improvements, ensuring systems run smoothly and efficiently.

    Codebase Q&A (AI Impact: 95%)

    AI-driven tools like code analyzers and automated testing frameworks significantly enhance codebase management. They can identify bugs, suggest fixes, and predict potential issues before they become critical, thereby reducing downtime and increasing productivity.

    Problem Solving (AI Impact: 90%)

    AI excels in problem-solving by offering data-driven insights and predictive analytics. For systems engineers, this means quicker diagnosis of issues and more effective solutions. AI can simulate various scenarios to find optimal solutions, saving time and resources.

    Applying Foundational Engineering Principles

    Incorporating AI into systems engineering must align with foundational engineering principles. This means ensuring that AI tools are used to enhance precision, reliability, and efficiency. Systems engineers should focus on maintaining robustness and accuracy in their projects while leveraging AI to handle repetitive and data-intensive tasks.

    To do this:

    • Verification and Validation: Regularly test AI tools to ensure they produce reliable and accurate results.
    • System Integration: Seamlessly integrate AI into existing systems without disrupting core functionalities.
    • Continuous Improvement: Use AI for ongoing analysis and optimization of engineering processes.
    • Documentation and Transparency: Keep thorough documentation of AI’s role and decisions in the engineering process for transparency and traceability.
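
    The ‘Verification and Validation’ point above can be made concrete with a small regression-test harness: maintain curated input/expected pairs and check the AI tool against them before relying on it in a workflow. Everything here (`ai_summarize`, the golden cases, the pass-rate threshold) is a hypothetical stand-in, not the API of any specific tool.

    ```python
    def ai_summarize(text: str) -> str:
        # Placeholder for a real AI call (e.g. an LLM API);
        # here it simply returns the first sentence.
        return text.split(".")[0] + "."

    # Curated input/expected pairs, maintained like any other test fixture.
    golden_cases = [
        ("The pump failed. Root cause was seal wear.", "The pump failed."),
    ]

    def validate(tool, cases, min_pass_rate=1.0):
        """Run the tool on each curated case and report whether the
        pass rate meets the acceptance threshold."""
        passed = sum(1 for inp, expected in cases if tool(inp) == expected)
        rate = passed / len(cases)
        return rate >= min_pass_rate, rate

    ok, rate = validate(ai_summarize, golden_cases)
    print(f"pass rate: {rate:.0%}, acceptable: {ok}")
    ```

    Re-running such a harness whenever the underlying model or prompt changes gives the kind of traceable evidence the documentation-and-transparency practice calls for.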

    The Future of Systems Engineering with AI

    The journey of AI in systems engineering is just beginning. By continually refining AI tools and expanding their capabilities, we can create more efficient, innovative, and secure systems.

    As highlighted in Engineering.com, AI has the potential to revolutionize engineering by automating complex tasks, predicting system behaviors, and providing advanced analytics. Currently, using tools requires understanding how to manipulate and connect various elements, which involves tedious tasks like moving the mouse around. This repetitive work, similar to past challenges in the CAD world, is expected to be minimized in the future.

    SElive also identifies potential AI use cases. This integration not only boosts productivity but also fosters innovation by enabling engineers to focus on more strategic and creative aspects of system development. One notable example is the Technology Summarizer, which employs AI algorithms to analyze and condense vast amounts of technical documentation. This tool helps engineers quickly grasp essential information, stay updated with the latest advancements, and make informed decisions without being bogged down by extensive reading.

    The upcoming book “AI Assisted MBSE with SysML” by Doug Rosenberg and Tim Weilkiens (mbse4u.com) explores the integration of Artificial Intelligence in Model-Based Systems Engineering (MBSE) using the Systems Modeling Language (SysML). It highlights how AI can automate and enhance various MBSE tasks and provides methodologies alongside a comprehensive, step-by-step design of a Scanning Electron Microscope.

    Final Thoughts

    The integration of AI in the job market isn’t about replacement but about redefining roles and tasks so AI can be leveraged for increased productivity. The key lies in understanding AI’s impact on individual tasks and adapting job roles accordingly. As AI continues to evolve, our approach to integrating it into our workflows must also adapt, ensuring we stay ahead in this dynamic field.

    I am convinced that we will again see a positive net job effect, as we have with earlier waves of automation. To the younger generations: experts are still needed. Do not let social media make you panic! Anybody can get something out of ChatGPT, but only the expert speaks the sophisticated language of a discipline, achieving much better results, which in turn need to be curated by that very same expert.