There are many domain experts working for IntelliMagic. The combined centuries of expertise of these experts are built into our products: that is what we call embedded expert knowledge.

    What does Embedded Expert Knowledge mean and offer?

    • Raw Data

      All the different platforms that IntelliMagic supports for SAN and z/OS environments generate data about their configuration, capacity, activity and performance. Just collecting this raw data and presenting it in a database or set of graphs does not unlock the value that is implicitly present in the data.

    • Meaningful Information

      The way that this raw data is turned into meaningful information is what makes IntelliMagic unique. We investigated the architecture and all the metrics in the raw data extensively and embedded our knowledge gained over the past 25+ years into the software.

    • Different From Statistics

      This is very different from standard IT Operations Analytics (ITOA) solutions that rely on statistical anomaly detection. Detecting statistical outliers can be useful, but the weakness of the statistical approach is that it neither predicts upcoming issues nor shows root causes. Using correlation rather than interpretation does not distinguish cause from effect or explain why an anomaly occurs. Another problem with anomaly detection is that it often generates many false alerts that send you chasing issues that are not real.
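      The contrast can be sketched in a few lines. This is an illustrative toy, not IntelliMagic's implementation: the latency series, the z-score cutoff, and the 5 ms safe limit are all assumptions invented for the example. A purely statistical detector flags a harmless blip, while a check grounded in domain knowledge sees that the values never leave the safe operating range.

```python
# Toy comparison: statistical outlier detection vs. a knowledge-based check.
# The latency values, cutoff, and 5 ms safe limit are illustrative assumptions.
from statistics import mean, stdev

latency_ms = [1.0, 1.1, 0.9, 1.0, 2.4, 1.0, 1.1, 0.9, 1.0, 1.1]

def zscore_outliers(series, cutoff=2.0):
    """Flag points more than `cutoff` standard deviations from the mean."""
    m, s = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if abs(x - m) / s > cutoff]

SAFE_LIMIT_MS = 5.0  # assumed domain knowledge about this device class

statistical_alerts = zscore_outliers(latency_ms)
knowledge_alerts = [i for i, x in enumerate(latency_ms) if x > SAFE_LIMIT_MS]

print(statistical_alerts)  # [4]: a false alert on a harmless 2.4 ms blip
print(knowledge_alerts)    # []: every value is inside the safe range
```

      The statistical detector raises an alert purely because one value deviates from the recent pattern; the knowledge-based check stays quiet because it knows what range is actually safe for the hardware.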

    • Fundamental Understanding

      IntelliMagic software has a built-in fundamental understanding of how workloads, logical concepts and physical hardware interact. By embedding human expertise in the software, the full potential of the data is unlocked. The software detects risks before issues impact production. It also enables users to find true root causes quickly and reliably. Furthermore, it highlights where there is tangible potential for optimization, and ultimately it gives IT staff what they need to be most effective in delivering a reliable datacenter at an optimal cost level.

    How Embedded Expert Knowledge is implemented in IntelliMagic software

    Our expert knowledge is embedded on two levels:

    1. Data to information phase – When processing the raw data that comes from the SAN or z/OS systems
    2. Information to intelligence phase – When evaluating the information

    Data to Information

    The following are some ways in which our knowledge is embedded into our software in the ‘data to information’ phase:

    • which data sources deliver the necessary information,
    • which metrics contain valuable information and which ones are superfluous or meaningless,
    • what the exact meaning of a certain metric is for each different platform,
    • how metrics should be normalized, and against which other metrics,
    • which metrics may be combined to provide brand new information.
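    As a toy illustration of this phase, consider turning raw interval counters into normalized metrics. The field names, interval length, and counter values below are invented for the sketch and do not reflect actual record layouts or IntelliMagic's processing:

```python
# Toy 'data to information' step: raw interval counters become normalized,
# meaningful metrics. Field names and values are invented for the example.
interval_sec = 900  # assumed 15-minute collection interval

raw_record = {  # hypothetical raw counters for one device
    "read_ios": 180_000,
    "write_ios": 60_000,
    "service_time_total_ms": 336_000,
}

ios = raw_record["read_ios"] + raw_record["write_ios"]

info = {
    # normalize activity against the interval length
    "io_rate_per_sec": ios / interval_sec,
    # normalize accumulated service time against the I/O count
    "avg_service_time_ms": raw_record["service_time_total_ms"] / ios,
    # combine two raw counters into a brand-new metric
    "read_pct": 100.0 * raw_record["read_ios"] / ios,
}

print(info["avg_service_time_ms"])  # 1.4
print(info["read_pct"])             # 75.0
```

    The raw counters by themselves say little; the normalized rate, per-I/O service time, and read percentage are what an analyst can actually interpret.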

    This layer is mostly invisible, but it is a critically important prerequisite for the more visible types of embedded knowledge that produce the dashboards, thresholds, ratings, drill downs, recommendations and explanations.

    Information to Intelligence

    In the ‘information to intelligence’ stage we further interpret the information with additional embedded expert knowledge to create true Availability Intelligence.

    Here are some examples of embedded expert knowledge used to turn information into intelligence:

    • utilization levels for internal components that are not measured, computed based on knowledge of the architectural throughput limits,
    • which of the vast number of metrics are the most important to put in a dashboard to monitor for hidden issues,
    • which type of visualization is the most applicable for each metric and detail level,
    • what performance you should expect for a particular combination of workload and configuration, resulting in dynamic thresholds that consider the hardware and workload interaction,
    • what sort of thresholds are relevant for each metric, e.g. fixed or workload-dependent,
    • what default levels of the thresholds should be for different configurations,
    • how thresholds should be configurable by the user,
    • for which metrics it is relevant to set Service Level Objectives,
    • what it means when a performance value is outside of the safe range, displayed in explanation fields that point to potential root causes,
    • the potential causes of the exceptions, allowing users to further drill down to find the root cause,
    • which other metrics to look at when looking at a certain metric, as presented in multicharts,
    • how to drill down to relevant connected information from each certain point onwards.
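    Two of the items above can be sketched concretely: deriving a utilization for a component that is not directly measured, and a threshold that moves with the workload. The 2000 MB/s limit and the threshold formula are illustrative assumptions, not IntelliMagic's actual models:

```python
# Toy examples: (1) a utilization derived for an unmeasured component from
# an assumed architectural limit, and (2) a dynamic threshold that depends
# on workload instead of being a fixed value. The 2000 MB/s limit and the
# threshold formula are illustrative only.
PORT_LIMIT_MBPS = 2000.0  # assumed architectural throughput limit

def port_utilization(throughput_mbps):
    """Utilization (%) of a component that reports throughput but not busy time."""
    return 100.0 * throughput_mbps / PORT_LIMIT_MBPS

def dynamic_latency_threshold(io_rate):
    """Allowed latency (ms) that scales with workload intensity.
    Light load should deliver low latency; heavy load may tolerate more."""
    base_ms = 1.0
    return base_ms * (1.0 + io_rate / 10_000.0)

util = port_utilization(1500.0)          # 75.0% busy
limit = dynamic_latency_threshold(5000)  # 1.5 ms allowed at this I/O rate
observed_ms = 2.1
if observed_ms > limit:
    print(f"Exception: {observed_ms} ms exceeds the {limit} ms expected "
          f"for this workload (port {util:.0f}% utilized)")
```

    The point of the sketch is the shape of the logic, not the numbers: the threshold is computed from the workload and configuration at hand, and the exception message explains why the value is considered unsafe rather than merely flagging it as unusual.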

    Want to learn more and see examples?

    If you want to know more details and see examples of how this results in superior availability and reduced cost levels, make an appointment for a free demonstration.
