Examples of various SMF metrics captured in MQ Accounting data that provide insights into detailed MQ activity. This video covers:
- Buffer pools
- Thread Elapsed Time
- CPU per get
- Elapsed time per get
- IMS messages
More MQ Accounting Videos
- Overview of MQ Accounting Data
- Viewing Accounting Data by Queue Level
- Viewing Accounting Data by Connection Type
- Selected Accounting Data Metrics – Part 1
- Selected Accounting Data Metrics – Part 2
- Sample MQ Statistics and Accounting Dashboards
Video Transcript
So, another starting point. We started from queue names, and we started from connection type; another way you might want to look at the data is by buffer pool. If you have a situation where you’re experiencing an unexpectedly high level of utilization and activity in a specific buffer pool, and you want to know where it’s coming from, you could do it here. So for this high-volume buffer pool, we could look at it by queue, and we’d see that almost all the messages there relate to that particular queue; or we could look at it by connection type and see that it’s pretty evenly distributed there as well. Let’s put that in my accounting dashboard.
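To make that drill-down concrete outside the tool, here is a minimal sketch of the same idea, assuming the MQ accounting (SMF 116) records have already been parsed into a table; the column names (buffer_pool, queue, connection_type, msg_count) are illustrative, not actual SMF field names.

```python
import pandas as pd

# Hypothetical parsed MQ accounting records; column names are illustrative only.
recs = pd.DataFrame({
    "buffer_pool":     [3, 3, 3, 2, 3],
    "queue":           ["APP.REQUEST", "APP.REQUEST", "APP.REPLY", "SYS.ADMIN", "APP.REQUEST"],
    "connection_type": ["CICS", "IMS", "CICS", "BATCH", "IMS"],
    "msg_count":       [12000, 11500, 800, 50, 11800],
})

# Isolate the high-volume buffer pool, then see which queues (or connection types) drive it.
hot = recs[recs["buffer_pool"] == 3]
print(hot.groupby("queue")["msg_count"].sum().sort_values(ascending=False))
print(hot.groupby("connection_type")["msg_count"].sum())
```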
All right, the accounting data also captures Thread Elapsed Time, broken out across many MQ components. So the majority of the elapsed time here is in the channel initiator. If we go ahead and look at the CICS transactions, we can see the elapsed time profile across the different transactions. And so the elapsed time in MQ for the work coming from CICS is largely suspend time, journal write time, and prepare and commit time, and we can see that profile across the various transactions.
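As a rough illustration of that breakdown, the sketch below sums hypothetical per-record elapsed-time components (suspend, journal write, prepare/commit) by CICS transaction; the field names are assumptions for the example, not the actual SMF 116 fields.

```python
import pandas as pd

# Hypothetical per-thread accounting records for CICS-originated work (times in microseconds).
recs = pd.DataFrame({
    "transaction": ["TRN1", "TRN1", "TRN2", "TRN2", "TRN3"],
    "suspend_us":  [400, 350, 1200, 1100, 90],
    "journal_us":  [250, 240, 300, 310, 40],
    "commit_us":   [100, 110, 150, 140, 20],
})

# Elapsed-time profile per transaction, broken out by component.
profile = recs.groupby("transaction")[["suspend_us", "journal_us", "commit_us"]].sum()
print(profile)
```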
All right. So we talked about total CPU earlier. Now we may be interested in looking at CPU per get or elapsed time per get, so let’s go ahead and take a look at that. This view is CPU per get, and in this environment it’s higher for IMS than it is for the other connection types. We mentioned earlier that you often want to analyze IMS work by application, which is the PSB, or Program Specification Block, so let’s go ahead and do that for the IMS work. We’ll set the level to be the PSB name, and we’ll see the work by the IMS applications. Again, there are lots and lots of players there, so let’s look at only the PSBs that generate the most get calls. And when we do that, we can see that the high-volume ones have really comparable CPU per get. So let’s go ahead and capture that one in the dashboard.
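The derived metric itself is just total CPU time divided by the number of get calls. A minimal sketch, again with illustrative column names (psb_name, cpu_us, get_calls), of computing CPU per get by PSB and keeping only the highest-volume PSBs:

```python
import pandas as pd

recs = pd.DataFrame({
    "psb_name":  ["PSBA", "PSBA", "PSBB", "PSBC", "PSBB"],
    "cpu_us":    [5000, 5200, 480, 90000, 510],   # CPU time in microseconds (illustrative)
    "get_calls": [1000, 1040, 100, 20, 105],
})

by_psb = recs.groupby("psb_name")[["cpu_us", "get_calls"]].sum()
by_psb["cpu_per_get_us"] = by_psb["cpu_us"] / by_psb["get_calls"]

# Keep only the PSBs that issue the most get calls, then compare their CPU per get.
top = by_psb.nlargest(2, "get_calls")
print(top[["get_calls", "cpu_per_get_us"]])
```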
All right. So along with CPU per get, if we’re interested in elapsed time per get, we can do that too, and here we see a pretty consistent profile across the different connection types, except there’s this spike in the IMS work around 10:00 AM. So again, let’s follow the same thing we did before: go ahead and isolate it, and then look at it by PSB name to see what’s driving that. Again, there are a lot of players, so let’s filter to the top issuers of get calls. Okay. When we do that, we see that this particular PSB is the one with the longer elapsed time that drove the spike in the average we saw earlier; let’s go ahead and capture that.
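Why does one PSB’s long elapsed time show up as a spike in the overall average? Because the blended elapsed time per get for an interval is total elapsed time divided by total gets, a single PSB with long waits can pull the whole average up even when the other PSBs are unchanged. A small, purely hypothetical arithmetic example:

```python
# Hypothetical interval: two PSBs at a normal 2 ms per get, one outlier with long waits.
psbs = {
    "PSBA": {"elapsed_ms": 2000, "gets": 1000},   # 2 ms per get
    "PSBB": {"elapsed_ms": 1800, "gets": 900},    # 2 ms per get
    "PSBC": {"elapsed_ms": 5000, "gets": 50},     # 100 ms per get -- the outlier
}

total_elapsed = sum(p["elapsed_ms"] for p in psbs.values())
total_gets = sum(p["gets"] for p in psbs.values())
print(total_elapsed / total_gets)   # ~4.5 ms per get, more than double the 2 ms baseline
```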
All right. The MQ accounting data also reports on message lengths in a couple of different ways. One way is that it captures the distribution of lengths, grouped into buckets: messages under a hundred bytes, under a thousand, under 10,000, or over 10,000. So here we’re reviewing that data by connection type, and we’re seeing that pretty much all the IMS messages are between a hundred and a thousand bytes, but for the CICS work there’s a sizable piece that is over a thousand bytes. So again, if we want to look at that by transaction, we can see that almost all of those larger messages are coming from these two transactions. Let’s go ahead and capture that in our dashboard.
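A minimal sketch of that same bucketing, assuming per-message lengths are available in a column named msg_len (an illustrative name); the boundaries mirror the ones described above.

```python
import pandas as pd

msgs = pd.DataFrame({
    "connection_type": ["IMS", "IMS", "CICS", "CICS", "CICS"],
    "msg_len":         [250, 640, 80, 4200, 15000],   # bytes (illustrative)
})

# Same buckets as described: under 100, under 1,000, under 10,000, and 10,000 or more bytes.
msgs["bucket"] = pd.cut(msgs["msg_len"],
                        bins=[0, 100, 1000, 10000, float("inf")],
                        labels=["<100", "100-999", "1000-9999", ">=10000"],
                        right=False)
print(msgs.groupby(["connection_type", "bucket"], observed=False).size())
```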
If we look at this first transaction, the area charts are a fine way to look at it, but let me switch to a line chart. We can see that the volume of messages, for both the under-1K and over-1K messages, is declining over the interval. And this corresponds to the transaction we saw earlier where the CPU time was declining over the interval, so you can see that it’s because fewer messages are occurring as the interval goes on. All right, let’s go ahead and capture that. So in addition to the message size buckets, the accounting data also captures minimum and maximum message lengths. Here, for the put messages, we can see that the largest IMS messages are pretty consistent, but the CICS messages are bouncing around. And again, as you could guess, we could look at that by transaction and see which transactions are responsible for the largest messages.
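And the minimum/maximum view is just an aggregation over the same kind of data; a sketch assuming a put-message length column and an interval timestamp, both with illustrative names:

```python
import pandas as pd

puts = pd.DataFrame({
    "interval":        ["09:00", "09:00", "09:15", "09:15"],
    "connection_type": ["IMS", "CICS", "IMS", "CICS"],
    "put_msg_len":     [512, 30000, 530, 2200],   # bytes (illustrative)
})

# Minimum and maximum put-message length per interval and connection type.
print(puts.groupby(["interval", "connection_type"])["put_msg_len"].agg(["min", "max"]))
```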