After my last post about the SQL*Net message to client wait event, I got a follow-up question about the difference between the SQL*Net message to client and SQL*Net more data to client wait events. I’ll post the answer here:
The first session data unit (SDU) bufferful of return data is written to the TCP socket buffer under the SQL*Net message to client wait event. If Oracle needs to return more result data for a call than fits into the first SDU buffer, then further writes for that call are done under the SQL*Net more data to client event.
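To make that concrete (illustrative numbers of my own, not measurements from the test below): with a 2 KB SDU, which was the usual default size, a fetch call returning roughly 10 KB of result data needs about five SDU buffer writes, so you would see one SQL*Net message to client wait followed by around four SQL*Net more data to client waits.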
So, whether and how much of the SQL*Net more data to client vs. SQL*Net message to client waits you see depends on two things:

- Amount of data returned to client per call
- Oracle Net SDU size (see the configuration sketch after this list)
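The SDU size is negotiated between client and server at connect time and can be set in the Oracle Net configuration, if you want to experiment with it yourself. A minimal sketch, assuming a client-side sqlnet.ora and a tnsnames.ora alias (the SALES alias, host and service names are placeholders of mine, not from the test below):

```
# sqlnet.ora -- connection-wide default SDU size in bytes
DEFAULT_SDU_SIZE = 8192

# tnsnames.ora -- per-alias SDU override for one connect descriptor
SALES =
  (DESCRIPTION =
    (SDU = 8192)
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = sales))
  )
```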
A quick test with my Snapper performance tool illustrates this:
In one session, run the following:
```
SQL> set arraysize 1
SQL>
SQL> select rownum from dba_source;

    ROWNUM
----------
         1
         2
         3
         4
         5
         6

...many rows returned...
```
And I measure the waits in the other session:
```
SQL> @snapper out,gather=w 5 1 141

-- Session Snapper v1.06 by Tanel Poder
---------------------------------------------------------------------------------------------------------------------------------------------
 HEAD,  SID, SNAPSHOT START    , SECONDS, TYPE, STATISTIC                    ,    DELTA,  D/SEC,  HDELTA,   HD/SEC
---------------------------------------------------------------------------------------------------------------------------------------------
 DATA,  141, 20080210 14:59:55,        5, WAIT, SQL*Net message to client    ,     6750,   1350,  6.75ms,   1.35ms
 DATA,  141, 20080210 14:59:55,        5, WAIT, SQL*Net message from client  ,  4668258, 933652,   4.67s, 933.65ms

-- End of snap 1
```
No SQL*Net more data to client waits show up above, as with the arraysize = 1 setting Oracle fetches only 2 rows at a time, so the result set returned per fetch call always fits into the first SDU-sized packet (the trace sketch below shows one way to verify this). Note that the reason Oracle fetches 2 rows when arraysize is 1 is an OCI optimization which tries to be clever and proactively detect the end-of-result-set condition when fetching a single row. That way there’s a chance to automatically cancel the cursor and release its pins without an explicit cursor cancel or close call, which should have a positive effect on reducing shared pool fragmentation in some cases.
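A quick way to verify the 2-rows-per-fetch behavior is a 10046 SQL trace, which records the row count of every fetch call. A sketch (the trace file name and location depend on your platform and init parameters):

```
SQL> alter session set events '10046 trace name context forever, level 8';
SQL> set arraysize 1
SQL> select rownum from dba_source;
```

In the resulting trace file, every FETCH line for this cursor should show r=2, that is, two rows returned per fetch call.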
Anyway, now I’ll set the arraysize to 5000 and run the same SQL again:
```
SQL> set arraysize 5000
SQL>
SQL> select rownum from dba_source;

    ROWNUM
----------
         1
         2
         3
         4
         5
         6

...many rows returned...
```
And the Snapper output is below:
```
SQL> @snapper out,gather=w 5 1 141

-- Session Snapper v1.06 by Tanel Poder
---------------------------------------------------------------------------------------------------------------------------------------------
 HEAD,  SID, SNAPSHOT START    , SECONDS, TYPE, STATISTIC                    ,    DELTA,  D/SEC,  HDELTA,   HD/SEC
---------------------------------------------------------------------------------------------------------------------------------------------
 DATA,  141, 20080210 15:00:35,        5, WAIT, SQL*Net message to client    ,       32,      6,    32us,    6.4us
 DATA,  141, 20080210 15:00:35,        5, WAIT, SQL*Net more data to client  ,     2925,    585,  2.93ms,    585us
 DATA,  141, 20080210 15:00:35,        5, WAIT, SQL*Net message from client  ,  4861859, 972372,   4.86s, 972.37ms

-- End of snap 1
```
Now we see SQL*Net more data to client waits as well, since the 5000 rows returned for every fetch call just don’t fit into a single SDU buffer.
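By the way, if you don’t have Snapper at hand, a plain query against V$SESSION_EVENT shows similar numbers, just cumulative since session start rather than as deltas over a snapshot interval. A sketch (141 is the SID of the test session above, and the TIME_WAITED_MICRO column is available from 10g onwards):

```
SQL> select event, total_waits, time_waited_micro
  2  from   v$session_event
  3  where  sid = 141
  4  and    event like 'SQL*Net%client';
```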
I’ll reiterate that both SQL*Net message to client and SQL*Net more data to client waits only record the time it took to write the return data from Oracle’s userland SDU buffer into the OS kernel-land TCP socket buffer. Thus the wait times can show only microseconds even over high-latency WAN connections. Consequently, all of the time a TCP packet spends “flying” towards the client is actually accounted for under the SQL*Net message from client wait statistic. The problem, though, is that we don’t know how much of that time was spent on the wire and how much of it was application think time.
Therefore, unless you’re going to buy a tool which is able to interpret TCP ACK echo timestamps, you need to measure network latency using application-side instrumentation.
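A crude sketch of such instrumentation (my example, not from the original test): time a statement which does almost no server-side work, so that nearly all of its elapsed time is round-trip latency:

```
SQL> set timing on
SQL> select * from dual;
```

The Elapsed: line SQL*Plus prints after the query then approximates one full client-server round trip plus a little client overhead; comparing it between a local and a remote connection gives a rough feel for the network component.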