Greetings. I've currently got an AG used for a large Data Warehouse environment. Of course I know an AG isn't ideal for this setting, but it's the hand I've been dealt. Anyway, I've recently discovered from this thread that there's really no good way to measure AG latency, and I'm wondering if I could somehow roll my own. We have about 15 DBs in our AG, but the latency of only one of them is really critical to know. That said, my hokey idea is as follows:
On the Primary:
Create a table named agInsertTime in this DB that simply has an identity field and a dateTime field. Once a minute, a job on the Primary will insert a GETDATE() value into the dateTime column of this table, of course generating the next identity value as well.
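Roughly something like this; the column names are just placeholders I made up:

CREATE TABLE dbo.agInsertTime
(
    id         INT IDENTITY(1,1) PRIMARY KEY,
    insertTime DATETIME NOT NULL
);

-- Job step on the Primary, scheduled once a minute:
INSERT INTO dbo.agInsertTime (insertTime)
VALUES (GETDATE());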
Also create another table, named agRetrievalTime, in a DB that's NOT in the AG. It will have an INT column and two dateTime columns. Once a minute, a job will query the max identity value from the agInsertTime table on the Secondary, along with that row's dateTime value and the current GETDATE() value, and insert all three into agRetrievalTime.
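Again a rough sketch; the linked server and database names here are just placeholders for however the job would actually reach the readable Secondary:

CREATE TABLE dbo.agRetrievalTime
(
    id            INT NOT NULL,      -- identity value read from the Secondary
    insertTime    DATETIME NOT NULL, -- dateTime value carried over from agInsertTime
    retrievalTime DATETIME NOT NULL  -- GETDATE() at the moment of the read
);

-- Job step, scheduled once a minute. TOP (1) ... ORDER BY id DESC
-- grabs the max identity value and its dateTime in a single read:
INSERT INTO dbo.agRetrievalTime (id, insertTime, retrievalTime)
SELECT TOP (1) t.id, t.insertTime, GETDATE()
FROM [SecondaryServer].[AGDatabase].dbo.agInsertTime AS t
ORDER BY t.id DESC;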
I'll then query the agRetrievalTime table for the difference between the two dateTime values, grouped by the identity value, and look for the row with the highest difference.
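Something along these lines; the second-level DATEDIFF granularity is just an assumption on my part:

SELECT   id,
         MAX(DATEDIFF(SECOND, insertTime, retrievalTime)) AS worstLatencySeconds
FROM     dbo.agRetrievalTime
GROUP BY id
ORDER BY worstLatencySeconds DESC;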
Pretty sure this would work and wouldn't be all that difficult. What I don't know is what happens in this scenario:
1. A massive DML statement goes into a real table.
2. Before the data from number 1 makes it to the Secondary, the job/insert statement I've described above runs.
Will number 2 have to wait for number 1 to commit on the Secondary before it commits, or could the new record from number 2 possibly get there before number 1? If number 2 can arrive and be committed on the Secondary before number 1, this whole approach is doomed.
Thoughts?
Thanks in advance! ChrisRDBA