This is the last debugging lab in the .NET Debugging Demos series. By now you should have the basics down for troubleshooting hangs, performance issues, memory leaks and crashes in .NET applications. I hope you have enjoyed your debugging sessions.
The last lab in the series covers a managed memory leak caused by holding on to resources in an unexpected way. Since it is the last one I have tried to make the questions a little less leading than the previous ones :)
Previous demos and setup instructions
If you are new to the debugging labs, here you can find information on how to set up the labs as well as links to the previous labs in the series.
Information and setup instructions
Lab 1: Hang
Lab 1: Hang - review
Lab 2: Crash
Lab 2: Crash - review
Lab 3: Memory
Lab 3: Memory - review
Lab 4: High CPU hang
Lab 4: High CPU hang - review
Lab 5: Crash
Lab 5: Crash - review
Lab 6: Memory Leak
Lab 6: Memory Leak - review
The problem description is very similar to Lab 6.
We have started getting out of memory exceptions on the buggy bits site and we have been able to determine a scenario in which we think we are leaking memory but we can't seem to figure out where the memory is going.
The leak seems to be occurring on our News page, for example, and we can reproduce it by stress testing.
It seems to leak only a small amount each time, but since it is a page that customers look at a lot, over time the process will crash with an out-of-memory exception.
Reproduce the issue and gather data:
1. Restart IIS (iisreset)
2. Browse to http://localhost/BuggyBits/News.aspx
3. Set up performance monitoring per Lab 3 and start monitoring the performance
4. Stress the application with tinyget (tinyget -srv:localhost -uri:/BuggyBits/News.aspx -threads:50 -loop:20)
5. After tinyget has finished, get a hang dump with adplus (adplus -hang -pn w3wp.exe -quiet)
6. Stop the performance monitor log
Review the performance monitor log to figure out what we are leaking:
1. Open the log in Performance Monitor and look at the following counters (set the scale appropriately so that you can see the graphs in the window)
.NET CLR Memory\# Bytes in all heaps
.NET CLR Memory\# Total committed bytes
.NET CLR Memory\# Total reserved bytes
2. Compare Private Bytes, Virtual Bytes and # Bytes in all heaps
Q: Do the graphs for these 3 counters follow each other or do they diverge? Based on this, can you tell if the issue we are facing is a virtual bytes leak, a native leak or a .NET leak?
Debug the memory dump
1. Open the memory dump, load up the symbols and load sos.dll (see information and setup instructions for more info)
Q: What is the size of the memory dump (on disk)?
2. Run !eeheap -gc and !dumpheap -stat
Q: What is the size of the .NET heap according to !eeheap -gc?
Q: Is most of the memory stored on the regular heaps or on the large object heap?
We saw from the performance monitor log that we appeared to be leaking .NET memory, so the next step is to determine what the memory is used for.
3. Run !dumpheap -stat
Q: What types of objects seem to use up most of the memory?
Q: Looking at the 10-20 bottommost object types in !dumpheap -stat, can you see any patterns among the objects? That is, can they be grouped in any way?
Normally you will be able to see patterns such as many data-related objects, many UI-related objects, many XML-related objects etc., like in this memory investigation.
Q: Looking at the 10-20 bottommost object types in !dumpheap -stat, do the quantities of each of them seem normal or does anything seem out of the ordinary?
4. Dump the large objects using !dumpheap -min 85000
Q: What type of objects are stored on the LOH?
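As background for this step: in the .NET Framework, allocations of 85,000 bytes or more go directly on the large object heap, which is why !dumpheap -min 85000 lists the LOH objects. A minimal C# sketch of the threshold (the sizes here are just illustrations, not the lab's code):

```csharp
using System;

class LohDemo
{
    static void Main()
    {
        byte[] small = new byte[1000];    // allocated on the small object heap
        byte[] large = new byte[100000];  // >= 85,000 bytes, allocated on the LOH

        // The LOH is collected together with gen 2, so GC.GetGeneration
        // reports 2 for a freshly allocated large object.
        Console.WriteLine(GC.GetGeneration(small)); // typically 0
        Console.WriteLine(GC.GetGeneration(large)); // 2
    }
}
```

This is also why large-object leaks hurt so much: the LOH is only collected with full (gen 2) collections, and in this version of the framework it is never compacted.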
5. Take any of the objects on the LOH and run !gcroot <object address> to find out where they are rooted
Q: What does DOMAIN(001CCA88):HANDLE(Strong) stand for?
Q: What types of roots would you normally see when running !gcroot and what do they mean?
An example of a root chain (if we are looking to see why the string at address 02ebf628 is still around) would look like this:
DOMAIN(001CCA88):HANDLE(Strong) -> 02ec05c0 (MyNamespace.Person) -> 02ec0578 (MyNamespace.Address) -> 02ebf628 (System.String)
In this case we have a strongly rooted (static) object of type MyNamespace.Person at 02ec05c0. Person has a member variable of type Address at 02ec0578, and Address has a member variable of type string at 02ebf628, so the reason the string is still around is that it is linked to an Address, which is linked to a Person, and that Person is stored as a static variable. Unless the chain is broken, the string will not be eligible for collection until the application domain is recycled, since static objects never go out of scope.
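The kind of code that produces a chain like the one above might look like this (a hypothetical sketch; the type and member names are illustrations only):

```csharp
public class Person
{
    // A static field is a strong root: everything reachable from it
    // stays alive until the application domain is recycled.
    public static Person Current = new Person();

    public Address HomeAddress = new Address();
}

public class Address
{
    // As long as Person.Current is reachable, this string is too,
    // which is exactly the chain !gcroot would report.
    public string City = "Stockholm";
}
```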
6. Look at the root chain of the object that you ran !gcroot on.
Q: Why is it sticking around? How could the root chain be broken?
Note: If you walk it from the bottom up and reach an object type that you don't recognize, look it up in the MSDN help to get more info about it in order to understand why there is a link.
Putting it all together and determining the cause of the memory leak
1. Look at the code for news.aspx.cs and use the knowledge you gathered from the debugging session to figure out how the leak was generated.
Q: How long will the objects stick around?
Q: What can you do to avoid this type of event handler leak?
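To make the questions above a bit more concrete: the classic form of this leak is a short-lived object (such as a page) subscribing to an event on a long-lived object (such as a static or cached object) and never unsubscribing, so the event's invocation list keeps a strong reference to the subscriber. A hedged sketch of the pattern and its fix (the names here are made up, not the actual BuggyBits code):

```csharp
using System;

public class NewsCache
{
    // Long-lived publisher, e.g. a static singleton or an object in the cache.
    public static NewsCache Instance = new NewsCache();
    public event EventHandler NewsUpdated;
}

public partial class News : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // BUG: the cache's invocation list now holds a strong reference to
        // this page (and its whole control tree) for the life of the appdomain.
        NewsCache.Instance.NewsUpdated += OnNewsUpdated;
    }

    protected void Page_Unload(object sender, EventArgs e)
    {
        // FIX: unsubscribe when the page is done, breaking the root chain
        // so the page becomes eligible for collection.
        NewsCache.Instance.NewsUpdated -= OnNewsUpdated;
    }

    private void OnNewsUpdated(object sender, EventArgs e) { /* refresh data */ }
}
```

Explicit unsubscription is the simplest fix; other options include not subscribing short-lived objects to long-lived events in the first place, or using a weak-event pattern.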
Resolve the issue and rerun the test to verify the solution
1. Search this site or MSDN to find a resolution to the problem and resolve the issue
2. Rerun the test to verify that the leak no longer exists.