Tuesday, September 29, 2009

Best Practices No 5: - Detecting .NET application memory leaks


Memory leaks in .NET applications have always been a programmer’s nightmare. Memory leaks are among the biggest problems when it comes to production servers. Production servers normally need to run with minimal downtime. Memory leaks grow slowly and after some time they bring down the server by consuming huge chunks of memory. Most of the time people reboot the system, make it work temporarily and send a sorry note to the customer for the downtime.

Please feel free to download my free 500 question and answer eBook which covers .NET, ASP.NET, SQL Server, WCF, WPF, WWF @ http://www.questpond.com .

Avoid task manager to detect memory leak

Using private bytes performance counters to detect memory leak

3 step process to investigate memory leak

What is the type of memory leak? Total Memory = Managed memory + unmanaged memory

How is the memory leak happening?

Where is the memory leak?

Source code

Thanks, Thanks and Thanks

Avoid task manager to detect memory leak

The first and foremost task is to confirm that there is a memory leak. Many developers use the Windows task manager to confirm whether there is a memory leak in the application. Using task manager is not only misleading but it also does not give much information about where the memory leak is.

First let’s try to understand how the task manager memory information is misleading. Task manager shows working set memory and not the actual memory used. The working set is the memory allocated to the process, not the memory actually in use, and some memory in the working set can be shared with other processes / applications.

So the working set can be bigger than the memory actually used.

Using private bytes performance counters to detect memory leak

In order to get the right amount of memory consumed by the application we need to track the private bytes consumed by the application. Private bytes are those memory areas which are not shared with other applications. In order to detect the private bytes consumed by an application we need to use performance counters.
Below are the steps we need to follow to track private bytes in an application using performance counters:-
  • Start your application which has the memory leak and keep it running.
  • Click Start → Run and type ‘perfmon’.
  • Delete all the current performance counters by selecting each counter and hitting the delete button.
  • Right click → select ‘Add counters’ → select ‘Process’ from the performance object list.
  • From the counter list select ‘Private Bytes’.
  • From the instance list select the application which you want to test for memory leaks.
If your application shows a steady increase in the private bytes value, that means we have a memory leak issue. You can see in the below figure how the private bytes value is increasing steadily, thus confirming that the application has a memory leak.

The above graph shows a linear increase, but in a live implementation it can take hours to show the uptrend. In order to check for a memory leak you need to run the performance counter for hours, or probably days together, on the production server to check if there really is a memory leak.
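The same ‘Private Bytes’ counter that perfmon reads can also be sampled from code using the System.Diagnostics.PerformanceCounter class. Below is a minimal sketch; ‘MyLeakyApp’ is a placeholder instance name, so replace it with the process name of the application you are investigating.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class PrivateBytesMonitor
{
    static void Main()
    {
        // 'MyLeakyApp' is a placeholder; use the process instance name
        // of the application under investigation.
        PerformanceCounter counter = new PerformanceCounter(
            "Process", "Private Bytes", "MyLeakyApp");

        // Sample every second; a steady increase hints at a leak.
        for (int i = 0; i < 5; i++)
        {
            Console.WriteLine("Private bytes: {0:N0}", counter.NextValue());
            Thread.Sleep(1000);
        }
    }
}
```

This is handy when you want to log readings over days on a production server instead of keeping perfmon open.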

3 step process to investigate memory leak

Once we have confirmed that there is a memory leak, it’s time to investigate the root of the problem. We will divide our journey to the solution in 3 phases: what, how and where.
  • What: - We will first try to investigate what the type of memory leak is: is it a managed memory leak or an unmanaged memory leak?
  • How: - What is really causing the memory leak? Is it a connection object, some kind of file whose handle is not closed, etc.?
  • Where: - Which function / routine or logic is causing the memory leak?

What is the type of memory leak? Total Memory = Managed memory + unmanaged memory

Before we try to understand what the type of leak is, let’s try to understand how memory is allocated in .NET applications. A .NET application has two types of memory: managed memory and unmanaged memory. Managed memory is controlled by the garbage collector while unmanaged memory is outside the garbage collector’s boundary.

So the first thing we need to establish is the type of memory leak: is it a managed leak or an unmanaged leak? In order to detect this we need to measure two performance counters.
The first one is the private bytes counter for the application, which we have already seen in the previous section.
The second counter which we need to add is ‘# Bytes in all Heaps’. So select ‘.NET CLR Memory’ in the performance object, from the counter list select ‘# Bytes in all Heaps’ and then select the application which has the memory leak.

Private bytes are the total memory consumed by the application. Bytes in all heaps is the memory consumed by the managed code. So the equation becomes something as shown in the below figure.

Unmanaged memory + bytes in all heaps = private bytes. So if we want to find out the unmanaged memory we can always subtract the bytes in all heaps from the private bytes.
Now we will make two statements:-
  • If the private bytes increase and bytes in all heaps remain constant that means it’s an unmanaged memory leak.
  • If the bytes in all heaps increase linearly that means it’s a managed memory leak.
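The two statements above are just the equation rearranged. As a tiny sketch with made-up counter readings (the byte values are illustrative, not measurements):

```csharp
using System;

class LeakMath
{
    static void Main()
    {
        // Illustrative counter readings, in bytes.
        long privateBytes = 150000000;   // total memory of the process
        long bytesInAllHeaps = 90000000; // managed (GC) memory

        // Unmanaged memory = private bytes - bytes in all heaps.
        long unmanaged = privateBytes - bytesInAllHeaps;
        Console.WriteLine(unmanaged); // prints 60000000
    }
}
```

If ‘unmanaged’ grows over time while ‘bytesInAllHeaps’ stays flat, the leak is on the unmanaged side.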
Below is a typical screenshot of an unmanaged leak. You can see private bytes are increasing while bytes in all heaps remains constant.

Below is a typical screenshot of a managed leak. Bytes in all heaps is increasing.

How is the memory leak happening?

Now that we have answered what type of memory is leaking, it’s time to see how the memory is leaking. In other words, who is causing the memory leak?
So let’s inject an unmanaged memory leak by calling the ‘Marshal.AllocHGlobal’ function. This function allocates unmanaged memory and thus injects an unmanaged memory leak into the application. This call is run inside a timer a number of times to cause a huge unmanaged leak.
private void timerUnManaged_Tick(object sender, EventArgs e)
{
    // Marshal comes from System.Runtime.InteropServices.
    // Allocate unmanaged memory on every tick and never free it.
    Marshal.AllocHGlobal(1000);
}

It’s very difficult to inject a managed leak as the GC ensures that memory is reclaimed. In order to keep things simple we simulate a managed memory leak by creating a lot of brush objects and adding them to a list which is a class level variable. It’s a simulation and not a true managed leak: once the application is closed this memory will be reclaimed.

private void timerManaged_Tick(object sender, EventArgs e)
{
    for (int i = 0; i < 10000; i++)
    {
        Brush obj = new SolidBrush(Color.Blue);
        objBrushes.Add(obj); // objBrushes is a class level List<Brush>, keeping the brushes alive
    }
}

In case you are interested to know how leaks can happen in managed memory you can refer to weak event handlers for more information:
http://msdn.microsoft.com/en-us/library/aa970850.aspx .
The next step is to download the ‘DebugDiag’ tool from
Start the debug diagnostic tool and select ‘Memory and handle leak’ and click next.

Select the process in which you want to detect memory leak.

Finally select ‘Activate the rule now’.

Now let the application run; the ‘DebugDiag’ tool will run in the background monitoring memory issues.

Once done, click on ‘Start Analysis’ and let the tool do the analysis.

You should get a detailed HTML report which shows how unmanaged memory was allocated. In our code we had allocated huge unmanaged memory using ‘AllocHGlobal’, which is shown in the report below.

The managed memory leak of brushes is shown via ‘GdiPlus.dll’ in the below HTML report.

Where is the memory leak?

Once you know what the source of the memory leak is, it’s time to find out which logic is causing it. There is no automated tool to detect the logic which caused a memory leak. You need to manually go through your code, taking the pointers provided by ‘DebugDiag’, to conclude in which places the issues are.
For instance, from the report it’s clear that ‘AllocHGlobal’ is causing the unmanaged leak while one of the GDI objects is causing the managed leak. Using these details we then need to go into the code to see where exactly the issue lies.

Source code

You can download the source code from the top of this article which can help you inject memory leak.

Thanks, Thanks and Thanks

It would be unfair on my part to say that the above article is completely my own knowledge. Thanks to all the lovely people who have written articles so that one day someone like me can benefit.

Saturday, September 26, 2009

Best Practice No 4:- Improve bandwidth performance of ASP.NET sites using IIS compression


Bandwidth performance is one of the critical requirements for every website. These
days the major cost of a website is not hard disk space but bandwidth.
So transferring the maximum amount of data over the available bandwidth becomes very
critical. In this article we will see how we can use IIS compression to increase
bandwidth performance.
Please feel free to download my free 500 question and answer videos which covers
Design Pattern, UML, Function Points, Enterprise Application Blocks,OOP'S, SDLC,
.NET, ASP.NET, SQL Server, WCF, WPF, WWF, SharePoint, LINQ, SilverLight, .NET
Best Practices @ these videos http://www.questpond.com/

Best Practice No 4:- Improve bandwidth performance of ASP.NET sites using IIS compression

How does IIS compression work?

Compression fundamentals: - Gzip and deflate

Enabling IIS compression
0, 1,2,3,4…10 IIS compression levels
3 point consideration for IIS compression optimization
Static data compression
Dynamic data compression

Compressed file and compression

CPU usage, dynamic compression and load testing

TTFB and Compression levels

IIS 7.0 and CPU roll off

Thanks, Thanks and Thanks


Some known issues on IIS compression

Links for further reading

How does IIS compression work?

Note :- All examples shown in this article use IIS 6.0. The only reason we have used IIS 6.0 is because 7.0 is still not that common.

Before we move ahead and talk about how IIS compression works, let’s try to
understand how IIS normally works. Let’s say the user requests a
‘Home.html’ page which is 100 KB in size. IIS serves this request by passing the
100 KB HTML page over the wire to the end user’s browser.

When compression is enabled on IIS the sequence of events changes as follows:-

• The user requests a page from the IIS server. While requesting the page, the
browser also sends what kind of compression types it supports. Below is a simple
request sent to the server which says it supports ‘gzip’ and ‘deflate’. We
used fiddler (http://www.fiddler2.com/fiddler2/version.asp ) to capture the request.

• Depending on the compression type support sent by the browser, IIS compresses the data and sends the same over the wire to the browser.

• Browser then decompresses the data and displays the same on the browser.
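The negotiation above happens through standard HTTP headers. A request/response pair looks roughly like the sketch below (the URL and sizes are illustrative):

```
GET /Home.aspx HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 629
```

The ‘Accept-Encoding’ header is what the browser sends, and the ‘Content-Encoding’ header is how IIS tells the browser which compression it applied.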

Compression fundamentals: - Gzip and deflate

IIS supports two kinds of compression: Gzip and deflate. Both are more or less the same, where Gzip is an extension over deflate. Deflate is a compression algorithm which combines LZ77 and Huffman coding. In case you are interested to read more about LZ77 and Huffman you can read at
http://www.zlib.net/feldspar.html .

Gzip is based on deflate algorithm with extra headers added to the deflate payload.

Below are the header details which are added to the deflate payload data. It starts with a 10 byte header which has a version number and time stamp, followed by optional headers for the file name. At the end it has the actual deflate compressed payload and an 8 byte checksum to ensure data is
not lost in transmission.

Google, Yahoo and Amazon use gzip, so we can safely assume that it’s supported by most browsers.
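The header overhead is easy to see from code. The sketch below compresses the same repetitive payload with .NET’s GZipStream and DeflateStream (System.IO.Compression); the gzip output comes out a few bytes larger than the deflate output, because gzip wraps the same deflate payload with its extra header and checksum trailer.

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class GzipVsDeflate
{
    static void Main()
    {
        // Highly repetitive sample payload compresses well.
        byte[] data = Encoding.ASCII.GetBytes(new string('a', 10000));

        Console.WriteLine("gzip:    {0} bytes", Compress(data, true).Length);
        Console.WriteLine("deflate: {0} bytes", Compress(data, false).Length);
    }

    public static byte[] Compress(byte[] data, bool gzip)
    {
        using (MemoryStream ms = new MemoryStream())
        {
            using (Stream s = gzip
                ? (Stream)new GZipStream(ms, CompressionMode.Compress)
                : new DeflateStream(ms, CompressionMode.Compress))
            {
                s.Write(data, 0, data.Length);
            }
            return ms.ToArray();
        }
    }
}
```

Run it and compare the two byte counts: the difference is exactly the gzip header and trailer described above.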

Enabling IIS compression

Till now we have done enough of theory to understand IIS compression. Let’s get our hands dirty to see how we can actually enable IIS compression.

Step 1:- Enable compression
The first step is to enable compression on IIS. So right click on Web Sites → Properties and click on the Service tab. To enable compression we need to check the below two checkboxes on the Service tab of the IIS website properties. The below figure shows the location of both checkboxes.

Step 2:- Enable metabase.xml edit
Metadata for IIS comes from ‘Metabase.xml’ which is located at “%windir%\system32\inetsrv\”. In order for compression to work properly we need to make some changes to this XML file. In order to make changes to this XML file we need to direct IIS to give us edit rights. So right click on your IIS server root → go to Properties and check the ‘enable direct metabase edit’ checkbox as
shown in the below figure.

Step 3:- Set the compression level and extension types
The next step is to set the compression levels and extension types. The compression level can be defined between 0 and 10, where 0 specifies a mild compression and 10 specifies the highest level of compression. This value is specified using the ‘HcDynamicCompressionLevel’ property. There are two compression schemes, ‘deflate’ and ‘gzip’, and this property needs to be specified for both of them
as shown in the below figures.

We also need to specify which file types need to be compressed. ‘HcScriptFileExtensions’ helps us to specify the same. For the current scenario we specified that we need to compress ASPX outputs before they are sent to the end browser.
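Putting steps 2 and 3 together, the relevant Metabase.xml entries look roughly like the fragment below. This is a trimmed sketch: the attribute values shown are the ones used in this article, and the many other attributes IIS writes into these elements are omitted.

```xml
<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/gzip"
    HcDynamicCompressionLevel="4"
    HcScriptFileExtensions="aspx" />
<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/deflate"
    HcDynamicCompressionLevel="4"
    HcScriptFileExtensions="aspx" />
```

Remember that ‘enable direct metabase edit’ must be checked (step 2) before IIS will pick up hand edits to this file.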

Step 4:- Does it really work?
Once you are done with the above 4 steps, it’s time to see if the compression
really works. So we will create a simple C# asp.net page which will loop “10000”
times and send some kind of output to the browser.

protected void Page_Load(object sender, EventArgs e)
{
    for (int i = 0; i < 10000; i++)
    {
        Response.Write("Sending huge data" + "<br>");
    }
}

In order to see the difference before and after compression we will run the fiddler tool while we run our
ASP.NET loop page. You can download fiddler from http://www.fiddler2.com/fiddler2/version.asp .
The below screen shows data captured by fiddler without compression and with compression. Without compression the data is 80501 bytes and with compression it comes to 629 bytes. I am sure that’s a great performance increase from a bandwidth point of view.
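As a quick sanity check on those numbers, the bandwidth saving works out to roughly 99%:

```csharp
using System;

class CompressionSaving
{
    static void Main()
    {
        double uncompressed = 80501; // bytes, from the fiddler capture
        double compressed = 629;     // bytes, with IIS compression on

        double savingPercent = (1 - compressed / uncompressed) * 100;
        Console.WriteLine("{0:F1}% bandwidth saved", savingPercent); // prints "99.2% bandwidth saved"
    }
}
```

The page is so repetitive that it compresses unusually well; real pages will typically see smaller, but still substantial, savings.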

0, 1,2,3,4…10 IIS compression levels

In the previous section we set ‘HcDynamicCompressionLevel’ to the value ‘4’. The higher the compression level, the smaller the data sent over the wire. The downside is that as we increase the compression level, CPU utilization also increases. One of the big challenges is to figure out the optimum compression level. This depends on a lot of things: the type of data, load etc.
In the coming sections we will try to derive the best compression level for different scenarios.

3 point consideration for IIS compression optimization

Many developers just enable IIS compression with the below default values. But default values do not hold good for every environment. It depends on many other factors, like what content type your site is serving. If your site has only static HTML pages then the compression levels
will be different as compared to sites which serve mostly dynamic pages.

The above table is taken from


If your site is only serving already compressed data like JPEG and PDF, it’s probably not advisable to enable compression at all, as your CPU utilization increases considerably for small compression gains. On the other side we also need to balance compression with CPU utilization: the more we increase the
compression levels, the more CPU resources will be utilized.

Different data types need to be set to different IIS compression levels for optimization. In the coming sections we will take different data types, analyze them with different compression levels and see how CPU utilization is affected. The below figure shows the different data types with some examples of file types.

Static data compression

Let’s start with the easiest one: static content types like HTML and HTM. If a user requests a static page from an IIS server which has compression enabled, IIS compresses the file and puts the same in the ‘%windir%\IIS Temporary Compressed Files’ directory.
Below is a simple screen which shows the compressed folder snapshot. Compression
happens only the first time. On subsequent calls for the same content, the compressed
data is picked from the compressed files directory.

Below are some sample readings we have taken for HTML files ranging in size from 100 KB to 2048 KB. We have set the compression level to ‘0’.

You can easily see that even with the lowest compression level enabled the compression is almost 5 times.

As the compression happens only the first time, we can happily set the compression level to ‘10’. The first time we will see a huge CPU utilization, but on subsequent calls the CPU usage will be small
as compared to the compression gains.

Dynamic data compression

Dynamic data compression is a bit different from static compression. Dynamic compression happens every time a page is requested, so we need to balance between CPU utilization and compression levels.
In order to find the optimized compression level, we did a small experiment as shown below. We took 5 files ranging from 100 KB to 2 MB. We then changed the compression level from 0 to 10 for every file size to check how much the data was compressed. Below are the compressed data readings in bytes.

The above readings do not show anything specific; it’s a bit messy. So what we did is plot the below graph using the above data, and we hit the sweet spot. You can see that even after increasing the compression level from 4 to 10 the compressed size hardly changes. We repeated this on 2 to 3 different environments and it always hit the value ‘4’, the sweet spot.

So the conclusion we draw from this is that setting compression level ‘4’ for dynamic data pages is an optimized setting.

Compressed file and compression

Compressed files are files which are already compressed. For example, file types like JPEG and PDF are already compressed. So we did a small test by taking JPEG compressed files, and below are our readings. The compressed files after applying IIS compression did not change much in size.

When we plot a graph you can see that the compression benefits are very small. We may end up utilizing more CPU resources and gain nothing in terms of compression.

So the conclusion we can draw for compressed files is that we can disable compression for already compressed file types like JPEG and PDF.
CPU usage, dynamic compression and load testing
One of the important points to remember for dynamic data is to optimize between CPU utilization, compression levels and load on the server.

We used WCAT to do a stress test with 100 concurrent users. For every file size from 100 KB to 2 MB we recorded CPU utilization at every compression level. We recorded processor time for the W3WP process using a performance counter. To add this performance counter go to ‘Process’ → select ‘% Processor Time’ → select ‘w3wp’ from the instances.

If we plot a graph using the above data we hit a sweet spot of 6. Up to compression level 6, CPU utilization was not really affected.

TTFB and Compression levels

TTFB, also termed time to first byte, is the number of milliseconds that pass before the first byte of the response is received. We performed a small experiment on 1 MB and 2 MB dynamic pages with different compression levels. We then measured the TTFB for every
combination of compression level and file size. WCAT was used to measure TTFB.

When we plot the above data we get the value ‘5’ as the sweet spot. Until the compression level reaches ‘5’ TTFB remains constant.

Screenshot of WCAT output for TTFB measurement.

IIS 7.0 and CPU roll off

All the above experiments and conclusions were done on IIS 6.0. IIS 7.0 has a very important feature, i.e. CPU roll-off. CPU roll-off acts like a cut-off gateway so that CPU resources are not over-consumed.

When CPU usage gets beyond a certain level, IIS will stop compressing pages, and when it drops below a different level, it will start up again. This is controlled using the ‘staticCompressionEnableCpuUsage’ and ‘dynamicCompressionDisableCpuUsage’ attributes. It’s like a safety valve so that your CPU usage does not surprise you.
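In IIS 7.0 these thresholds live in the <httpCompression> section of applicationHost.config. Below is a rough sketch; the percentage values are examples to illustrate the enable/disable pair for static and dynamic compression, not recommendations.

```xml
<httpCompression
    staticCompressionEnableCpuUsage="50"
    staticCompressionDisableCpuUsage="100"
    dynamicCompressionEnableCpuUsage="50"
    dynamicCompressionDisableCpuUsage="90">
    <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
    </dynamicTypes>
</httpCompression>
```

With settings like these, IIS stops compressing dynamic pages once CPU usage crosses 90% and resumes once it falls back below 50%.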

Thanks, Thanks and Thanks

Every bit of inspiration for this article has come from Scott Forsyth's article on IIS compression. You can say I have just created a new version with more details.

Thanks to Jimmie for suggesting the performance counters and other details of IIS compression.

I also picked up some bits from this link from Microsoft


• If the files are already compressed, do not enable compression on those files.
We can safely disable compression on EXE, JPEG, PDF etc.

• For static pages compression level can be set to 10 as the compression happens
only once.

• The compression level can range from ‘4’ to ‘6’ for dynamic pages depending on
the server environment and configuration. The best way to judge which compression
level suits best is to perform the TTFB, CPU utilization and compression tests as
explained in this article.

In case you want to do a sanity check please refer to this article.

I agree my results do not match exactly with Scott’s, but I think we are very much on the same page.

Some known issues on IIS compression

Below are some known issues of IIS compression


Links for further reading









Monday, September 14, 2009

.NET Best Practice No: 3:- Using performance counters to gather performance data

Is this Article worth reading ahead?

This article discusses how we can use performance counters to gather data from an application. We will first understand the fundamentals and then we will see a simple example from which we
will collect some performance data.

Introduction: - My application performance is the best like a rocket 

Let us start this article by a small chat between customer and developer.

Scenario 1
Customer: - How’s your application performance?
Subjective developer: - Well it’s speedy, it’s the best …huuh aaa ooh it’s a like rocket.
Scenario 2
Customer: - How’s your application performance?
Quantitative developer: - With 2 GB RAM, an xyz processor and 20000 customer records, the customer screen loads in 20 seconds.

I am sure the second developer looks more promising than the first developer. In this article we will explore how we can use performance counters to measure the performance of an application. So let’s start counting 1,2,3,4….

Please feel free to download my free 500 question and answer videos which covers Design Pattern, UML, Function Points, Enterprise Application Blocks,OOP'S, SDLC, .NET, ASP.NET, SQL Server, WCF, WPF, WWF, SharePoint, LINQ, SilverLight, .NET Best Practices @ these videos http://www.questpond.com/

Courtesy :- http://scoutbase.org.uk

Thanks Javier and Michael

I really do not have the intellect to write something on performance counters myself. But by reading the below articles I was able to manage something. So first let me thank these guys and then we can move ahead with the article.
Thanks a bunch to Javier Canillas for creating the performance counter helper; it really saves a lot of code: http://perfmoncounterhelper.codeplex.com/
Thanks to Michael Groeger for the wonderful article; I took the counter creation code from your article: http://www.codeproject.com/KB/dotnet/perfcounter.aspx
I also picked up lot of pieces from

At the end of the day it’s count, calculate and display
Any performance evaluation works on count, calculate and display. For instance, if we want to know how many pages in memory were processed per second, we first need to count the number of pages and also how many seconds elapsed. Once we are finished with counting we then need to calculate, i.e. divide the number of pages by the seconds elapsed. Finally we need to display the performance data.

Now we know it’s a 3 step process: count, calculate and display. The counting part is done by the application, so the application needs to feed in the data during the counting phase. Please note the data is not automatically detected by the performance counters; some help needs to be provided by the application. The calculation and display are done by the performance counter and monitor.
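As a toy illustration of count, calculate and display (the numbers below are made up):

```csharp
using System;

class PagesPerSecond
{
    static void Main()
    {
        // Count: the application feeds in the raw data.
        long pagesProcessed = 500;
        double secondsElapsed = 20;

        // Calculate: divide the count by the elapsed time.
        double pagesPerSecond = pagesProcessed / secondsElapsed;

        // Display: show the computed rate.
        Console.WriteLine("{0} pages/sec", pagesPerSecond); // prints "25 pages/sec"
    }
}
```

With real performance counters, the application only does the counting; the calculate and display steps are handled by perfmon.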

Performance counters are not magicians

If the application does not provide counter data, performance counters cannot measure it by themselves. Performance counters cannot measure applications which do not feed them performance data. In other words, the application needs to feed in counter data by creating performance counter objects.
Types of measures in application

Almost all application performance measurements fall into one of the below 5 categories.

Instantaneous values: - Many times we just want to measure the most recent value. For instance, we just want to measure how many customer records were processed, how much RAM memory has been used, etc. These types of measures are termed instantaneous or absolute values. Performance counters support these measurement types via instantaneous counters.

Average values: - Sometimes instant / recent values do not really show the real picture. For instance, just saying that the application consumed 1 GB of space is not enough. But if we can get some kind of average data consumption, like 10 MB of data consumed per 1000 records, we probably get more insight into what is happening inside the application. Performance counters support these kinds of measurement types via average performance counter types like AverageBase, AverageTimer32, AverageCount64 etc.

Rate values: - There are situations when you want to know the rate of events with respect to time. For example, you would like to know how many records were processed per second. Rate counters help us to calculate these kinds of performance metrics.

Percentage values: - Many times we would like to see values as percentages for comparison purposes. For example, if you want to compare performance data between 2 computers, comparing direct values will not be a fair comparison, but if we have percentage values from both computers the comparison makes more sense. Likewise, if you want to compare how much RAM is utilized as compared to hard disk space, comparing 1 GB of RAM usage with 50 GB of hard disk usage is like comparing apples with oranges. If you can express these values as percentages, then the comparison will be fair and justifiable. Percentage performance counters
can help us to express absolute values as percentages.

Difference values: - Many times we would like to get difference performance data, for instance how much time has elapsed since the application started, how much hard disk space the application has consumed since it started etc. In order to collect these kinds of performance
data we need to record the original value and the recent value. To get the final performance data we need to subtract the original value from the recent value. Performance counters provide difference counters to calculate such kinds of performance data.

So summarizing, there are 5 types of performance counters which can satisfy all the above counting needs. The below figure shows the same in a pictorial format.

Example on which performance counter will be tested

In this complete article we will be considering a simple counter example as explained below. In this example we have a timer which generates a random number every 100 milliseconds. The random number is then checked to see if it’s less than 2. In case it’s less than 2, the function ‘MyFunction’ is invoked.

Below is the code where the timer runs every 100 milliseconds and calculates a random number. If the random number is smaller than 2 we invoke the function ‘MyFunction’.
private void timer1_Tick(object sender, EventArgs e)
{
    // Generate a random number between 1 and 4.
    Random objRnd = new Random();
    int y = objRnd.Next(1, 5);

    // If the random number is less than 2, call MyFunction.
    if (y < 2)
    {
        MyFunction();
    }
}

Below is the code for ‘MyFunction’ which is invoked when the value of the random number is less than 2. The method does not do
anything as such.

private void MyFunction()
{
}


All our performance counter examples in this article will use the above defined sample.
Adding our first instantaneous performance counter in 4 steps

Before we go into the depths of how to add performance counters, let’s first understand their structure. When we create a performance counter it needs to belong to some group.
So we need to create a category, and all our performance counters will lie under that category.

We would like to just count how many times ‘MyFunction’ was called. So let’s create an instantaneous counter called 'NumberOfTimeFunctionCalled'. Before we move ahead, let’s see how many different types of instantaneous counters are provided:-

The below definitions are taken from http://msdn.microsoft.com/en-us/library/system.diagnostics.performancecountertype.aspx.

NumberOfItems32:- An instantaneous counter that shows the most recently observed value.

NumberOfItems64:- An instantaneous counter that shows the most recently observed value. Used, for example, to maintain a simple count of a very large number of items or operations. It is the same as NumberOfItems32 except that it uses larger fields to accommodate larger values.

NumberOfItemsHEX32:- An instantaneous counter that shows the most recently observed value in hexadecimal format. Used, for example, to maintain a simple count of items or operations.

NumberOfItemsHEX64:- An instantaneous counter that shows the most recently observed value. Used, for example, to maintain a simple count of a very large number of items or operations. It is the same as NumberOfItemsHEX32 except that it uses larger fields to accommodate larger values.

Step 1 Create the counter: - For our current scenario ‘NumberOfItems32’ will suffice. So let’s first create a ‘NumberOfItems32’ instantaneous counter. There are two ways to create counters: through code, or using the Server Explorer of VS 2008. The code approach we will see later. For the time being we will use Server Explorer to create our counter. So open Visual
Studio → click on View → Server Explorer and you should see the performance counters section as shown in the below figure. Right click on the performance counters section and select ‘Create New Category’.

When we create a new category we can specify the name of the category and add counters to this category. For the current example we have given the category name ‘MyApplication’ and added a counter of type ‘NumberOfItems32’ with the name ‘NumberOfTimeFunctionCalled’.
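For completeness, the code approach mentioned in step 1 looks roughly like the sketch below. It creates the same ‘MyApplication’ category and ‘NumberOfTimeFunctionCalled’ counter as the Server Explorer approach; note that creating categories requires administrative rights.

```csharp
using System.Diagnostics;

class CounterInstaller
{
    static void Main()
    {
        // Create the category only if it does not exist yet.
        if (!PerformanceCounterCategory.Exists("MyApplication"))
        {
            CounterCreationDataCollection counters =
                new CounterCreationDataCollection();
            counters.Add(new CounterCreationData(
                "NumberOfTimeFunctionCalled",
                "Number of times MyFunction was called",
                PerformanceCounterType.NumberOfItems32));

            PerformanceCounterCategory.Create(
                "MyApplication",
                "Counters for my application",
                PerformanceCounterCategoryType.SingleInstance,
                counters);
        }
    }
}
```

This is typically run once, for example from an installer, rather than at every application start.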

Step 2 Add the counter to your Visual Studio
application: -
Once you have added the counter in Server Explorer, you can drag and drop the counter on to your form as shown below.

You need to set the ‘ReadOnly’ property to false so that you can modify the counter
value from code.

Step 3 Add the code to count the counter: -
Finally we need to increment the counter. We first clear any old values in the counter during form load. Please note that counter values are stored globally, so they do not reset by themselves; we need to do it explicitly. So in the form load we clear the raw value to zero.

private void Form1_Load(object sender, EventArgs e)
{
    perfNumberOfTimeFunctionCalled.RawValue = 0;
}

Whenever the function is called we increment the value by using the ‘Increment’ method. Every call to the increment
method increases the number by 1.

private void MyFunction()
{
    perfNumberOfTimeFunctionCalled.Increment();
}

Step 4 View the counter data: - Now we have specified the counter in the application, which increments every time
‘MyFunction’ is called. It’s time to use the performance monitor to display the performance counter. So go to Start → Run and type ‘perfmon’. You will see there are lots of default performance counters. For clarity’s sake we will remove all the counters for now and add our performance counter, i.e. ‘NumberOfTimeFunctionCalled’.

You can now view the graphical display as shown in the below figure. Ensure that your application is running, because the application emits the data which is then interpreted by the performance monitor.

The above is the graphical view. To view the same data in textual format, use the 'View Report' tab provided by performance monitor. You can see the report shows that 'MyFunction' was called 9696 times from the time the application started.

Creating a more sensible counter

In the previous section we measured how many times 'MyFunction' was called. On its own this count is not a very meaningful measure. It would be better if we could also count how many times the timer fired, and then compare the number of timer ticks with the number of 'MyFunction' calls. So create another instantaneous counter and increment it when the timer fires, as shown in the below code.
private void timer1_Tick(object sender, EventArgs e)
{
    perfNumberOfTimeTimerCalled.Increment(); // count every tick
    Random objRnd = new Random();
    int y = objRnd.Next(1, 5);
    if (y > 2)
        MyFunction(); // called only for some ticks, so the counts differ
}

You can see both the counters in the below graph: the blue line shows the number of times 'MyFunction' was called and the black one shows the number of times the timer fired.

If we look into the report view we can see how many times the timer fired and how many times 'MyFunction' was called.

Average performance counters

In the previous section we had two counters: one saying how many times the timer fired and the other saying how many times 'MyFunction' was called. It would make more sense to have an average figure which says how many times 'MyFunction' was called per timer tick. To get this kind of metric, average performance counters can be used. So for our scenario we count the number of times the function was called and the number of times the timer fired, then divide them to find, on average, how many times the function was called per timer fire.

We need to add two counters one for the numerator and the other for the denominator. For the numerator counter we need to add ‘AverageCount64’ type of counter while for the denominator we need to add ‘AverageBase’ type of counter.

You need to add the 'AverageBase' counter immediately after the 'AverageCount64' counter, or else you will get an error as shown below.

For every timer tick we increment the 'number of times the timer fired' counter (the 'AverageBase' denominator).
private void timer1_Tick(object sender, EventArgs e)
{
    perfAverageBase.Increment(); // denominator: one per tick
    Random objRnd = new Random();
    int y = objRnd.Next(1, 5);
    if (y > 2)
        MyFunction();
}
For every function call we increment the 'number of times the function was called' counter (the 'AverageCount64' numerator).
private void MyFunction()
{
    perfAverageCount64.Increment(); // numerator: one per call
}



If you run the application in the view report mode you should see something as shown below. You can see that, on average, 'MyFunction' is called 0.5 times per timer tick.

If you do the calculation yourself you will get the same figure that the performance monitor calculated.
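As a rough sketch of the arithmetic perfmon performs here: the displayed value of an average counter is the change in the 'AverageCount64' count divided by the change in the 'AverageBase' count between two samples. The 48/96 figures below are illustrative stand-ins, not taken from the report:

```csharp
using System;

class AverageCounterMath
{
    // Average counter value = delta(AverageCount64) / delta(AverageBase)
    public static double Average(long countDelta, long baseDelta) =>
        baseDelta == 0 ? 0 : (double)countDelta / baseDelta;

    static void Main()
    {
        // e.g. 48 'MyFunction' calls against 96 timer ticks in one interval
        Console.WriteLine(Average(48, 96)); // 0.5, the same kind of figure the report shows
    }
}
```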

Rate performance counters

From our sample we would now like to find out the rate of 'MyFunction' calls with respect to time, i.e. how many calls are made every second. So browse to the server explorer and add a 'RateOfCountsPerSecond32' counter as shown in the below figure. Increment this counter every time 'MyFunction' is called.

If you run the application you should be able to see the 'RateofMyFunctionCalledPerSecond' value. Below is a simple report showing the rate counter data for a run of 15 seconds. The total number of calls made in these 15 seconds was 72, which works out to roughly 4.8 (about 5) 'MyFunction' calls per second.
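The rate counter's arithmetic can be sketched the same way: perfmon divides the change in the raw count by the elapsed time between samples. Using the 72 calls over 15 seconds from the report above:

```csharp
using System;

class RateCounterMath
{
    // RateOfCountsPerSecond32 = delta(count) / elapsed seconds
    public static double Rate(long countDelta, double elapsedSeconds) =>
        countDelta / elapsedSeconds;

    static void Main()
    {
        // 72 calls over a 15 second run
        Console.WriteLine(Rate(72, 15)); // 4.8 calls per second
    }
}
```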

Performance counters not covered

We have left out percentage counters and difference counters as they are pretty simple and straightforward. In order to keep this article to the point, I have omitted both these counter types.
Adding counters by C# code

Till now we have added performance counters using the server explorer. You can also add counters through code. The first thing we need to do is import the System.Diagnostics namespace. We then need to create an object of 'CounterCreationDataCollection'.
CounterCreationDataCollection Mycounters = new CounterCreationDataCollection();

Next, create the actual counter, specify its type, and add it to the collection.
CounterCreationData totalOps = new CounterCreationData();
totalOps.CounterName = "Numberofoperations";
totalOps.CounterHelp = "Total number of operations executed";
totalOps.CounterType = PerformanceCounterType.NumberOfItems32;
Mycounters.Add(totalOps);

Finally, create the counter inside a category. The below code snippet creates the counter in the 'MyCategory' category, checking first that the category does not already exist.
if (!PerformanceCounterCategory.Exists("MyCategory"))
    PerformanceCounterCategory.Create("MyCategory", "Sample category for Codeproject", Mycounters);

Let’s ease some pain using Performance counter helper

It's quite a pain to write the counter creation code by hand. You can use the performance counter helper to ease this and make your code smaller. You can find the performance counter helper at

Do not use it in production

Oh yes, use these counters only when you are doing development. If you must use them in production, ensure that there is an enabling and disabling mechanism, or else they will affect your application's performance.
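A minimal sketch of such a disabling mechanism (the flag and wrapper names here are hypothetical, not from the article): read a flag from configuration once at startup and make every counter update a no-op when it is off, so production builds pay almost nothing.

```csharp
using System;

static class PerfGate
{
    // In a real app this would be read from app.config at startup
    public static bool CountersEnabled = false;

    // Stand-in for a PerformanceCounter's raw value, so the gating
    // logic is visible without a Windows-only counter object
    public static long RawValue;

    public static void Increment()
    {
        if (!CountersEnabled) return; // disabled: do nothing
        RawValue++;                   // enabled: count the event
    }
}

class Demo
{
    static void Main()
    {
        PerfGate.Increment();                 // ignored while disabled
        PerfGate.CountersEnabled = true;
        PerfGate.Increment();                 // now counted
        Console.WriteLine(PerfGate.RawValue); // 1
    }
}
```

In the real application the body of Increment would call PerformanceCounter.Increment() instead of bumping a plain field.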


• Use performance counters to measure application data.
• Performance counters come in various categories like instantaneous, average, rate, etc.
• Performance counters should not be used in production; if they are used, there should be a disabling mechanism.
• A performance counter cannot measure anything by itself; the application needs to provide the data so that the performance monitor can calculate and display it.

Source code

You can download the sample source code for the performance counters discussed above from here