How to improve managed code performance?

Here I am pointing out the main points a developer needs to take care of when writing managed code.

  • Optimize assembly and class design.
  • Maximize garbage collection (GC) efficiency in your application.
  • Use Finalize and Dispose properly.
  • Minimize boxing overhead.
  • Evaluate the use of reflection and late binding.
  • Optimize your exception handling code.
  • Make efficient use of iterating and looping constructs.
  • Optimize string concatenation.
  • Evaluate and choose the most appropriate collection type.
  • Avoid common threading mistakes.
  • Make asynchronous calls effectively and efficiently.
  • Develop efficient locking and synchronization strategies.
  • Reduce your application’s working set.
  • Apply performance considerations to code access security.

In this section I will explain how to optimize assembly and class design; the rest of the points will be explained in later parts (covering everything in a single article would make it far too long).

Optimize Assembly and Class Design: Design Considerations

The most important factor in application performance is the application's architecture and design. Make sure performance is an explicit requirement that your design and testing take into account throughout the application development life cycle. Application development should be an iterative process; performance testing and measurement should be performed between iterations and should not be left until deployment time.
This section summarizes the major considerations to keep in mind when you design managed code solutions:



● Design for efficient resource management.

● Reduce boundary crossings.

● Prefer single large assemblies rather than multiple smaller assemblies.

● Factor code by logical layers.

● Treat threads as a shared resource.

● Design for efficient exception management.



Design for Efficient Resource Management

Avoid allocating objects and the resources they encapsulate before you need them, and make sure you release them as soon as your code is completely finished with them. This advice applies to all resource types including database connections, data readers, files, streams, network connections, and COM objects. Use finally blocks or Microsoft Visual C#® using statements to ensure that resources are closed or released in a timely fashion, even in the event of an exception. Note that the C# using statement is used only for resources that implement IDisposable; whereas finally blocks can be used for any type of cleanup operations.
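As a minimal sketch (the repository class and connection string parameter are placeholders), the using statement below guarantees that Dispose, and therefore the underlying Close, runs on the connection even if an exception is thrown; the try/finally version is the equivalent pattern for cleanup that is not based on IDisposable:

using System.Data.SqlClient;

class CustomerRepository
{
    public void LoadCustomers(string connectionString)
    {
        // using: the connection is disposed (and closed) as soon as the block exits,
        // whether it exits normally or because of an exception.
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // ... read data ...
        }

        // try/finally: the same timely-release guarantee, usable for any cleanup code.
        SqlConnection second = new SqlConnection(connectionString);
        try
        {
            second.Open();
            // ... read data ...
        }
        finally
        {
            second.Close();
        }
    }
}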

Reduce Boundary Crossings

Reduce the number of method calls that cross remoting boundaries because this introduces marshaling and potentially thread switching overhead. With managed code, there are several boundaries to consider:

● Cross application domain. This is the most efficient boundary to cross because it is within the context of a single process. Because the cost of the actual call is so low, the overhead is almost completely determined by the number, type, and size of parameters passed on the method call.

● Cross process. Crossing a process boundary significantly impacts performance. You should do so only when absolutely necessary. For example, you might determine that an Enterprise Services server application is required for security and fault tolerance reasons. Be aware of the relative performance tradeoff.

● Cross machine. Crossing a machine boundary is the most expensive boundary to cross, due to network latency and marshaling overhead. While marshaling overhead impacts all boundary crossings, its impact can be greater when crossing machine boundaries. For example, the introduction of an HTTP proxy might force you to use SOAP envelopes, which introduces additional overhead. Before introducing a remote server into your design, you need to consider the relative tradeoffs including performance, security, and administration.

● Unmanaged code. You also need to consider calls to unmanaged code, which introduces marshaling and potentially thread switching overhead. The Platform Invoke (P/Invoke) and COM interop layers of the CLR are very efficient, but performance can vary considerably depending on the type and size of data that needs to be marshaled between the managed and unmanaged code.
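As an illustration of the last point (a minimal sketch using the standard kernel32.dll SetCurrentDirectory export), the P/Invoke declaration below shows where the cost lives: every call crosses the managed/unmanaged boundary, and the string argument is marshaled to a native Unicode string on each call.

using System.Runtime.InteropServices;

static class NativeMethods
{
    // Each call crosses the managed/unmanaged boundary through the CLR's
    // P/Invoke layer; the string parameter is marshaled on every call.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern bool SetCurrentDirectory(string lpPathName);
}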





Prefer Single Large Assemblies Rather Than Multiple Smaller Assemblies

To help reduce your application’s working set, you should prefer single larger assemblies rather than multiple smaller assemblies. If you have several assemblies that are always loaded together, you should combine them and create a single assembly. The overhead associated with having multiple smaller assemblies can be attributed to the following:

● The cost of loading metadata for smaller assemblies.
● Touching various memory pages in pre-compiled images in the CLR in order to load the assembly (if it is precompiled with Ngen.exe).
● JIT compile time.
● Security checks.

Because you pay for only the memory pages your program accesses, larger assemblies provide the Native Image Generator utility (Ngen.exe) with a greater chance to optimize the native image it produces. Better layout of the image means that necessary data can be laid out more densely, which in turn means fewer overall pages are needed to do the job compared to the same code laid out in multiple assemblies. Sometimes you cannot avoid splitting assemblies; for example, for versioning and deployment reasons. If you need to ship types separately, you may need separate assemblies.


Factor Code by Logical Layers

Consider your internal class design and how you factor code into separate methods. When code is well factored, it becomes easier to tune for performance, to maintain, and to extend with new functionality. However, there needs to be a balance. While clearly factored code can improve maintainability, be wary of over-abstraction and of creating too many layers. Simple designs can be both effective and efficient.

Treat Threads as a Shared Resource

Do not create threads on a per-request basis because this can severely impact scalability. Creating new threads is also a fairly expensive operation that should be minimized. Treat threads as a shared resource and use the optimized .NET thread pool.
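A minimal sketch (the handler class and request payload are hypothetical) of queuing work to the shared CLR thread pool instead of creating a thread per request:

using System;
using System.Threading;

class RequestHandler
{
    public void HandleRequest(string requestData)
    {
        // Reuse a thread from the shared .NET thread pool rather than paying
        // the cost of creating and destroying a dedicated thread per request.
        ThreadPool.QueueUserWorkItem(state =>
        {
            Console.WriteLine("Processing {0} on a pool thread.", state);
        }, requestData);
    }
}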

Design for Efficient Exception Management

The performance cost of throwing an exception is significant. Although structured exception handling is the recommended way of handling error conditions, make sure you use exceptions only in exceptional circumstances when error conditions occur. Do not use exceptions for regular control flow.
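For example (a simple sketch), prefer a test such as int.TryParse over catching an exception when invalid input is a regular, expected condition:

string userInput = "not a number";
int value;

// Costly when failure is common: a FormatException is thrown and caught for every bad input.
try
{
    value = int.Parse(userInput);
}
catch (FormatException)
{
    value = 0;
}

// Cheaper: TryParse reports failure through its return value; no exception is thrown.
if (!int.TryParse(userInput, out value))
{
    value = 0;
}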



Class Design Considerations

Class design choices can affect system performance and scalability. However, you must balance performance guidelines against other tradeoffs, such as functionality, maintainability, and company coding guidelines. This section summarizes guidelines for designing your managed classes:

● Do not make classes thread safe by default.

● Consider using the sealed keyword.

● Consider the tradeoffs of virtual members.

● Consider using overloaded methods.

● Consider overriding the Equals method for value types.

● Know the cost of accessing a property.

● Consider private vs. public member variables.

● Limit the use of volatile fields.

Do Not Make Classes Thread Safe by Default

Consider carefully whether you need to make an individual class thread safe. Thread safety and synchronization are often required at a higher layer in the software architecture and not at the individual class level. When you design a specific class, you often do not know the proper level of atomicity, especially for lower-level classes.

For example, consider a thread safe collection class. The moment the class needs to be atomically updated with something else, such as another class or a count variable, the built-in thread safety is useless. Thread control is needed at a higher level. There are two problems in this situation. Firstly, the overhead from the thread-safety features that the class offers remains even though you do not require those features.

Secondly, the collection class likely had a more complex design in the first place to offer those thread-safety services, which is a price you have to pay whenever you use the class.
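The hypothetical sketch below shows why the control belongs at the higher level: even if the list were internally thread safe, the list and the counter must be updated together as one atomic operation, so the caller still needs its own lock.

using System.Collections.Generic;

class OrderQueue
{
    private readonly List<string> orders = new List<string>();
    private int count;
    private readonly object syncRoot = new object();

    public void Add(string order)
    {
        // Both members must change together; a built-in per-call lock inside the
        // list could not make this pair of updates atomic.
        lock (syncRoot)
        {
            orders.Add(order);
            count++;
        }
    }
}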



In contrast to regular classes, static classes (those with only static methods) should be thread safe by default. Static classes have only global state, and generally offer services to initialize and manage that shared state for a whole process. This requires proper thread safety.



Consider Using the sealed Keyword

You can use the sealed keyword at the class and method level. If you do not want anybody to extend your base classes, you should mark them with the sealed keyword. Before you use the sealed keyword at the class level, you should carefully evaluate your extensibility requirements. If you derive from a base class that has virtual members and you do not want anybody to extend the functionality of the derived class, you can consider sealing the virtual members in the derived class. Sealing the virtual methods makes them candidates for inlining and other compiler optimizations.
Consider the following example.

public class MyClass
{
    protected virtual void SomeMethod() { ... }
}

You can override and seal the method in a derived class.

public class DerivedClass : MyClass
{
    protected override sealed void SomeMethod() { ... }
}


This code ends the chain of virtual overrides and makes DerivedClass.SomeMethod a candidate for inlining.

Consider the Tradeoffs of Virtual Members
Use virtual members to provide extensibility. If you do not need to extend your class design, avoid virtual members because they are more expensive to call due to a virtual table lookup and they defeat certain run-time performance optimizations. For example, virtual members cannot be inlined by the compiler. Additionally, when you allow subtyping, you actually present a very complex contract to consumers and you inevitably end up with versioning problems when you attempt to upgrade your class in the future.

Consider Using Overloaded Methods

Consider providing overloaded methods for the parameter combinations you expect, instead of a single method that takes a variable number of parameters. Overloads give each common combination of parameters its own strongly typed code path, and callers avoid allocating an object array (and boxing value-type arguments) on every call.

// Method taking a variable number of arguments
void GetCustomers(params object[] filterCriteria);

// Overloaded methods
void GetCustomers(int countryId, int regionId);
void GetCustomers(int countryId, int regionId, int customerType);




Consider Overriding the Equals Method for Value Types

You can override the Equals method for value types to improve the performance of equality comparisons. The default Equals method is provided by System.Object. To use that standard implementation, your value type must be boxed and passed as an instance of the reference type System.ValueType. The Equals method then uses reflection to perform the comparison. The overhead of the boxing and reflection can easily be greater than the cost of the actual comparison that needs to be performed. As a result, an Equals method that is specific to your value type can perform the required comparison significantly more cheaply. The following code fragment shows an overridden Equals implementation that improves performance by avoiding boxing and reflection costs.



public struct Rectangle
{
    public double Length;
    public double Breadth;

    public override bool Equals(object ob)
    {
        if (ob is Rectangle)
            return Equals((Rectangle)ob);
        else
            return false;
    }

    private bool Equals(Rectangle rect)
    {
        return this.Length == rect.Length && this.Breadth == rect.Breadth;
    }
}




Know the Cost of Accessing a Property

A property looks like a field, but it is not, and it can have hidden costs. You can expose class-level member variables by using public fields or public properties. The use of properties represents good object-oriented programming practice because it allows you to encapsulate validation and security checks and to ensure that they are executed when the property is accessed, but their field-like appearance can cause them to be misused.

You need to be aware that if you access a property, additional code, such as validation logic, might be executed. This means that accessing a property might be slower than directly accessing a field. However, the additional code is generally there for good reason; for example, to ensure that only valid data is accepted. For simple properties that contain no additional code (other than directly setting or getting a private member variable), there is no performance difference compared to accessing a public field because the compiler can inline the property code. However, things can easily become more complicated; for example, virtual properties cannot be inlined.
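As an illustrative sketch (the class and validation rule are hypothetical), the property below looks like a field to callers, yet every set runs validation code, while the simple get is a candidate for inlining:

public class Customer
{
    private string name;

    public string Name
    {
        // Simple getter with no extra code: the JIT compiler can inline it,
        // making it as cheap as a direct field access.
        get { return name; }

        // The setter hides validation logic; callers pay this cost on every assignment.
        set
        {
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("Name cannot be empty.");
            name = value;
        }
    }
}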

If your object is designed for remote access, you should use methods with multiple parameters instead of requiring the client to set multiple properties or fields. This reduces round trips. It is extremely bad form to use properties to hide complex business rules or other costly operations, because there is a strong expectation by callers that properties are inexpensive. Design your classes accordingly.

Consider Private vs. Public Member Variables

In addition to the usual visibility concerns, you should also avoid unnecessary public members to prevent any additional serialization overhead when you use the XmlSerializer class, which serializes all public members by default.
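For example (a hypothetical type), XmlSerializer serializes every public field and property by default, so unnecessary public members add serialization overhead; private members are skipped, and XmlIgnore can exclude a public member explicitly:

using System.Xml.Serialization;

public class Order
{
    public int Id;                   // public: serialized by XmlSerializer by default

    [XmlIgnore]
    public string CachedDisplayText; // public, but explicitly excluded from serialization

    private string internalNotes;    // private: never serialized by XmlSerializer
}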

Limit the Use of Volatile Fields

Limit the use of the volatile keyword because volatile fields restrict the way the compiler can optimize reads and writes of the field. The compiler generates code that always reads from the field's memory location instead of reusing a value that may already be loaded into a register. This means that accessing volatile fields is slower than accessing nonvolatile fields because the system is forced to use memory addresses rather than registers.
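A minimal sketch of the tradeoff: the volatile flag below is always re-read from memory, so a write from another thread becomes visible to the loop, at the cost of losing register-caching optimizations.

public class Worker
{
    // Every read and write of stopRequested goes to its memory location;
    // the loop below cannot cache the value in a register.
    private volatile bool stopRequested;

    public void RequestStop()
    {
        stopRequested = true;
    }

    public void DoWork()
    {
        while (!stopRequested)
        {
            // ... process work items ...
        }
    }
}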


In the next part I will explain how to maximize garbage collection (GC) efficiency in your application.
