While one approach is to implement the ICloneable interface (described here, so I won't regurgitate), here's a nice deep-clone object copier I found on The Code Project a while ago and incorporated into our code.
As mentioned elsewhere, it requires your objects to be serializable.
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
/// <summary>
/// Reference article: http://www.codeproject.com/KB/tips/SerializedObjectCloner.aspx
/// Provides a method for performing a deep copy of an object.
/// Binary serialization is used to perform the copy.
/// </summary>
public static class ObjectCopier
{
    /// <summary>
    /// Performs a deep copy of the object via serialization.
    /// </summary>
    /// <typeparam name="T">The type of object being copied.</typeparam>
    /// <param name="source">The object instance to copy.</param>
    /// <returns>A deep copy of the object.</returns>
    public static T Clone<T>(T source)
    {
        if (!typeof(T).IsSerializable)
        {
            throw new ArgumentException("The type must be serializable.", nameof(source));
        }

        // Don't serialize a null object; simply return the default for that type
        if (ReferenceEquals(source, null)) return default;

        using var stream = new MemoryStream();
        IFormatter formatter = new BinaryFormatter();
        formatter.Serialize(stream, source);
        stream.Seek(0, SeekOrigin.Begin);
        return (T)formatter.Deserialize(stream);
    }
}
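For example, usage might look like this (a minimal sketch; the Person class is a made-up illustration, marked [Serializable] as the method requires, and System.Collections.Generic is assumed to be imported):

[Serializable]
public class Person
{
    public string Name { get; set; }
    public List<string> Nicknames { get; set; } = new List<string>();
}

// The clone is fully independent of the original
var original = new Person { Name = "Alice", Nicknames = { "Al" } };
var copy = ObjectCopier.Clone(original);
copy.Nicknames.Add("Ally");

Console.WriteLine(original.Nicknames.Count); // 1 - unaffected by the change to the copy
Console.WriteLine(copy.Nicknames.Count);     // 2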
The idea is that it serializes your object and then deserializes it into a fresh object. The benefit is that you don't have to worry about cloning every member yourself when an object gets complex.
If you prefer to use the extension methods of C# 3.0, change the method to have the following signature:
public static T Clone<T>(this T source)
{
// ...
}
Now the method call simply becomes objectBeingCloned.Clone();
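Note that an extension method must be declared in a non-nested, non-generic static class. A minimal sketch of the wrapper (the class name ObjectExtensions is my own invention):

public static class ObjectExtensions
{
    /// <summary>
    /// Extension-method version of the deep clone; the body is identical to ObjectCopier.Clone above.
    /// </summary>
    public static T Clone<T>(this T source)
    {
        // ... same serialization body as ObjectCopier.Clone ...
    }
}

// Usage: callable directly on the instance
var copy = objectBeingCloned.Clone();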
EDIT (January 10 2015): Thought I'd revisit this, to mention I recently started using (Newtonsoft) Json to do this; it should be lighter, and it avoids the overhead of [Serializable] tags. (NB: @atconway has pointed out in the comments that private members are not cloned using the JSON method.)
using Newtonsoft.Json;

/// <summary>
/// Performs a deep copy of the object, using Json as a serialization method.
/// NOTE: Private members are not cloned using this method.
/// </summary>
/// <typeparam name="T">The type of object being copied.</typeparam>
/// <param name="source">The object instance to copy.</param>
/// <returns>The copied object.</returns>
public static T CloneJson<T>(this T source)
{
    // Don't serialize a null object; simply return the default for that type
    if (ReferenceEquals(source, null)) return default;

    // Initialize inner objects individually.
    // For example, if a list property is seeded with values in the default constructor,
    // but those items have been cleared in 'source', then without
    // ObjectCreationHandling.Replace the constructor's default values would be added
    // to the result on top of the deserialized items.
    var deserializeSettings = new JsonSerializerSettings { ObjectCreationHandling = ObjectCreationHandling.Replace };
    return JsonConvert.DeserializeObject<T>(JsonConvert.SerializeObject(source), deserializeSettings);
}
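To see why ObjectCreationHandling.Replace matters, here's a small sketch of the list-seeding scenario described in the comment above (the Settings class is made up):

public class Settings
{
    // The default constructor seeds the list with one entry
    public List<string> Tags { get; set; } = new List<string> { "default" };
}

var source = new Settings();
source.Tags.Clear();
source.Tags.Add("custom");

// With ObjectCreationHandling.Replace the clone contains only "custom".
// Without it, Json.NET would reuse the constructor-seeded list ("default")
// and append the deserialized items, yielding "default", "custom".
var clone = source.CloneJson();
Console.WriteLine(string.Join(", ", clone.Tags)); // custom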
Best Solution
Having read that exceptions are costly in terms of performance, I threw together a simple measurement program, very similar to the one Jon Skeet published years ago. I mention this here mainly to provide updated numbers.
It took the program below 29914 milliseconds to process one million exceptions, which amounts to about 33 exceptions per millisecond. That is fast enough to make exceptions a viable alternative to return codes for most situations.
Please note, though, that with return codes instead of exceptions the same program runs in less than one millisecond, which means exceptions are at least 30,000 times slower than return codes. As stressed by Rico Mariani, these numbers are also minimum numbers; in practice, throwing and catching an exception will take more time.
Measured on a laptop with an Intel Core2 Duo T8100 @ 2.1 GHz, running .NET 4.0 in a release build, not under the debugger (which would make it way slower).
This is my test code: