Windows Systems Programming: Spring 2004



Threading

Why threads

Starting a thread is very easy. The following example starts 10 threads; each thread runs for 5 seconds and then stops:

using System;
using System.Threading;

class MyApp
{
    static void Main ()
    {
        for (int i=0; i<10; i++) {
            Thread thread = new Thread (new ThreadStart (ThreadFunc));
            thread.Start ();
        }        
    }

    static void ThreadFunc ()
    {
        DateTime start = DateTime.Now;
        // Busy-wait until roughly 5 seconds have elapsed
        while ((DateTime.Now - start).Seconds < 5)
            ;
    }
}
With a small change (marking each thread as a background thread), the threads stop automatically when the main thread finishes:
using System;
using System.Threading;

class MyApp
{
    static void Main ()
    {
        for (int i=0; i<10; i++) {
            Thread thread = new Thread (new ThreadStart (ThreadFunc));
            thread.IsBackground = true;   // background threads don't keep the process alive
            thread.Start ();
        }        
    }

    static void ThreadFunc ()
    {
        DateTime start = DateTime.Now;
        while ((DateTime.Now - start).Seconds < 5)
            ;
    }
}

Timer Threads

The System.Threading namespace’s Timer class lets you use timer threads: threads that call a specified method at specified intervals. To demonstrate, the console application in the following listing uses a timer thread to alternately write “Tick” and “Tock” to the console window at 1-second intervals:

using System;
using System.Threading;

class MyApp
{
    static bool TickNext = true;

    static void Main ()
    {
        Console.WriteLine ("Press Enter to terminate...");
        TimerCallback callback = new TimerCallback (TickTock);
        Timer timer = new Timer (callback, null, 1000, 1000);
        Console.ReadLine ();
    }

    static void TickTock (object state)
    {
        Console.WriteLine (TickNext ? "Tick" : "Tock");
        TickNext = ! TickNext;
    }
}
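
A Timer can also be reprogrammed or stopped after it is created. A short sketch (these lines would go inside Main, using the timer variable from the listing above) that switches the timer to 2-second intervals and then shuts it down:

// Fire immediately, then every 2 seconds from now on
timer.Change (0, 2000);

// Stop the callbacks and release the timer when it is no longer needed
timer.Dispose ();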

Thread Synchronization

Even a simple statement like "i++" on a shared variable "i" can cause synchronization problems.

Why? On many architectures "i++" is implemented as three separate instructions:

	load i, reg1
	add 1, reg1
	store reg1, i

This sequence can be interrupted, while the value of "i" is sitting in a register, by another thread that also increments "i". One of the increments is then lost, so "i" ends up too low. One fix is the Interlocked class, described next.
The Interlocked Class

The simplest way to synchronize threads is to use the System.Threading.Interlocked class. Interlocked has four static methods that you can use to perform simple operations on 32-bit and 64-bit values and do so in a thread-safe manner:

Method             Purpose
Increment          Increments a 32-bit or 64-bit value
Decrement          Decrements a 32-bit or 64-bit value
Exchange           Exchanges two 32-bit or 64-bit values
CompareExchange    Compares two 32-bit or 64-bit values and replaces one with a third if the two are equal

The following example increments a 32-bit integer named count in a thread-safe manner:

Interlocked.Increment (ref count);

The next example decrements the same integer:

Interlocked.Decrement (ref count);

 

Routing all accesses to a given variable through the Interlocked class ensures that two threads can’t touch the variable at the same time, even if the threads are running on different CPUs.
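
As a quick sketch (a hypothetical InterlockedDemo program, not part of the course examples), the following program contrasts an ordinary ++ with Interlocked.Increment. The unsynchronized counter can lose increments when the threads collide, while the Interlocked counter always reaches 1,000,000:

using System;
using System.Threading;

class InterlockedDemo
{
    static int unsafeCount = 0;   // incremented with ++
    static int safeCount = 0;     // incremented with Interlocked.Increment

    static void Main ()
    {
        Thread[] threads = new Thread[10];

        for (int i=0; i<10; i++) {
            threads[i] = new Thread (new ThreadStart (ThreadFunc));
            threads[i].Start ();
        }

        foreach (Thread t in threads)
            t.Join ();

        Console.WriteLine ("Unsynchronized count = {0}", unsafeCount);
        Console.WriteLine ("Interlocked count    = {0}", safeCount);
    }

    static void ThreadFunc ()
    {
        for (int i=0; i<100000; i++) {
            unsafeCount++;                          // load, add, store: not atomic
            Interlocked.Increment (ref safeCount);  // atomic increment
        }
    }
}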


Demonstrating lack of synchronization

This program sums a fixed set of numbers in 10 reader threads.

Meanwhile, a writer thread randomly swaps pairs of values in the set. The sum should always be the same (5050), but a reader that scans the buffer while a swap is in progress can compute the wrong answer.

 
using System;
using System.Threading;

class MyApp
{
    static Random rng = new Random ();
    static byte[] buffer = new byte[100];
    static Thread writer;

    static void Main ()
    {
        // Initialize the buffer
        for (int i=0; i<100; i++)
            buffer[i] = (byte) (i + 1);

        // Start one writer thread
        writer = new Thread (new ThreadStart (WriterFunc));
        writer.Start ();

        // Start 10 reader threads
        Thread[] readers = new Thread[10];

        for (int i=0; i<10; i++) {
            readers[i] = new Thread (new ThreadStart (ReaderFunc));
            readers[i].Name = (i + 1).ToString ();
            readers[i].Start ();
        }
    }

    static void ReaderFunc ()
    {
        // Loop until the writer thread ends
        for (int i=0; writer.IsAlive; i++) {
            int sum = 0;

            // Sum the values in the buffer
            for (int k=0; k<100; k++) // should lock out the writer
                sum += buffer[k];

            // Report an error if the sum is incorrect
            if (sum != 5050) {
                string message = String.Format ("Thread {0} " +
                    "reports a corrupted read on iteration {1}",
                     Thread.CurrentThread.Name, i + 1);
                Console.WriteLine (message);
                writer.Abort ();
                return;
            }
        }
    }

    static void WriterFunc ()
    {
        DateTime start = DateTime.Now;

        // Loop for up to 10 seconds
        while ((DateTime.Now - start).Seconds < 10) {
            int j = rng.Next (0, 100);
            int k = rng.Next (0, 100);
            Swap (ref buffer[j], ref buffer[k]); // swaps random values
        }
    }

    static void Swap (ref byte a, ref byte b)
    {
        byte tmp = a;
        a = b;
        b = tmp;
    }
}
To solve the problem, wrap the summation code in a monitor:
            // Sum the values in the buffer
            Monitor.Enter (buffer);

            try {
                for (int k=0; k<100; k++)
                    sum += buffer[k];
            }
            finally {
                Monitor.Exit (buffer);   // make sure the lock is released no matter what
            }
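
For the lock to do any good, the writer must acquire the same monitor before touching the buffer. A sketch of the corresponding change in WriterFunc (locking on the same buffer object):

            // Swap two values in the buffer
            Monitor.Enter (buffer);

            try {
                Swap (ref buffer[j], ref buffer[k]);
            }
            finally {
                Monitor.Exit (buffer);   // release the lock even if Swap throws
            }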

The C# lock Keyword

The previous section shows one way to use monitors, but there’s another way, too: C#’s lock keyword (in Visual Basic .NET, SyncLock). In C#, the statements

lock (buffer) {
  ...
}

are functionally equivalent to

Monitor.Enter (buffer);
try {
  ...
}
finally {
    Monitor.Exit (buffer);
}
Thus the summation loop can be written more compactly as:

        lock (buffer) {
            for (int k=0; k<100; k++)
                sum += buffer[k];
        }

 

Reader/Writer Locks

Reader/writer locks are similar to monitors in that they prevent concurrent threads from accessing a resource simultaneously. The difference is that reader/writer locks are a little smarter: they permit multiple threads to read concurrently, but they prevent overlapping reads and writes as well as overlapping writes. For situations in which reader threads outnumber writer threads, reader/writer locks frequently offer better performance than monitors. No harm can come, after all, from allowing several threads to read the same location in memory at the same time.
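
Below are the reader and writer functions from the earlier example, rewritten to use System.Threading's ReaderWriterLock class. The listing assumes the program also declares a shared lock as a static field, for example:

static ReaderWriterLock rwlock = new ReaderWriterLock ();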

 

    static void ReaderFunc ()
    {
        // Loop until the writer thread ends
        for (int i=0; writer.IsAlive; i++) {
            int sum = 0;

            // Sum the values in the buffer
            rwlock.AcquireReaderLock (Timeout.Infinite);

            try {
                for (int k=0; k<100; k++)
                    sum += buffer[k];
            }
            finally {
                rwlock.ReleaseReaderLock ();
            }

            // Report an error if the sum is incorrect
            if (sum != 5050) {
                string message = String.Format ("Thread {0} " +
                    "reports a corrupted read on iteration {1}",
                     Thread.CurrentThread.Name, i + 1);
                Console.WriteLine (message);
                writer.Abort ();
                return;
            }
        }
    }

    static void WriterFunc ()
    {
        DateTime start = DateTime.Now;

        // Loop for up to 10 seconds
        while ((DateTime.Now - start).Seconds < 10) {
            int j = rng.Next (0, 100);
            int k = rng.Next (0, 100);

            rwlock.AcquireWriterLock (Timeout.Infinite);

            try {
                Swap (ref buffer[j], ref buffer[k]);
            }
            finally {
                rwlock.ReleaseWriterLock ();   // always release the writer lock
            }
        }
    }


Events

Event objects let one thread signal another. In the following example, two threads cooperate to write the numbers 0 through 99 to the console in order: the first thread writes the even numbers, the second writes the odd ones, and each uses an AutoResetEvent to release the other after writing:

using System;
using System.Threading;

class MyApp
{
    static AutoResetEvent are1 = new AutoResetEvent (false);
    static AutoResetEvent are2 = new AutoResetEvent (false);

    static void Main ()
    {
        try {
            // Create two threads
            Thread thread1 =
                new Thread (new ThreadStart (ThreadFuncEven));
            Thread thread2 =
                new Thread (new ThreadStart (ThreadFuncOdd));

            // Start the threads
            thread1.Start ();
            thread2.Start ();

            // Wait for the threads to end
            thread1.Join ();
            thread2.Join ();
        }
        finally {
            // Close the events
            are1.Close ();
            are2.Close ();
        }
    }

    static void ThreadFuncEven ()
    {
        for (int i=0; i<100; i+=2) {
            Console.WriteLine (i);  // Output the next even number
            are1.Set ();            // Release the other thread
            are2.WaitOne ();        // Wait for the other thread
        }
    }

    static void ThreadFuncOdd ()
    {
        for (int i=1; i<101; i+=2) {
            are1.WaitOne ();        // Wait for the other thread
            Console.WriteLine (i);  // Output the next odd number
            are2.Set ();            // Release the other thread
        }
    }
}

 


Waiting on Multiple Synchronization Objects

Occasionally a thread needs to block on multiple synchronization objects. It might, for example, need to block on two or more AutoResetEvents and come alive when any of the events becomes set to perform some action on behalf of the thread that did the setting. Or it could block on several AutoResetEvents and want to remain blocked until all of the events become set. Both goals can be accomplished using WaitHandle methods named WaitAny and WaitAll.

WaitHandle is a System.Threading class that serves as a managed wrapper around Windows synchronization objects. Mutex, AutoResetEvent, and ManualResetEvent all derive from it. When you call WaitOne on an event or a mutex, you’re calling a method inherited from WaitHandle. WaitAny and WaitAll are static methods that enable a thread to block on several (on most platforms, up to 64) mutexes and events at once. They expose the same functionality to managed applications that the Windows API function WaitForMultipleObjects exposes to unmanaged applications. In the following example, the calling thread blocks until one of the three AutoResetEvent objects in the syncobjects array becomes set:

AutoResetEvent are1 = new AutoResetEvent (false);
AutoResetEvent are2 = new AutoResetEvent (false);
AutoResetEvent are3 = new AutoResetEvent (false);
  .
  .
  .
WaitHandle[] syncobjects = new WaitHandle[3] { are1, are2, are3 };
WaitHandle.WaitAny (syncobjects);
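
WaitAny returns the index of the handle that satisfied the wait, so the caller can tell which of the events was set:

int index = WaitHandle.WaitAny (syncobjects);
Console.WriteLine ("Event {0} was set", index + 1);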

Changing WaitAny to WaitAll blocks the calling thread until all of the AutoResetEvents are set:

WaitHandle.WaitAll (syncobjects);

Collections can be synchronized, too. The ArrayList class, for example, provides a static Synchronized method that returns a thread-safe wrapper around an existing list:

// Create the ArrayList and a thread-safe wrapper for it
ArrayList list = new ArrayList ();
ArrayList safelist = ArrayList.Synchronized (list);


Thread Synchronization via the MethodImpl Attribute

The .NET Framework offers a simple and easy-to-use means for synchronizing access to entire methods through the MethodImplAttribute class, which belongs to the System.Runtime.CompilerServices namespace. To prevent a method from being executed by more than one thread at a time, decorate it as shown here:

[MethodImpl (MethodImplOptions.Synchronized)]
byte[] TransformData (byte[] buffer)
{
  ...
}

Now the framework will serialize calls to TransformData. A method synchronized in this manner closely approximates the classic definition of a critical section—a section of code that can’t be executed simultaneously by concurrent threads.
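
As a minimal, self-contained sketch (a hypothetical Counter class, not taken from the text above), the following program relies on the attribute to serialize calls to Add. Because only one thread at a time can execute Add on the counter instance, the final count is always 1000:

using System;
using System.Runtime.CompilerServices;
using System.Threading;

class Counter
{
    public int Count = 0;

    // The runtime takes a lock on the Counter instance for the duration of
    // the call, so only one thread at a time can execute Add on this object.
    [MethodImpl (MethodImplOptions.Synchronized)]
    public void Add ()
    {
        int tmp = Count;
        Thread.Sleep (0);    // yield, to encourage a context switch between read and write
        Count = tmp + 1;
    }
}

class MethodImplDemo
{
    static Counter counter = new Counter ();

    static void Main ()
    {
        Thread[] threads = new Thread[10];

        for (int i=0; i<10; i++) {
            threads[i] = new Thread (new ThreadStart (AddLots));
            threads[i].Start ();
        }

        foreach (Thread t in threads)
            t.Join ();

        Console.WriteLine (counter.Count);   // always 1000
    }

    static void AddLots ()
    {
        for (int i=0; i<100; i++)
            counter.Add ();
    }
}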

 


Copyright chris wild 1999-2004.
For problems or questions regarding this web site, contact [Dr. Wild].
Last updated: April 07, 2004.