Dan Byström’s Bwain

Blog without an interesting name

Optimizing away II.3

Posted by Dan Byström on January 1, 2009

Oh, the pain, the pain and the embarrassment…

I just came to realize that although a “long” in C# is 64 bits, in C++ it is still 32 bits. In order to get a 64 bit value in MSVC++ you must type either “long long” or “__int64”. I didn’t know that. 😦

This means that although the assembler function I just presented correctly calculates a 64 bit value, it will be truncated to 32 bits because the surrounding C++ function is declared as a long.

This in turn means that for bitmaps larger than 138 x 138 pixels, the correct result cannot be guaranteed. (With 64 bit values, the bitmap can instead be 9724315 x 9724315 pixels in size before an overflow can occur.)

Unfortunately, although I had unit tests to verify the correctness of the function, I only tested with small bitmaps.

I have uploaded a new version. Ekeforshus

Posted in .NET, Programming | 3 Comments »

Optimizing away II.2

Posted by Dan Byström on December 30, 2008

I was asked to upload the source and binary for my last post.

Posted in .NET, Programming | 2 Comments »

Optimizing away II

Posted by Dan Byström on December 22, 2008

Continued from Optimizing away. Ok, now I have worked up the courage.

Prepare yourself for a major disappointment. I really do not know how to tweak that C#-loop to run a nanosecond faster. But I can do the same calculation much faster. How? Just my old favorite party trick. It goes like this:

1. Add a new project to your solution

2. Choose Visual C++ / CLR / Class Library

3. Insert the following managed class:

	public ref class FastImageCompare
	{
	public:
		static double compare( void* p1, void* p2, int count )
		{
			return NativeCode::fastImageCompare( p1, p2, count );
		}
		static double compare( IntPtr p1, IntPtr p2, int count )
		{
			return NativeCode::fastImageCompare( p1.ToPointer(), p2.ToPointer(), count );
		}
	};

4. Insert the following function into an unmanaged class (which I happened to call NativeCode):

unsigned long long NativeCode::fastImageCompare( void* p1, void* p2, int count )
{
	int high32 = 0;

	_asm
	{
		push	ebx
		push	esi
		push	edi

		mov		esi, p1
		mov		edi, p2
		xor		eax, eax
again:
		dec		count
		js		done

		movzx	ebx, byte ptr [esi]
		movzx	edx, byte ptr [edi]
		sub		edx, ebx
		imul	edx, edx

		movzx	ebx, byte ptr [esi+1]
		movzx	ecx, byte ptr [edi+1]
		sub		ebx, ecx
		imul	ebx, ebx
		add		edx, ebx

		movzx	ebx, byte ptr [esi+2]
		movzx	ecx, byte ptr [edi+2]
		sub		ebx, ecx
		imul	ebx, ebx
		add		edx, ebx

		add		esi, 4
		add		edi, 4

		add		eax, edx
		jnc		again

		inc		high32
		jmp		again
done:
		mov		edx, high32

		pop		edi
		pop		esi
		pop		ebx
	}

	// No explicit return statement: the 64-bit result is already in
	// EDX:EAX, which is the return convention for unsigned long long.
}

Yeah. That’s it. Hand-tuned assembly language within a .NET assembly. UPDATE 2009-01-01: the return type of the function changed from “unsigned long” to “unsigned long long”, see here.

I guess that’s almost cheating. And we will be locked to the Intel platform. Most people won’t mind, I guess, but others may have very strong feelings about it. If we really want to exploit this kind of optimization while still being portable (to Mono/Mac, for example), one possibility would be to load the assembly containing native code dynamically. If that fails, we could fall back to an alternative version written in pure managed code.

(I know from experience that some people with lesser programming skills react to this with a “what? it must be a crappy compiler if you can write faster code by yourself”. Let me assure you that this is not the case. On the contrary: I’m amazed about the quality of the code emitted by the C# + .NET JIT compilers.)

Posted in .NET, Programming | 13 Comments »

Optimizing away

Posted by Dan Byström on December 16, 2008

Follow-up on Improving performance…  and Genetic Programming: Evolution of Mona Lisa.
I just tested that I can optimize this loop:

unchecked
{
    unsafe
    {
        fixed ( Pixel* psourcePixels = sourcePixels )
        {
            Pixel* p1 = (Pixel*)bd.Scan0.ToPointer();
            Pixel* p2 = psourcePixels;
            for ( int i = sourcePixels.Length ; i > 0 ; i--, p1++, p2++ )
            {
                int r = p1->R - p2->R;
                int g = p1->G - p2->G;
                int b = p1->B - p2->B;
                error += r * r + g * g + b * b;
            }
        }
    }
}

so that it runs as much as 60% faster. I don’t dare tell you how just yet, however.

EDIT: Continued here

Posted in .NET, Programming | 4 Comments »

Besserwisser post on EvoLisa

Posted by Dan Byström on December 14, 2008

As you are probably all aware by now, Roger Alsing had this really, really cool idea earlier this week.

In short: would it be possible to construct a vector version of some image by overlaying just a few semi-transparent polygons? And if so, how do you figure out what these polygons should look like?

I can’t figure out how he got this idea. If someone had asked me whether it could be done, I’m afraid I’d just have answered “Heck, no. No way. No use even trying.” Therefore: a brilliant idea, I must say!

Some people saw it as a proof of evolution. Some people saw it as proof of creationism. Some people saw it as an image compression algorithm. Some people saw it as a cool but useless toy. I guess some other people didn’t know what to think.

I guess I saw it as a powerful demonstration of a technique to use when you really have no idea how to attack a really complicated problem. I wonder if anyone knows how to construct an algorithm that directly converges towards the desired image without using randomness? The word “tricky” really sounds like an understatement!

Out of curiosity, I looked at the fitness function. That is, the piece of code that tries to compare how similar/different two images are. Then I noticed that since Roger apparently had been under much pressure to reveal his quick and dirty hack to the public, he hadn’t had the time to optimize that code. So I couldn’t resist doing just that.

And behold: the performance increase was a whopping 25 times! Yeah, a 25-times speed increase is like going from cruising at 50 km/h to breaking the sound barrier. Pretty cool, eh?

So, here is how I did just that.

Posted in Programming | 1 Comment »

Improving performance…

Posted by Dan Byström on December 14, 2008

…of the fitness function in the EvoLisa project. In case you managed to miss it, EvoLisa is an already world famous project created by Roger Alsing earlier this week.

Continued from this post.

With just a few changes, we can actually make the original fitness function run 25 times faster. I’ll start by presenting the original code and then directly the improved version. After that I’ll discuss my reasoning behind each one of the changes.

Original code:

	public static class FitnessCalculator
	{
		public static double GetDrawingFitness( DnaDrawing newDrawing, Color[,] sourceColors )
		{
			double error = 0;

			using ( var b = new Bitmap( Tools.MaxWidth, Tools.MaxHeight, PixelFormat.Format24bppRgb ) )
			using ( Graphics g = Graphics.FromImage( b ) )
			{
				Renderer.Render(newDrawing, g, 1);

				BitmapData bmd1 = b.LockBits(
					new Rectangle( 0, 0, Tools.MaxWidth, Tools.MaxHeight ),
					ImageLockMode.ReadOnly,
					PixelFormat.Format24bppRgb );

				for ( int y = 0 ; y < Tools.MaxHeight ; y++ )
				{
					for ( int x = 0 ; x < Tools.MaxWidth ; x++ )
					{
						Color c1 = GetPixel( bmd1, x, y );
						Color c2 = sourceColors[x, y];

						double pixelError = GetColorFitness( c1, c2 );
						error += pixelError;
					}
				}

				b.UnlockBits( bmd1 );
			}

			return error;
		}

		private static unsafe Color GetPixel( BitmapData bmd, int x, int y )
		{
			byte* p = (byte*)bmd.Scan0 + y * bmd.Stride + 3 * x;
			return Color.FromArgb( p[2], p[1], p[0] );
		}

		private static double GetColorFitness( Color c1, Color c2 )
		{
			double r = c1.R - c2.R;
			double g = c1.G - c2.G;
			double b = c1.B - c2.B;

			return r * r + g * g + b * b;
		}

	}

Optimized code, 25 times as fast:

	public struct Pixel
	{
		public byte B;
		public byte G;
		public byte R;
		public byte A;
	}

	public class NewFitnessCalculator : IDisposable
	{
		private Bitmap _bmp;
		private Graphics _g;

		public NewFitnessCalculator()
		{
			_bmp = new Bitmap( Tools.MaxWidth, Tools.MaxHeight );
			_g = Graphics.FromImage( _bmp );
		}

		public void Dispose()
		{
			_g.Dispose();
			_bmp.Dispose();
		}

		public double GetDrawingFitness( DnaDrawing newDrawing, Pixel[] sourcePixels )
		{
			double error = 0;

			Renderer.Render( newDrawing, _g, 1 );

			BitmapData bd = _bmp.LockBits(
				new Rectangle( 0, 0, Tools.MaxWidth, Tools.MaxHeight ),
				ImageLockMode.ReadOnly,
				PixelFormat.Format32bppArgb );

			unchecked
			{
				unsafe
				{
					fixed ( Pixel* psourcePixels = sourcePixels )
					{
						Pixel* p1 = (Pixel*)bd.Scan0.ToPointer();
						Pixel* p2 = psourcePixels;
						for ( int i = sourcePixels.Length ; i > 0 ; i--, p1++, p2++ )
						{
							int r = p1->R - p2->R;
							int g = p1->G - p2->G;
							int b = p1->B - p2->B;
							error += r * r + g * g + b * b;
						}
					}
				}
			}
			_bmp.UnlockBits( bd );

			return error;
		}

	}

First of all we notice that each time the fitness function is called, a new bitmap is constructed, used and then destroyed. This is fine for a function that seldom gets called. But for a function that is repeatedly called, we’ll be far better off if we reuse the same Bitmap and Graphics objects over and over.

Therefore I have changed the class from being static into one that must be instantiated. Of course, that requires some minor changes to the consumer of this class, but in my opinion this will only be for the better. Although convenient, static methods (and/or singletons) are very hostile to unit testing and mocking, so I’m trying to move away from them anyway.

To my surprise, this first optimization attempt only buys us a few percent of performance increase. Anyway, it’s a start, and it will get better. Read on.

So, once we’ve added a constructor to create the bitmap and graphics objects once and for all (as well as making the class disposable so that the two GDI+ objects can be disposed), we move on to the real performance issues:

	for ( int y = 0 ; y < Tools.MaxHeight ; y++ )
	{
		for ( int x = 0 ; x < Tools.MaxWidth ; x++ )
		{
			Color c1 = GetPixel( bmd1, x, y );
			Color c2 = sourceColors[x, y];

			double pixelError = GetColorFitness( c1, c2 );
			error += pixelError;
		}
	}

This code looks pretty innocent, eh? It is not.

Even for a moderately sized bitmap, say 1,000 by 1,000 pixels, the code in the inner loop is executed 1,000,000 times. That’s a pretty big number. It means that each tiny little performance “error” is multiplied by 1,000,000, so every tiny thing will count in the end.

So, for example, each method call consumes time compared to having the method’s code inlined in the loop. Above we find two method calls, GetPixel and GetColorFitness, whose code is far better off moved inside the loop. But as I will end up explaining, the worst performance hog here is really the innocent-looking line “Color c2 = sourceColors[x, y];”. Anyway, off we go:

			unchecked
			{
				unsafe
				{
					for ( int y = 0 ; y < Tools.MaxHeight ; y++ )
					{
						for ( int x = 0 ; x < Tools.MaxWidth ; x++ )
						{
							byte* p = (byte*)bmd1.Scan0 + y * bmd1.Stride + 3 * x;
							Color c1 = Color.FromArgb( p[2], p[1], p[0] );
							Color c2 = sourceColors[x, y];

							int R = c1.R - c2.R;
							int G = c1.G - c2.G;
							int B = c1.B - c2.B;

							error += R * R + G * G + B * B;
						}
					}
				}
			}

The above changes, including changing the variables R, G & B from double into int, will buy us approximately a 30% speed increase. It ain't much compared to 25 times, but still we're moving on. Then we can look at “Color c1” and notice that we can get rid of it completely by simply changing the inner code like so:

			byte* p = (byte*)bmd1.Scan0 + y * bmd1.Stride + 3 * x;
			Color c2 = sourceColors[x, y];

			int R = p[2] - c2.R;
			int G = p[1] - c2.G;
			int B = p[0] - c2.B;

			error += R * R + G * G + B * B;

Now we actually have code that executes TWICE as fast as our original code. Next we must turn our attention to the first two lines; the rest I don't think we can do much about.

Think about it. What we want to do is loop over each and every pixel in the image. Why then do we need to calculate the memory address of each pixel when all we want is to move on to the next pixel? For each pixel we do completely unnecessary calculations. First “(byte*)bmd1.Scan0 + y * bmd1.Stride + 3 * x”: this involves four variables, two additions and two multiplications, when really a single increment is all we need.

Then “sourceColors[x, y]”. Fast enough and nothing we can improve here, right? No, no, no, this is far WORSE! It looks completely harmless, but not only does a similar formula to the previous one take place behind the scenes; for each pixel, the x and y parameters are also bounds checked, ensuring that we do not pass illegal values to the array lookup!!!

So this innocent-looking expression will cause something like this to happen somewhere around a million times for each fitness calculation:


			// pseudo-code
			if ( x < sourceColors.GetLowerBound( 0 ) || y < sourceColors.GetLowerBound( 1 ) || x > sourceColors.GetUpperBound( 0 ) || y > sourceColors.GetUpperBound( 1 ) )
				throw new IndexOutOfRangeException( "(Index was outside the bounds of the array." );
			Color c2 = *( &sourceColors + x * ( sourceColors.GetUpperBound( 1 ) + 1 ) + y );

Now we’re in for a little heavier refactoring. Unfortunately the sourceColors matrix is laid out column-by-row instead of row-by-column, which would have been better, so in order to solve this issue I’ll change it into a vector of type “Pixel” instead. This requires a change to the method signature and to the construction of the matrix/vector itself of course, but once in place:

			unchecked
			{
				unsafe
				{
					fixed ( Pixel* psourceColors = sourceColors )
					{
						Pixel* pc = psourceColors;
						for ( int y = 0 ; y < Tools.MaxHeight ; y++ )
						{
							byte* p = (byte*)bmd1.Scan0 + y * bmd1.Stride;
							for ( int x = 0 ; x < Tools.MaxWidth ; x++, p += 3, pc++ )
							{
								int R = p[2] - pc->R;
								int G = p[1] - pc->G;
								int B = p[0] - pc->B;

								error += R * R + G * G + B * B;
							}
						}
					}
				}
			}

we’re actually in for a performance improvement of 15 times!!!

Yeah, that’s actually how bad (performance-wise) the innocent looking line “Color c2 = sourceColors[x, y];” was. Bet some of you didn’t know that!!! 🙂

In order to change sourceColors from a matrix of Color into a vector of Pixel (declared as in the second code window above) I did this:

		public static Pixel[] SetupSourceColorMatrix( Bitmap sourceImage )
		{
			if ( sourceImage == null )
				throw new NotSupportedException( "A source image of Bitmap format must be provided" );

			BitmapData bd = sourceImage.LockBits(
				new Rectangle( 0, 0, Tools.MaxWidth, Tools.MaxHeight ),
				ImageLockMode.ReadOnly,
				PixelFormat.Format32bppArgb );
			Pixel[] sourcePixels = new Pixel[Tools.MaxWidth * Tools.MaxHeight];
			unsafe
			{
				fixed ( Pixel* psourcePixels = sourcePixels )
				{
					Pixel* pSrc = (Pixel*)bd.Scan0.ToPointer();
					Pixel* pDst = psourcePixels;
					for ( int i = sourcePixels.Length ; i > 0 ; i-- )
						*( pDst++ ) = *( pSrc++ );
				}
			}
			sourceImage.UnlockBits( bd );

			return sourcePixels;
		}

Probably a little overkill… but what the heck… Now I guess many people who are familiar with LockBits and direct pixel manipulation will cry out HEY YOU CAN’T DO THAT! YOU MUST TAKE THE “STRIDE” INTO ACCOUNT WHEN YOU MOVE TO A NEW SCAN LINE.

Well, yes… and no. Not when I use the PixelFormat.Format32bppArgb! Go figure! 🙂

So our new changes mean that we process each ROW in the bitmap blindingly fast compared to the original version, and combined with our caching of the bitmap we have gained a performance boost of 20 times!

Now for my final version I have rendered the drawing in PixelFormat.Format32bppArgb, which is the default format for bitmaps in GDI+. In that format each pixel will be exactly four bytes in size, which in turn means that GDI+ places no “gap” between the last pixel in one row and the first pixel in the next row and so we are actually able to treat the whole image as a single vector, processing it all in one go.

To conclude: in C# we can use unsafe code and pointer arithmetic to access an array far faster than with the normal indexer, because we short-circuit the bounds checking. If we iterate over several consecutive elements in the array, our gain is even larger, because we just increment a pointer instead of recalculating the memory address of the array cells over and over.

Normally we don’t want to bother with this, because the benefit of safe, managed code outweighs the performance gain. But when it comes to image processing, the trade-off may not be as clear.

BTW, I wrote a post on a similar topic a few months ago: Soft Edged Images in GDI+.

EDIT: Continued here

Posted in .NET, GDI+, Programming | 15 Comments »

Better late than never

Posted by Dan Byström on September 15, 2008

Now this was nice – and unexpected:
http://www.telegraph.co.uk/news/newstopics/religion/2910447/Charles-Darwin-to-receive-apology-from-the-Church-of-England-for-rejecting-evolution.html

Darwin: 1 – Creationism: 0

BTW, isn’t it odd that a thing called “intelligent design” tends to be the very opposite? 🙂

Posted in Uncategorized | Leave a Comment »

Soft edged images in GDI+

Posted by Dan Byström on August 24, 2008

Last week I felt a sudden urge to create bitmaps with rounded corners and soft edges. Partly because I thought it would look nice, but mostly just as an intellectual exercise. (And also, according to Joel Spolsky, much of the success behind the iPod and iPhone can be attributed to their rounded corner design. That’s the best explanation I have come across, anyway!!! So, rounded corners sell! 🙂 )

Here are some sample pictures:

  1. Ordinary Graphics.DrawImage
  2. We can use a ColorMatrix to draw a semi-transparent image. However, I can’t figure out a way to use this technique to achieve the result I’m after.
  3. This is what I’m trying to achieve. What shall I call it? Smooth edges? Soft edges? Fluffy edges?
  4. In order to achieve 3) I’m about to inject this mask into the original image’s alpha channel.
  5. Although not discussed anymore in the article, we can use Graphics.SetClip to draw an image with round corners, but note the jaggedness at the corners. There may be some way to create anti-aliased clipping regions. If so, please drop a comment and tell me!
  6. Here I have used the technique which I’m discussing here (which produced 3) but without the “fluff” (a PathGradientBrush to be exact). Notice the smooth corners.

After spending way too much time figuring out a “pure” way to do this in GDI+, I gave up and decided to use LockBits to directly manipulate the pixels in the image. This is actually very straightforward and easy, but I can’t help feeling that there ought to be another way. If you know one, please drop a comment here. Using LockBits will result in an IntPtr pointing to the actual bits, leaving us with some different ways to access them:

  1. System.Runtime.InteropServices.Marshal.Copy. We can copy the pixels into a byte array, manipulate them and then copy them back. Unless we’re batch processing large amounts of 16-megapixel images, I guess we won’t notice much performance degradation, but it feels a little backward. If we’re going for a pure VB.NET solution, this is our only option I guess.
  2. The unsafe keyword in C#. This is just perfect, if it wasn’t for my previous traumatic experience with how .NET restricts unsafe assemblies. Anyway, this is what I’ll use in the sample code below.
  3. C++ with Managed Extensions. I suspect that most .NET programmers feel uneasy with this, but I think this is a much underestimated option. I’m not kidding you when I say that it actually takes less than ten minutes to add a new C++ project to your Visual Studio solution and write a C++ version of the unsafe code that I’m about to use. A few years ago I wrote a short tutorial on this.

The strategy I decided to go for is simple:

  1. Create a mask like the one in picture 4). This is done by the methods createRoundRect and createFluffyBrush below. The meaning of this mask is simple: the darker a pixel is, the more transparent shall the real image become in that particular place. And vice versa: the lighter a pixel is, the more opaque shall the image become in that part. It is no coincidence that this is just the way the alpha channel works. 🙂
  2. Then I create a new bitmap as an exact copy of the original image (if I have more use of the original image that is, otherwise this can be skipped).
  3. Finally, I take one of the red, blue or green channels (irrelevant which one) from the mask and copy it into the new bitmap’s alpha channel! This is done by transferOneARGBChannelFromOneBitmapToAnother (surprise!).
  4. Done

So, here are the three helper methods:


        static public GraphicsPath createRoundRect( int x, int y, int width, int height, int radius )
        {
            GraphicsPath gp = new GraphicsPath();

            if (radius == 0)
                gp.AddRectangle( new Rectangle( x, y, width, height ) );
            else
            {
                gp.AddLine( x + radius, y, x + width - radius, y );
                gp.AddArc( x + width - radius, y, radius, radius, 270, 90 );
                gp.AddLine( x + width, y + radius, x + width, y + height - radius );
                gp.AddArc( x + width - radius, y + height - radius, radius, radius, 0, 90 );
                gp.AddLine( x + width - radius, y + height, x + radius, y + height );
                gp.AddArc( x, y + height - radius, radius, radius, 90, 90 );
                gp.AddLine( x, y + height - radius, x, y + radius );
                gp.AddArc( x, y, radius, radius, 180, 90 );
                gp.CloseFigure();
            }
            return gp;
        }


		public static Brush createFluffyBrush(
			GraphicsPath gp,
			float[] blendPositions,
			float[] blendFactors )
		{
			PathGradientBrush pgb = new PathGradientBrush( gp );
			Blend blend = new Blend();
			blend.Positions = blendPositions;
			blend.Factors = blendFactors;
			pgb.Blend = blend;
			pgb.CenterColor = Color.White;
			pgb.SurroundColors = new Color[] { Color.Black };
			return pgb;
		}

		public enum ChannelARGB
		{
			Blue = 0,
			Green = 1,
			Red = 2,
			Alpha = 3
		}

		public static void transferOneARGBChannelFromOneBitmapToAnother(
			Bitmap source,
			Bitmap dest,
			ChannelARGB sourceChannel,
			ChannelARGB destChannel )
		{
			if ( source.Size != dest.Size )
				throw new ArgumentException();
			Rectangle r = new Rectangle( Point.Empty, source.Size );
			BitmapData bdSrc = source.LockBits( r, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb );
			BitmapData bdDst = dest.LockBits( r, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb );
			unsafe
			{
				byte* bpSrc = (byte*)bdSrc.Scan0.ToPointer();
				byte* bpDst = (byte*)bdDst.Scan0.ToPointer();
				bpSrc += (int)sourceChannel;
				bpDst += (int)destChannel;
				for ( int i = r.Height * r.Width; i > 0; i-- )
				{
					*bpDst = *bpSrc;
					bpSrc += 4;
					bpDst += 4;
				}
			}
			source.UnlockBits( bdSrc );
			dest.UnlockBits( bdDst );
		}

I’m not about to explain how any of the above methods work. If you don’t understand… just google for any of the keywords and you’ll find tons of tutorials. If something is still unclear, drop a comment here and I’ll see what I can do. The purpose of this article is the simple idea that you can inject a mask created with normal GDI+ operations into an image’s alpha channel, making that image transparent, semi-transparent or opaque exactly where you want it to. So now we just put it all together:

			Bitmap bmpFluffy = new Bitmap( bmpOriginal );
			Rectangle r = new Rectangle( Point.Empty, bmpFluffy.Size );

			using ( Bitmap bmpMask = new Bitmap( r.Width, r.Height ) )
			using ( Graphics g = Graphics.FromImage( bmpMask ) )
			using ( GraphicsPath path = createRoundRect(
				r.X, r.Y,
				r.Width, r.Height,
				Math.Min( r.Width, r.Height ) / 5 ) )
			using ( Brush brush = createFluffyBrush(
				path,
				new float[] { 0.0f, 0.1f, 1.0f },
				new float[] { 0.0f, 0.95f, 1.0f } ) )
			{
				g.FillRectangle( Brushes.Black, r );
				g.SmoothingMode = SmoothingMode.HighQuality;
				g.FillPath( brush, path );
				transferOneARGBChannelFromOneBitmapToAnother(
					bmpMask,
					bmpFluffy,
					ChannelARGB.Blue,
					ChannelARGB.Alpha );
			}
			// bmpFluffy is now powered up and ready to be used

The code above is sprinkled with magic numbers, so you can tell that it’s not production code! 🙂

  • Math.Min( r.Width, r.Height ) / 5. This is just my way of saying that I want the size of the rounded corners to be 20% of the shortest bitmap side.
  • new float[] { 0.0f, 0.1f, 1.0f } and new float[] { 0.0f, 0.95f, 1.0f }: these control how much “fluff” I want at the edges. For example, you may find that new float[] { 0.0f, 0.1f, 0.2f, 1.0f } and new float[] { 0.0f, 0.9f, 1.0f, 1.0f } suit you better!
  • The arguments to transferOneARGBChannelFromOneBitmapToAnother. Why do I copy the blue channel??? Well, red or green will do just fine too! Since we have painted only gray scale values to bmpMask, the red, green and blue channels will be identical!

And of course, the mask doesn’t have to be created this way. Among other things, you could save a predefined Edge from Paint Shop Pro’s Picture Frames and load it into your app.

Happy coding!

Posted in .NET, GDI+, Programming | 9 Comments »

Real Programmers don’t use Pascal – 25 year anniversary

Posted by Dan Byström on July 31, 2008

I just happened to notice that the legendary text “Real Programmers Don’t Use Pascal” was published exactly 25 years ago this month (and I just have a couple of hours left before this month ends, so I thought I had to write something up quickly). You can often find it on the Net filed under “humor”. This can, of course, only be done by Quiche Eaters! Sometimes someone has replaced Pascal with Visual Basic. I have no problem with that. 🙂

Among all the words of wisdom, I found these:

  • Real Programmers aren’t afraid to use GOTOs.
  • Real Programmers can write five page long DO loops without getting confused.
  • Real Programmers like Arithmetic IF statements– they make the code more interesting.
  • Real Programmers write self-modifying code, especially if they can save 20 nanoseconds in the middle of a tight loop.
  • Real Programmers don’t need comments– the code is obvious.
  • Since Fortran doesn’t have a structured IF, REPEAT … UNTIL, or CASE statement, Real Programmers don’t have to worry about not using them. Besides, they can be simulated when necessary using assigned GOTOs.

I guess that the number of programmers out there who have written (or even know what it means to write) arithmetic IF statements or self-modifying code is decreasing rapidly by the day. And writing a five-page-long DO loop without getting confused may be a cool party trick, but nothing to applaud in real work. But the remaining two points are something I’d like to comment on!

Real Programmers aren’t afraid to use GOTOs

Well, why on earth should they be? Really? This “truth” seems to be something that all students know by heart. Thou shalt not use GOTO. Thou shalt not use GOTO. Without even understanding why GOTO was such a big problem in the first place. In a structured language it is not. How can it be that so many students know about this silly “non-problem” while at the same time failing to understand the most fundamental thing about programming: Thou shalt never duplicate code. Thou shalt not use copy-and-paste. I just wonder…

Real Programmers don’t need comments– the code is obvious

I think this is almost true! It only needs slight modification:

Real Programmers write code in such a way that the code becomes obvious

I have come to realise that when I download a piece of code from the Net, the first thing I usually have to do is delete all the comments so that I can read the code!!! Most of the time, comments seem to have been put there just “because you must write comments”. Then after a time, changes are made to the code but the comments are not updated, leaving any reader totally confused! Skip the comments and write clean code with long, descriptive method and variable names, I say!

(This goes for much of the documentation too, I think. The code is the best documentation because it is “the truth”! This is one of the reasons tests (Test-Driven Development) are such a great thing: you get “true documentation” for free.)

Posted in Nostalgia, Programming | Leave a Comment »

Quote of the day

Posted by Dan Byström on July 18, 2008

“He doesn’t seem to have all his methods compiled”.
— Douglas Nilsson Ekeforshus

Posted in Programming | Leave a Comment »