This is a mirror of official site: http://jasper-net.blogspot.com/

GZip your downloads

| Thursday, August 5, 2010
Gonzalo yesterday pointed me to a feature in the HTTP client stack for .NET that I did not know about.

If you want the server to gzip the response before sending it to you, set the AutomaticDecompression flag in your HttpWebRequest:

var request = (HttpWebRequest) WebRequest.Create (uri);
request.AutomaticDecompression = DecompressionMethods.GZip;

This will set the Accept-Encoding HTTP header to gzip when you make your connection and automatically decompress this for you when you get the response stream.

Update: in the comments there is a suggestion that Deflate is another option you can use, and that you can combine both GZip and Deflate in the flags above.
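
A minimal sketch of that combination (DecompressionMethods is a flags enum, so the values can be combined with a bitwise OR):

request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;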

Read more: Miguel de Icaza's web log

Posted via email from .NET Info

Far Cry 2 Editor Internals

|
This project gives an idea of how the Dunia engine works and how the Far Cry 2 editor was written; the source code is given in C#.

Read more: Codeplex

Posted via email from .NET Info

Performance tuning tricks for ASP.NET and IIS 7

|
In this first installment of performance tuning tricks for ASP.NET and IIS 7 we will look at some of the easy, yet powerful possibilities in the web.config file. By taking advantage of these few tricks we can increase the performance of any new or existing website without changing anything but the web.config file.

The following XML snippets must be placed in the <system.webServer> section of the web.config.

HTTP compression
You’ve always been able to perform HTTP compression in ASP.NET by using third-party libraries or your own custom-built ones. With IIS 7 you can now throw that away and utilize the built-in compression available from the web.config. Add the following line to enable HTTP compression:

<urlCompression doDynamicCompression="true" doStaticCompression="true" dynamicCompressionBeforeCache="true"/>

By default, only text-based content types are compressed.

doDynamicCompression
Setting this attribute to true enables compression of dynamically generated content such as pages, views, and handlers. There really isn’t any reason not to enable this.

doStaticCompression
This attribute allows you to decide whether or not you want static files such as stylesheets and script files to be compressed. Images and other non-text content types will not be compressed by default. This is also something you want to enable.

dynamicCompressionBeforeCache
If you do output caching from within your ASP.NET website, you can tell IIS 7 to compress the output before putting it into the cache. You might only run into issues with setting this to true if you do some custom output caching. Try it and test it. If your website works with this enabled, then you definitely want to keep it enabled.

Tip
By default, only text-based content types are compressed. That means if you send application/x-javascript as the content type, you should change it to text/javascript. If you use some custom modules in your website, then you might experience conflicts with the IIS 7 compression feature.
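
As a sketch of what the mime-type list looks like (this collection is typically edited in applicationHost.config rather than web.config, and application/json is just an illustrative type):

<httpCompression>
  <dynamicTypes>
    <add mimeType="application/json" enabled="true" />
  </dynamicTypes>
</httpCompression>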

Resources
Add new mime-types for compression
Configure HTTP compression in IIS 7 (Technet)
Cache static files
To speed up the load time for visitors, it is crucial that everything that can be cached by the browser IS cached by the browser. That includes static files such as images, stylesheets and script files. Letting the browser cache all these files means it doesn’t need to request them again for the duration of the cache period. That saves you and your visitors a lot of bandwidth and makes the page load faster. A well-primed browser cache also triggers the load and DOMContentLoaded events sooner.
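
IIS 7 can also set the client cache headers for static content from the same <system.webServer> section; a minimal sketch (the one-year max-age is an arbitrary choice; pick a period that matches how often your static files change):

<staticContent>
  <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
</staticContent>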

Read more: .NET slave Part 1

Posted via email from .NET Info

Windows 7 Heap Performance Analysis

|
Introduction

Heap performance impacts application performance, so it is always worth considering whether an application should have its own memory-management framework for better performance. This article provides information to help developers make the right decisions.

HeapPerf    

HeapPerf is a tool written with STL to measure heap performance. It reads a text file and counts each word that appears in it. It overrides the new and delete operators to get the total amount of time spent in new and delete. The code that gathers the performance data is shown below:

class CPerfCounter
{
public:
   CPerfCounter(long &sum) : _sum(sum)
   {
       QueryPerformanceCounter(&_startCounter);
   }
   
   ~CPerfCounter()
   {
       LARGE_INTEGER endCounter;
       QueryPerformanceCounter(&endCounter);
       endCounter.QuadPart -= _startCounter.QuadPart;
       if (endCounter.HighPart)
       {
           DebugBreak();
       }

       InterlockedExchangeAdd(&_sum, endCounter.LowPart);
   }
   
   LARGE_INTEGER _startCounter;

   long &_sum;
};

static int s_heapType = HEAPTYPE_OS;  
static int s_lazyHeapId = -1;
static long s_allocSum = 0;
static long s_freeSum = 0;

// STL uses the global new/delete operators to allocate/free memory; STL needs
// to be used in a static library for these overridden operators to take effect
void* __cdecl operator new(size_t size)
{
   CPerfCounter pc (s_allocSum);

   void *pv;

Read more: Codeproject

Posted via email from .NET Info

IPv4 vs IPv6

|
  IP, the Internet Protocol, is one of the pillars which support the Internet. Almost 20 years old, first specified in a remarkably concise 45 pages in RFC 791, IP is the network-layer protocol for the Internet.
  In 1991, the IETF decided that the current version of IP, called IPv4, had outlived its design. The new version of IP, called either IPng (Next Generation) or IPv6 (version 6), was the result of a long and tumultuous process which came to a head in 1994, when the IETF gave a clear direction for IPv6. IPv6 is designed to solve the problems of IPv4. It does so by creating a new version of the protocol which serves the function of IPv4, but without the same limitations of IPv4. IPv6 is not totally different from IPv4: what you have learned in IPv4 will be valuable when you deploy IPv6. The differences between IPv6 and IPv4 are in five major areas: addressing and routing, security, network address translation, administrative workload, and support for mobile devices. IPv6 also includes an important feature: a set of possible migration and transition plans from IPv4.
Since 1994, over 30 IPv6 RFCs have been published. Changing IP means changing dozens of Internet protocols and conventions, ranging from how IP addresses are stored in DNS (domain name system) and applications, to how datagrams are sent and routed over Ethernet, PPP, Token Ring, FDDI, and every other medium, to how programmers call network functions. The IETF, though, is not so insane as to assume that everyone is going to change everything overnight. So there are also standards and protocols and procedures for the coexistence of IPv4 and IPv6: tunneling IPv6 in IPv4, tunneling IPv4 in IPv6, running IPv4 and IPv6 on the same system (dual stack) for an extended period of time, and mixing and matching the two protocols in a variety of environments.

Internet Protocol Version 4 (IPv4)

  Internet Protocol version 4 (IPv4) is the fourth version of the Internet Protocol (IP) and the first version of the protocol to be widely deployed. Together with IPv6, it is at the core of standards-based internetworking methods of the Internet. IPv4 is still by far the most widely deployed Internet Layer protocol. IPv4 is described in IETF publication RFC 791, replacing an earlier definition, RFC 760. IPv4 is a connectionless protocol for use on packet-switched Link Layer networks, e.g., Ethernet. It operates on a best-effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoid duplicate delivery. IPv4 does not contain error control or flow control mechanisms; however, it discards data found corrupted by the checksum carried in the header of each datagram. These aspects, including data integrity, are addressed by an upper-layer transport protocol, e.g., the Transmission Control Protocol. IPv4 uses 32-bit addressing and allows for 4,294,967,296 unique addresses. IPv4 has four different class types; the class types are A, B, C, and D.

Read more: Web Development Tools and Articles

Posted via email from .NET Info

Tips and Tricks for Error Handling in ASP.NET Web Applications

|
Error handling is very important for any serious application. It is crucial that the application is capable of detecting errors and taking corrective measures to the maximum possible extent. If the error situation is beyond the control of the application, it should report the situation to the user/administrator so that an external action can be taken.

The following are the most common ways of handling exceptions in an ASP.NET web application.

Structured Exception Handling
Error Events
Custom Error Pages
Structured Exception Handling

The most popular form of error handling is Structured Exception Handling (SEH). Most of us are familiar with it in the form of try..catch blocks, and all of us use SEH a lot in whatever application we are working on. The primary focus of SEH is to make sure that a block of code executes correctly, and that if an exception takes place, we have another piece of code which can take care of the exception and take some corrective measures if possible.

SEH is used to protect the application from an exceptional situation where something unexpected happens; for example, the application tries to connect to a database server and the server is not available. That is an exceptional situation, or an exception :-). If such a case happens, the developer can handle the situation in the catch block and take the necessary action. I have seen smart developers making use of SEH for the application logic too. For example, assume that the developer needs to open a file and display the content. Ideally he needs to check if the file exists first and then, if it exists, open it as follows. (This kind of programming is called DEFENSIVE programming.)

If file exists    
   work with the file
else    
  take action
end if

Using SEH instead of the explicit check reduces the complexity of testing for the existence of the file and so on. However, it adds some overhead to the system: generating and handling an exception takes some system resources. I have also seen people using SEH to branch the code execution to a specific upper function block, from inside a series of complex nested functions.
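
As a minimal sketch of the two styles discussed above (the path and the corrective action are placeholders):

using System;
using System.IO;

static class FileDisplay
{
    // Defensive style: test for the exceptional condition before acting.
    public static void ShowDefensive(string path)
    {
        if (File.Exists(path))
            Console.WriteLine(File.ReadAllText(path));
        else
            Console.WriteLine("File not found; taking corrective action.");
    }

    // Exception-based style: act, and let the catch block take the corrective action.
    public static void ShowWithSeh(string path)
    {
        try
        {
            Console.WriteLine(File.ReadAllText(path));
        }
        catch (FileNotFoundException)
        {
            Console.WriteLine("File not found; taking corrective action.");
        }
    }
}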

For example, function a calls b, b calls c, c calls d, and d calls e; if the developer wants to transfer control to a specific location in any of the parent methods, he or she can make use of SEH. SEH is mostly used for procedure/function-level error handling. Any code which accesses external resources like database connections, files/folders, or URLs should always be inside try..catch blocks, because such code is the most vulnerable to exceptions.

I have experienced a lot of trouble with combo boxes too, and hence would recommend that any action you perform on a combo box be inside try/catch blocks. Another area you might consider is working with a DataSet, where a null value can cause an exception.

Error Events

Most of the time we will be able to put the code into try..catch blocks. But exceptions occur in exceptional situations, and there might be several cases where we would not be able to foresee that a given block of code is vulnerable to a specific exception.

Read more: Beyond Relational

Posted via email from .NET Info

Using RichTextBox in Silverlight 4

|
Introduction and Background

This is my first article on CodeProject.

Although Silverlight 4.0 was released in April 2010, numerous examples already abound for its new features. These include examples demonstrating the RichTextBox control as well. However, what I found was that most of these examples catered to runtime aspects, such as selecting user-typed text at runtime and formatting it. The ubiquitous “text editor” and “Notepad” examples using the Silverlight RichTextBox are what you'll mostly find if you do a Google search for the control. So what does one do if one wants to learn how to format a Silverlight RichTextBox at design time through XAML? To answer this question, I demonstrate a simple example. Again, this example might seem rudimentary, but it meets our objective – pure XAML code demonstrating how to format the RichTextBox. I have also thrown in a couple of other elements, such as an Image and a Hyperlink, for good measure. Following this example, I also take up a few examples to showcase other features that the RichTextBox offers.

What You Will Need

Visual Studio 2010 (any version will do)
RichTextBox Control

The RichTextBox control in Silverlight 4.0 is a control that enables you to display or edit rich content. This content may include formatted paragraphs, hyperlinks, and inline images.
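
As a small taste of the design-time approach, a hand-written XAML sketch along these lines (the formatting values here are arbitrary):

<RichTextBox IsReadOnly="True">
    <Paragraph FontFamily="Georgia" FontSize="20">
        <Run Text="A " />
        <Bold><Run Text="formatted" /></Bold>
        <Run Text=" paragraph with a " />
        <Hyperlink NavigateUri="http://www.codeproject.com"><Run Text="Hyperlink" /></Hyperlink>
        <Run Text="." />
    </Paragraph>
</RichTextBox>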

Creating the Example

Create a Silverlight 4.0 application using Visual Studio 2010. Name it RichTextDemo. Drag and drop a RichTextBox from the Toolbox onto the MainPage.

Read more: Codeproject

Posted via email from .NET Info

Hosting WPF controls in a WinForms application

|
Although I am not really using WinForms a lot, I know that there are situations when a WPF control needs to be somehow plugged into a WinForms project, especially when you want to bring a custom UI design to an existing app. And although it seems to be quite an interesting task to do, in fact it is quite simple.

To get started, I created a WPF User Control Library project in Visual Studio that will be used as the control sample.
I am going to use it to create a simple reusable control that can be plugged into both WPF and WinForms projects. Once the solution is created, you will see the familiar WPF designer:

NOTE: You can add additional properties and events inside the control. If those have the proper access modifiers set, you will later be able to access them from your WinForms app.

Now you can build the project and get the output DLL. Once you have the library compiled, switch to your WinForms application.

In a WinForms project, there is the ElementHost control available in the Toolbox.
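
If you prefer wiring the host up in code rather than via the designer, a minimal sketch looks like this (MyWpfControl stands in for the control built in the WPF User Control Library project):

using System.Windows.Forms;
using System.Windows.Forms.Integration; // ElementHost lives in WindowsFormsIntegration.dll

public class HostForm : Form
{
    public HostForm()
    {
        // ElementHost bridges a WPF element into the WinForms control tree.
        var host = new ElementHost { Dock = DockStyle.Fill };
        host.Child = new MyWpfControl(); // the hypothetical WPF user control
        Controls.Add(host);
    }
}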

Read more: DZone

Posted via email from .NET Info

How to Migrate Your Entire Google Account to a New One

|
Whether you finally decided to shed sassyhacker957@gmail.com for a more professional handle or you want to swap Google accounts for less embarrassing reasons, Google doesn't have a built-in system for migrating your data to a new account. So we figured it out.

A lot of us have a ton of data stored in Google's services, but if you want to migrate to a new Google account, you'll need to do some digging. Here's how to migrate your data from Google's most popular services (Gmail, Google Calendar, Docs, Reader, Voice, Blogger, and YouTube) from your current account (hereafter referred to as "Account 1") to your new account (hereafter, "Account 2") while incurring the least amount of data loss.

A few of the services (such as Google Reader) adhere to some fairly universal import/export standards that make it easy, whereas other services (such as newer YouTube accounts) may require you to start from scratch to keep full functionality. In these few cases, we'll note what you can do and what you'll lose by using that method instead of starting over.

Note: Unfortunately, Google Apps has still not caught up to regular Google Accounts in terms of available services. While some of these (such as Calendar and Documents) will work for migrating to a Google Apps account, other services (such as Reader or Voice) are still not available to Google Apps at this time. I'll note where the service is not available to Apps users, as well as when they need to go through a different process of migrating that particular service.

Read more: Lifehacker

Posted via email from .NET Info

Hardware Hackers Reveal Apple's Charger Secrets

|
In this 7-minute video we explore the mysteries of Apple device charging. Usually, device makers need to sign a confidentiality agreement with Apple if they want to say their charger 'works with iPhone / iPod,' and they're not allowed to talk about how the insides work. If you don't put these secret resistors on the data lines too, you get the dreaded "Charging is not supported with this accessory" message. We demonstrate how anyone can make their own chargers that work with iPhone 4, 3Gs, etc.

Read more: Slashdot

Posted via email from .NET Info

Implementing a multithreaded http/https debugging proxy server in C#

|
A complete proxy server that, instead of SSL tunneling, performs a "man-in-the-middle" decryption of SSL traffic, allowing you to inspect the encrypted traffic.
Download HTTPProxy-src - 10.14 KB
Download HTTPProxy-bin - 10.4 KB
Introduction

This article will show you how to implement a multithreaded http proxy server in C# with a non-standard proxy server feature of terminating and then proxying https traffic. I've added a simple caching mechanism, and have simplified the code by ignoring http/1.1 requests for keeping connections alive, etc.

Disclaimer: Understand that this code is for debugging and testing purposes only. The author does not intend for this code or the executable to be used in any way that may compromise someone's sensitive information. Do not use this server in any environment which has users that are unaware of its use. By using this code or the executable found in this article, you are taking responsibility for the data which may be collected through its use.

Background

If you are familiar with Fiddler, then you already know how this proxy server works. It essentially performs a "man-in-the-middle" on the http client to dump and debug http traffic. The System.Net.Security.SslStream class is utilized to handle all the heavy lifting.

Using the code

The most important part of this code is that when the client asks for a CONNECT, instead of just passing TCP traffic, we're going to handle an SSL handshake, establish an SSL session, and receive a request from the client. In the meantime we'll send the same request to the destination https server.

First, let's look at creating a server that can handle multiple concurrent tcp connections. We'll use the System.Threading.Thread object to start listening for connections in a separate thread. This thread's job will be to listen for incoming connections, and then spawn a new thread to handle processing, thus allowing the listening thread to continue listening for new connections without blocking while one client is processed.

public sealed class ProxyServer
{
  private TcpListener _listener;
  private Thread _listenerThread;

  public void Start()
  {
     _listener = new TcpListener(IPAddress.Loopback, 8888);
     _listenerThread = new Thread(new ParameterizedThreadStart(Listen));
     _listenerThread.Start(_listener);
  }
       
  public void Stop()
  {
     //stop listening for incoming connections
     _listener.Stop();
     //wait for server to finish processing current connections...
     _listenerThread.Abort();
     _listenerThread.Join();
  }

  private static void Listen(Object obj)
  {
     TcpListener listener = (TcpListener)obj;
     try
     {
        while (true)
        {
           TcpClient client = listener.AcceptTcpClient();
            while (!ThreadPool.QueueUserWorkItem(new WaitCallback(ProxyServer.ProcessClient), client)) ; // retry until the work item is queued
        }
     }
     catch (ThreadAbortException) { }
     catch (SocketException) { }
  }

  private static void ProcessClient(Object obj)
  {
     TcpClient client = (TcpClient)obj;
     try
     {

Read more: Codeproject

Posted via email from .NET Info

How to Install Ubuntu on Your Nexus One/Android!

|
This guide is for those of you who want to install Ubuntu as a sub-system under your Nexus One or any other rooted Android smartphone; I’ve tried to make it as easy as possible for everyone.

UPDATE: Now Ubuntu is also available on HTC Evo 4G!

As you can see, I’ve used a rooted Nexus One here. You could have trouble with other Android phones, as not all Android phones are built exactly alike, but it might work well anyway; you never know until you try it.

I am also working on Ubuntu on my rooted HTC Evo 4G, that should be available later this month over at HTCEvoHacks.com.

You could probably also run Ubuntu directly off your Nexus One/Android phone, but that means not being able to use it as a phone, and you would probably lose your camera.

To stay on the practical side, I think it’s ideal to run Ubuntu alongside your Nexus One/Android phone’s existing system, as I will be showing you here.

This Ubuntu install will not affect your existing Android system; the Ubuntu terminal will run in the background, while the Ubuntu X11 graphical user interface will run as an app under the Android VNC app.

Why are you installing Ubuntu on your Nexus One/Android phone?
Being able to have Ubuntu on your Nexus One/Android phone means that you can run native Ubuntu/linux applications off your phone!

I also see many uses in college engineering classes when they are studying Ubuntu/linux. Instead of heading to the lab or dual-booting their computers, students will be able to use their Nexus One/Android phones as test devices.

Even for web designers, their Android phone can become a portable test web server to test out their new designs.

The list will go on and there’s absolutely no reason why we shouldn’t run Ubuntu or other linux systems on Android phones.

Lastly, for open-source people like me, free code is priceless, it’s going to be what’s driving the world in the next 10-20 years, if it ain’t already.

I don’t want to bore you with my philosophy so let me show you step-by-step how to install Ubuntu on your Nexus One.

How to Install Ubuntu on your Nexus One/Android Phone!
Before anything, download ubuntu.zip and unzip it:

ubuntu.zip on Megaupload
or ubuntu.zip on FileFactory

(Please feel free to mirror other places if you’d like!)

1. First, you will need a rooted Nexus One/Android phone.  If you have a Nexus One, go follow these directions and root your phone first!

Read more: nexus one hack

Posted via email from .NET Info

Debug Your .NET Web Project With IIS Express [Tips & Tricks]

|
For those of us too impatient to wait for a hotfix for Visual Studio to natively support IIS Express, I've done some digging and found a way to [fairly] easily set up a debugging environment for IIS Express and VS 2010 (it should work for VS 2008 also, though!). This assumes you are at least an intermediate user of .NET/IIS and Visual Studio.

Prerequisites

Download and install WebMatrix beta. This download includes IIS Express (as of this posting, I did not find a standalone download of IIS Express).
Be using Visual Studio 2010 or 2008 and a web project to debug.
Steps to Set Up IIS Express

It's actually quite simple to set up IIS Express. Once WebMatrix is done installing, go to "My Documents\IISExpress8\config".
Right-click "applicationhost.config" and open in your favorite text editor.

Go to line 145. Notice the <sites> element. This is where we can configure our website for IIS Express. Copy the site that is already there and add another entry below it.

Remove the "autoStart" attribute for the first site.
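
For reference, a site entry in applicationhost.config is shaped roughly like this (the name, id, physical path and port are placeholders for your own project's values):

<site name="MyWebProject" id="2">
  <application path="/">
    <virtualDirectory path="/" physicalPath="C:\Projects\MyWebProject" />
  </application>
  <bindings>
    <binding protocol="http" bindingInformation=":8080:localhost" />
  </bindings>
</site>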

Read more: Intrepid Studio

Posted via email from .NET Info

How to start or stop a Windows Service using C#

|
We can start, stop, pause, continue and refresh a service using the Start, Stop, Pause, Continue, and Refresh methods. The Close method disconnects the ServiceController instance from the service and frees all the resources that the instance allocated.

The following code snippet checks whether a service is stopped; if it is, it starts it; otherwise it stops it.

Let's say you have a service named "MyServiceName". First you create a ServiceController object, and then call its Start or Stop method to start or stop the windows service.


using System.ServiceProcess; // ServiceController lives in this namespace

ServiceController service = new ServiceController("MyServiceName");
if (service.Status.Equals(ServiceControllerStatus.Stopped) ||
    service.Status.Equals(ServiceControllerStatus.StopPending))
    service.Start();
else
    service.Stop();


The ServiceController class also has Pause, Continue, and Refresh methods to pause, continue, and refresh a windows service.
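
Note that Start and Stop only issue the request; if the next step depends on the service actually reaching the new state, you can block until it does. A small sketch (the 30-second timeout is an arbitrary choice):

service.Stop();
service.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));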

Read more: C# Corner

Posted via email from .NET Info

Write an Online Basketball Shooting Game Using Silverlight 4 and Farseer Engine

|
Part 1 In this series of articles we will develop an online basketball game using the famous two-dimensional physics engine - Farseer (mainly PhysicsHelper 3.0) under Visual Studio 2010 and Silverlight 4.0 environments.
Part 2 In the first article of this series, you’ve learned the fundamentals of using the Farseer Engine and its enhanced buddy, PhysicsHelper. Starting from this installment, we are going to develop the online basketball shooting game itself.
Part 3 In the previous article, you've learned all the elementary work in writing the ball shooting game, and we've also made clear the loop principle under the PhysicsHelper environment. In this article we will study the remaining, more interesting parts.
Introduction
In particular, we will use the PhysicsHelper kit to achieve basketball movement and collision detection, as well as other functions. In this installment, we are mainly going to do the preparation work for using Physics Helper. Please note that because there are numerous new characteristics, and some traps, in using the PhysicsHelper API compared with (and different from) the initial Farseer Engine API, we plan to use more space to discuss these questions.

In addition, due to the novelty and difficulty of PhysicsHelper, I suggest that entry-level readers read the first two sections, skip the rest, and continue from the second article. When you study the source code later, you can come back and read the skipped sections of this article to gain a better understanding. In any case, all the notes put forward in this article deserve your close attention when you use PhysicsHelper.

NOTE
The development environments we'll use in the ball shooting application are:

Windows XP Professional (SP3)
.NET 4.0
Visual Studio 2010
Silverlight 4
Microsoft Silverlight 4 Tools for Visual Studio 2010
PhysicsHelper 3.0
Microsoft Expression Blend 3

Introduction to PhysicsHelper 3.0
The Farseer physics engine, as a two-dimensional game engine, plays a decisive role in Silverlight-based game development. First, two-dimensional physics engines that target the currently popular Silverlight game development are extremely rare. Second, the author of the Farseer physics engine has devoted many years of hard work to maintaining the engine, so that it provides the typical functions of a two-dimensional physics engine. Third, Farseer is an open-sourced two-dimensional physics engine (the support site is http://farseerphysics.codeplex.com/), which is extremely beneficial for developers who want to study the API deeply, even for secondary development. Although the latest version of the Farseer physics engine is 3.0, it is not an official release.

So, in this series we will choose to base our game on the mature Physics Helper for Blend, Silverlight and WPF (http://physicshelper.codeplex.com/). Moreover, it is worth noting that PhysicsHelper depends on FarseerPhysics 2.1.1 (which is also a very popular and stable version); it provides a set of assistant classes and behavior components, greatly simplifying the use of the Farseer API. However, to fully grasp the depths of the PhysicsHelper API still requires developers to understand the underlying Farseer API. So, to gain a full understanding of the game development principles, you are highly encouraged to pick up some skills in using the Farseer API, and had better first read related articles, such as Get started with Farseer Physics 2.1.3 in Silverlight 3 and Adding Behaviors Programmatically.

Components in PhysicsHelper
Specifically, the PhysicsHelper kit packages the Farseer engine classes in two forms:

1. A group of helper classes, as follows:

CameraController
CameraLayer
PhysicsController
PhysicsJoint
PhysicsSprite
PhysicsStaticHolder

Read more: dotNetSlackers Part 1, Part 2

Posted via email from .NET Info

How to use the ArcGIS iPhone SDK with MonoTouch

|
The iPhone is an amazing platform, yet coming from the .NET world, Objective-C really makes me feel like I'm going back to the old days of C. So when Novell came out with MonoTouch, I thought that I could do something with that.

I spent many years in Visual Studio and working in .NET, so going to a Mac with Xcode makes things a little hard. Many years ago Novell started the Mono project, an open source, cross-platform implementation of C# and the CLR that is binary compatible with Microsoft .NET. Mono also provides MonoDevelop, a Visual Studio-like IDE for the Mac, that will make the .NET developer feel better using a Mac or Linux. Both are open source and you can download them from their main page.

MonoTouch is the implementation of that library in the iPhone, iPod Touch and iPad world, allowing developers to create C#-based applications that run on Apple’s iOS by binding to the native libraries. The best part is that there is no JIT or interpreter shipped with your application, only native code, when you deploy to the Apple AppStore. Also, MonoTouch does not support VB.NET ;-)

Let’s get started.

So I thought I'd give MonoTouch a chance. My test is going to be to use the native library created by ESRI to consume maps on the iPhone. So I downloaded the beta of the ArcGIS library for the iPhone 4.

Add the native library (.a) to the project just by right-clicking –> Add –> Add Files; select the library file and add it into the main directory of your project.

Read more: Al Pascual ASP.NET Blog

Posted via email from .NET Info

MVP In Silverlight/WPF: The Sample

|
Coming up with a good sample project for this series wasn’t easy. It had to be small enough to be easy to comprehend and look into, yet it also had to make it clear why I prefer the MVP approach over the MVVM approach, which isn’t easy to do when you have a very simple sample. There has to be business logic, and it has to be encapsulated by a Service Layer. But I obviously wanted to avoid having to use a database and go through everything to get all of that working, while still keeping the sample easy to download and play around with. The Service Layer has been implemented very quickly and is not representative of a real Service Layer. It doesn’t use a database, it holds its data statically in memory (and doesn’t even care about thread-safety of this data either), and I didn’t even write tests for any of it. It’s just a simple Service Layer, implemented in a Request/Response style. It accepts Requests and returns Responses with DTO’s (not entities, obviously) to the client. That’s it.

The client code has been written entirely using Test Driven Development. Apart from the Views, everything is tested, and the tests are obviously also included in the downloadable Visual Studio solution. Some tests were written after a piece of code was written, but most of the tests were written before the actual code. I hope you go through the tests to see just how much UI logic you can actually cover quite easily. I also hope you’ll notice that the large majority of the tests are very short and focused, which would be harder to achieve when using MVVM. If you have questions regarding the implementation of the User Controls or their tests, it might be better to hold off on asking them until I’ve published the posts that cover writing the implementations and the actual tests. You can always ask questions if you want of course, but odds are high that I’m going to cover the answer to your question in one of the future posts anyway.

One more important thing: the client in this sample is Silverlight, not WPF. You can obviously apply all of these ideas to WPF programming as well.

Read more: The Inquisitive Coder – Davy Brion's Blog

Posted via email from .NET Info

ASP.NET Membership Training – 3 new Videos

|
Hi folks. Here are three more videos in my collection on security concepts and working with ASP.NET Membership.

Posted via email from .NET Info

Windows Internet Explorer Platform Preview

|
The Internet Explorer Platform Preview has been updated. We encourage you to try out the newly added platform capabilities, and report any issues that you find in the Internet Explorer 9 web platform. Note: some features are incomplete or might change later. For more information, visit the IE9 Test Drive site and read the Platform Preview User Guide.

Read more: MS Download

Posted via email from .NET Info

10 must-have Windows server tools

|
Over the years, Microsoft has given us a staggering number of tools to help with server administration. Since there are so many tools available, I decided to talk about some of my favorites.

Note: This article is also available as PDF download and as a photo gallery.

1: System Center Capacity Planner

It might seem strange to start out by talking about a tool that Microsoft has discontinued. But I’ve found System Center Capacity Planner (Figure A) to be so helpful, I wanted to mention it anyway. In case you are not familiar with this tool, it’s designed to help make sure your proposed server deployment will be able to handle the anticipated workload.

According to Microsoft, the System Center Capacity Planner is being replaced by the System Center Configuration Manager Designer (which I have not yet had a chance to use). The end of life announcement for System Center Capacity Planner indicates that it is no longer available, but at the time of this writing, you can still download it from TechNet, as well as from other third-party sites.

2: PowerShell
3: Best Practices Analyzer
4: Security Configuration Wizard
5: ADSI Edit
6: DCDIAG
7: Microsoft File Server Migration Wizard
8: LDIF Directory Exchange

Read more: TechRepublic

Posted via email from .NET Info

XUIFramework: A GUI Framework based on XML and MFC

|
Introduction

It is always difficult to start a new article, and the first question that arises is what to title it. I must admit that this time I have been a bit pretentious, but my first goal was to provide a new framework based on an XML description (like XUL or XAML). Actually, I started to develop with MFC two years ago, and like every newbie I had to fight even to do simple operations (display different kinds of pictures: BMP, JPEG, animated GIF; change fonts...), and I still don't understand why it is always so difficult to do simple things. Besides, recently a friend of mine showed me another IDE (Borland C++) and I was very impressed by the number of properties available by default. These reasons convinced me to start this project.

Before starting, I studied all the "dynamic screen" projects and found very interesting material in the following articles; my project is more or less based on them.

DynScreen
GUI Editor
Diagram Editor
Note: In this article I am using XUIxxx, but I have still not renamed all my classes, so you actually have to translate it to GUI or Dyn. In any case, if you are not interested in the whole project, you may still find some interesting parts, like the use of CxImage to display pictures in the widgets folder.

Architecture

This project tries to use an OO approach and can be described in modules. I tried to use platform-independent libraries; in particular, the widget manager uses the STL.

There are mainly four parts:

An XML parser: I am using TinyXml because it is portable (Windows XP/CE, Unix) and uses a logical approach. XML is used to save and load widget properties. For instance, if you put a Static in the form with a red background color, it will be saved as:


<object class="wxStaticText" name="1003">
    <pos>73,34</pos>
    <size>265,154</size>
    <label>Hello Code Project</label>
</object>


The format used looks like the one in use in WxWidgets, and that's not surprising, because my main goal is to do the porting to this framework.

A class to handle settings (CSettings): This class is what is called a Singleton in design patterns, which means that you can instantiate it only once. You can obtain a pointer to it from anywhere in your code as long as you include its header file (it's a kind of improved global variable; I am saying that for people coming from C). I am mentioning this class here, but in this project I am not using it.
A properties manager using PropertyViewLib: This class is used to display widget properties. To do so, each widget derives from it and implements two methods: GetProperties() and PropertyChanging(). GetProperties(...) is used to add/remove fields from the properties toolbar, while PropertyChanging is called every time you modify a property.

Read more: Codeproject

P.S. I just love the name!!! Use it every day!

Posted via email from .NET Info

Data Services Streaming Provider Series: Implementing a Streaming Provider

|
The Open Data Protocol (OData) enables you to define data feeds that also make binary large object (BLOB) data, such as photos, videos, and documents, available to client applications that consume OData feeds. These BLOBs are not returned within the feed itself (for obvious serialization, memory consumption and performance reasons). Instead, this binary data, called a media resource (MR), is requested from the data service separately from the entry in the feed to which it belongs, called a media link entry (MLE). An MR cannot exist without a related MLE, and each MLE has a reference to the related MR. (OData inherits this behavior from the AtomPub protocol.) If you are interested in the details and representation of an MLE in an OData feed, see Representing Media Link Entries (either AtomPub or  JSON) in the OData Protocol documentation.

To support these behaviors, WCF Data Services defines an IDataServiceStreamProvider interface that, when implemented, is used by the data service runtime to access the Stream that it uses to return or save the MR.  

What We Will Cover in this Series
Because it is the most straightforward way to implement a streaming provider, this initial post in the series demonstrates an IDataServiceStreamProvider implementation that reads binary data from and writes binary data to files stored in the file system as a FileStream. MLE data is stored in a SQL Server database by using the Entity Framework provider. (If you are not already familiar with how to create an OData service by using WCF Data Services, you should first read Getting Started with WCF Data Services and the WCF Data Service quickstart in the MSDN documentation.) Subsequent posts will discuss other strategies and considerations for implementing the IDataServiceStreamProvider interface, such as storing the MR in the database (along with the MLE) and handling concurrency, as well as how to use the WCF Data Services client to consume an MR as a stream in a client application.
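
To give a feel for the shape of such a provider before walking through the steps, here is a minimal, incomplete sketch of the two core members; the GetFilePath helper is hypothetical, standing in for the MLE-to-file mapping that this series builds out:

using System.Data.Services;
using System.Data.Services.Providers;
using System.IO;

public partial class PhotoServiceStreamProvider : IDataServiceStreamProvider
{
    // Hypothetical helper: maps a media link entry to its image file on disk.
    private string GetFilePath(object entity) { /* ... */ return null; }

    public Stream GetReadStream(object entity, string etag,
        bool? checkETagForEquality, DataServiceOperationContext operationContext)
    {
        // Return the media resource for this entity as a read-only FileStream.
        return File.OpenRead(GetFilePath(entity));
    }

    public Stream GetWriteStream(object entity, string etag,
        bool? checkETagForEquality, DataServiceOperationContext operationContext)
    {
        // The runtime copies the incoming request body into this stream.
        return File.Open(GetFilePath(entity), FileMode.Create, FileAccess.Write);
    }

    // The remaining members (GetStreamContentType, GetStreamETag, GetReadStreamUri,
    // DeleteStream, ResolveType, StreamBufferSize) are omitted from this sketch.
}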

Steps Required to Implement a Streaming Provider
This initial blog post will cover the basic requirements for creating a streaming data service, which are:

Create the ASP.NET application.
Define the data provider.
Create the data service.
Implement IDataServiceStreamProvider.
Implement IServiceProvider.
Attribute the model metadata.
Enable large data streams in the ASP.NET application.
Grant the service access to the image file storage location and to the database.
Now, let’s take a look at the data service that we will use in this blog series.

The PhotoData Sample Data Service
This blog series features a sample photo data service that implements a streaming provider to store and retrieve image files, along with information about each photo.

Read more: WCF Data Services Team Blog Part 1

Posted via email from .NET Info

Update on Google Wave

|
We have always pursued innovative projects because we want to drive breakthroughs in computer science that dramatically improve our users’ lives. Last year at Google I/O, when we launched our developer preview of Google Wave, a web app for real time communication and collaboration, it set a high bar for what was possible in a web browser. We showed character-by-character live typing, and the ability to drag-and-drop files from the desktop, even “playback” the history of changes—all within a browser. Developers in the audience stood and cheered. Some even waved their laptops.

We were equally jazzed about Google Wave internally, even though we weren’t quite sure how users would respond to this radically different kind of communication. The use cases we’ve seen show the power of this technology: sharing images and other media in real time; improving spell-checking by understanding not just an individual word, but also the context of each word; and enabling third-party developers to build new tools like consumer gadgets for travel, or robots to check code.

But despite these wins, and numerous loyal fans, Wave has not seen the user adoption we would have liked. We don’t plan to continue developing Wave as a standalone product, but we will maintain the site at least through the end of the year and extend the technology for use in other Google projects. The central parts of the code, as well as the protocols that have driven many of Wave’s innovations, like drag-and-drop and character-by-character live typing, are already available as open source, so customers and partners can continue the innovation we began. In addition, we will work on tools so that users can easily “liberate” their content from Wave.

Read more: Google blog

Posted via email from .NET Info

Microsoft Exchange Server 2007 SP1 VHD

|

Posted via email from .NET Info

eCryptfs - Enterprise Cryptographic Filesystem

|
eCryptfs is a POSIX-compliant enterprise-class stacked cryptographic filesystem for Linux.
It is derived from Erez Zadok's Cryptfs, implemented through the FiST framework for generating stacked filesystems. eCryptfs extends Cryptfs to provide advanced key management and policy features. eCryptfs stores cryptographic metadata in the header of each file written, so that encrypted files can be copied between hosts; the file will be decryptable with the proper key, and there is no need to keep track of any additional information aside from what is already in the encrypted file itself. Think of eCryptfs as a sort of “gnupgfs.”

Read more: eCryptfs

Posted via email from .NET Info

knfsd

|
This is a much-improved Linux NFS server with support for NFSv3 as well as NFSv2. NFSv4 support is being worked on. These patches are considered stable and are indeed shipping with most distributions. The stock Linux 2.2 NFS server can't be used as a cross-platform file server.

Read more: FreshMeat

Posted via email from .NET Info

Btrfs

|
Btrfs (B-tree file system, pronounced "Butter F S", "B-tree F S"[2]) is a GPL-licensed copy-on-write file system for Linux.

  Btrfs is intended to address the lack of pooling, snapshots, checksums and integral multi-device spanning in Linux file systems, these features being crucial as the use of Linux scales upward into larger storage configurations common in the enterprise.[1] Chris Mason, the principal author of the filesystem, has stated its goal was "to let Linux scale for the storage that will be available. Scaling is not just about addressing the storage but also means being able to administer and to manage it with a clean interface that lets people see what's being used and makes it more reliable."[3]
  Oracle has also begun work on CRFS (Coherent Remote File System), a network filesystem protocol intended to leverage the Btrfs architecture to gain higher performance than existing protocols (such as NFS and CIFS) and to expose Btrfs features such as snapshots to remote clients.[4]
   Btrfs 1.0 (with finalized on-disk format) was originally slated for a late 2008 release,[5] but a stable release has not been made as of July 2010. It has, however, been accepted into the mainline kernel for testing as of 2.6.29-rc1.[6] Several Linux distributions (including SLES 11 SP1 and the upcoming RHEL 6)[7] have also begun offering Btrfs as an experimental choice of root file system during installation. It is also used as the default file system for the mobile operating system MeeGo.[8]
The principal developer of the ext3 and ext4 file systems, Theodore Ts'o, has stated that ext4 is a stop-gap and that Btrfs is the way forward,[9] having "a number of the same design ideas that reiser3/4 had".

Read more: Wikipedia

Posted via email from .NET Info

MonoTools 2 for VisualStudio has been released

| Wednesday, August 4, 2010
We just released Mono Tools for Visual Studio.

There are four main features in MonoTools 2:

Soft debugger support.
Faster transfer of your program to the deployment system.
Support for Visual Studio 2010 in addition to 2008.
Polish, polish and more polish.

Posted via email from .NET Info

Introduction to Microsoft.Data.dll

|
I’ve been pretty busy recently working on cool features for “ASP.NET WebPages with Razor Syntax” (what a mouthful) and other things. I’ve worked on tons of stuff that I wish I could share with you, but what I can share is something that many people haven’t blogged about - Microsoft.Data.dll.

What is Microsoft.Data

It’s an awesome new assembly/namespace that contains everything you’ll ever need to access a database. In ASP.NET WebPages we wanted people to be able to access the database without having to write too many lines of code. Any developer that has used raw ADO.NET knows this pain:

using (var connection = new SqlConnection(@"Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Northwind.mdf;Integrated Security=True;User Instance=True")) {
   using (var command = new SqlCommand("select * from products where UnitsInStock < 20", connection)) {
       connection.Open();
       using (SqlDataReader reader = command.ExecuteReader()) {
           while (reader.Read()) {
               Response.Write(reader["ProductName"] + " " + reader["UnitsInStock"]);
           }
       }
   }
}

Wow, that’s a lot of code compared to:

using (var db = Database.OpenFile("Northwind")) {
   foreach (var product in db.Query("select * from products where UnitsInStock < 20")) {
       Response.Write(product.ProductName + " " + product.UnitsInStock);
   }
}

The user doesn’t have to learn about connection strings or how to create a command with a connection and then use a reader to get the results. Also, the first code block is tied to Sql Server, since we’re using specific implementations of the connection, command, and reader (SqlConnection, SqlCommand, SqlDataReader).

Compare this with the code below it. We’ve reduced the number of lines required to connect to the database, and the syntax for accessing columns is also a lot nicer; that’s because we’re taking advantage of C#’s new dynamic feature.

Why is it so much easier, you ask? Well, the Database class is what you’ll be working with when accessing data. There are several methods that let you perform different kinds of queries, and factory methods for connecting to the database.

Connecting to the Database
Sql Compact 4 is our main story when developing locally with WebMatrix, so we optimized for the “I have a database file under App_Data in my web site and I want to access it” case. The first overload we’re going to look at does exactly that and is named appropriately: Database.OpenFile.

Database.OpenFile takes either a full path or a relative path, and uses a default connection string based on the file extension in order to connect to a database. To see this in action, run the starter site template in WebMatrix and add this code to Default.cshtml:

var db = Database.OpenFile("StarterSite.sdf");
@ObjectInfo.Print(db.Connection)

The first line creates a database object with a connection pointing to the sdf file under App_Data. The second line takes advantage of our ObjectInfo helper (more on this later) to show the properties of the database object.

Read more: Unhandled Exception

Posted via email from .NET Info

Visual Studio LightSwitch

|

THE SIMPLEST WAY TO BUILD BUSINESS APPLICATIONS FOR THE DESKTOP AND CLOUD

Microsoft Visual Studio LightSwitch Beta helps you solve specific business needs by enabling you to quickly create professional-quality business applications, regardless of your development skills. LightSwitch is a new addition to the Visual Studio family. Visit this page often to learn more about this exciting product.

Read more: Visual Studio LightSwitch

Posted via email from .NET Info

Add-on Performance

|
In previous posts, we’ve written about the ways we’re making IE9 much faster, like the new script engine that uses multiple cores, and the new rendering subsystem that uses the dedicated graphics chip found on modern PCs. Another aspect of browser performance involves the external code that IE runs on behalf of users, or add-ons.

Add-ons introduce unique features that can enhance the browsing experience. However, they also decrease the browser’s performance in crucial activities like navigating to webpages and creating new tabs. In this way, add-ons affect key usage scenarios like startup and navigation.

Add-on performance is integral to an overall fast browsing experience. IE users expect the browser to be fast, with or without add-ons. We work towards several common goals with add-on developers: providing valuable features with the smallest performance and reliability impact possible (more on reliability in another post).

This blog post is the first in a series on how add-on developers can improve add-on performance. In this post, we’ll share data on the performance impact of add-ons today and how IE enables users to identify the performance impact of their add-ons and stay in control of their PCs. We’ll describe the user scenarios that are important for measuring performance and will walk through how to measure them.

We want add-on developers to have all the information they need to deliver fast, reliable add-ons that respect user choices. We want to make it clear how to test add-on performance. We ask add-on developers to start measuring add-on performance today and making their add-ons faster.

What is An Add-on?

Add-ons refer to Toolbars, Explorer Bars and Browser Helper Objects today. When add-ons are enabled in the browser, they can cause a performance impact for every tab opened and every webpage the user visits.

Another common type of extension is plug-ins, specifically ActiveX controls, like Adobe Flash, Apple QuickTime, and Microsoft Silverlight. Unlike add-ons that run in the browser across all web-pages, plug-ins run inside webpages and their performance impact is localized to the webpages that use them. The specifics of this post are about add-ons. Plug-in developers have similar opportunities to make the browsing experience faster and more reliable.

Accelerators, Webslices and Search Providers are a third class of extension. These are written in pure XML format, and were designed to not impact page or browser performance, reliability, or security.

Toolbar Buttons are another type of extension but they only impact IE’s performance when users press them and they’re mapped to an action that launches an add-on.

Understanding Add-on Performance Impact

Several studies regarding website response time report that users notice any delay over 0.2 seconds. Actions that are faster than 0.2 seconds appear instantaneous.  Scenarios with response times slower than that threshold can feel “slow” to users.

Read more: IEblog Part 1

Posted via email from .NET Info

Enforcing Single Instance WPF Applications

|
Introduction

Today the WPF Disciples, and in particular my good friend and fellow WPF Disciple Pete O'Hanlon, were sitting around the proverbial campfire, discussing how to enforce single instance WPF apps for Pete's cool Goldlight project. By single instance WPF apps, I mean limiting an executable to only one instance in execution. This can be useful in scenarios where multiple application instances may play havoc with shared state. I took some time away from writing my book to see if I could come up with something usable.

Building a Single Instance Application Enforcer

The singleton application model works like this:

User starts app1.
User starts app2.
App2 detects app1 is running.
App2 quits.


There is, however, a second part to our challenge, as Pete pointed out. What happens if we wish to pass some information from app2 to app1 before app2 quits? If, for example, the application is associated with a particular file type, and the user happens to double click on a file of that type, then we would have app2 tell app1 what file the user was trying to open.

To accomplish this we need a means to communicate between the two application instances. There are a number of approaches that could be taken, and some include:

Named pipes
Sending a message to the main application's window with a native API call
MemoryMappedFile
I chose to go with the MemoryMappedFile, along with an EventWaitHandle. The consensus among the Disciples was for a Mutex (not an EventWaitHandle), but the Mutex turned out not to provide the initial signaling that I needed. I have encapsulated the logic for the singleton application enforcement into a class called SingletonApplicationEnforcer (see Listing 1). The class instantiates the EventWaitHandle and informs the garbage collector, via the GC.KeepAlive method, that it should not be garbage collected. If this is the only application that has instantiated the EventWaitHandle with the specified name, then the createdNew argument will be set to true. This is how we determine if the application is the singleton application.
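
A minimal sketch of just that detection step (the handle name is an arbitrary placeholder; the full SingletonApplicationEnforcer class in Listing 1 adds the cross-instance messaging):

using System;
using System.Threading;

static class SingletonCheck
{
    static void Main()
    {
        bool createdNew;
        var handle = new EventWaitHandle(false, EventResetMode.ManualReset,
                                         "MyApp_SingletonEnforcer", out createdNew);
        if (!createdNew)
        {
            // Another instance already owns the named handle: signal it and quit.
            return;
        }

        // First instance: run the app, keeping the handle alive for its lifetime.
        GC.KeepAlive(handle);
    }
}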

Read more: Codeproject

Posted via email from .NET Info

SQL SERVER – The Self Join – Inner Join and Outer Join

| Tuesday, August 3, 2010
Self Join has always been a noteworthy case. It is interesting to ask questions about self joins in a room full of developers. I often ask: if there are three kinds of joins, i.e. Inner Join, Outer Join and Cross Join, what type of join is Self Join? The usual answer is that it is an Inner Join. In fact, it can be classified under any type of join. I have previously written about this in my interview questions and answers series. I have also mentioned this subject when I explained joins in detail in SQL SERVER – Introduction to JOINs – Basic of JOINs.

When I mention that Self Join can be an outer join, I often get a request for an example. I had created an example of Self Join using the AdventureWorks database earlier, but that was meant for the inner join as well. Let us create a new example today, where we will see how Self Join can be implemented as an Inner Join as well as an Outer Join.

Let us first create a table for employees. One of the columns in the table contains the ID of the manager, who is also an employee of the same company. This way, all the employees and their managers are present in the same table. If we want to find the manager of a particular employee, we need to use a self join.

USE TempDb
GO
-- Create a Table
CREATE TABLE Employee(
EmployeeID INT PRIMARY KEY,
Name NVARCHAR(50),
ManagerID INT
)
GO
-- Insert Sample Data
INSERT INTO Employee
SELECT 1, 'Mike', 3
UNION ALL
SELECT 2, 'David', 3
UNION ALL
SELECT 3, 'Roger', NULL
UNION ALL
SELECT 4, 'Marry',2
UNION ALL
SELECT 5, 'Joseph',2
UNION ALL
SELECT 7, 'Ben',2
GO
-- Check the data
SELECT *
FROM Employee
GO

We will now use inner join to find the employees and their managers’ details.

-- Inner Join
SELECT e1.Name EmployeeName, e2.name AS ManagerName
FROM Employee e1
INNER JOIN Employee e2
ON e1.ManagerID = e2.EmployeeID
GO
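
To see the Self Join behave as an Outer Join, we can switch to a LEFT OUTER JOIN, which keeps employees who have no manager (Roger, whose ManagerID is NULL) in the result with a NULL manager name. This is a sketch following the same pattern; the full post contains the author's own version.

-- Outer Join
SELECT e1.Name EmployeeName, e2.Name AS ManagerName
FROM Employee e1
LEFT OUTER JOIN Employee e2
ON e1.ManagerID = e2.EmployeeID
GO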

Read more: Journey to SQL Authority with Pinal Dave

Posted via email from .NET Info

Server installation options for ASP.NET MVC 2

|
I’ve answered several questions about installing ASP.NET MVC 2 on a server lately, and since I didn’t find a full summary I figured it was time to write one up. Here’s a look at some of the top options:
  • WebPI
  • Bin deploy
  • Run the full AspNetMVC2_VS2008.exe installer
  • Command-line install with aspnetmvc2.msi
WebPI

WebPI has quickly become my favorite way to install Microsoft web platform software (including development tools) on my development machine, and it’s a great option for installing on the server as well. I like WebPI for a lot of reasons – here are the top three:

  • It’s a tiny download (less than 2 MB)
  • It figures out which dependencies you need and which you already have installed, so you get the smallest download and fastest install possible
  • It’s one place to go to get all the new releases
So if you have desktop access to the server, probably the best option is to install ASP.NET MVC 2 via WebPI.

Bin Deployment

ASP.NET MVC was designed so you can use it without needing install permissions, e.g. working with a hosting provider who didn’t have ASP.NET MVC installed. Phil Haack wrote up instructions for Bin Deploying an ASP.NET MVC 1.0 application, and it’s only gotten easier since then. If your server has ASP.NET 4 installed, you’ll just need to set the reference to System.Web.Mvc to “Copy Local”.
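
In the project file, Copy Local corresponds to a <Private> element on the reference, which makes the build copy the assembly into the application's bin folder. As a sketch, the .csproj entry might look like this (the version and public key token shown are illustrative):

<Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
  <Private>True</Private>
</Reference>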


Read more: Jon Galloway

Posted via email from .NET Info

Globalization in C# .NET Assemblies

|
Abstract

C# .NET assemblies are often built and deployed globally. The current business model of outsourcing development overseas results in global customers. The Developer implementing the outsourced solution must take local languages and customs into account. There are different ways of expressing characters, dates, times, currencies and floating point numbers in different cultures. In this article, Anupam Banerji explains the .NET approach towards globalization, and introduces the tools and methods to successfully implement globalization.

Introduction

The software development business model today often results in the separation of the design and actual implementation in two separate countries. This will result in cultural differences. Dates and characters in Japan are expressed differently to those in the U.A.E. or Australia. A database in Spain holds currency information differently to that in the U.S. The Developer must be able to implement a foreign design that delivers the results expected overseas.

The .NET Framework provides us with objects to handle globalization issues. There are also several ways to compare and sort different cultural formats. Converting a Gregorian date to a Hijri date is a matter of a single call.

Culture & Locale Settings

There are two ways to set the culture in the .NET Framework. Both are properties of the Thread class in the System.Threading namespace: Thread.CurrentThread.CurrentCulture and Thread.CurrentThread.CurrentUICulture. The CurrentCulture property controls how values such as dates, numbers and currencies are formatted and parsed. The CurrentUICulture property determines which localized resources the user interface displays.

To set the properties, we create an instance of the CultureInfo object.

using System.Globalization;
CultureInfo ci = new CultureInfo("en-US");

The string “en-US” tells the .NET runtime that the neutral culture is English and that the locale, or region, is the United States. The CultureInfo instance has a number of read-only properties, which are used by the Developer to determine specific cultural settings.
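
To actually apply the culture, assign the instance to the current thread. A small sketch (not from the article):

using System;
using System.Globalization;
using System.Threading;

CultureInfo ci = new CultureInfo("en-US");
Thread.CurrentThread.CurrentCulture = ci;   // governs formatting and parsing
Thread.CurrentThread.CurrentUICulture = ci; // governs which resources are loaded

decimal price = 1234.56m;
Console.WriteLine(price.ToString("C", ci)); // $1,234.56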

Custom Cultures

A custom culture can be created by instantiating the CultureAndRegionInfoBuilder class. The Register() method of this object instance should be called to save the culture to the operating system. The example below shows a new culture created out of an existing culture:

CultureAndRegionInfoBuilder cb = new CultureAndRegionInfoBuilder("fr-AB", CultureAndRegionModifiers.None);
cb.LoadDataFromCultureInfo(new CultureInfo("fr-FR"));
cb.LoadDataFromRegionInfo(new RegionInfo("FR"));
cb.Register(); // persists the new culture; requires administrative rights

This creates a new culture “fr-AB”, copying the cultural settings from French in France. To register the new culture, the assembly must be executed with administrative rights.

Sorting and Comparing Strings

String sorting is a key difference between cultures. Characters are treated differently in the sort order and cultures often support more than one sort order. For example, the German in Germany culture has two sort orders; one is the dictionary sort order and the other is the phonebook sort order. Hungarian has the default and technical sort orders, and Georgian has the traditional and the modern sort orders! The design should specify sort order requirements before being approved.
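
A short example of how much the culture matters when comparing strings (a sketch, not from the article; the classic German case):

using System;
using System.Globalization;

// Under German dictionary sorting, "ß" compares equal to "ss";
// an ordinal comparison sees two different strings.
int linguistic = String.Compare("Straße", "Strasse",
    new CultureInfo("de-DE"), CompareOptions.None);
int ordinal = String.Compare("Straße", "Strasse", StringComparison.Ordinal);

Console.WriteLine(linguistic); // 0 (equal)
Console.WriteLine(ordinal);    // non-zero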

Read more: Codeproject

Posted via email from .NET Info

Serial Port Communication In C#

|
Welcome to my tutorial on Serial Port Communication in C#. Lately I've seen a lot of questions on how to send and receive data through a serial port, so I thought it was time to write on the topic. Back in the days of Visual Basic 6.0, you had to use the MSComm Control that was shipped with VB6. The only problem with this method was that you needed to make sure you included that control in your installation package, which was not really that big of a deal. The control did exactly what was needed for the task.

We were then introduced to .NET 1.1, and VB programmers loved the fact that Visual Basic had finally evolved into an OO language. It was soon discovered that, for all its OO abilities, the ability to communicate via a serial port wasn't available, so once again VB developers were forced to rely on the MSComm Control from previous versions of Visual Basic. Still not that big of a deal, but some were upset that an intrinsic way of serial port communication wasn't offered with the .NET Framework. Worse yet, C# developers had to rely on a Visual Basic control and namespace if they wanted to communicate via serial port.

Then along comes .NET 2.0, and this time Microsoft added the System.IO.Ports namespace, and within that was the SerialPort class. .NET developers finally had an intrinsic way of serial port communication, without having to deal with the complexities of interoping with an old legacy ActiveX OCX control. One of the most useful methods in the SerialPort class is the GetPortNames method. This allows you to retrieve a list of the ports (COM1, COM2, etc.) available to the computer the application is running on.

Now that we have that out of the way, let's move on to programming our application. As with all applications I create, I keep functionality separated from presentation; I do this by creating Manager classes that manage the functionality for a given process. What we will be looking at is the code in my CommunicationManager class. As with anything you write in .NET, you need to add references to the namespaces you'll be using:

using System;
using System.Text;
using System.Drawing;
using System.IO.Ports;

In this application I wanted to give the user the option of what format to send the message in, either text or hex, so we have an enumeration for that, and another enumeration for the type of message, i.e. Incoming, Outgoing, Error, etc. The main purpose of this second enumeration is to change the color of the text displayed to the user according to the message type. Here are the enumerations:

#region Manager Enums
/// <summary>
/// enumeration to hold our transmission types
/// </summary>
public enum TransmissionType { Text, Hex }

/// <summary>
/// enumeration to hold our message types
/// </summary>
public enum MessageType { Incoming, Outgoing, Normal, Warning, Error };
#endregion
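
Before going further into the manager class, here is a minimal standalone sketch (not the article's CommunicationManager) of the SerialPort basics the class builds on; the port name and settings are placeholders:

using System;
using System.IO.Ports;

// List the serial ports available on this machine.
foreach (string name in SerialPort.GetPortNames())
    Console.WriteLine(name);

SerialPort port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One);
port.DataReceived += (sender, e) =>
{
    // Read whatever has arrived in the receive buffer.
    Console.WriteLine("Incoming: " + port.ReadExisting());
};
port.Open();
port.WriteLine("Hello device"); // text; use Write(byte[], int, int) for hex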

Posted via email from .NET Info

Top 3 ways to return TOP 10 rows by an SQL query

|
In the past couple of months we have had quite a bit of influx of new people trying out DB2. Most have previous experience with other DBMSs like Oracle, Microsoft SQL Server, MySQL, and PostgreSQL. I see that reflected in the volume of questions that appear quite simple to those of us who have been around DB2. However, if you paid for your kids' braces with your Oracle SQL skills, the way you do things in DB2 may not be as apparent. Just today I got a lengthy list of questions from an ISV looking to make use of DB2 on the Cloud. So, I decided to write a few posts that may take DB2 people back to basics but, I hope, will make DB2 a bit more familiar to those who have not tried it before. This is the first post in what I hope will be a mini-series on how to get things done in DB2 for those who know how to get things done in other SQL databases.

One of the questions that I got was: “Can you define in the SQL itself a maximum number of retrieved rows (“TOP” in SQL Server, “rownum” in Oracle)?” Let me start by saying that I love it when people ask this question. Why? Because for the longest time I would come across code where a programmer would use the simplest SQL to fetch out a huge result set, sort it in the application to find the top 10 rows and dump the rest. Every decent DBMS out there lets you do it right; there is absolutely no excuse for this type of silliness. I am being kind here.

For example, in Microsoft SQL Server you would use TOP:
SELECT TOP 10 column FROM table

MySQL and PostgreSQL would use LIMIT like so:
SELECT column FROM table LIMIT 10

PostgreSQL v8.3 and later can also use this more standard SQL:
SELECT column FROM table FETCH FIRST 10 ROWS ONLY

An Oracle programmer would write:
SELECT column FROM table WHERE ROWNUM <= 10

In Sybase, you would set rowcount first:
SET rowcount 10
SELECT column FROM table

DB2, as you would expect, also has special SQL syntax to limit the number of rows returned by a query. You can simply append FETCH FIRST n ROWS ONLY to your query and you are set. By the way, this is the SQL:2008 standard, but I doubt many people care.

SELECT column FROM table FETCH FIRST 10 ROWS ONLY

Read more: FreeDB2.com

Posted via email from .NET Info

I Want A Debugger Robot

|
Hi,

My name is Sabin from the Platforms Global Escalation Services team at Microsoft, and today I want to share with you a recent experience I had debugging an issue reported by a hardware manufacturer.

The customer was doing a reboot test for their new server product line. They found that after hundreds of continuous reboots, there was always a single instance where the server took more than 20 minutes to start up, compared to the average 2-minute normal startup time. This happened only once every 300+ to 1000+ reboots. The number of reboots it took before the problem happened again varied randomly, so it was difficult to predict when the problem would occur.

Although they setup a live kernel debugging environment, they didn’t want to watch the computer screen for 10+ hours waiting for the problem to happen so they could manually hit Ctrl+Break in windbg. So instead they setup a video camera to film the computer screen 24x7, and they managed to find that when the “mysterious delay” happened the computer showed a gray screen with “Microsoft (R) Windows (R) version 5.1 (Build 3790: Service Pack 2)”.

The case came to me and the customer even shipped a problematic server to our office to troubleshoot the cause of the delay. The problem was that I didn’t want to stare at the computer screen for 10+ hours either!

The first thing I thought was that it would be fantastic if there were a robot sitting in front of Windbg, watching the elapsed time for each reboot, so it could hit Ctrl+Break in windbg if the server took more than 10 minutes to start. Then I asked myself, “Why not?”

I decided to build such a “robot” myself.  I went around and checked the Debuggers SDK document (which can be found in the windbg help document debugger.chm), and I realized that what I needed was a customized debugger. The functionality of the debugger is simple: it should be able to recognize the time when the server first starts and the time when the server reboots. If more than 10 minutes elapse between these two times, the customized debugger automatically breaks in to the server. The event callback interface IDebugEventCallbacks::SessionStatus and the client interface IDebugControl::SetInterrupt can meet my needs perfectly.

It is not that difficult to build such a customized debugger, which I called DBGRobot. I would like to share some code snippets which you may find helpful when building a customized debugger for a special debugging scenario, or as the basis for building a more complicated debugging robot.

First, we need to download and install the Windows Driver Kit Version 7.1.0. When installing the WDK be sure to select Debugging Tools for Windows.

http://www.microsoft.com/whdc/DevTools/WDK/WDKpkg.mspx

If you install the WDK to its default folder, which for version 7.1.0 is C:\WinDDK\7600.16385.1, the C:\WinDDK\7600.16385.1\Debuggers\sdk\samples folder will contain the sample code from the Debugger SDK. The dumpstk sample is of particular interest to us. We can copy some common code from it, such as out.cpp and out.hpp, which implement the IDebugOutputCallbacks interface.

Now let’s do some coding. The common code is copied from the Debuggers SDK sample Dumpstk; I have listed it here for clarity.

The first step is to create the IDebugClient, IDebugControl and IDebugSymbols interfaces (although IDebugSymbols is not used in this case). You need to call the DebugCreate() function to create the IDebugClient interface, and then use IDebugClient->QueryInterface() to query the IDebugControl and IDebugSymbols interfaces.


void
CreateInterfaces(void)
{
   HRESULT Status;

   // Start things off by getting an initial interface from
   // the engine.  This can be any engine interface but is
   // generally IDebugClient as the client interface is
   // where sessions are started.
   if ((Status = DebugCreate(__uuidof(IDebugClient),
                             (void**)&g_Client)) != S_OK)
   {
       Exit(1, "DebugCreate failed, 0x%X\n", Status);
   }

   // Query for some other interfaces that we'll need.
   if ((Status = g_Client->QueryInterface(__uuidof(IDebugControl),
                                          (void**)&g_Control)) != S_OK ||
       (Status = g_Client->QueryInterface(__uuidof(IDebugSymbols),
                                          (void**)&g_Symbols)) != S_OK)
   {
       Exit(1, "QueryInterface failed, 0x%X\n", Status);
   }
}

If you want to see the output from the debugging engine, you also need to implement the IDebugOutputCallbacks interface. The main function to be implemented is IDebugOutputCallbacks::Output(), which is quite simple as we only need to see the output in the command prompt stdout stream:

STDMETHODIMP
StdioOutputCallbacks::Output(
   THIS_
   IN ULONG Mask,
   IN PCSTR Text
   )
{
   UNREFERENCED_PARAMETER(Mask);
   fputs(Text, stdout);
   return S_OK;
}

Here comes our main code logic: we need to implement the IDebugEventCallbacks interface and monitor the SessionStatus events. In order for the debugger engine to deliver the SessionStatus events to us we need to set the DEBUG_EVENT_SESSION_STATUS mask in IDebugEventCallbacks::GetInterestMask():
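
// A minimal sketch, not the article's code: the class name EventCallbacks
// and the SessionStatus body below are assumptions, following the
// conventions of the Dumpstk sample.
STDMETHODIMP
EventCallbacks::GetInterestMask(
    THIS_
    OUT PULONG Mask
    )
{
    // Ask the engine to deliver session status events only.
    *Mask = DEBUG_EVENT_SESSION_STATUS;
    return S_OK;
}

STDMETHODIMP
EventCallbacks::SessionStatus(
    THIS_
    IN ULONG Status
    )
{
    // Status values such as DEBUG_SESSION_ACTIVE and DEBUG_SESSION_REBOOT
    // arrive here; the timing logic that eventually calls
    // g_Control->SetInterrupt() would hang off this notification.
    return S_OK;
}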

Read more: netdebugging

Posted via email from .NET Info

Serialization in C# .NET I - Custom Serialization

|
Abstract

Serialization in C# .NET plays a key role in various functions, such as remoting. Developers may often need to perform custom serialization in order to have complete control over the serialization and deserialization processes; the standard .NET serialization mechanism alone does not give the Developer that level of control. In this series of articles, Anupam Banerji explains serialization, the need for custom serialization, and how to implement custom serialization in your code.

Introduction

Built-in serialization of objects is a feature of the .NET Framework. Prior to serialization, the only way to store class objects was to write the objects into a stream object. This has two consequences. First, code has to be written to store each object. If the property is a user-defined type, or an object containing several other objects, then this task becomes very complicated very quickly. Second, if changes are made to class objects, then the code to store them must be changed too. This results in a doubling of effort for each change.

Serialization was introduced to provide the Developer with a simple, efficient and consistent way to store class objects. There are very few requirements for implementing standard serialization. The standard .NET serialization model also includes serialization events in order to recalculate values when stored objects are retrieved.

Standard Serialization

Standard Serialization is implemented in classes through a series of attributes. To implement serialization of a class, add the [Serializable] attribute above the class declaration. To exclude any calculated field, tag it with the [NonSerialized] attribute.

To recalculate objects when the object is deserialized, the Developer must implement the IDeserializationCallback interface.

To serialize a class, the Developer has a choice between a BinaryFormatter object and a SoapFormatter object. The BinaryFormatter serialization object should be used when serialization and deserialization occur between two .NET assemblies. The SoapFormatter object should be used when serialization and deserialization occur between a .NET assembly and a Simple Object Access Protocol (SOAP)-compliant executable. SOAP formatting will be discussed in another article.

Custom Serialization

Custom serialization is implemented through the ISerializable interface. The interface defines the GetObjectData() method; by convention, a matching constructor is also provided for deserialization. The GetObjectData() method is implemented as follows:

public void GetObjectData(SerializationInfo info, StreamingContext context)
{
   // Implemented code
}
The method takes two arguments. The first is the SerializationInfo object, which stores the data to serialize as name/value pairs (it uses an IFormatterConverter internally to convert values); we will use it in an example below. The StreamingContext object contains information about the purpose of the serialized object. For example, a StreamingContext of the Remoting type is set when the serialized object graph is sent to a remote or unknown location.
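
For the example class shown later in this article, a completed GetObjectData() might look like this sketch (the names match the deserialization constructor below):

public void GetObjectData(SerializationInfo info, StreamingContext context)
{
    // Store each value under a name; the deserialization constructor
    // reads them back with the matching typed Get* calls.
    info.AddValue("Name", Name);
    info.AddValue("ToSquare", ToSquare);
    // Squared is recalculated after deserialization, so it is not stored.
}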

The overloaded constructor has the same two arguments: the SerializationInfo object and the StreamingContext object.

The BinaryFormatter serialization object fires four events that the Developer may implement: OnSerializing, OnSerialized, OnDeserializing and OnDeserialized. The event handlers are designated with attributes in the class implementing the ISerializable interface. The methods marked as serialization events must take a StreamingContext argument, or else a runtime exception occurs. The methods must also return void.

If both interfaces are implemented, the OnDeserialization() method in the IDeserializationCallback interface is called after the OnDeserialized event, as the example output below shows.

A Quick Example: A Custom Serialization Class

We implement both serialization interfaces in our class declaration below:

using System.Runtime.Serialization;

[Serializable]
class TestClass : ISerializable, IDeserializationCallback  
{
   public string Name
   {
       get;
       private set;
   }

   public int ToSquare
   {
       get;
       private set;
   }

   [NonSerialized]
   public int Squared;

   public TestClass(string name, int toSquare)
   {
       Name = name;
       ToSquare = toSquare;
       ComputeSquare();
   }

   public TestClass(SerializationInfo info, StreamingContext context)
   {
       // Deserialization Constructor

       Name = info.GetString("Name");
       ToSquare = info.GetInt32("ToSquare");
       Console.WriteLine("Deserializing constructor");
       ComputeSquare();
   }

   private void ComputeSquare()
   {
       Squared = ToSquare * ToSquare;
   }

   [OnSerializing]
   private void OnSerializing(StreamingContext context)
   {
       Console.WriteLine("OnSerializing fired.");
   }

Read more: Codeproject

Posted via email from .NET Info

Changing the name of your SQL server

|
My company recently changed their standard naming conventions for computers, so yesterday I had to rename my workstation. Usually this isn’t a big deal, except that I’m running locally a default instance of SQL 2005 and a named instance of SQL 2008. Again, not a big deal since this is just my local playground. But I wanted to sync up the names.

Let’s say that my laptop was named “CHICAGO”. That makes the default instance also “CHICAGO”, and my named instance “CHICAGO\KATMAI”. Now my laptop name changed to “NEWCHICAGO”. My SQL instances stay as “CHICAGO” and “CHICAGO\KATMAI”. How do you change them to match the new computer name?

Couldn’t be simpler: just execute two procedures. For the default instance:

USE master;
GO

EXEC sp_dropserver 'CHICAGO';
GO

EXEC sp_addserver 'NEWCHICAGO', local;
GO

It’s the same for a named instance. Just add the instance name:

USE master;
GO

EXEC sp_dropserver 'CHICAGO\KATMAI';
GO

EXEC sp_addserver 'NEWCHICAGO\KATMAI', local;
GO

Then, just restart the SQL service and you should see the name change.
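
If you want to double-check the result, @@SERVERNAME should report the new name after the restart:

SELECT @@SERVERNAME;
-- NEWCHICAGO for the default instance, NEWCHICAGO\KATMAI for the named one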

Read more: SQLServerPedia

Posted via email from .NET Info

Getting started with shader effects in WPF

|
Introduction

Hardware-accelerated effects for WPF were first introduced in .NET 3.5 SP1. Very complex effects and graphically rich applications can be created with little impact on performance, thanks to the huge computing power of modern graphics cards. However, if you want to take advantage of this feature, you first need to learn a thing or two. The purpose of this article is to provide all the information you need to get started with Effects.

What is an effect?

Effects are an easy-to-use API to create (surprisingly) graphical effects. For example, if you want a button to cast a shadow, there are several ways to accomplish the task, but the simplest and most efficient method is to assign the "Effect" property of the button, either from code or in XAML:

MyButton.Effect = new DropShadowEffect() { ... };

<Button Name="MyButton" ... >
  <Button.Effect>
    <DropShadowEffect ... />
  </Button.Effect>
</Button>
As you can see, effects are so easy to use that you don't need any further explanation. The fun starts when you decide to write your own effects...

BitmapEffect, Effect, ShaderEffect... What?

First of all, there are several .NET classes that share the "Effect" suffix, and to make it even more confusing they are all in the System.Windows.Media.Effects namespace. However, not all of those classes are useful when it comes to hardware acceleration; in fact, some of them are completely useless.

BitmapEffect

The BitmapEffect class and its subclasses were originally supposed to provide the functionality of effects. However, this API doesn't use any hardware acceleration and it has been marked obsolete in .NET 4.0. It's strongly recommended to avoid using the BitmapEffect class or any of its subclasses!

Effect and its derived classes

As stated above, you apply an effect to a control by assigning the control's Effect property (the property is actually inherited from UIElement, just in case you needed to know). Now the question is... What needs to be assigned to the Effect property? The answer is as simple as it can be - it's an object of type Effect.

The Effect class is the base class of all hardware accelerated effects. It has three subclasses: BlurEffect, DropShadowEffect and ShaderEffect. The first two are ready-to-use effects included directly in the .NET library. The ShaderEffect class is the base class of all custom effects.

Why BlurEffect and DropShadowEffect?

Why are there only 2 fully implemented effects in the library and why don't these 2 effects derive from ShaderEffect? I can't answer the first question, but I can tell you what makes BlurEffect and DropShadowEffect so special.

Both DropShadowEffect and BlurEffect use complex algorithms that require multiple passes, but multi-pass effects are not normally possible. However, the guys at Microsoft probably did a few dirty hacks deep inside the unmanaged core of the WPF rendering engine and created these two effects.

Note: It is possible to create a single-pass blurring algorithm, but such an algorithm is terribly slow compared to multi-pass blurring. Anyway, there are probably more reasons why these 2 effects are implemented in a special way.

How does it work?

If you want to take advantage of hardware acceleration, you first need to know how the whole thing works.

A few words about the GPU architecture

The architecture of Graphics Processing Units (GPUs) is different from the architecture of CPUs. GPUs are not general-purpose; they are designed to perform simple operations on large data sets. The operations are executed with a high degree of parallelism, which results in great performance.

Modern GPUs are becoming more and more programmable, and the range of tasks that can be executed on GPUs is growing (although there are several restrictions, described below). Small programs executed on the GPU are called shaders. There are several kinds of shaders: vertex shaders and geometry shaders are used when rendering 3D objects (and are not used by WPF Effects), while pixel shaders are used to perform simple operations on pixels.

There are even attempts to use the sheer computing power of GPUs for general purpose programming... Unfortunately there are several restrictions, such as limited number of instructions in one program, no ability to work with advanced data structures, limited memory management abilities etc. Amazing speed comes with several trade-offs...

Pixel shaders

A pixel shader is a short program that defines a simple operation executed on each pixel of the output image. That's pretty much all you need to create all kinds of interesting pixel-based effects.
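
To give a feel for where this is heading, here is a minimal sketch of a custom ShaderEffect wrapper (an illustrative example, not from the article; it assumes a pixel shader compiled from HLSL into a file Grayscale.ps and added to the project as a resource):

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Effects;

public class GrayscaleEffect : ShaderEffect
{
    // Binds sampler register S0 to the Input brush (the element being rendered).
    public static readonly DependencyProperty InputProperty =
        ShaderEffect.RegisterPixelShaderSamplerProperty(
            "Input", typeof(GrayscaleEffect), 0);

    public GrayscaleEffect()
    {
        PixelShader = new PixelShader
        {
            UriSource = new Uri("pack://application:,,,/Grayscale.ps")
        };
        UpdateShaderValue(InputProperty);
    }

    public Brush Input
    {
        get { return (Brush)GetValue(InputProperty); }
        set { SetValue(InputProperty, value); }
    }
}

An instance of such a class is then assigned to any element's Effect property, exactly like the built-in effects shown earlier.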

Read more: Codeproject

Posted via email from .NET Info

Creating Primary Keys Across Databases

|
When you horizontally partition data across multiple SQL Azure databases or use Data Sync Server for SQL Azure, there might come a time when you need to write to a member database without causing primary key merge conflicts. In this case you need to be able to generate a primary key that is unique across all databases. In this article we will discuss different techniques to generate primary keys, along with their advantages and disadvantages.

UniqueIdentifier
One way to generate unique primary keys is to use the NEWID() function in Transact-SQL, which generates a GUID as a uniqueidentifier data type. The GUID is guaranteed to be unique across all databases.

Advantages:

  • It is a native type to SQL Azure.
  • The GUID space is so large that, for practical purposes, you will never run out of values.
  • It works with both horizontal partitioning and Data Sync Services.

Disadvantages:

  • Based on the GUID alone, there is no way to identify which database generated it. This can cause extra complications when doing horizontal partitioning.
  • The uniqueidentifier data type is large and will add to the size of your row.
Bigint
Another option is to use a bigint data type in place of an int. In this technique, the primary key is generated as an identity column; however, each identity column in each database starts at a different offset. The different offsets produce non-conflicting primary keys.

The first question most people ask is whether the bigint data type is big enough to represent all the primary keys needed. The bigint data type can be as large as 9,223,372,036,854,775,807 because it is stored in 8 bytes. This is 4,294,967,298 times bigger than the maximum size of an int data type: 2,147,483,647. This means that you could potentially have 4 billion SQL Azure databases horizontally partitioned with tables of around 2 billion rows. More information about data types and sizes can be found here.

On the first SQL Azure database you would create the table like this:

CREATE TABLE TEST(x bigint PRIMARY KEY IDENTITY (1,1))
On the second SQL Azure database you would create the table like this:

CREATE TABLE TEST(x bigint PRIMARY KEY IDENTITY (2147483648,1))
And continue incrementing the seed value for each database in the horizontal partitioning.

Advantages:

  • It is easier to upgrade from a legacy table that used an int data type as the primary key to a bigint data type (the legacy table would be the first partition).
  • You can repartition more easily than with some of the other techniques, since moving rows involves a straightforward case statement (not a recalculated hash).
  • The data tier code implementing the partitioning can figure out which partition a primary key belongs to, unlike with a uniqueidentifier primary key.
  • The bigint data type consumes 8 bytes of space, which is smaller than the uniqueidentifier data type that takes up 16 bytes.

Disadvantages:

  • The database schema for each partition is different.
  • This technique works well for horizontal partitioning, but not for Data Sync Service.
Primary Key Pool
In this technique a single identity database is built where all the primary keys are stored, but none of the data. This identity database just has a set of matching tables that contain a single column of integers (int data type) as an auto-incrementing identity. When an insert is needed on any of the tables across the whole partition, the data tier code inserts into the identity database and fetches the @@IDENTITY. This primary key from the identity database is used as the primary key to insert into the member database or the partition. Because the identity database is generating the keys, there is never a conflict.
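
As a sketch of the idea (an illustrative example, for a single matching table), the identity database holds a keys-only table and every insert allocates a key there first:

-- In the identity database: a keys-only table for, say, Customers.
CREATE TABLE CustomerKeys(ID int PRIMARY KEY IDENTITY(1,1))
GO
-- Allocate a key, then use it as the primary key when inserting
-- the actual row into the member database or partition.
INSERT INTO CustomerKeys DEFAULT VALUES
SELECT @@IDENTITY AS NewCustomerID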

Read more: SQL Azure team blog

Posted via email from .NET Info