This is a mirror of official site: http://jasper-net.blogspot.com/

Win32 Exceptions – OS Level Point of View

| Thursday, May 20, 2010
One of the widely used but not well documented features of the 32-bit Windows operating system is structured exception handling. This common exception service, provided by the operating system, is used by C++ compilers.

This article describes the Win32 exception layering and discusses structured exception handling (SEH) at the OS level and at the C++ compiler level.

Describing exceptions from the OS-level point of view helps in understanding the functionality and the performance cost of exceptions. It then becomes straightforward to see how SEH and C++ exceptions can be mixed, and how SEH exceptions can be caught as C++ exceptions, which may be the main motivation for reading this article.

The best way to understand the topic is to play with the attached examples.

Win32 Exception Layering

The following diagram shows how the common exception service provided by the OS (OS Level SEH) is used by C++ compilers for structured exception handling (C++ Compiler Level SEH) and for the well-known C++ exceptions (C++ Exception Handling).

It is important to understand that both C++ Compiler Level SEH and C++ Exception Handling use the same OS Level SEH.

OS Level SEH

The common exception service provided by OS handles:

Software (Synchronous) Exceptions - explicitly pass control to the operating system through a software interrupt
Hardware (Asynchronous) Exceptions - e.g. access violation, integer division by 0, illegal instruction, etc.
Exception Callback Function

Whenever an exception occurs, the OS calls the Exception Callback Function within the current thread context.

This function can be defined by the user; its prototype is as follows (defined in the excpt.h header):

EXCEPTION_DISPOSITION __cdecl _except_handler
(
    struct _EXCEPTION_RECORD* _ExceptionRecord,
    void*                     _EstablisherFrame,
    struct _CONTEXT*          _ContextRecord,
    void*                     _DispatcherContext
);

Depending on the returned value, the OS will perform a certain action (defined in excpt.h):


typedef enum _EXCEPTION_DISPOSITION
{
    ExceptionContinueExecution, // tells the OS to restart the faulting instruction
    ExceptionContinueSearch,    // tells the OS to continue searching for an
                                // Exception Callback Function
    ExceptionNestedException,
    ExceptionCollidedUnwind
} EXCEPTION_DISPOSITION;

The first parameter _ExceptionRecord describes the exception (defined in WinNT.h):

typedef struct _EXCEPTION_RECORD
{
    DWORD     ExceptionCode;    // which exception occurred
    DWORD     ExceptionFlags;   // additional exception info
    struct _EXCEPTION_RECORD* ExceptionRecord;
    PVOID     ExceptionAddress; // where the exception occurred
    DWORD     NumberParameters;
    ULONG_PTR ExceptionInformation[EXCEPTION_MAXIMUM_PARAMETERS];
} EXCEPTION_RECORD;

The third parameter _ContextRecord is a pointer to the CONTEXT structure, which represents the register values of a particular thread at the time of the exception. This structure is defined in WinNT.h and is the same structure used by the GetThreadContext/SetThreadContext API functions.
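
The dispatch semantics described above (each handler on the chain is asked in turn, and ExceptionContinueSearch passes the exception on to the next handler) can be modelled with a short runnable sketch. This is a toy Python model of the OS search loop, not Windows code; the two exception codes are the real NTSTATUS values, everything else is illustrative:

```python
from enum import Enum, auto

class Disposition(Enum):
    CONTINUE_EXECUTION = auto()  # OS restarts the faulting instruction
    CONTINUE_SEARCH = auto()     # OS asks the next handler on the chain

class ExceptionRecord:
    """Tiny stand-in for EXCEPTION_RECORD."""
    def __init__(self, code, address):
        self.code = code          # which exception occurred
        self.address = address    # where the exception occurred

def dispatch(handlers, record):
    """Walk the handler chain the way the OS walks frame-based SEH handlers."""
    for handler in handlers:
        if handler(record) is Disposition.CONTINUE_EXECUTION:
            return handler        # this handler claimed the exception
    return None                   # nobody claimed it -> unhandled exception

STATUS_ACCESS_VIOLATION = 0xC0000005
STATUS_INTEGER_DIVIDE_BY_ZERO = 0xC0000094

def ignore_div_zero(rec):
    return (Disposition.CONTINUE_EXECUTION
            if rec.code == STATUS_INTEGER_DIVIDE_BY_ZERO
            else Disposition.CONTINUE_SEARCH)

def catch_av(rec):
    return (Disposition.CONTINUE_EXECUTION
            if rec.code == STATUS_ACCESS_VIOLATION
            else Disposition.CONTINUE_SEARCH)

chain = [ignore_div_zero, catch_av]
assert dispatch(chain, ExceptionRecord(STATUS_ACCESS_VIOLATION, 0x1234)) is catch_av
assert dispatch(chain, ExceptionRecord(0xDEADBEEF, 0)) is None
```

The point of the model is only the search order: the first handler that returns "continue execution" wins, and "continue search" hands the record to the next frame's handler, exactly as the dispositions in the enum above describe.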


Read more: Codeproject

Posted via email from jasper22's posterous

Turn On MSDTC Windows 7


Here is a step by step guide to turn on MSDTC on Windows 7. MSDTC settings are somewhat hidden in Windows 7 and I could not get to them as I did on Windows XP or Server 2003. I found a way and I’m putting it out here.

Step 1
Run dcomcnfg. This will open MMC with the Component Services snap-in.

Read more: One .NET Way


Google one-ups DNS pre-resolution, adds predictive pre-connections to Chromium

It's only been live for a few hours, and Google hasn't yet published before-and-after comparisons, but it looks like speculative pre-connection is now built into the developer tree of Chromium.

As with most of these clever under-the-hood type changes, it's hard to describe just how much this will improve your browsing experience, but I'm going to try.

Basically, pre-connection opens an HTTP (or HTTPS) connection to a search engine before you've finished typing your query into the Chrome address bar (Omnibox). With the socket already open to Google (or Yahoo, or...) your complete search term can be quickly transmitted. Like the clever DNS pre-fetching already present in stable builds of Chrome, we're probably looking at significant speed-ups of half a second or more. Neat.

If that wasn't cool enough, the same patch includes pre-connection to 'subresources, such as images'.

Read more: DownloadSquad


NASA Finds Cause of Voyager 2 Glitch

Earlier this month, engineers suspended Voyager 2's science measurements because of an unexpected problem in its communications stream. A glitch in the flight data system, which formats information for radioing to Earth, was believed to be the problem. Now NASA has found the cause of the issue: it was a single memory bit that had erroneously flipped from a 0 to a 1. The cause of the error is yet to be understood, but NASA plans to reset Voyager's memory tomorrow, clearing the error.

Read more: Slashdot


Office 2010 technical support articles


Writing XML in C# .NET with XmlTextWriter

A few days ago I started writing an application that will write and read settings saved in an XML file.

So, considering that I have never done this in C# (I’m fairly new to it), I read some documentation and came to the conclusion that XmlTextWriter would be best for the job. Am I wrong? I could use XmlDocument, but since I don’t need to do anything more than simply write XML, I see no reason to use it.

So let’s get dirty. :)


First, add the required namespace directive for working with XML in C#:

using System.Xml;

Now we need to create a new XmlTextWriter instance.

XmlTextWriter writer = new XmlTextWriter("data.xml", null);

data.xml will be the name of the file that will be created, in this case in the same folder as the exe itself.

Passing null means the default encoding, UTF-8, will be used.

Now we are going to set the indenting.

writer.Formatting = Formatting.Indented;

Now, let’s get to the actual XML elements. First, we need to add the declaration to the XML.

writer.WriteStartDocument();

Which will output this:

<?xml version="1.0"?>

Then it is time to add the root element to the XML.

writer.WriteStartElement("users");

After root, we will add a node called profile with an attribute called id that has the value of 10.
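
For readers who want a runnable analogue of this writer-style API, Python's standard library has a close cousin of XmlTextWriter in xml.sax.saxutils.XMLGenerator (note that, unlike Formatting.Indented, it does not indent). The element and attribute names below mirror the C# walkthrough; the text value "jasper" is made up for the example:

```python
import io
from xml.sax.saxutils import XMLGenerator

out = io.StringIO()
gen = XMLGenerator(out, encoding="utf-8")

gen.startDocument()                        # <?xml version="1.0" encoding="utf-8"?>
gen.startElement("users", {})              # root element
gen.startElement("profile", {"id": "10"})  # child node with an id attribute
gen.characters("jasper")                   # illustrative text content
gen.endElement("profile")
gen.endElement("users")
gen.endDocument()

xml = out.getvalue()
assert '<profile id="10">jasper</profile>' in xml
```

The call sequence is the same shape as the C# code: start the document, open elements in order, and close them in reverse order so the writer keeps the document well-formed for you.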

Read more: lessthanweb


Creating Professional Installers with Wix#

Here at Russell we have a fairly strict deployment story. As you can imagine with any financial company, when dealing with other people’s money mistakes are punishable by death. Having a failed rollout because your 37 page install document wasn’t completely accurate and left out some minor detail that prevented the server administrator from continuing is a pretty big mistake.

To mitigate this risk, we as developers try to control and automate as much of our deployments as possible. Some of the strategies we use include:

Automated build that produces a deployment package that is rigorously tested for completeness
A single rollout script to be run in SQL server that is tested with every build
Heavy use of PowerShell scripts to automate complex tasks like setting up MSMQ queues
Deployment of applications using MSI installers built with Wix
Wix, or Windows Installer XML, is a great tool for creating very high quality, professional-looking installers. The downside is that it has a learning curve steeper than Everest and suffers from severe angle-bracket tax. It is not uncommon to struggle for several days trying to get a Wix-based MSI to do all the little things you want it to accomplish for you.

To get over these hurdles you need a solid foundation of knowledge and a really nice abstraction layer. Kevin Miller has laid out the solid foundation for us in his series from 2007 called Creating Windows Installers Using WIX. I highly suggest reading it and walking through your own project using the raw Wix model.

When you are done with that, let me introduce you to a really nice abstraction layer. Oleg Shilo’s Wix# project allows you to define your installation package in C#, you know, a general purpose programming language that is designed to model real world processes. Not a markup language like XML. Seriously, if I could go back to 2001 and find the guy that decided that XML was the solution to all problems and beat him senseless, I would.

Read more: I Am Not Myself


Google Wave: New features for Robots: Bundled Annotations, Inline Blips, Read-Only Roles

Over the last few releases, we've been rolling out incremental improvements to the robots API, based on the feedback from all of you developers. For those of you who haven't been reading the forum waves and changelogs, here's a summary of the new features:

Bundled Annotations:

When you're adding new text to a blip, you often want to annotate that text with a particular set of annotations. In the past, you had to calculate the range of that text and use the annotate operation, like so:

blip.append('New text')
blip.range(len(blip.text) - 8, len(blip.text)).annotate('style/fontWeight', 'bold')

This often led to off-by-1 errors and frustration. Now with bundled annotations, you can specify both the content to append and the annotation(s) to apply to that content, all in the same operation, like so:

blip.append('New Text', bundled_annotations=[('style/fontWeight', 'bold')])

For more information, read the announcement wave.

Read more: Google Wave


Using Eucalyptus for developing private & public Cloud

I’ve been doing some work with a Eucalyptus Cloud setup lately, and I thought I would write about my experience with it. I was interested in the possibility of using popular open source software available in the cloud domain, and I thought Eucalyptus could be a good candidate for developing private and public clouds.

Many of the instructions below refer to a single-cluster installation, in which all components except the NC (Node Controller) are co-located on one machine, which we refer to as the front-end. All other machines, running only NCs, will be referred to as nodes. In more advanced configurations, such as those with multiple CCs or with Walrus deployed separately, the front-end refers to just the machine running the CLC.

Steps for installation :

All packages can be found on the Eucalyptus Web site: http://open.eucalyptus.com/downloads

Unpack the Eucalyptus source using the following command:

tar zvxf eucalyptus-1.6.1-src.tar.gz
cd eucalyptus-1.6.1

Install the eucalyptus-cloud and eucalyptus-cc packages on the front-end machine:

sudo apt-get install eucalyptus-cloud eucalyptus-cc

Next, install the eucalyptus-nc package on each node (mynode / 0.0.0.0):

sudo apt-get install eucalyptus-nc

Finally, on the node, bring down the eucalyptus-nc service and modify the file /etc/eucalyptus/eucalyptus.conf with the name of the bridge that we set up as the node’s primary interface.
After successfully adding a node to the cluster, we have to register the Eucalyptus components.

Registering Eucalyptus Components :

Eucalyptus assumes that each node in the system belongs to a cluster and that each cluster belongs to a cloud. Each node (there is only one node in this example) runs a copy of eucalyptus-nc. Similarly, each cluster (again, there is only one cluster in this example) must run a copy of eucalyptus-cc. For simplicity, the eucalyptus-cc in this example runs on the same machine as the cloud controller (eucalyptus-clc).

Read more: experience@imaginea

Official site: Eucalyptus Web


Google's Going Native in Chrome With SDK

Google is accelerating its effort this week to bring more powerful and fully functioned applications to the Web with the release of the Native Client SDK preview.

Native Client is an open source technology that enables native C or C++ code to run in a Web browser, bringing more advanced applications to the Web that can run inside of Google's Chrome browser.

The approach extends the capabilities of Web-based applications beyond the limitations imposed by using JavaScript, and the SDK builds on efforts to promote the technology that Google has had underway since last year.

"When we released the research version of Native Client a year ago, we offered a snapshot of our source tree that developers could download and tinker with, but the download was big and cumbersome to use," David Springer, a senior software engineer at Google, wrote in a blog post. "The Native Client SDK preview, in contrast, includes just the basics you need to get started writing an app in minutes."

With the new SDK, Google is providing a GCC-based compiler for C and C++ source code as well as samples to help developers build native-code-compliant applications.

One concern that has been raised about Native Client is its portability: JavaScript is available for multiple browsers, while Native Client is being developed by Google and currently works only for Chrome.

"Native Client seems like a huge leap backwards to me," a commenter using the alias "Guspaz" wrote in response to the Native Client blog post. "Why would anybody want to use Native Client when it will prevent your app from running on a variety of platforms such as older Macs, smartphones, tablets, and even smartbooks (including ChromeOS) that run PowerPC or ARM processors? I'm just not seeing the point here."

As it turns out, portability is a key theme for Native Client development, according to Google. Henry Bridge, Google's product manager for Native Client, responded to concerns about lock-in by noting that Google is deeply committed to building a system that's platform-neutral. That said, Bridge admitted that Google has yet to ship a neutral-platform format for the SDK, or an ARM compiler either. He added that Google has no plans to build a compiler for PowerPC.

Read more: developer.com


Running a Silverlight application in the Google App Engine platform

This post shows you how to host a Silverlight application in the Google App Engine (GAE) platform. You deploy and host your Silverlight application on Google’s infrastructure by creating a configuration file and uploading it along with your application files.

I tested this by uploading an old demo of mine - the four stroke engine silverlight demo. It is currently being served by the GAE over here: http://fourstrokeengine.appspot.com/

The steps to run your Silverlight application in GAE are as follows:

Account Creation

Create an account at http://appengine.google.com/. You are allocated a free quota at signup.

Select “Create an Application”
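
The configuration file mentioned above is GAE's app.yaml. A minimal sketch for serving a static Silverlight app might look like the following; the application id matches the demo URL above (fourstrokeengine.appspot.com), but the file names and handler paths are illustrative assumptions, not taken from the original post:

```yaml
application: fourstrokeengine
version: 1
runtime: python
api_version: 1

handlers:
# Serve the Silverlight host page at the root URL.
- url: /
  static_files: index.html
  upload: index.html
# Serve the Silverlight package with its proper MIME type.
- url: /(.*\.xap)
  static_files: \1
  upload: (.*\.xap)
  mime_type: application/x-silverlight-app
```

Since the Silverlight plug-in runs entirely on the client, GAE only has to serve these files statically; no server-side code is involved.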

Read more: Raj Kaimal


THE CASE OF THE LOW HANGING FILTER DRIVER FRUIT

Not all our cases are crashes, leaks, or high CPU.  Sometimes the problems we are faced with are purely a question of why a given application runs slow on one version of Windows versus another.  In other cases an application may just start running slow for no reason.  OK, not likely.  There is ALWAYS SOME REASON.  Something changed!  In this case, the customer reported that an application started running slow when booted into “Normal Mode”, but when the OS was booted in safe mode, the application would run fast.  In this particular case the customer reported that a given operation went from taking just a few seconds (safe mode) to several minutes (normal mode).  Further research found that the problem was related to accessing the registry and registry performance in general.  At this point I’m already thinking, “Registry Access?” and “Safe Mode”.  What could affect registry access that does not run in safe mode?  Well, lots of services DO NOT start in safe mode.  What kind of services could affect registry calls?  Antivirus?  Maybe…  Let’s look deeper.

One of the first things I typically do in such cases is to ask for a kernrate log of the slow and fast scenarios: http://download.microsoft.com/download/8/e/c/8ec3a7d8-05b4-440a-a71e-ca3ee25fe057/rktools.exe  Kernrate is a sampling profiler.  It basically checks the location of the instruction pointer at regular intervals and stores the results in a hash table.  We can then get a breakdown of the percentage of time spent in each module that is executing.  Even better, you can zoom in on each module.  Zooming in shows utilization at a function level within the module and requires symbols to be present in a flat symbol directory on the machine being profiled.  I recommend downloading the symbol pack for this (http://www.microsoft.com/whdc/devtools/debugging/symbolpkg.mspx) or using symchk.exe (included in the debugging tools) to download the symbols.  We’ll talk more about symbols and symchk.exe in an upcoming post.
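
Kernrate's technique, sampling where execution currently is at regular intervals and tallying the results in a hash table, can be sketched with a toy sampling profiler. This is a Python illustration of the idea, not kernrate itself; it samples the main thread's current frame from a helper thread and counts hits per function name:

```python
import sys
import threading
import time
from collections import Counter

def sampler(target_ident, counts, stop, interval=0.001):
    """Periodically record which function the target thread is executing."""
    while not stop.is_set():
        frame = sys._current_frames().get(target_ident)
        if frame is not None:
            counts[frame.f_code.co_name] += 1  # tally by function name
        time.sleep(interval)

def busy_work():
    """A deliberately hot loop for the sampler to catch."""
    deadline = time.time() + 0.3
    while time.time() < deadline:
        pass

counts = Counter()
stop = threading.Event()
thread = threading.Thread(
    target=sampler, args=(threading.get_ident(), counts, stop))
thread.start()
busy_work()
stop.set()
thread.join()

# The hot loop should dominate the collected samples.
assert counts["busy_work"] > 0
```

Just like kernrate's per-module breakdown, the resulting Counter shows where time was spent without instrumenting the profiled code at all; the cost is only the sampling overhead.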

In a lot of cases kernrate data is only a starting point.  We will find some code that is running a lot longer in one case versus another, and that in turn requires a follow-up code review and multiple debug sessions to further isolate the problem.  This case however was different.  The following is output from Beyond Compare that shows a comparison of the module execution time in the kernel.  The slow test run is on the right, and the fast test run is on the left.  Keeping in mind that I was looking for something different between safe mode and normal mode, I simply started by looking at the modules listed on the slow side (right) that were not on the fast side (left): what was loaded during the normal run that was not a factor during the safe-mode run?  Right away FILTERDRVXYZ, just above the HAL, jumped off the page.  (Some names were changed to protect the innocent. :))  I did a http://www.live.com search to find out what this driver was.  It was a file system filter driver for an antivirus program.

Read more: NTDEBUGGING BLOG


How to find CPU Information on a Linux machine

You can use the following command in your Linux shell to find information about the CPU on your Linux box:

shell> cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 16
model : 2
model name : Quad-Core AMD Opteron(tm) Processor 2350
stepping : 3
cpu MHz : 2000.082
cache size : 512 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc nonstop_tsc pni cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy altmovcr8 abm sse4a misalignsse 3dnowprefetch osvw
bogomips : 4003.77
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate [8]
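
The fields above are plain "key : value" pairs, with a blank line separating one processor's block from the next, so they are easy to consume programmatically. A minimal sketch in Python; it works on captured text like the listing above, or on the contents of /proc/cpuinfo read directly on a Linux box:

```python
def parse_cpuinfo(text):
    """Parse /proc/cpuinfo-style 'key : value' lines into one dict per processor."""
    cpus, current = [], {}
    for line in text.splitlines():
        if not line.strip():
            if current:            # blank line ends a processor block
                cpus.append(current)
                current = {}
            continue
        key, _, value = line.partition(":")
        current[key.strip()] = value.strip()
    if current:                    # flush the last block
        cpus.append(current)
    return cpus

sample = """processor : 0
vendor_id : AuthenticAMD
cpu MHz : 2000.082
cpu cores : 4
"""
cpus = parse_cpuinfo(sample)
assert cpus[0]["vendor_id"] == "AuthenticAMD"
assert cpus[0]["cpu cores"] == "4"
```

On a real machine you would call parse_cpuinfo(open("/proc/cpuinfo").read()) and, for example, count the entries to get the number of logical processors.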


Read more: ProgrammingBulls


Suggested Hardware for up to 500 Concurrent Users

We wrote this whitepaper using the Performance Toolkit to test hardware configurations up to the 500 user level:
http://www.microsoft.com/downloads/details.aspx?FamilyID=3bf7ecda-7eaf-4f1c-bbfe-cae19bc8bb78&displaylang=en

Read more: Dynamics CRM Benchmarking


Quickstart: Creating a Bootable USB Drive

This is a task that can come in quite handy in numerous situations, especially as more and more client & server machines go virtual and the OS itself needs to be installed on hardware that doesn’t have a DVD drive. Hopefully this will prevent some of you from having to spend hours trying to figure out why it’s ‘so hard to make a bootable USB drive’ when burning an ISO image to DVD is so easy.

Two kinds of Bootable USBs

Bootable USB without OS so you can put any files on it - whether those files boot or not is entirely up to what you’re doing…you’re on your own here. An example of using this method is with custom Windows Deployments where custom ISO images or boot folders are created. These must be deployed via USB using this option, NOT the second one mentioned below
Bootable USB with an OS so you can just install the OS image from boot. This option requires a Windows Installation source, so you can’t use this for custom OS images, non-Windows or other bootable purposes
Bootable USB without OS

Insert your USB stick (4GB+ preferable) into the system and back up all the data from it, as we are going to format the USB drive to make it bootable.
Open an elevated Command Prompt. To do this, type CMD in the Start menu search field and hit Ctrl + Shift + Enter. Alternatively, navigate to Start > All Programs > Accessories, right-click on Command Prompt and select Run as administrator.
When the Command Prompt opens, enter the following command:
DISKPART and hit enter.
LIST DISK and hit enter.

Once you enter the LIST DISK command, it will show the disk number of your USB drive. In the image below, my USB drive's disk number is Disk 1.

In this step you need to enter all the commands below one by one, hitting Enter after each. As these commands are self-explanatory, you can easily guess what they do.
SELECT DISK 1 (Replace DISK 1 with your disk number)
CLEAN
CREATE PARTITION PRIMARY
SELECT PARTITION 1
ACTIVE
FORMAT FS=NTFS (the format process may take a few seconds)
ASSIGN
EXIT

Read more: shadowbox | 2010 wave for developers


How to replace the inbox Serial.sys driver with the Serial sample driver from the WDK

Windows Resource Protection (WRP) prevents badly behaved applications from overwriting essential Windows operating system files. Unfortunately, WRP can create difficulties for a driver developer who wants to develop and test a driver to replace a Windows inbox driver. The .kdfiles kernel debugger command is currently the only supported way to replace an inbox driver (and circumvent WRP) in Windows 7 and Windows Vista.

I discovered this fact the hard way when I recently tried to install the Serial sample driver from the WDK on my Windows 7 computer. At that time, the readme file for this sample described an out-of-date technique for replacing the inbox Serial.sys driver with the Serial sample driver. This technique is discussed in a white paper titled Windows File Protection and Windows, which was published when Windows XP was released in 2001. In Windows XP, Windows File Protection—the predecessor to WRP—can be disabled by changing the value of a registry key named SFCDisable, but this technique does not work in Windows 7 and Windows Vista.

The .kdfiles command can be used to replace files only on a target computer that is being controlled by the Windows kernel debugger. Typically, the debugger runs on a host computer that communicates with the target computer through a debugging cable.

In my debugging configuration, a null-modem cable connects a COM (serial communication) port on the target computer to a COM port on the host computer. However, a serial connection might be too slow if you intend to copy large files between the target and the host. If you need faster communication, the Windows debugging tools also support IEEE 1394 and USB 2.0 connections.

Because I wanted to use the kernel debugger to step through the Serial sample driver, which controls a COM port, I needed a target computer that had a second COM port to connect to the kernel debugger on the host computer. I dedicated COM2 to the debugging connection, but left COM1 under the control of the installed serial driver.

Windows has intrinsic support to enable a computer to serve as a target for a kernel debugger. The debugging support module that runs on the target computer has a built-in serial driver that it uses to communicate with the host computer. In other words, the debugging connection does not use the installed serial driver. Thus, communication with the kernel debugger on the host computer continues without interruption if the debugger hits a breakpoint in the installed serial driver.

On the target computer, Device Manager does not display the COM port that the debugger is using. However, if the target computer is restarted with kernel debugging disabled, this COM port is again displayed by Device Manager and is again controlled by the installed serial driver.

Read more: Windows Driver Kit (WDK) Documentation Blog


Mockup to XAML

Convert Balsamiq Mockups to XAML. This project supports BMML mockup control conversion using plugins. A standard set of controls is included with the core application.

The purpose of this project is to provide the Silverlight and WPF community the ability to convert a Balsamiq mockup file to XAML code.

More information about Balsamiq Mockups can be found on the company's website:
http://www.balsamiq.com/
The developers of this project are in no way related to Balsamiq.

Please refer to the Documentation section for information on creating plugins for the MockupToXaml application.

Read more: Codeplex


Google Translation API Integration in .NET

Language localization is one of the important aspects of a site or application nowadays. If you want your site or application to be more popular than others, it should support more than one language. Sometimes it is difficult to translate an entire site into other languages, so I have found a great solution: you can use the Google Translation API to translate your site or application dynamically. Here are the steps required to integrate the Google Translation API into Microsoft .NET applications.

First, you need to download the class library DLLs from the following site:

http://code.google.com/p/google-language-api-for-dotnet/

Go to this site and download GoogleTranslateAPI_0.1.zip.

Once you have done that, you need to add a reference to GoogleTranslateAPI.dll as follows.

Read more: All about me


COMMUNITY GOODIES: RESOURCE INDEX FOR C++/CLI

Introduction

This section helps you explore the features in C++/CLI. You will find useful learning links, videos, walkthroughs, guided tours, books and tutorials.

Overview

Stanley B. Lippman. Pure C++ Hello, C++/CLI

http://msdn.microsoft.com/en-us/magazine/cc163681.aspx

Vivek Ragunathan. C++/CLI Primer - Enter the World of .NET Power Programming

http://www.codeproject.com/KB/mcpp/C___CLI_Primer.aspx

Learning Links

MSDN documentation. Language Features for Targeting the CLR

http://msdn.microsoft.com/en-us/library/xey702bw.aspx

MSDN documentation. CLR Development (How Do I in Visual C++)

http://msdn.microsoft.com/en-us/library/ms177554.aspx

MSDN documentation. Best Practices for Writing Efficient and Reliable Code with C++/CLI

http://msdn.microsoft.com/en-us/library/aa730837(VS.80).aspx

Herb Sutter. Hello, C++/CLI Keywords

http://blogs.msdn.com/hsutter/archive/2003/11/23/53519.aspx

Herb Sutter. Hello, C++/CLI Rationale

http://www.gotw.ca/publications/C++CLIRationale.pdf


Standard ECMA-372 C++/CLI Language Specification

http://www.ecma-international.org/publications/standards/Ecma-372.htm

Stanley B. Lippman. Why C++/CLI Supports both Templates for CLI Types and the CLI Generic Mechanism

http://blogs.msdn.com/slippman/archive/2004/08/05/209606.aspx

Bjarne Stroustrup's (developer of C++) views on C++/CLI

http://www2.research.att.com/~bs/bs_faq.html#CppCLI

Walkthroughs

http://msdn.microsoft.com/en-US/library/e6w9eycd(v=VS.100).aspx

How do I videos

Read more: COMMUNITY GOODIES


Multi-config USB devices and Windows

Part of the USB device framework is the ability of a device to expose "configurations," mutually exclusive definitions of what the device can do. Each configuration exposes its own:
- Set of USB interfaces and endpoints,
- Device power requirements, and
- Class- or device-specific information.

If the device has multiple configurations, they can be very similar or very different, but software must choose one of them in order for the device to work. Often we see the first configuration chosen (and by "first" I mean the one defined by the configuration descriptor at index 0). A frequently asked question is: What does it take to influence this choice of configuration value made in software? Let's take a look at each software component.

- The USB core stack supports devices with multiple configurations. Client drivers can select any of the device's configurations. MSDN reference

- The USBCCGP composite device driver has support for multiple device configurations, with a few caveats. MSDN reference
USBCCGP will not load on a multi-config device by default because the hub driver doesn't create a "USB\COMPOSITE" PNP ID for a composite device if it has multiple configurations. However, you can write your own INF that matches a device-specific PNP ID to get USBCCGP to load as your device's function driver.
To select a configuration other than index 0, you must set registry settings as specified in "Selecting the Configuration for a Composite USB Device". During enumeration, USBCCGP will first attempt to select the configuration whose descriptor is found at the specified "original" index. If the attempt fails, normally due to the configuration requiring more than 100mA while the device's upstream hub has only bus power, then USBCCGP attempts to select the configuration found at the specified "alternate" index instead.
Drivers that are clients of USBCCGP cannot change the device's configuration value.

- KMDF itself can be used by the device's function driver; however, KMDF's USB I/O Target functionality does not support any device configuration other than the first.

- WinUSB does not support any device configuration other than the first.

- Class drivers often lack support for multiple device configurations. If your device implements a class defined by a USB class specification, please refer to the specification and the relevant Microsoft documentation for a definitive answer.

To sum up, if you are writing your own driver for your device, you can freely choose the device's configuration at runtime, though if you do so you cannot use some of KMDF's USB features. Otherwise, unless your device's class supports multiple configurations, you need to use USBCCGP's selection mechanism, outlined above, to bring multiple configurations into play.

Read more: Microsoft Windows USB Core Team Blog


Host Silverlight Control in C++ using ATL

This article explains how we can host the MS Silverlight control in a C++ application using ATL/WTL, without using a browser (IE/FF).

Most people think that the Silverlight control can't be used in a C++ application. In this article, we will walk through hosting a Silverlight control and loading Silverlight content (a .xap file) in it.

Using this approach, we can write and distribute desktop applications that use Silverlight (without IE/FF), relying only on the MS Silverlight runtime, which is very lightweight compared to the .NET Framework.

Why do we need to host the Silverlight control in a C++ application? Why don't we use WPF right away? This approach gives you a way to build your desktop application with a Silverlight user interface without using the complete .NET Framework; we only need the Silverlight runtime.

I am using Silverlight 4.0 for this sample.

COM Reference:

Microsoft provides COM reference for silverlight control. Please take a look at this URL http://msdn.microsoft.com/en-us/library/cc296246(VS.95).aspx.

Create an ATL Application:

Create an ATL out-of-process (executable) project. I used the name "AtlProject" in my sample.

Step 1:

Implement IXcpControlHost2 Interface.

Open the AtlProject.idl file and use the following .idl, provided by Microsoft, to implement IXcpControlHost so that it can host the Silverlight ActiveX control.


Read more: Codeproject

Posted via email from jasper22's posterous

The Glasgow Haskell Compiler and LLVM

|
If you read the LLVM 2.7 release notes carefully you would have noticed that one of the new external users is the Glasgow Haskell Compiler (GHC). As the author of the LLVM backend for GHC, I have been invited to write a post detailing the design of the backend and my experiences with using LLVM. This is that post :).

I began work on the backend around July last year, undertaking it as part of an honours thesis for my bachelor of Computer Science. Currently the backend is quite stable and capable on Linux x86, able to bootstrap GHC itself. Other platforms haven't received any attention yet.

What is GHC and Haskell

GHC is a compiler for Haskell, a standardized, lazy, functional programming language. Haskell supports features such as static typing with type inference, lazy evaluation, pattern matching, list comprehensions, type classes and type polymorphism. GHC is the most popular Haskell compiler; it compiles Haskell to native code and supports x86, PowerPC and SPARC.

Existing pipeline

Before the LLVM backend was added, GHC already supported two backends, a C code generator and a native code generator (NCG).

The C code generator was the first backend implemented, and it works pretty well but is slow and fragile due to its use of many GCC-specific extensions and its need to post-process the assembly code produced by GCC to implement optimisations which aren't possible in the C code. The native code generator was started later to avoid these problems. It is around 2-3x quicker than the C backend and generally reduces the runtime of a Haskell program by around 5%. GHC developers are hoping to deprecate the C backend in the next major release.

Why an LLVM backend?

Offload work: Building a high-performance compiler backend is a huge amount of work; LLVM, for example, was started around 10 years ago. Going forward, the LLVM backend should be a lot less work to maintain and extend than either the C backend or the NCG. It will also benefit from any future improvements to LLVM.
Optimisation passes: GHC does a great job of producing fast Haskell programs. However, there are a large number of lower level optimisations (particularly the kind that require machine specific knowledge) that it doesn't currently implement. Using LLVM should give us most of them for free.
The LLVM Framework: Perhaps the most appealing feature of LLVM is that it has been designed from the start to be a compiler framework. For researchers like the GHC developers, this is a great benefit and makes LLVM a very fun playground. For example, within a couple of days of the public release of the LLVM backend one developer, Don Stewart, wrote a genetic algorithm to find the best LLVM optimisation pipeline to use for various Haskell programs (you can find his blog post about this here).


Read more: LLVM PROJECT BLOG

Posted via email from jasper22's posterous

If Windows 3.11 required a 32-bit processor, why was it called a 16-bit operating system?

|
Commenter Really16 asks via the Suggestion Box how 32-bit Win32s was, and why Windows 3.11 was called 16-bit Windows when it required a 32-bit CPU and ran in 32-bit protected mode.

First, let's look at how Windows worked in so-called Standard mode. Actually, it was quite simple: In Standard mode, Windows consisted of a 16-bit protected-mode kernel which ran applications in 16-bit protected mode. I suspect there would be no controversy over calling this a 16-bit operating system.

With the introduction of Enhanced mode, things got more complicated. With Enhanced mode, there were actually three operating systems running at the same time. The operating system in charge of the show was the 32-bit virtual machine manager which ran in 32-bit protected mode. As you might suspect from its name, the virtual machine manager created virtual machines. Inside the first virtual machine ran... a copy of Standard mode Windows. (This is not actually true, but the differences are not important here. Don't make me bring back the Nitpicker's Corner.)

The other virtual machines each ran a copy of MS-DOS and were responsible for your MS-DOS sessions. Recall that Enhanced mode Windows allowed you to run multiple MS-DOS prompts that were pre-emptively multi-tasked. These other virtual machines ran in a variety of modes, but spent most of their time in virtual-86 mode. MS-DOS applications could use the DPMI interface to switch into 16-bit protected mode, or even 32-bit protected mode if they wanted to. (And that's how Standard mode Windows ran inside the first virtual machine: It used the DPMI interface to switch to 16-bit protected mode.)

It's kind of stunning to realize that Enhanced mode Windows was really a completely new operating system with multiple virtual machines, pre-emptively multi-tasked with virtual memory. In principle, it could have created a virtual machine and hosted yet another random operating system inside it, but in practice the only two operating systems it bothered to host were Standard mode Windows and MS-DOS.

Enhanced mode Windows was called a 16-bit operating system because it ran 16-bit Windows applications (inside a "Windows box", you might say). The supervisor operating system was a 32-bit operating system, but since applications didn't run in supervisor mode, that really didn't mean much. For all anybody cared, the supervisor operating system could have been written in 6502 assembly language. As long as it does its supervisory job, it doesn't matter what it's written in. What people care about is the applications that you could run, and since Enhanced mode Windows ran 16-bit Windows applications, and since it ran a copy of 16-bit Standard mode Windows to do all the things that people considered Windows-y, it was the number 16 that was important.

Read more: The old new thing

Posted via email from jasper22's posterous

Why I Switched to Git From Mercurial

|
I used Mercurial for three years, but started switching to Git about a year ago. I now grudgingly recommend Git to anyone who intends to be a full-time programmer. Git's interface is bad in many ways, which is the main complaint about it, and it's a legitimate one. It's just an interface, though, and this is a tool you're going to use all day, every day, in a wide variety of situations.

Here are all of the ways that Mercurial has harmed me, or that I've seen it harm others, and the ways in which Git does good where Mercurial does evil:

One: Mercurial is bad at handling large amounts of data. A friend accidentally committed a couple GB of data into a Mercurial repository. It became completely broken, to the point where most commands would die because they ran out of memory. Git has no problem with large data. It's awesome to be able to put, say, an entire home directory or ports install under version control without fear. (I recently put a multi-gigabyte Mac Ports install under version control with Git without even thinking about it.)

Two: Mercurial's repository model is clunky and stays hidden in the background (this is a bad thing; don't let anyone tell you otherwise). If you have a Mercurial repository whose size is dominated by a single, 20 MB directory, and you then rename that directory, your repository just doubled to 40 MB. This has limited my ability to manage real-life Mercurial repositories. Git's repository model is so good that I only hesitate slightly when calling it perfect. It allows me to think about what's going on in the repository with an ease that I never had with Mercurial, despite using it much more than Git.

Read more: Extra Cheese

Posted via email from jasper22's posterous

ICopyHook implementation

|
Lately, I had to implement ICopyHook extension for my project. But, I could not get it working like other normal extensions. I tried searching for sample source code on the web/CodeProject with no success. So, I had no choice but to dig down to get it working. And success was not too far away.

Introduction to ICopyHook Interface

An ICopyHook handler is a shell extension that determines whether a folder or a printer can be moved, copied, renamed or deleted. It works with folders only, not with individual files. ICopyHook should only approve or deny the operation by returning the appropriate value.

The ICopyHook interface has one method, CopyCallback, that we need to implement as we like. ICopyHook is not really the name of the interface; it is defined as follows:

#ifdef UNICODE
#define ICopyHook ICopyHookW
#else
#define ICopyHook ICopyHookA
#endif
CopyCallback method in ICopyHookA is defined as:

STDMETHOD_(UINT,CopyCallback) (THIS_ HWND hwnd, UINT wFunc, UINT wFlags,
  LPCSTR pszSrcFile, DWORD dwSrcAttribs,
  LPCSTR pszDestFile, DWORD dwDestAttribs) PURE;
and CopyCallback method in ICopyHookW is defined as:

STDMETHOD_(UINT,CopyCallback) (THIS_ HWND hwnd, UINT wFunc, UINT wFlags,
 LPCWSTR pszSrcFile, DWORD dwSrcAttribs,
 LPCWSTR pszDestFile, DWORD dwDestAttribs) PURE;
So you see, you need to implement the right CopyCallback method for your type of compilation, i.e., for UNICODE or non-UNICODE.

Implementing ICopyHook

Create an ATL DLL project. I have named it as CopyHook.
Add a new ATL Object from the Insert menu.
From the category, select Objects, and select Simple Object from the Objects list.
Give a name (I have given it as MyHook). In the Properties, select Threading Model as 'Apartment', and interface as 'Dual'.
Add ICopyHook in the list of derivations of the class.
class ATL_NO_VTABLE CMyHook : public CComObjectRootEx<CComSingleThreadModel>,
 public CComCoClass<CMyHook, &CLSID_MyHook>,
 public ICopyHook,  // ICopyHook interface.
 public IDispatchImpl<IMyHook, &IID_IMyHook, &LIBID_COPYHOOKLib>
Add the following in the COM Map:
BEGIN_COM_MAP(CMyHook)
 COM_INTERFACE_ENTRY(IMyHook)
 COM_INTERFACE_ENTRY(IDispatch)
 COM_INTERFACE_ENTRY_IID(IID_IShellCopyHook, CMyHook)
END_COM_MAP()
Add the appropriate CopyCallback method to the class and implement it. My implementation of CopyCallback just pops up a dialog.
And of course, include shlobj.h.

Read more: Codeproject

Posted via email from jasper22's posterous

Workaround for accessing some ASMX services from Silverlight 4

|
In this blog we normally talk about building and accessing WCF services, since that is the recommended way to build services for Silverlight. However, we continue to support ASMX services using the familiar "Add Service Reference" feature.

Just recently some folks trying to talk to the ASMX services that SharePoint 2010 exposes brought an interesting issue to my attention. You would hit this issue if you were accessing SharePoint’s userprofileservice.asmx, but more generally you could hit this with any ASMX service that uses known types with Guid/char (for example the operation signature returns Object, but you actually return a Guid/char from within the operation).

When you hit this issue, you get the following exception as Silverlight tries to deserialize the service response:

System.ServiceModel.Dispatcher.NetDispatcherFaultException was unhandled by user code. The formatter threw an exception while trying to deserialize the message: There was an error while trying to deserialize parameter http://tempuri.org/:HelloWorldResponse. The InnerException message was 'Error in line 1 position 268. Element 'http://tempuri.org/:HelloWorldResult' contains data of the 'http://microsoft.com/wsdl/types/:guid' data contract. The deserializer has no knowledge of any type that maps to this contract. Add the type corresponding to 'guid' to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding it to the list of known types passed to DataContractSerializer.'.  Please see InnerException for more details.

(Same goes for http://microsoft.com/wsdl/types/:char)

The error is caused by the fact that ASMX has a slightly different schema for the Guid and char types, which is currently not supported in Silverlight. We are looking at addressing this as soon as possible.

As a workaround, we can use the message inspectors feature that we introduced in Silverlight 4. The attached sample shows how a message inspector can be added to the client when you are talking to an ASMX service. It processes every incoming message and adjusts the way char and Guid are represented, which results in WCF understanding those types correctly.

If you reuse the classes defined in the attached solution, all you need to do is add the new behavior to your generated proxy, like so:

ServiceReference1.WebService1SoapClient proxy = new ServiceReference1.WebService1SoapClient();
           
// add behavior here
proxy.Endpoint.Behaviors.Add(new AsmxBehavior());
           
proxy.HelloWorldCompleted += new EventHandler<ServiceReference1.HelloWorldCompletedEventArgs>(proxy_HelloWorldCompleted);
proxy.HelloWorldAsync();

Read more: Silverlight Web Services Team

Posted via email from jasper22's posterous

2010 Best Tech to Work For

|
While the blog was on hiatus, Fortune released its 2010 Best Companies to Work For list. For the fourth straight year, a software company was listed as the number one best place to work (SAS 2010, NetApp 2009, Google 2008 and 2007). In total, there were 11 hardware/software companies included on the list, all of which have appeared on the list before:

(Image: 2010 best tech companies to work for)

Read more: My technical experience

Posted via email from jasper22's posterous

Shut down, restart, log off and forced log off system using C#

|
In this article, I am going to show:
How to shut down a machine
How to log off a machine
How to force a log off
How to restart a machine using C#

To perform our task, first let us create a Windows application project. On the form, drag and drop four buttons for the four different operations.

Navigate to the code-behind of the form and add a using directive for System.Runtime.InteropServices.

Add a static extern method to Form.cs

[DllImport("user32.dll")]
public static extern int ExitWindowsEx(int operationFlag, int operationReason);

Log off  the System

On the click event of the Log Off button, call ExitWindowsEx with the proper flag. For the log-off operation, the flag value is 0.

private void btnLogOff_Click(object sender, EventArgs e)
{
   ExitWindowsEx(0, 0);
}

Forced Log off the System

On the click event of the Forced Log Off button, call ExitWindowsEx with the proper flag. For the forced log-off operation, the flag value is 4.

private void btnForcedLogOff_Click(object sender, EventArgs e)
{
  ExitWindowsEx(4, 0);
}

Shut Down the System

On the click event of the Shut Down button, call ExitWindowsEx with the proper flag. For the shut-down operation, the flag value is 1.

private void btnShutDown_Click(object sender, EventArgs e)
{
   ExitWindowsEx(1, 0);
}

Restart the System

On the click event of the Restart button, call ExitWindowsEx with the proper flag. For the restart operation, the flag value is 2.

private void btnRestart_Click(object sender, EventArgs e)
{
   ExitWindowsEx(2, 0);
}

Now when you run the application, all the system operations should be performed. Note that shutting down or restarting requires the calling process to hold the SE_SHUTDOWN_NAME privilege; logging off does not.

For your reference full source code is given here


Read more: C# Corner

Posted via email from jasper22's posterous

Most Common ASP.NET Support issues - Reporting from deep inside Microsoft Developer Support

|
Microsoft Developer Support or ("CSS" - Customer Support Services) is where you're sent within Microsoft when you've got problems. They see the most interesting bugs, thousands of issues and edge cases and collect piles of data. They report this data back to the ASP.NET team (and other teams) for product planning. Dwaine Gilmer, Principal Escalation Engineer, and I thought it would be interesting to get some of that good internal information out to you, Dear Reader. With all those cases and all the projects, there's basically two top things that cause trouble in production ASP.NET web sites. Long story short, Debug Mode and Anti-Virus software.

Thanks to Dwaine Gilmer, Doug Stewart and Finbar Ryan for their help on this post! It's all them!

#1 Issue - Configuration

Seems the #1 issue in support for problems with ASP.NET 2.x and 3.x is configuration.

Symptoms

People continue to deploy debug versions of their sites to production. I talked about how to automatically transform your web.config and change it to a release version in my Mix talk on Web Deployment Made Awesome. If you want to save yourself a headache, release with debug=false.

Additionally, if you leave debug=true on individual pages, note that this will override the application level setting.

Here's why debug="true" is bad. Seriously, we're not kidding.

Overrides request execution timeout making it effectively infinite
Disables both page and JIT compiler optimizations
In 1.1, leads to excessive memory usage by the CLR for debug information tracking
In 1.1, turns off batch compilation of dynamic pages, leading to 1 assembly per page.
For VB.NET code, leads to excessive usage of WeakReferences (used for edit and continue support).
An important note: Contrary to what is sometimes believed, setting retail="true" in a <deployment/> element is not a direct antidote to having debug="true"!
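In practice, the fix is a one-line change in web.config. A minimal release fragment might look like this (your real file will of course contain many more settings):

```xml
<configuration>
  <system.web>
    <!-- debug="false" restores the request execution timeout and
         re-enables page and JIT compiler optimizations -->
    <compilation debug="false" />
  </system.web>
</configuration>
```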

#2 Issue - Problems with an External (non-ASP.NET) Root Cause

Sometimes when you're having trouble with an ASP.NET site, the problem turns out not to be ASP.NET itself. Here are the top three issues and their causes. This category is for cases that were concluded to have external root causes and are outside of the control of support to directly affect. The sub-categories are 3rd-party software, anti-virus software, hardware, virus attacks, DoS attacks, etc.

Read more: SCOTT HANSELMAN'S COMPUTERZEN.COM

Posted via email from jasper22's posterous

NQueue

|
NQueue provides an enterprise-level work scheduling and execution framework and toolset that contains no single point of failure. Using a farm of servers and a clustered SQL Server backend, multiple NQueue Windows services compete to evaluate configured schedules and execute work.

NQueue is a distributed system written in C# composed of the following high-level components:
• Admin tool – A Windows Forms application that enables add and delete operations against the various artefacts in an NQueue installation (jobs, schedules, host instances). This would typically be used to view or change the system configuration, although it can also be used to immediately enqueue items of work for execution (during testing, for example).
• NQueue Monitor website – A website allowing operations and support staff to view the progress of configured jobs. They may also pause or disable job instances from this tool.
• SQL Database – all state for the system is stored in a central (clustered) SQL database.
• Windows services – NQueue processing services running on any number of configured servers, competing to evaluate job schedules and execute job code.
• Client API – A .NET class library that users can consume/inherit from to allow their job code to interact with the framework.
• NQueueCmd – command-line enqueuing of work to execute immediately.

Read more: Codeplex

Posted via email from jasper22's posterous

7 Windows 7 Registry Hacks

|
In the seven months since Windows 7 made its appearance, it's already helped bury some of the bad will generated by its immediate and underloved predecessor, Windows Vista. We were a little bit skeptical at first about whether it was worth buying, but we've really come to appreciate its features, from increased speed to DirectX 11 support, as well as its overall attractiveness and ease of use.
Of course that doesn't mean it can't be improved a bit more in certain areas. Sure, keyboard and mouse shortcuts are nice, but you can only get more thorough personalization—like changing the look of the logon screen, the Taskbar, or even Internet Explorer 8's title bar—by digging deeper—into the Registry.
We did some investigating and dug up these seven tweaks that you can make to drastically change the way Windows 7 looks and behaves, most of which require spending only a few minutes in Regedit. (One requires spending a few seconds in Windows Explorer, too.) Insert the standard disclaimer here: Playing around in the Registry can be potentially dangerous to your computer, so don't dive in unless you feel confident about looking for, and changing, things in the Registry.
The easiest way to start Regedit is to hit the Window key on your keyboard, type regedit, and then hit Enter. (You can also do this by clicking on the Start button as well.) Before you make any changes it's probably smart to back up the key or subkey you're planning on tinkering with. Once you've navigated to the key you're planning to change, right-click on it and select "Export" from the pop-up menu. Pick a location to save the resulting REG file, and you're protected.
In this story, Registry entries are frequently represented with quotation marks around them for clarity; you shouldn't type those in when you're making your changes. And once you've changed a key, it won't take effect right away—you'll need to exit Windows (or reboot) and restart first.
Have a favorite Registry tweak or hack of your own? Let us know in the comments.

1. Change Your Logon Screen Background

Changing the wallpaper on your desktop is one of the easiest things to do in Windows. But if you can have that display any image you want, why not do the same with your logon screen?
1. Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background.
2. Find the "OEMBackground" key; or right-click in the right pane and select "New," then "DWORD (32-bit) Value" to create it (and then give it that name).
3. Double-click on "OEMBackground" to open it.
4. Change the value in the "Value data" field to 1.
5. Click OK.
6. Using Windows Explorer, navigate to your Windows directory, then System32\oobe. If there's a folder in here called "info," go into it; if there's a folder inside of that one called "backgrounds," go into that. If neither exists, you'll need to create them both first.
7. Copy the image (it must be a JPEG, and smaller than 256KB in size) you want to use as your logon screen background into the info\backgrounds folder.
8. Rename the image backgroundDefault.jpg. (Note: If you choose an image that's sized differently than your desktop and you change your resolution, it will be adjusted to fit—with a possible loss in quality. The info\background folder also supports 12 other files of specific resolutions. The files should be named backgroundXXXXX.jpg, where the XXXXX is one of the following: 900x1440, 960x1280, 1024x1280, 1280x1024, 1024x768, 1280x960, 1600x1200, 1440x900, 1920x1200, 1280x768, or 1360x768. For example, background1920x1200.jpg will be used at 1,920-by-1,200 resolution, and so on.)
The next time you restart your computer, or log out, you'll see this image as the new logon screen. If you chose an image that prevents the buttons and text from looking their best on the logon screen, you can adjust their appearance as well.
1. Navigate back to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Authentication\LogonUI (you're not going into Background this time).
2. Add a DWORD value called "ButtonSet."
3. Change its value to either 1 (darker text shadows and lighter buttons, intended for lighter backgrounds) or 2 (no text shadows and opaque buttons, for darker backgrounds); 0 is the Windows default.

Read more: ExtremeTech

Posted via email from jasper22's posterous

Exception handling best practices in ASP.NET web applications

|
Exception handling plays an important part of application management and user experience. If implemented correctly it can make maintenance easier and bring the user experience to a higher level. If not, it can be a disaster.

How many times have you seen an error message that doesn't make any sense or provide any valuable information? Or even better: how many times have you seen the famous error screen with an exception message and a complete stack trace on a yellow background? Too many times, I would say. This is why, among other things, some of my colleagues were very interested in exception handling techniques and best practices.

The goal of this article is to provide an overview of what exception handling is from the perspective of the user and of the people who maintain the application, and to show best practices for implementing useful error handling in ASP.NET web applications. This article is related to my previous articles CSS Message Boxes for different message types and Create MessageBox user control using ASP.NET and CSS, since those two articles describe how to show user-friendly messages.

1. What information should be presented to the user?

Like I mentioned before, meaningless error messages will confuse users. Not having any error message and allowing the application to stop will make them wish they had never clicked the link that pointed to your website. :)  Messages like "An error occurred" or "System.InvalidOperationException: The ConnectionString property has not been initialized" mean nothing to the end user. One doesn't know what exactly has happened, whether the information has been saved, and what one should do next.

Read more: Janko at warp speed

Posted via email from jasper22's posterous

How to live forever (if you’re a CLR object)

|
Just subscribe to a static event, or to an event of a long-lived object (such as a singleton instance). That long-lived object will keep alive all objects that subscribed to any of its events (including you).

Read more: Kirill Osenkov

Posted via email from jasper22's posterous

Final Release of Silverlight 4 Tools for Visual Studio 2010 is now available!

|
The WPF and Silverlight Designer Team is delighted to be able to tell you that the final release of Silverlight 4 Tools for Visual Studio 2010 is now available!

Silverlight 4 Tools for Visual Studio 2010 includes many essential features to help you work with your Silverlight 4 applications:

Support for targeting Silverlight 4 in the Silverlight designer and project system
RIA Services application templates and libraries to simplify access to your data services (check out this Silverlight.tv video and whitepaper giving full details)
Support for Silverlight 4 elevated trust and out-of-browser applications
Enhanced support for other new Silverlight 4 features, including:
Working with Implicit Styles
Go To Value Definition - navigate directly from controls on your page to styles that are applied to them.
Style Intellisense - easily modify styles you already have in XAML
Working with Data Source Window outputs
Data Source Selector - easily select and modify your data source information
Grid Row and Column context menu - Add, remove, and re-sort DSW outputs and other Grid layouts
Thickness Editor for editing Margins, Padding etc
Sample Data Support -  see your item templates and bindings light up at design time
Working with Silverlight 4 Out-of-Browser applications
Automatically launch and debug your OOB app from inside the IDE
Specify XAP signing for trusted OOB apps
Set the OOB window characteristics
Please Note: many of the new designer features work well with WPF as well as Silverlight projects, so this download is definitely recommended for Visual Studio 2010 WPF designer users too.

Read more: WPF & Silverlight Designer

Posted via email from jasper22's posterous

A Sample Silverlight 4 Application Using MEF, MVVM, and WCF RIA Services

|
This article is part one of a series on developing a Silverlight business application using MEF, MVVM Light, and WCF RIA Services.

Part 1 - Introduction, Installation, and General Application Design Topics
Part 2 - MVVM Light Topics
Part 3 - Custom Authentication, Reset Password and User Maintenance

Contents

Introduction
Requirements
Installation
Installing the IssueVision Sample Database
Installing the Web Setup Package
Architecture
Solution Structure
IssueVisionModel Class
The ViewModel Classes
The View Classes and Code-behind Files
Custom Controls for Layout
Dynamic Theming
Next Steps
History

Introduction

This sample application is the result of my initiative to learn Silverlight and WCF RIA Services. With my background of using WPF and MVVM for the past several years, I found that there is a lack of sample LOB applications that combine the latest Silverlight enhancements with MVVM. This three-part article series is my effort to create such a sample. The choice of an issue tracking application comes from David Poll's PDC09 talk, and the design architecture is from Shawn Wildermuth's blog posts.

The main features of this issue tracking application are:

Login screen provides custom authentication and password reset based on security question and answer.
My Profile screen is for updating user information, password, security questions and answers.
User Maintenance screen is only available to Admin users, and lets Admin user add/delete/update users.
New Issue screen is for creating new issues (bugs, work items, spec defects, etc.).
My Issues screen is for tracking all active and resolved issues assigned to a user.
All Issues screen is for tracking all issues (Open, Active, Pending or Resolved).
Bug Report screen provides a summary of bug trend, bug count and the functionality to print the summary.
Four different Themes are available and can be applied dynamically at any time.
Requirements

In order to build the sample application, you need:

Microsoft Visual Studio 2010
Microsoft Silverlight 4 Tools for Visual Studio 2010
Silverlight 4 Toolkit April 2010 (included in the sample solution)
MVVM Light Toolkit V3 SP1 (included in the sample solution)

Installation

After downloading the setup package to a location on your local disk, we need to complete the following steps:

1. Installing the IssueVision Sample Database

To install the sample database, please run SqlServer_IssueVision_Schema.sql and SqlServer_IssueVision_InitialDataLoad.sql included in the setup package zip file. SqlServer_IssueVision_Schema.sql creates the database schema and database user IVUser; SqlServer_IssueVision_InitialDataLoad.sql loads all the data needed to run this application, including the initial application user ID user1 and Admin user ID admin1 with passwords all set as P@ssword1234.

Read more: Codeproject Part 1, Part 2

Posted via email from jasper22's posterous

50 Open Source Tools To Replace Popular Security Software

|
While it's pretty painless to convert from commercial office software to an open source version, if you'd like to replace commercial security products with open source counterparts, you'll likely have to do some work.
You may need to combine several open source tools to get the functionality you get from a single commercial product. Or you may need to educate yourself about underlying technology before you find the open source applications usable.

However, open source security tools do offer a great deal of flexibility – not to mention cost advantages. If you want complete control over the way your network functions, having access to the source code gives you that ability.

For this list, we've compiled a set of open source security tools and their commercial counterparts. We're not suggesting that the open source apps have all the same features and use the same methods as the commercial products they can replace.

Instead, we're saying that they provide end users with some of the same benefits and deserve consideration, particularly as businesses small and large look for ways to stretch their budgets.

Open Source Tools: Anti-Spam, Anti-Virus/Anti-Malware, Anti-Spyware, Application Firewall, Backup, Browser Add-Ons. PAGE ONE.
Open Source Tools: Data Removal, Encryption. PAGE TWO.
Open Source Tools: File Transfer, Forensics, Gateway/Unified Threat Management Appliances, Intrusion Detection, Network Firewalls. PAGE THREE.
Open Source Tools: Network Monitoring, Password Crackers, Password Management, User Authentication, Web Filtering. PAGE FOUR.

Read more: Datamation


3FD - Framework For Fast Development

This is a C++ framework that provides a solid error-handling structure, garbage collection, multi-threading, and portability between compilers. The goal is to make C++ easier and faster to code while keeping all the performance you can get.

Read more: Codeplex


T 4 beginners – TOC

The currently available posts in the T4 for beginners series (T4 templates) are:


Read more: Bnaya Eshet


Code Metrics: Number of IL Instructions

In my previous posting about code metrics I introduced how to measure LoC (Lines of Code) in .NET applications. Now let's take a step further and look at how to measure compiled code. This way we can get some picture of what the compiler produces. In this posting I will introduce a code metric called the number of IL instructions.

NB! The number of IL instructions is not something you can use to measure the productivity of your team. If you want a better idea of the context of this metric and LoC, then please read my first posting about LoC.

What are IL instructions?

When code written in a .NET Framework language is compiled, the compiler produces assemblies that contain byte code. These assemblies are later executed by the Common Language Runtime (CLR), the code execution engine of the .NET Framework. The byte code is called Intermediate Language (IL); it is more general than, for example, C# or VB.NET. You can use the ILDasm tool to disassemble assemblies into IL assembler so you can read them.

As IL instructions are the building blocks of all .NET Framework binary code, they are small and highly general; a very rich low-level language would execute more slowly than a more general one. Every method or property call in a .NET Framework language corresponds to a set of IL instructions, so there is no 1:1 relationship between a line in a high-level language and a line in IL assembler. There are more IL instructions than lines of C# code, for example.

How many instructions are there?

There is no general answer, because it really depends on your code. Here you can see some metrics from my current community project, which is developed on SharePoint Server 2007.
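The same idea, counting the instructions the compiler emitted rather than source lines, can be sketched with Python's dis module. This is only a hypothetical analogy for illustration; for IL itself you would inspect the compiled assemblies with ILDasm as described above:

```python
import dis

def sample(a, b):
    # A single source line can compile to many byte-code instructions.
    return a * b + len(str(a))

# Count the instructions the compiler emitted for this one-line function.
count = len(list(dis.get_instructions(sample)))
print(count)
```

Even this one-line function body compiles to well over a handful of instructions, which illustrates why instruction counts and LoC are related but not interchangeable metrics.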

Read more: Gunnar Peipman's ASP.NET blog


Step by Step Guide to trace the ASP.NET application for Beginners

I am new to ASP.NET, and while learning tracing I found that there are few resources on tracing for beginners that cover the important topics. I even searched CodeProject but was not able to find a good startup tutorial, so I decided to write this tutorial for beginners.

Content

Types of tracing  
Writing Custom Trace Information
Page.Trace Vs System.Diagnostics.Trace
Integrate System.Diagnostics with ASPX Page (Routing all Trace information to web page)
trace.axd file
Creating Custom Trace Listeners
Saving Trace information in File  
Types of tracing  

In ASP.NET there are two types of tracing:

Application Level
Page Level

Page-level tracing takes precedence over application-level tracing.

Let's start by creating a new website.

In web.config, add the following entry below the system.web element to enable application-level tracing.

<trace pageOutput="true"
       enabled="true"
       requestLimit="10"
       localOnly="false"
       mostRecent="true"
       traceMode="SortByTime" />

Read more: Codeproject


Database Export Wizard for ASP.net and SQL Server

With this article I would like to share a simple but useful little tool: ExportWizard, a Step Wizard for Database Export.  

It guides users through a few simple steps to choose a database object (table, view, or query), select columns, and export the data in any of the standard formats (CSV, HTML, XML, or SQL).

The UI: 3 simple steps

The task of exporting from a database can be broken down as follows:

Select a source database object (table, view, or query).
Select columns to include in the export.
Select the export format (CSV, HTML, XML, SQL...).
These simple sequential tasks are a good fit for a step wizard.

The implementation discussed in this article is a Web control, so the screenshots below are shown inside a web browser. It could equally be coded as a desktop application, with the same basic elements and the same step arrangement.
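A minimal sketch of those three steps, with hypothetical function and table names, using Python's standard sqlite3 and csv modules rather than the article's ASP.net/SQL Server stack:

```python
import csv
import io
import sqlite3

def export_csv(conn, source_sql, columns):
    """Step 1: pick a data source; step 2: pick columns; step 3: write CSV."""
    # Column names are interpolated directly, so they must come from a
    # trusted column-picker UI, never from raw user input.
    cursor = conn.execute(
        "SELECT {} FROM ({})".format(", ".join(columns), source_sql)
    )
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)              # header row
    writer.writerows(cursor.fetchall())   # data rows
    return buf.getvalue()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, item TEXT, qty INTEGER)")
conn.execute("INSERT INTO orders VALUES (1, 'widget', 3), (2, 'gadget', 5)")
print(export_csv(conn, "SELECT * FROM orders", ["id", "item"]))
```

The select-source / select-columns / pick-format split maps directly onto the wizard's three steps; an HTML or XML writer would slot into step 3 the same way.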

Step 1: Choose data source

Select table, view or SQL query to export data from.

Read more: Codeproject


It’s a new Sysinternals Tool Day! RAMMap v1.0 released

“RAMMap v1.0: Have you ever wondered how Windows allocates physical memory or what’s using it? RAMMap is a new utility for analyzing system RAM usage on Windows Vista and Windows 7 that provides insight never before available. RAMMap shows information about each page of memory, summaries of memory usage by type, views of file data stored in memory, and more.” 

Windows Sysinternals - RAMMap v1.0

“Have you ever wondered exactly how Windows is assigning physical memory, how much file data is cached in RAM, or how much RAM is used by the kernel and device drivers? RAMMap makes answering those questions easy. RAMMap is an advanced physical memory usage analysis utility for Windows Vista and higher. It presents usage information in different ways on its several different tabs:

Use Counts: usage summary by type and paging list
Processes: process working set sizes
Priority Summary: prioritized standby list sizes
Physical Pages: per-page use for all physical memory
Physical Ranges: physical memory addresses
File Summary: file data in RAM by file
File Details: individual physical pages by file

Use RAMMap to gain understanding of the way Windows manages memory, to analyze application memory usage, or to answer specific questions about how RAM is being allocated. RAMMap's refresh feature enables you to update the display, and it includes support for saving and loading memory snapshots.

Read more: Greg's Cool [Insert Clever Name] of the Day

Official site: Sysinternals


Google Open Sources VP8 and it's Going Into Flash - Take That, H.264!

The first big announcement at Google I/O today was a game-changer for HTML5 web video.  As many predicted, Google announced the open sourcing of On2's VP8 video codec.  Unlike H.264, which is patent encumbered, or Ogg Theora, which might have a patent pool being assembled against it, VP8 is now fully open and completely royalty-free.  Just minutes later, Adobe CTO Kevin Lynch announced that his company would use VP8 in Flash.

Before today, On2, which was acquired by Google, had over 2 billion VP8 installs recorded.  Google found that the codec was a great piece of technology that was optimized for the web, efficient with bandwidth, and a best-in-class coder for real-time streaming video.  Google didn't forget about audio codecs either.  The Ogg Vorbis audio codec will join VP8 in the WebM initiative.  You can find a WebM developer preview now at webmproject.org.

For years, the battle over HTML5's video codec has raged on.  Although H.264 is an excellent codec and royalty-free until 2016 for non-commercial video, it's patented - and that doesn't sit well with the "open" web philosophy.  Ogg Theora on the other hand, is not patent encumbered (so far), but its quality is inferior.  By the grace of Google, we now have a third player in this arena (with performance potentially on par with H.264) that adheres to the open web ideology and is well-positioned for rapid growth.

Read more: DZone


Gmail's New API: Email as Enterprise Platform

Google has announced the availability of a new Application Programming Interface (API) that allows 3rd party services to offer contextually relevant content and functionality inside the email interface of Google Apps Gmail users. It's just the latest sign that the email sector is heating up again.

If you've noticed the way that certain Google properties like YouTube and Picasa are treated differently when linked to inside an email viewed in Gmail (both have produced video or image previews for the past few months), then you've got some idea of how other applications can now relate to the contents of your mail. Now imagine other parts of an email being built on by a developer ecosystem. The potential here is very exciting.

Email remains a rich and important platform for communication, and now Google hopes it will become a platform for development too. The API will for now be limited to Gmail for Google Apps, where it can be deployed to entire groups by an Apps customer via the Google Apps Marketplace. A related move in the consumer version of Gmail came this spring, when the company launched OAuth for IMAP. Earlier this morning the company announced the availability of new automation scripts for Google Apps as well.

Ten services have been selected as launch partners and offer all the more indication of the possibilities.

In Google's words:
Several new contextual gadget integrations for Gmail are available to Google Apps customers in the Apps Marketplace today:

  • AwayFind lets you mark certain contacts or message topics as ‘Urgent’ and then alerts you via phone, SMS or IM when relevant messages arrive.
  • Kwaga displays social network profiles and lists recent email exchanges with people you correspond with.
  • Gist brings together information from across the web about people you’re corresponding with, providing rich person and company profiles, news and updates.
  • Pixetell detects email links to video messages created with Pixetell’s video software and lets you preview, comment on, and share those videos without leaving your inbox.
  • Smartsheet lets you access and update entries in Smartsheet’s sales pipeline and project management tool.
  • Xobni, Rapportive, Manymoon, Newmind Group, and BillFLO have also launched their own contextual gadget integrations.

Read more: ReadWrite Enterprise


Using netcat to view TCP/IP traffic

There are times when you want to see what bytes are flowing over the wire in HTTP communication (or any TCP/IP communication). A good tool for this purpose on Unix/Linux is netcat (available as the command nc), as long as you have the ability to set the proxy host and port on the client side. This is best explained by the following diagram:

netcat-proxy.png


Let us say your client program running on machine chost is talking to the Server program running on machine shost and listening for connections at port 8000. To capture the request and response traffic in files, you need to do two things:

Set up a netcat-based proxy, either on a third machine phost or on the client or server machine itself. The commands are shown in the diagram above (click to enlarge). The first command, mknod backpipe p, creates a FIFO. The next command,

nc -l 1111 0<backpipe | tee -a in.dump | nc shost 8000 | tee -a out.dump 1>backpipe

does a number of things: (a) it runs a netcat program that listens for incoming connections at port 1111, writes output to stdout, and reads input from the FIFO backpipe; (b) it runs a tee program that writes a copy of the first netcat's output to the file in.dump; (c) it runs a second netcat program that reads the output of the first netcat, connects to the server program running on shost at port 8000, and forwards all data over the newly established connection; the response messages from this connection are written to this netcat's stdout; (d) it runs a second tee program that sends the output of the second netcat (i.e. the response messages from the server) to the FIFO backpipe and also appends a copy to the file out.dump. Data bytes written to the FIFO backpipe are read by the first netcat program and returned to the client program as the response message.
Specify the proxy host and port for the client. This can often be done without modifying the program. For example, most browsers have GUI options to set the proxy host and port; Java programs allow setting the http.proxyHost and http.proxyPort system properties; and cURL-based PHP programs have the CURLOPT_PROXY option.
The request message gets captured in file in.dump and response message in out.dump on the machine where netcat based capturing proxy is running.
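The same capture-in-the-middle idea can be sketched in Python. This is a hypothetical stand-in for the netcat pipeline that handles a single connection; it plays the role of both nc processes and both tee processes:

```python
import socket
import threading

def capture_proxy(listen_port, server_host, server_port, in_dump, out_dump):
    """Relay one client connection to the server, appending a copy of the
    request bytes to in_dump and of the response bytes to out_dump."""
    def pump(src, dst, dump_path):
        # Equivalent of "nc | tee -a file | nc": copy bytes and log them.
        with open(dump_path, "ab") as dump:
            while True:
                data = src.recv(4096)
                if not data:              # peer closed its write side
                    break
                dump.write(data)          # the tee -a step
                dst.sendall(data)         # the forwarding nc step
        try:
            dst.shutdown(socket.SHUT_WR)  # propagate end-of-stream
        except OSError:
            pass

    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", listen_port))
    listener.listen(1)
    client, _ = listener.accept()
    server = socket.create_connection((server_host, server_port))
    # Requests flow client -> server in a helper thread...
    t = threading.Thread(target=pump, args=(client, server, in_dump))
    t.start()
    # ...while responses flow server -> client here (the backpipe direction).
    pump(server, client, out_dump)
    t.join()
    for s in (client, server, listener):
        s.close()
```

Point the client at listen_port exactly as you would for the netcat version; the request bytes accumulate in in_dump and the response bytes in out_dump.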

Read more: Pankaj Kumar's Weblog


Authoring/Integrating API Help for VS2008 and VS2010

From time to time I have had to develop APIs that are used by other developers inside and outside of my organization. It has always been my goal to package them up with nice documentation and professional-looking help files. With some freely available tools it is fairly easy to generate professional-looking help files, but I always seemed to get stuck on integrating them with the VS IDE. In this article I will show how I solved the problem for both VS2008 and the new Help 3.x format for VS2010.

The tools you will need

GhostDoc  
Sandcastle
Sandcastle GUI
MSHC Migrate Utility
Visual Studio SDK (VS2010)
Visual Studio SDK (VS2008)
And, of course, you will need either or both of Visual Studio 2008/2010.

Sample API

public class API
{
    public API()
    {
    }

    public string Foo( string s )
    {
        return "Foo got " + s;
    }
}

GhostDoc

The GhostDoc project is a free VS plug-in that greatly simplifies generating XML doc comments in your API source code. As you define and implement your classes, it is pretty trivial to create these comments within VS by simply typing three '/'s, giving you

Read more: Codeproject


Using Microsoft ASP.NET MVC to Easily Extend a Web Site into the Mobile Space

Learn how to build mobile Web sites using the ASP.NET MVC framework. See how to create customized mobile experiences by extending the Views in the MVC framework and using the latest device detection techniques.

Read more: .NET Software Development Videos & Tutorial Directory


The New ISO Hacking Standard

New York, May 17, 2010 -- The world’s national standards bodies met again during April, this time in Malaka, Malaysia and they extended talks about the Open Source Security Testing Methodology Manual. This ultimate security guide, better known to security experts and hackers alike as the OSSTMM (spoken like “awesome” but with a “t”), is a formal methodology for breaking any security and attacking anything the most thorough way possible. So why is the International Standards Organization talking about it?

Some national standards organizations like ANSI in the USA and UNINFO in Italy have had their eye on the OSSTMM for years. Others, like DIN in Germany, were only recently shown the benefits of the OSSTMM but then supported it immediately. Released for free in January 2001 by Pete Herzog as the underdog to the security industry’s product-focused security advice, the manual achieved an instant cult following. The fact that OSSTMM is open to anyone for peer review and further research led to it growing from its initial 12 page release to its current size of 200. The international support community also grew to over 7000 members with dozens of research contributors dedicating their time to enhancing it. For testing security operations and devising tactics it has no equal. Its popularity and growth happened so fast that the non-profit organization ISECOM created the Open Methodology License (OML) asserting the OSSTMM as an open Trade Secret to assure it remained free, as in no price, as well as free from commercial and political influence. The OSSTMM seemed to have all the features of being the answer for securing the world except that it had never been formally recognized…until now.

With such fanatical devotion from experts and the underground, the OSSTMM soon gained the attention of governments from city to state to national, which is how it eventually got to the ISO. ISO is the acronym of the International Standards Organization. Headquartered in Geneva, Switzerland, ISO is the collection of people who create manuals standardizing all sorts of things like paper sizes (ISO 216), what determines a water-resistant watch (ISO 2281), how to properly conduct quality management (ISO 9001), the C programming language (ISO 9899), shoe sizes (ISO 9407), or what defines proper information security (ISO 27001 and 27002). However, they currently have nothing on operational security, the means of assuring security for processes and systems in action. The only way that can be done is by attacking it every way possible, pushing the impossible, and seeing why and how the security breaks. That's exactly what the OSSTMM does.

During past ISO meetings, the Subcommittee 27, mostly known for its ISO/IEC 27000 family (Information Security Management System) and ISO/IEC 15408 (Common Criteria), already discussed the topic within different working groups (WG) with no clear outcome. Meanwhile, some ISECOM members, like Dr. Fabio Guasconi in Italy and Heiko Rudolph together with Aaron Brown in Germany, have become active participants in their respective ISO national bodies to help inform their ISO colleagues about the many benefits the OSSTMM could provide to various ISO standards. In Malaka, Dr. Guasconi, the national body representative of Italy’s UNINFO, made significant progress on this front when he held a complete presentation to WG4 and WG3, the latter one being devoted to security evaluation criteria. WG3 then eventually expressed a formal interest in carving deeper into the security testing methodology topic, issuing and approving a resolution for starting a study period of one year. The base of this study period, which is the first step towards a standardization path, would be constituted by the OSSTMM 3 and all security experts from national bodies will freely contribute and comment on it. By the end of the study period it will be determined how ISO will receive OSSTMM contents in its family of security standards. As outlined in Malaka’s presentation there are many standards that could benefit from a standard aligned with OSSTMM contents, such as 21827, 15408, 18045, 19790 and, of course, 27001. Parts of OSSTMM concepts have already been posted as comments within the project for ISO 27008, which is dedicated to technical audits on security controls. It looks like this hacker’s guide has really grown up.

The OSSTMM is currently in its third revision and still in Beta, therefore only available to team members, select reviewers, and federal government agencies that require it for drafting policy. This third version is a complete re-write of the methodology and has at its foundation the ever-elusive security and trust metrics. It required 6 years of research and development to produce the perfect operational security metric, an algorithm which computes the Attack Surface of anything. In essence, it is a numerical scale to show how unprotected and exposed something currently is. This number is the basis required for making a proper trust assessment, another feature of the OSSTMM 3 to do away with risk assessment in favor of a more factual metric using trust. Security professionals, military tacticians, and security researchers know that without knowing how exposed a target is, it’s just not possible to say how likely a threat will cause damage and how much. But to know this requires a thorough security test which happens to be exactly what the OSSTMM provides.

Read more: Isecom
