This is a mirror of the official site: http://jasper-net.blogspot.com/

Arkadi Duchin - Accidental Love

| Friday, December 10, 2010

Wikileaks wallpapers

| Thursday, December 9, 2010

Carousel Video Posted – Parts 1 and 2

|
I'm very pleased to announce that the videos Creating a Carousel Part 1 and Creating a Carousel Part 2 are now available. These videos cover the material that is also discussed in my two-part blog entry (beginning here), including such advanced topics as the following (a rough measure/arrange sketch appears after the list):
  • The Silverlight Layout System
  • Overriding MeasureOverride and ArrangeOverride
  • Matrix Transforms
  • Adding Attached Dependency Properties
  • Programmatic Animation
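
For readers who want a taste before watching, here is a rough, hedged sketch (my own, not Jesse Liberty's code) of the measure/arrange pattern the videos walk through: a custom Silverlight panel that overrides MeasureOverride and ArrangeOverride to place its children on a circle.

using System;
using System.Windows;
using System.Windows.Controls;

// Sketch only: a minimal carousel-style panel illustrating the two layout overrides.
public class SimpleCarouselPanel : Panel
{
    protected override Size MeasureOverride(Size availableSize)
    {
        // Ask each child how big it wants to be, given unconstrained space.
        foreach (UIElement child in Children)
            child.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));

        // Fall back to a fixed size if the parent offers infinite space.
        return double.IsInfinity(availableSize.Width) || double.IsInfinity(availableSize.Height)
            ? new Size(400, 400)
            : availableSize;
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        double radius = Math.Min(finalSize.Width, finalSize.Height) / 3;
        for (int i = 0; i < Children.Count; i++)
        {
            UIElement child = Children[i];
            double angle = 2 * Math.PI * i / Children.Count;
            double x = finalSize.Width / 2 + radius * Math.Cos(angle) - child.DesiredSize.Width / 2;
            double y = finalSize.Height / 2 + radius * Math.Sin(angle) - child.DesiredSize.Height / 2;
            child.Arrange(new Rect(new Point(x, y), child.DesiredSize));
        }
        return finalSize;
    }
}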

Read more: Jesse Liberty

Posted via email from .NET Info

The 50 Best Registry Hacks that Make Windows Better

|
We’re big fans of hacking the Windows Registry around here, and we’ve got one of the biggest collections of registry hacks you’ll find. Don’t believe us? Here’s a list of the top 50 registry hacks that we’ve covered.

It’s important to note that you should never hack the registry if you don’t know what you’re doing, because your computer will light on fire and some squirrels may be injured. Also, you should create a System Restore point before doing so. Otherwise, keep reading.

Prevent Windows Update from Forcibly Rebooting Your Computer
We’ve all been at our computer when the Windows Update dialog pops up and tells us to reboot our computer. I’ve become convinced that this dialog has been designed to detect when we are most busy and only prompt us at that moment.

There are a couple of ways to disable this behavior, however. You'll still get the prompt, but it won't force you to shut down. Here's how to do it.
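
As a concrete (and hedged) illustration: the setting usually involved here is the NoAutoRebootWithLoggedOnUsers policy value. The key path and value name below come from general Windows Update policy documentation, not from the How-To Geek article itself, so treat this as a sketch and back up your registry first.

using Microsoft.Win32;

class PreventForcedReboot
{
    static void Main()
    {
        // Assumed policy location; run elevated and create a restore point beforehand.
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"))
        {
            // 1 = do not automatically reboot while a user is logged on.
            key.SetValue("NoAutoRebootWithLoggedOnUsers", 1, RegistryValueKind.DWord);
        }
    }
}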

Prevent Windows Update from Forcibly Rebooting Your Computer
How to Clean Up Your Messy Windows Context Menu


Read more: How-to-geek

Posted via email from .NET Info

Use the old Windows XP screensavers, including Aquarium, with Windows Vista and 7

|
Ah, Windows Dancer, how fondly I remember thee. How you would bump, jive and gyrate across my screen! Mr Clippy was mere nothingness when compared to you. And Aquarium! A three-dimensional fish tank... on my screen! I still remember the first time I showed my mother. She actually reached out to touch the fish; I had to slap her hand to remind her that it was only an illusion.

With Windows Vista, these XP gems were summarily broken. Most of us didn't realize, as we were too busy dealing with Vista's despicable, retentive ineptitude, but now that we're onto Windows 7... well, wouldn't it be nice to relive some of those nice Windows XP experiences?

One clever guy on the WinMatrix forums (back in 2009, it must be said) has ported the Windows XP screensavers to work with Vista and 7. Unfortunately, Windows Dancer has been removed for copyright reasons, but the Original Screensavers pack (including Aquarium) is still available! Once you've unzipped it, you should have an EXE called MCE2005Screensavers.exe. I chose to unzip that file (using 7-Zip), as I don't like running EXEs, but many people have attested that the file is free of viruses.

Read more: DownloadSquad

Posted via email from .NET Info

ACTA

|
The Anti-Counterfeiting Trade Agreement (ACTA) is a proposed plurilateral agreement for the purpose of establishing international standards on intellectual property rights enforcement. ACTA would establish a new international legal framework that countries can join on a voluntary basis and would create its own governing body outside existing international institutions such as the World Trade Organization (WTO), the World Intellectual Property Organization (WIPO) or the United Nations. Negotiating countries have described it as a response "to the increase in global trade of counterfeit goods and pirated copyright protected works."[2] The scope of ACTA includes counterfeit goods, generic medicines and copyright infringement on the Internet.
The idea to create a plurilateral agreement on counterfeiting was developed by Japan and the United States in 2006. Canada, the European Union and Switzerland joined the preliminary talks throughout 2006 and 2007. Official negotiations began in June 2008, with Australia, Mexico, Morocco, New Zealand, the Republic of Korea and Singapore joining the talks. According to reports, negotiations reached "agreement in principle" in early October 2010, with only a small number of issues outstanding.[5] According to European Union officials, a final deal was expected within weeks.

After a series of draft text leaks in 2008, 2009 and 2010 the negotiating parties published an official version of the then current draft on 20 April 2010.[7] A new consolidated draft text, reflecting the outcome of the final (Tokyo) round of negotiations, was released on 6 October 2010.

The final text was released on 15 November 2010.

Read more:  Wikipedia

Posted via email from .NET Info

Just Ping

|
Online web-based ping: Free online ping from 50 locations worldwide

Read more: Just Ping

Posted via email from .NET Info

Conversations add-on brings a Gmail-like experience to your Thunderbird 3.3 inbox

|

Mozilla Thunderbird 3.3 (Miramar) is shaping up to be a pretty significant upgrade to the open-source email app. One thing I always find myself wishing for, however, is conversation view. I've been using Gmail for years now, and conversations have become part of the way I work.

A new add-on from Mozilla Labs brings a richer threaded view to Miramar. Once installed, clicking a collapsed thread in your inbox causes the entire conversation to load in Thunderbird's reading pane. Unread messages are expanded and those you've previously read are collapsed -- just like Gmail. Buttons are also added to the reading pane which allow you to expand or collapse all messages or pop the conversation out into a new tab.

Read more: DownloadSquad

Posted via email from .NET Info

Microsoft rolls out free Office Web Apps to 15 additional countries

|

Microsoft began previewing Office Web Apps (OWA) back in September of 2009, and today the Office team has announced expanded availability of the free-to-use OWA. Originally available in just 11 countries, the total number has been more than doubled and is now open to users in China, Denmark, Finland, Hong Kong, Italy, Japan, The Netherlands, New Zealand, Norway, Portugal, South Korea, Spain, Sweden, and Taiwan.

Read more: DownloadSquad
Read more: MS Office Web Apps

Posted via email from .NET Info

Fix To Chinese Internet Traffic Hijack Due In Jan.

|
Policymakers disagree about whether the recent Chinese hijacking of Internet traffic was malicious or accidental, but there's no question about the underlying cause of this incident: the lack of built-in security in the Internet's main routing protocol. Network engineers have been talking about this weakness in the Internet infrastructure for a decade. Now a fix is finally on the way.

Read more: Slashdot

Posted via email from .NET Info

FFsniFF (FireFox sniFFer)

|
FFsniFF is a simple Firefox extension which transforms your browser into an HTML form sniffer. Every time the user clicks the 'Submit' button, FFsniFF will try to find a non-blank password field in the form. If one is found, the entire form (along with the URL) is sent to the specified e-mail address. It also has the ability to hide itself in the 'Extensions manager'. This extension is meant as an example of the 'evil side of Firefox extensions'.

Configuration

FFsniFF has no GUI (so the only way to find it is to look in the Extensions window*) and it cannot be configured after installation. You have to edit it by hand to change the settings (e-mail address, SMTP server, etc.). Please look in the file chrome/content/ffsniff/ffsniffOverlay.js.
* As of version 0.2, FFsniFF has the ability to hide itself from the 'Extensions manager'.

From version 0.2 there is a package creator script (written in Python) which will ask you some questions and create the 'xpi' package for you, so there is no need for manual configuration any more (just run the file 'pkg_creator.py').

Read more: FFsniFF

Posted via email from .NET Info

Pash

| Wednesday, December 8, 2010
An open source reimplementation of PowerShell for "other" platforms (Mac, Linux, Solaris, etc.) and for Windows (including Windows Mobile and Windows CE).
About the name

Pash = Posh (PowerShell) + bash(one of the Unix shells)

Goals

The main goal is to provide a rich shell environment for other operating systems, as well as a hostable scripting engine for rich applications. The user experience should be seamless for people who are used to the Windows version of PowerShell. Scripts, cmdlets and providers should run AS-IS (if they are not using Windows-specific functionality). Rich applications that host PowerShell should run on any other operating system AS-IS. Secondary goal: scripts should run across machines and different OSes seamlessly (while still following all the security guidelines).

Environment

The current implementation of Pash is written in pure .NET 2.0. It compiles in VS 2008 as well as on Mono, so developers can choose the environment that fits their needs and preferences. The produced assemblies can be executed "right out of the box" on Windows, Linux, Mac (or others) without any additional recompilation. Note: for Windows Mobile and Windows CE the produced code should be recompiled against the .NET Compact Framework.
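
To make the "hostable scripting engine" goal concrete, here is a minimal hosting sketch using the standard System.Management.Automation runspace API that Pash sets out to mirror; whether every call behaves identically under Pash is an assumption on my part.

// Requires a reference to the System.Management.Automation assembly.
using System;
using System.Collections.ObjectModel;
using System.Management.Automation;
using System.Management.Automation.Runspaces;

class HostingSketch
{
    static void Main()
    {
        // Create and open a runspace, run a pipeline, and print the results --
        // the same pattern a "rich application" hosting the engine would use.
        using (Runspace runspace = RunspaceFactory.CreateRunspace())
        {
            runspace.Open();
            using (Pipeline pipeline = runspace.CreatePipeline("Get-ChildItem | Sort-Object Name"))
            {
                Collection<PSObject> results = pipeline.Invoke();
                foreach (PSObject item in results)
                    Console.WriteLine(item);
            }
        }
    }
}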

Read more: Pash

Posted via email from .NET Info

IPython

|
IPython: an interactive computing environment
The goal of IPython is to create a comprehensive environment for interactive and exploratory computing. To support this goal, IPython has two main components:
An enhanced interactive Python shell.
An architecture for interactive parallel computing.

All of IPython is open source (released under the revised BSD license). You can see what projects are using IPython here, or check out the talks and presentations we have given about IPython.

IPython supports Python 2.5 and 2.6 officially. If you need to use Python 2.4, the 0.10 series probably works OK but has not been extensively tested with 2.4.

An experimental Python 3 port has been started recently. The code is currently only available in source form from GitHub, but we welcome testing, contributions and improvements. Please join the IPython developers list to participate!

Citing IPython

Several of the authors of IPython are connected with academic and scientific research, so it is important to us to be able to show the impact of our work in other projects and fields.
If IPython contributes to a project that leads to a scientific publication, please acknowledge this fact by citing the project; see CitingIPython for a ready-made citation entry. We maintain a listing of projects using IPython, and updates to it are always welcome.


Read more: IPython

Posted via email from .NET Info

Creating a NuGet Package

|
NuGet is a package manager for .NET that was recently released by Microsoft as a CTP. It is similar to gems, CPAN and comparable package systems in other languages. I decided to try my hand at creating a HelloWorld package.

First I needed a package. My original idea was to create something useful enough to contribute to the public feed of NuGet packages. I started by creating the Shelf library, which is a small set of extension methods. Notably, I created the Each<T> method that extends IEnumerable<T>. It takes an Action<T>, invoking the action for every item in the sequence. It's a trivial and small library at the moment, but imagine it is something useful and complicated. The library itself isn’t the point here.
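
From that description, the extension method presumably looks something like the sketch below; this is my reconstruction for illustration, not the actual Shelf source.

using System;
using System.Collections.Generic;

public static class ShelfExtensions
{
    // Invoke the given action once for every item in the sequence.
    public static void Each<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (T item in source)
            action(item);
    }
}

// Usage: new[] { "a", "b", "c" }.Each(Console.WriteLine);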

Create the package “manifest” (my term).

<package>
 <metadata>
   <id>shelf</id>
   <version>2010.1203.2330.42313</version>
   <authors>Kwak</authors>
   <description>Shelf is a library of common extension methods</description>
   <language>en-US</language>
 </metadata>
 <files>
   <file src="Shelf\bin\Release\*.dll" target="lib" />
   <file src="Shelf\bin\Release\*.pdb" target="lib" />
 </files>
</package>


The documentation says the files element is optional. It seems I didn’t discover the “convention” they were speaking about in the documentation. I needed it.

I ran the command-line tool nuget.exe passing it the pack command and the package manifest. It spat out the packaged file (BTW, you can use just about any zip tool to browse the contents of the package).

There are a couple of different ways to deploy the package. Submitting your package for inclusion in the public feed is one option; you can also put the file on any accessible URL, and the NuGet source includes a “Server” utility. Until my library grows to a point where it's not laughably trivial, I opted for just putting it in my file system and pointing the package manager to that folder. Phil Haack has a great post explaining the deployment options.

So with my package (shelf.2010.1203.2330.42313.nupkg) located in my local packages folder, anytime I want to make use of the shelf library, I simply go to the package manager console and type

Install-Package shelf

Read more: Coffee Driven Developer

Posted via email from .NET Info

Cool Tools To Know [Perl]

|
chromatic mentioned something in the preface to his book Modern Perl that I had been looking for but hadn't yet found. He then went on to mention a couple of other things which were just plain neat. I knew about one of them, but not the second; I thought they were both great ideas and that I'd try to get them wider attention.

I'd been looking for a way to manage multiple Perl installations on my system. Ruby has something called the Ruby Version Manager which makes this really straightforward, and I figured there had to be something like it for Perl. I hadn't looked much, but what I had tried didn't get me anywhere. The preface to Modern Perl mentioned it in passing. The tool is called App::perlbrew; it allows you to easily switch from one Perl to another and can help manage those Perl installations.

Combine that with local::lib so you can have your own installed modules, and you’ve almost got rvm.  The ability to have named module sets and turn them on and off at will is missing.  If it’s important, the two could be combined into a complete Perl environment management tool… but I don’t know if it is important.  A per-project Perl with modules installed might be enough.

One useful tool mentioned in Modern Perl is the Perl module Modern::Perl, also by chromatic. It's a helper that takes some of the boilerplate a modern Perl program should put in every program and module and condenses it into one clear, easy-to-read line of code. It isn't a big deal, but it is nice. (CPAN, as usual, was full of similar things, but I didn't think they were as well thought out.) I had heard of Modern::Perl because I'd been following chromatic's blog.

Read more: Laufeyjarson writes…

Posted via email from .NET Info

The Three Differences between Chrome OS and Android

|
On December 7th, Google is expected to announce the release of a laptop with the first version of the Chrome operating system. Concurrently, Google is going great guns with Android. Does Google really need two operating systems? So what’s going on here?

Here’s what Google is up to. Yes, both Android and Chrome OS are Linux-based operating systems. Neither, at the application level, uses the common Linux desktop application programming interfaces (API) that are used by the GNOME or KDE desktops and their applications.

They’re also similar in that both use a common set of techniques to make them more secure. The most important of these is process sand-boxing. What this means is that any Chrome or Android application has just enough access to the system to do its job.

Once you’re past this, the two look and act in very different ways. Here are their main points of difference:

1) Android is for Phones & Tablets; Chrome OS is for Netbooks

Google said at the start that “Google Chrome OS is being created for people who spend most of their time on the Web, and is being designed to power computers ranging from small netbooks to full-size desktop systems.” Google hasn’t always been on message with this.

Google also took its time getting even a Chrome beta out the door. Now that Chrome OS is about to be unveiled, we know that it is going to be Google’s “desktop” operating system, while Android is for smart phones and tablets.

The Android interface is designed foremost for touch. Google Chrome OS looks and acts just like the Chrome Web browser.

2) Chrome OS won’t run Linux desktop or Android Apps

I use quotes around “desktop” with good reason. While Chrome OS will be used like a desktop operating system, it’s not a traditional fat-client desktop like Windows or even a Linux desktop such as Mint. Instead, all of its “applications” will be cloud-based. To see what I mean, just look at the Chrome browser and Google Apps. You’re looking at a sketch of the Google Chrome OS.

Read more: ZDNet

Posted via email from .NET Info

Announcing Vsi Builder 2010, a new extension for Visual Studio 2010

|
I just released on CodePlex and on the Visual Studio Gallery a new extension for Visual Studio 2010 named Vsi Builder 2010, thanks to the good feedback on a previous, stand-alone version for VS 2008.

What is it about?

The new extensibility model in Visual Studio 2010 introduced the new VSIX file format and the concept of "extension" in order to share and install components for Visual Studio 2010, such as tool windows, packages and code editor extensions.

The problem is that reusable code-snippets (.snippet files) and old-style add-ins cannot be packaged and deployed via the VSIX format. In fact, they still need to be packaged to .Vsi installers, like it was back in Visual Studio 2005 and 2008.

Vsi Builder 2010 helps you build redistributable .Vsi installers for your code snippets and add-ins the quickest way, by adding an easy-to-use tool window to Visual Studio 2010.

How do I get it?

There are several ways to install Vsi Builder 2010. You can download it directly from within Visual Studio 2010 via the Extension Manager tool, but you can also download the VSIX installer from CodePlex or from the Visual Studio Gallery.

Source Code

The extension has been completely written in Visual Basic 2010 and the source code is available on CodePlex via the source control offered by Team Foundation Server.

How do I use it?

In order to make things easier, I've recorded a short video that you can download from the extension page on CodePlex. Also, here is a screenshot that shows what it looks like:


Read more: Alessandro Del Sole's Blog

Posted via email from .NET Info

Verve: A Type Safe Operating System

|
The Singularity project (an OS written in managed code, used for research purposes) has provided several very useful research results and opened new avenues for exploration in operating system design. Recently, MSR released a paper covering an operating system research project that takes a new approach to building an OS stack with verifiable, type-safe managed code. This project employs a novel use of Typed Assembly Language, which is what you think it is: assembly with types, implemented as annotations and verified statically using the verification technology Boogie and the theorem prover Z3. (Boogie generates verification conditions that are then statically proven by Z3; Boogie is also a language used to build program verifiers for other languages.) As with Singularity, the C# Bartok compiler is used, but this time it generates TAL. The entire OS stack is verifiably type safe (the Nucleus is essentially the Verve HAL) and all objects are garbage collected. Verve does not employ the SIP model of process isolation used by Singularity. In this case, again, the entire operating system is type safe and statically proven as such using world-class theorem provers.

Here's the basic idea (from the introduction of the paper):

Typed assembly language (TAL) and Hoare logic can verify the absence of many kinds of errors in low-level code. We use TAL and Hoare logic to achieve highly automated, static verification of the safety of a new operating system called Verve. Our techniques and tools mechanically verify the safety of every assembly language instruction in the operating system, run-time system, drivers, and applications (in fact, every part of the system software except the boot loader). Verve consists of a “Nucleus” that provides primitive access to hardware and memory, a kernel that builds services on top of the Nucleus, and applications that run on top of the kernel.

Here, Microsoft research scientist and operating system expert (he worked on the Singularity project) Chris Hawblitzel sits down with me to discuss the rationale behind the Verve project, the architecture and design of Verve and the Nucleus, Typed Assembly Language (TAL), potential for Verve in the real world, and much more.

Read more: Channel9

Posted via email from .NET Info

Gov2.0 and Facebook ‘Like’ Buttons

|
I am all for Gov2.0.  I think that it can genuinely make a difference and help bring public sector organisations and people closer together and give them new ways of working.  However, with it comes responsibility, the public sector needs to understand what it is signing its users up for.

In my post Insurers use social networking sites to identify risky clients last week, I mentioned that NHS Choices was using a Facebook 'Like' button on its pages, which potentially allows Facebook to track what its users are doing on the site. I have been reading a couple of posts on 'Mischa's ramblings on the interweb', which unearthed this issue here and here, and digging into this a bit further to see for myself; to be honest, I really did not realise how invasive these social widgets can be.

Many services that government and public sector organisations offer are sensitive and personal. When browsing through public sector web portals I do not expect that other organisations are going to be able to track my visit – especially organisations such as Facebook which I use to interact with friends, family and colleagues.

This issue has now been raised by Tom Watson MP, and the response from the Department of Health on this issue of Facebook is:

“Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system. When users sign up to Facebook they agree Facebook can gather information on their web use. NHS Choices privacy policy, which is on the homepage of the site, makes this clear.”

"We advise that people log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer.”

I think this response is wrong on a number of different levels.  Firstly at a personal level, when I browse the UK National Health Service web portal to read about health conditions I do not expect them to allow other companies to track that visit; I don't really care what anybody's privacy policy states, I don't expect the NHS to allow Facebook to track my browsing habits on the NHS web site.

Secondly, I would suggest that the statement “Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system” is wrong.  Facebook being able to capture data from sites like NHS Choices is a result of NHS Choices adding Facebook's functionality to their site.

Finally, I don't believe that the advice to “log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer” is technically correct.

(Sorry to non-technical users, but it is about to get a bit techy…)

Read more: The Other James Brown

Posted via email from .NET Info

“The IPv6 Survival Guide [Wiki]” brought to you by Microsoft and the TechNet wiki

|
Introductory Information…
Technical Articles…
Transition Technologies…
Mobile IPv6…
Configuration…
Windows Support for IPv6…
Hands On…
Case Studies…
Developer Resources…
Videos…
Books…
Blogs…
Forums…
Twitter…
Industry and Other Resources…


Read more: Greg's Cool [Insert Clever Name] of the Day
Read more: TechNet Wiki

Posted via email from .NET Info

How to install gitolite on Ubuntu 10.10 (Maverick Meerkat)

|
At work we recently switched from Subversion to Git for our version control. I won't go into it too much, but the main reasons were:
  • We wanted a distributed system, for the flexibility it offers individuals
  • We wanted the enhanced branching and merging
  • Just for fun really, to broaden our horizons
Anyway, I love GitHub, but it's not the answer to everything! I wanted a central repository that I could control, so having had a brief glimpse at Gitorious and gitosis, I settled on gitolite. Now, I'm usually quite a lazy sysadmin, and unless I desperately need a feature in the latest version of an application, I'm usually happy to fall back on my chosen package manager, in this case Ubuntu's APT. Gitolite got a package as of version 10.10 (Maverick Meerkat), so I told our local Ubuntu mirror to download all the 10.10 packages and, after that, upgraded the server I had in mind so that I could use the gitolite package.

Install Gitolite

Nice and easy this part, on the server:

server> sudo apt-get update
server> sudo apt-get install gitolite

Creating a Public/Private key pair

If you already have one, send the public half over to the server and skip this part.

Read more: DaveDevelopment::Dave Marshall

Posted via email from .NET Info

Session memory – who’s this guy named Max and what’s he doing with my memory?

|
SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick-off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine.

In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before  we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

max memory / # of buffers = buffer size

If it was that simple I wouldn’t be writing this post.

I’ll take “boundary” for 64K Alex
For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs:

...
...

Max approximates True as memory approaches 64K
The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really refers only to the event session buffer memory); it is more of an estimate of total buffer size, rounded up to the next multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory, which isn't likely to cause a problem for most machines.

Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it's multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you've just broken through a 64K boundary and get “stepped up” to the next buffer size, you'll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you're getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions.

The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode.

DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
DECLARE @buf_size int, @part_mode varchar(8)
SET @buf_size = 1 -- Set to the beginning of your max_memory range (KB)
SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate

WHILE @buf_size <= 4096 -- Set to the end of your max_memory range (KB)
BEGIN
   BEGIN TRY

       IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')
           DROP EVENT SESSION buffer_size_test ON SERVER
       DECLARE @session nvarchar(max)
       SET @session = 'create event session buffer_size_test on server
                       add event sql_statement_completed
                       add target ring_buffer
                       with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'

       EXEC sp_executesql @session

       SET @session = 'alter event session buffer_size_test on server
                       state = start'

       EXEC sp_executesql @session

       INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)


Read more: Using SQL Server Extended Events

Posted via email from .NET Info

Hosting in IIS using NetTcpBinding

|
I've recently run across several non-intuitive steps when trying to host a simple Hello-World WCF service in IIS using NetTcpBinding.  These tips apply to IIS7.

The "gotchas" I ran into are:

  • Make sure the Net.Tcp Listener Adapter and Net.Tcp Port Sharing Service are both running.
    • If you're running on Server 2008, these appear under Server Manager -> Configuration -> Services.
  • Make sure to enable the net.tcp protocol in the site bindings for your website:
    • In inetmgr, right-click the site (probably "Default Web Site").
    • Select Edit Bindings.
    • If net.tcp isn't already there, you can add it with the default port like so:
      • Click Add…
      • Type = net.tcp
      • Binding information = 808:*
  • Make sure to actually allow the net.tcp protocol under your site's advanced settings in IIS.
    • The symptom of not doing this is the exception: "The message could not be dispatched because the service at the endpoint address 'net.tcp://<your service>.svc' is unavailable for the protocol of the address."
    • To fix it, select your app in inetmgr and click Advanced Settings… in the Actions pane.
    • Under Enabled Protocols, add net.tcp. The protocols must be comma-separated, such as: http,net.tcp
    • Make sure there is no space between the protocols; it's just a comma.
  • Make sure to close the client, regardless of whether the call succeeds or not. You can do this in a finally block: close the channel if its state is Opened, otherwise abort it if it isn't already closed (a fuller try/finally sketch follows this list).
                    if (client != null)
                   {
                       IChannel clientChannel = (IChannel)client;
                       if (clientChannel.State == CommunicationState.Opened)
                       {
                           clientChannel.Close();
                       }
                       else if (clientChannel.State != CommunicationState.Closed)
                       {
                           clientChannel.Abort();
                       }
                   }

  • If you think your service is in a bad state and you want to start fresh, I found the following steps reliable:
    • Call iisreset
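
To make the close/abort advice concrete, here is a hedged sketch of a full call wrapped in try/finally (IMyService, SayHello and the address are placeholders, not from the original post; add using directives for System, System.ServiceModel and System.ServiceModel.Channels):

// Sketch only: contract name, operation and address are hypothetical.
ChannelFactory<IMyService> factory = new ChannelFactory<IMyService>(
    new NetTcpBinding(), "net.tcp://localhost:808/MyApp/MyService.svc");
IMyService client = factory.CreateChannel();
try
{
    Console.WriteLine(client.SayHello("world"));
}
finally
{
    // Same close-or-abort logic as the snippet above (the factory should be closed too;
    // omitted here for brevity).
    IChannel clientChannel = (IChannel)client;
    if (clientChannel.State == CommunicationState.Opened)
    {
        clientChannel.Close();
    }
    else if (clientChannel.State != CommunicationState.Closed)
    {
        clientChannel.Abort();
    }
}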

Read more: James Osborne's Blog

Posted via email from .NET Info

Using IntelliTrace outside the IDE - 64-bit Applications

|
IntelliTrace is one of my favorite features in Visual Studio Ultimate 2010, alongside the debugger and the new C++ IntelliSense and VC++ libraries.
There are some neat things that I've done using IntelliTrace without the IDE at all, such as collecting from 64-bit applications, monitoring test runs that I can't use Microsoft Test Manager to test, and much more.
I'm going to show you how to use the command line to collect an IntelliTrace log from a 32-bit or 64-bit process.

Locating the Collection Plan

Inside of the "Team Tools\TraceDebugger Tools" directory beneath the Visual Studio 2010 installation, you will find two files:
IntelliTrace.exe
en\CollectionPlan.xml

If you are using a version of VS 2010 that is not English, you will find the CollectionPlan.xml file under a directory named after your localized language name.

These two files allow us to use IntelliTrace without the IDE at all.  The easiest way to launch an application with IntelliTrace is to use the launch command:

IntelliTrace.exe launch /collectionplan:CollectionPlan.xml /file:out.iTrace MyAwesomeExecutable.exe


Read more: Matthew Saffer's IntelliTrace Blog

Posted via email from .NET Info

Very Simple Example of ICommand CanExecute Method and CanExecuteChanged Event

|
Michael Washington said I should blog about this, so if it’s too mundane, blame him. I added some animation stuff to hopefully make it a little more interesting.

Consider a simulation where the ViewModel controls the onscreen action. On the following screen, clicking the Start Button will cause the space ship to fly around the screen. But the Start Button should only be enabled if the ViewModel actually has access to the space ship and the Speed is valid.

The Details

ICommand provides an interface that the View can easily bind to in order to execute code in the ViewModel. In addition to the Execute() method you might expect, it also calls for a CanExecute() method. CanExecute() returns true if Execute() can currently be ‘executed’ safely. If you bind a Button.Command property to an ICommand implementation, the Button is automatically enabled or disabled according to the value returned by CanExecute(). But CanExecute() is a method, not a property, so if conditions change such that CanExecute() would return true rather than false, or vice versa, the Button must know to call CanExecute() to find the new value.

That’s where CanExecuteChanged comes in. Button, through its base class, ButtonBase, is automatically subscribed to the CanExecuteChanged event when the Button.Command property is bound to the ICommand. As you might expect, when ButtonBase catches the CanExecuteChanged Event it responds by calling the CanExecute() method and sets the Button.Enabled state accordingly. So in your code, whenever a condition or property changes that affect what CanExecute() will return, the CanExecuteChanged event should be raised.

I'll point out here that CanStartAnimation(), which is the method that you supply when you create StartAnimationCommand, will be called by CanExecute() to determine what value to return. In this simple example, CanStartAnimation() returns true if the ViewModel has access to the spaceShip FrameworkElement. To clarify a little further, StartAnimationCommand is an ICommand property on MainPageViewModel. Specifically, it is an instance of DelegateCommand, a class generously provided by John Papa, which of course implements ICommand. The beauty of DelegateCommand is that it lets you define the methods to be called by CanExecute() and Execute().

CanExecute() doesn't merely return the result from CanStartAnimation(). As you can see in the DelegateCommand source for CanExecute(), it determines whether the bool from CanStartAnimation() – which it just sees as a function pointer named canExecute that accepts an object and returns the bool in question – is different from the bool it got before. If so, it raises the CanExecuteChanged event.

public class DelegateCommand : ICommand
{
   Func<object, bool> canExecute;
   Action<object> executeAction;
   bool canExecuteCache;
   public DelegateCommand(Action<object> executeAction, Func<object, bool> canExecute)
   {
       this.executeAction = executeAction;
       this.canExecute = canExecute;
   }
   #region ICommand Members
   public bool CanExecute(object parameter)
   {
       bool temp = canExecute(parameter);
       if (canExecuteCache != temp)
       {
           canExecuteCache = temp;
           if (CanExecuteChanged != null)
           {
               CanExecuteChanged(this, new EventArgs());
           }
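       }
       return temp;
   }

   // The excerpt above is cut off; the remainder of John Papa's DelegateCommand is,
   // in essence, the following (reconstructed here so the class reads as a whole):
   public void Execute(object parameter)
   {
       executeAction(parameter);
   }

   public event EventHandler CanExecuteChanged;
   #endregion
}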

Read more: OpenLight Blog

Posted via email from .NET Info

Announcing Visual Studio 2010 Service Pack 1 Beta

|
I'm happy to announce that the Visual Studio 2010 Service Pack 1 Beta is now ready for download! MSDN subscribers may download the beta immediately with general availability on Thursday. Service Pack 1 Beta comes with a “go live” license which means you can start using the product for production related work (see the license agreement with the product for more details).

Download Service Pack 1 Beta (MSDN Subscribers only)
The link for Thursday's general availability download is here. 


Since the launch of Visual Studio 2010 and the .NET Framework 4 earlier this year and our subsequent Feature Packs, we have been concentrating on your feedback and worked hard on the issues you reported through Connect and our survey. I just recently blogged about the results from our recent survey and called out some of the latest improvements the team has delivered.  

Service Pack 1 (SP1) continues that momentum of focusing on improving the developer experience by addressing some of the most requested features like better help support, IntelliTrace support for 64bit and SharePoint, and including Silverlight 4 Tools in the box. Some of the additional highlights are:

  • Help Viewer -  The new local Help Viewer is a simple client application that re-introduces key productivity features including a fully-expandable table of contents and a keyword index.  For additional information about these improvements, check out Jeff Braaten’s post here.

Read more: Jason Zander's WebLog

Posted via email from .NET Info

Objective C Class Code Generator

| Tuesday, December 7, 2010
This Objective-C code generator (see the user's guide) will create an Objective-C class – see the sample output. It will also automatically generate comments compatible with doxygen. You can then readily copy and paste the code snippet output into Xcode. A sample class input is given by default and you will be able to readily test it with a console application source code.

Last but not least, it will persist all your entity definitions in askcodegeneration.com/objectivec/simple-class/simple-class-samples.txt under /Users/account/. By default it will create a Person sample class from this entry:

"Person" "First Name, Last Name, Age(int)"

do code-block/2
do code-block/3
do code-block/4
do code-block/5
do code-block/6

ret-block: ask-params %askcodegeneration.com/objectivec/simple-class/simple-class-samples.txt "Person" "First Name, Last Name, Age(NSUInteger)"

default-fields: parse/all ret-block/2 ","
fields-types: get-fields-types default-fields
fill-template-body fields-types

ans: ask "class prefix (none generated if blank): "
class-prefix: ans
out: build-markup/vars template-without-namespace [class-name] reduce [(class-name)]

write clipboard:// out
print "copied to clipboard..."
input

;Part 2 not working yet
{
ans: ask rejoin ["Do you want to create a test class for " class-name "? (Y/N): "]
either ans = "Y" [

 out1: copy out

 do code-block/7
 do code-block/8
 do code-block/9

 out: build-markup test-template
 write clipboard:// out
 print "copied to clipboard..."
 input

Read more: Ask Code Generation

Posted via email from .NET Info

MobiOne: Send Mobile Apps Directly To Your iPhone With Google App Engine

|
We’ve got a unique functionality in our newly released MobiOne Studio that we’d love to share – it’s the ability to send mobile Web apps or websites directly to your mobile device(s) for review and testing using Google’s App Engine.

MobiOne’s “AppSync” technology is a Cloud-based service that uses Google App Engine to host runtime versions of your mobile application designs, Web apps and Web sites. It then shares your applications with a distribution list of your choosing via text messages. MobiOne’s Design Center and Test Center make use of the AppSync deployment service to automatically package and send your design or Web resources to the Cloud for easy app or mobile Web site review and testing.

Imagine, if you will, sitting in a boardroom discussing your company's mobile Web strategy. When it's your turn to speak, you quickly use MobiOne's AppSync service to send your app to everyone's mobile device as a text message; they open the text message and see your mobile app in their browser. Not only have you impressed your colleagues and boss with your presentation of an actual app, but because MobiOne requires no coding skills, you spent only a few minutes before the meeting building a fully-functional app or Web site to impress everyone. Peers will believe it took days (with help from a software developer) to accomplish this, but you know it was as simple as putting together a PowerPoint presentation.

MobiOne is that easy to use. So get your mobile app started today using MobiOne: http://www.genuitec.com/mobile/index.html

Read more: genuitec

Posted via email from .NET Info

Making AJAX Applications Crawlable

|
If you're running an AJAX application with content that you'd like to appear in search results, we have a new process that, when implemented, can help Google (and potentially other search engines) crawl and index your content. Historically, AJAX applications have been difficult for search engines to process because AJAX content is produced dynamically by the browser and thus not visible to crawlers. While there are existing methods for dealing with this problem, they involve regular manual maintenance to keep the content up-to-date.
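
In short, the scheme has you publish AJAX URLs with #! fragments and, when a crawler requests the equivalent URL with ?_escaped_fragment_=..., return a pre-rendered HTML snapshot. A rough ASP.NET-flavored sketch of the server side (the handler and snapshot helper are my own placeholders, not part of Google's specification):

using System.Web;

// Hedged sketch: serve an HTML snapshot when the crawler sends _escaped_fragment_.
public class SnapshotHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // A crawler rewrites "example.com/page#!state=1" into
        // "example.com/page?_escaped_fragment_=state=1".
        string fragment = context.Request.QueryString["_escaped_fragment_"];
        if (fragment != null)
        {
            context.Response.ContentType = "text/html";
            context.Response.Write(RenderSnapshot(fragment));   // pre-rendered markup
        }
        else
        {
            context.Response.Redirect("/index.html");  // ordinary users get the AJAX page
        }
    }

    // Placeholder: however you choose to pre-render the corresponding AJAX state.
    private static string RenderSnapshot(string fragment)
    {
        return "<html><body>Snapshot for " + HttpUtility.HtmlEncode(fragment) + "</body></html>";
    }
}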

Learn more
Learn why search engines don't see the content you see and what needs to happen to fix this.

Getting started guide
Get started in making your AJAX application visible to search engines. If you are in a hurry, you can start here, but AJAX crawling is a complex topic, so we recommend reading all the documentation.

Creating HTML snapshots
Learn more about creating HTML snapshots, and which technique might be best suited for your application.

Frequently asked questions
Having trouble? Check out the frequently asked questions.

Specification
Get the details.

Read more: Google Code

Posted via email from .NET Info

PyCharm is My New Python IDE

|
Regular readers know that I’ve used a large number of IDEs over the past several years. They also know that I have, in every single case, returned to Vim, and I’ve spent a lot of time and effort making Vim be a more productive tool for me.

No more. I’m using PyCharm. It’s my primary code editor.

I’ve been using it since the very early EAP releases — maybe the first EAP release. I have rarely been disappointed, and when I was, it was fixed fairly rapidly. Here’s a quick overview of the good and bad.

Vim Keybindings!

I’ve been using Vi and Vim for an extremely long time. Long enough that whenever I’m editing text, I instinctively execute Vi keystrokes to navigate. Apparently my brain just works that way and isn’t going to stop. When I use other editors, and talk to users of other editors, one of the first things that comes up is how to do things efficiently by exploiting the keyboard shortcuts provided by the editor, so why reinvent the wheel? Why make me learn yet another collection of key strokes to get things done?

Sure, Vi keybindings are a pretty much completely arbitrary set of shortcuts, but so are whatever shortcuts anyone else is going to come up with. I’m glad that PyCharm decided to let the Vi-using community easily embrace its IDE.

And, by the way, PyCharm has by far the best and most complete Vi emulation mode I’ve ever seen in any IDE.

Git Integration

Well, not just git, but I use git. The git integration isn't 100% flawless, but it's perfect for most day-to-day needs. I use it with local repositories, as well as a centralized one at work, in addition to GitHub. Updating the project works really well, and lets me easily see what changes were applied. Likewise, when I'm ready to push and a file shows up in the list I didn't recall changing, a quick double-click lets me see what's going on in a very nice diff viewer.

Read more: Musings of an Anonymous Geek

Posted via email from .NET Info

Beautiful Hack: Using Mono's Profiler to find Hard Memory Leaks

|
Alan McGovern of MonoTorrent, Moonlight and Mono Introspect fame has written a blog post explaining how he used the new Mono Profiling interface to write a custom memory leak detector for Moonlight.

His post is a step-by-step document on how he created a new loadable profiling module that the Mono runtime uses. He then registers to listen for profiling events for the GC roots (MONO_PROFILE_GC_ROOTS) and then tracks the GC handle use.

Read more: Miguel de Icaza's web log

Posted via email from .NET Info

Understanding the Three Approaches to Office Development using VSTO

|
When using Visual Studio Tools for Office (VSTO), there are three basic approaches to Office development:
  • Application-Level Managed Add-In
  • Document-Level Customization
  • Office Automation

This post is one in a series on Microsoft Office 2010 application development. These posts will be published in the future as part of an MSDN article. As usual, after the MSDN article is published, I’ll place pointers in the blog posts to the article on MSDN.
  1. Office/SharePoint Building Blocks and Developer Stories
  2. Overview of Office 2010 Application Development
  3. Office Application Scenarios
  4. Understanding the Three Approaches to Office Development using VSTO
Under the covers, the Office client applications use COM to expose their functionality.  One of the uses of COM in Office is to develop an add-in, which is code that the Office client application loads and runs as the user operates the Office client application.  There are specific mechanisms that Office uses for add-ins, including placing DLLs in appropriate places, configuring code access security to enable running the code, and creating registry entries so that the Office client applications know of the existence of the add-in.

Visual Studio Tools for Office layers a .NET managed programming interface on top of the COM interfaces.  This is immensely valuable – the Microsoft developers who built VSTO take care of many issues around building COM applications, and free us up to develop the functionality that meets our customers’ needs.  The .NET run-time is a managed run-time; hence the name of an add-in built using VSTO is a managed add-in.  The .NET assemblies that are layered over COM are called the Office Primary Interop Assemblies (often called PIAs).

There are two approaches to building a customization of Office.

  • Application-level add-in – This type of add-in’s functionality is available regardless of which document, spreadsheet, or presentation is opened.  An example of this variety is a department-wide or corporate-wide application that enables some level of functionality that every employee of the department or corporation needs to access on a regular basis.  As an example, Lexis for Microsoft Office is an Office customization targeting legal firms.  Users need to execute the same code for every document, so it is implemented as an application-level managed add-in.
  • Document-level customization – This type of functionality is part of the document.  In this scenario, .NET managed assemblies (code-signed for security purposes) are attached directly to each document, so if you send the document to a new user, they can open the document and use the add-in’s functionality without explicitly installing an add-in.  For example, a tax analysis department needs to systematically analyze documents for their tax implications and then communicate that analysis back to the document author.  The result of that analysis is associated with a specific document, so it makes sense to build a document-level add-in that manages the process of assessing and optimizing the tax implications of that document.  Actually, in this case you might have an application-level add-in that is used by the tax analysis firm that produces a document that is customized with a document-level add-in.

In another case, a large consulting and accounting firm produces spreadsheets that implement sophisticated calculations that would be difficult to implement directly using Excel formulas.  They use a document-level add-in so that they can send spreadsheets to associates in other firms or to their customer, and those recipients can take advantage of the code in the document-level add-in.  Note that with Office 2007 and Office 2010, document-level customizations are available only for Word and Excel.
The third approach, which uses the COM interfaces (both directly and through the PIAs), is Office Automation.  You can develop an automation application that runs an Office client application to perform some specific task.  Your application may have the appearance of a traditional Windows application.  It may even be a simple console application.  When you run this application, under the covers it runs the Office application (such as Word 2010 or Excel 2010) and uses it to accomplish the desired task.
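
As a small, hedged illustration of this third approach (not taken from the article), a C# 4 console application can automate Word through the PIAs roughly as follows; the file path is arbitrary and full COM cleanup is omitted for brevity:

// Requires a reference to Microsoft.Office.Interop.Word (the Word PIA).
using Word = Microsoft.Office.Interop.Word;

class WordAutomationSketch
{
    static void Main()
    {
        Word.Application app = new Word.Application();
        try
        {
            Word.Document doc = app.Documents.Add();
            doc.Content.Text = "Generated by an automation client.";
            doc.SaveAs(@"C:\temp\generated.docx");   // C# 4 lets us omit the optional COM arguments
            doc.Close();
        }
        finally
        {
            app.Quit();   // always shut down the hidden Word instance
        }
    }
}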

Read more: Eric White's Blog

Posted via email from .NET Info

HOW TO: Access Hidden Features in Windows Phone

|
Now that the Windows Phone is out with its buttery smooth UI, some of you might find the following TIP-SHEET useful.

Most of these you can also find at http://www.microsoft.com/windowsphone/en-us/howto/wp7/default.aspx

List of “hidden” but “Easy to Get to” features in Windows Phone
1. Tap and Hold
In the majority of cases you can “TAP and HOLD”, hereinafter abbreviated as TaH (not Tap and Dance), to get a context-sensitive menu. E.g.,

a. TaH an Outlook Mail to see “Delete, mark as Unread, …, clear flag”

b. TaH a conversation to see “delete, forward”.

c. TaH an entry in Call History to “Delete”.

d. TaH a calendar entry to see relevant Actions for the appointment.

e. Try TaH in most other places to reveal hidden gems.

2. Search

a. Hardware Search button is context sensitive. At any place in the operation of the Windows Phone if you press the hardware search button it will take you to a context sensitive search.

i. For example, if you are in Mail (Outlook, Gmail, Yahoo, Hotmail), pressing Search will let you search the Inbox or the specific folder you were in at that point.

ii. If you are in your call history, pressing the HW Search button will allow you to type and search for entries in the Call list.

iii. But not in calendar appointments.

b. Hold the Windows button for voice search

3. Text Editing

a. Hold on a text box to invoke a cursor for editing

Read more: Girish's Blog - Are you Live yet ?

Posted via email from .NET Info

MSDN Magazine December 2010 Issue

|

Windows Phone 7 Development: Sudoku for Windows Phone 7
Get started with Windows Phone 7 development with this Silverlight-based game tutorial that demonstrates key concepts such as the Model-View-ViewModel design pattern, serialization, user storage and multiple orientations.
Adam Miller

Windows Phone 7 Apps: Build Data-Driven Apps with Windows Azure and Windows Phone 7
The performance of data-driven Windows Phone 7 apps relies on both good UI coding practices and snappy access to data. We’ll cover some important design considerations for using Windows Azure effectively with Windows Phone apps.
Danilo Diaz and Max Zilberman

Windows Azure Access:  Re-Introducing the Windows Azure AppFabric Access Control Service
See how to easily authenticate and authorize users from the likes of Windows Live ID, Facebook, Yahoo and Google within your Web sites and services.
Wade Wegner and Vittorio Bertocci

BDD Primer: Behavior-Driven Development with SpecFlow and WatiN
Behavior-Driven Development techniques let you test and code in the language of your business scenario. We’ll explain how the BDD cycle wraps traditional Test-Driven Development techniques and walk you through an example BDD development cycle for an ASP.NET application.
Brandon Satrom

.NET Performance: .NET Performance
Event Tracing for Windows (ETW) is a powerful logging technology that's leveraged in the .NET Framework 4 CLR to make profiling your managed application simpler than ever. ETW collects system-wide data and profiles all resources (CPU, disk, network and memory) making it very useful for obtaining a holistic view.

Read more: MSDN Magazine

Posted via email from .NET Info

Selenium

|
Selenium is a suite of tools to automate web app testing across many platforms.

Selenium...
  • runs in many browsers and operating systems
  • can be controlled by many programming languages and testing frameworks.
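
For example, with the Selenium 2 (WebDriver) .NET bindings, driving a browser from C# looks roughly like this (a sketch; the search page and element name are just illustrative):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

class SeleniumSketch
{
    static void Main()
    {
        IWebDriver driver = new FirefoxDriver();
        try
        {
            driver.Navigate().GoToUrl("http://www.google.com");
            IWebElement query = driver.FindElement(By.Name("q"));
            query.SendKeys("selenium");
            query.Submit();
            Console.WriteLine(driver.Title);   // page title after the search
        }
        finally
        {
            driver.Quit();   // close the browser and end the session
        }
    }
}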

Read more: Selenium

Posted via email from .NET Info

The simplest way to do design-time ViewModels with MVVM and Blend.

|
The problem is this: You’ve created your Views and ViewModels, but when you view them in Blend, you either see nothing, or the data you see is not useful for testing what you want in the view. There are various ways to deal with this, including writing code (either in the ViewModel or in a service locator) to return different data at design time.

I’m going to demonstrate a different way that doesn’t involve code, and is very simple and most importantly: Malleable.

Assumptions/Prep:

  • I have a ViewModel class called “BooksViewModel” that supports the search UI and result-set. This has some non-trivial properties such as a collection of “Book” classes.
  • I have a View called “BooksView” that allows you to search for books, but I haven’t yet hooked up the data bindings (it’s easier to do after you create sample data).
  • I have no code to differentiate between design time and runtime. The code is geared purely towards runtime.

What we’ll do:
  • Use Blend to create some purely fake sample data.
  • Assign the fake data as the data-context of the control you want, in a design-time-only fashion.
  • Create the bindings and play with the data.
Step 1: Create some fake data with Blend:
Go to the “Data” tab, and create sample data from a class:

Posted via email from .NET Info

NHibernate 3.0 released

|
First, NHibernate 3.0 Cookbook is now a Packt Publishing best seller. Thank you everyone who bought a copy. The NHibernate project gets a portion of each and every sale.

Yesterday, Fabio announced the release of NHibernate 3.0 General Availability. Go get it!

The previous official release of NHibernate was version 2.1.2, just over 1 year ago. Since then, the team has made a ton of improvements and bug fixes.

Most importantly, NHibernate now targets .NET 3.5, allowing us to use lambda expressions and LINQ. This has led to an explosion of new ways to configure and query.
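
For instance, the new LINQ provider lets you query straight off the session; here is a hedged sketch with a hypothetical Book entity (not from the release notes):

using System;
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;   // brings the Query<T>() extension method into scope

public class Book
{
    public virtual int Id { get; set; }
    public virtual string Title { get; set; }
    public virtual decimal Price { get; set; }
    public virtual DateTime PublishedOn { get; set; }
}

public static class BookQueries
{
    // Return cheap, recent books ordered by title (Book is a hypothetical mapped entity).
    public static IList<Book> RecentCheapBooks(ISession session)
    {
        return session.Query<Book>()
                      .Where(b => b.Price < 20m && b.PublishedOn.Year >= 2009)
                      .OrderBy(b => b.Title)
                      .ToList();
    }
}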

There are a few very minor breaking changes mentioned in the release notes:

[NH-2392] ICompositeUserType.NullSafeSet method signature has changed
[NH-2199] null values in maps/dictionaries are no longer silently ignored/deleted
[NH-1894] SybaseAnywhereDialect has been removed, and replaced with SybaseASA9Dialect. Sybase Adaptive Server Enterprise (ASE) dialects removed.
[NH-2251] Signature change for GetLimitString in Dialect
[NH-2284] Obsolete members removed
[NH-2358] The DateTimeOffset type now works as a real DateTimeOffset instead of as a "surrogate" of DateTime
Plans for version 3.1 include additional bug fixes and patches, as well as enhancements for the new LINQ provider.

Read more: NHibernate Forge

Posted via email from .NET Info

SimBin turns to free-to-play model

|
Swedish independent studio SimBin is adopting the free-to-play model in the latest version of its racing title catalogue.
RaceRoom The Game provides players “access to all of the game's features” just by registering on the game’s website, according to Spong.
SimBin, founded in 2003, recently finished its first console title – the Atari-published RacePro.
The developer’s CEO Henrik Roos said turning to the free-to-play model “is a special bonus to our faithful racing community and will also attract more people to join the world of sim-racing.”

Read more: Developer

Read more: SimBin

Posted via email from .NET Info

An UMDF driver for a virtual smart card reader

|
Introduction

Working with smart cards and PKI is an interesting field. You can see the state of the art of computer security and how it is used in real environments by real users. But sometimes debugging and testing applications that work with smart cards is a real pain, especially when you have to deal with negative test cases and, as often happens, you don't have many test smart cards to play with. What if you accidentally block a PIN? Or your CSP issues a wrong command, leaving the card in an inconsistent state? These and many other issues are quite common in this field, so one of the first things I realized when I started working in it was that I needed an emulator: something to play with without the risk of doing any damage. In this article I will not talk about smart card OS emulation (perhaps it will be covered in the future...), but about a driver for a virtual smart card reader.
Searching the internet for virtual drivers turns up many interesting resources, but not the "guide for dummies" I was hoping to find. I'm not an expert in driver development; this is not by any means an article on "how to write drivers". I'm just explaining my approach to a new subject, hoping it will be useful to someone.
An alternative to writing a driver is to write your own version of winscard.dll and put it in the folder of the application you wish to debug. That's easier in some cases, but it has some drawbacks:

- To fully emulate the behavior of the Windows Smart Card Resource Manager, you must implement lots of functions
- It can be a pain to implement functions like SCardGetStatusChange, especially if you have to mix real and simulated readers
- You can't replace the system's real winscard.dll, since it's subject to system file protection, so it can be tricky to override it in some applications

Having tried both approaches, I think developing a driver is the better option once you have learned some basic lessons on how to do it (or have this article as a guide :) ).

Background

It took just a few clicks on Google to realize that, to keep things easy, I had to use UMDF (User-Mode Driver Framework) as the basis for the driver. From my point of view, and from my understanding of the subject, the main reasons are:

- If you make a mistake, you don't get an ugly blue screen, so development is easier
- You can debug your code with your good old user-mode debugger (e.g. VS2008), with no need for kernel-mode debugging, so debugging is easier
- In my case performance is not critical, and the little overhead introduced by the framework is not a problem

These are the reasons that led me to use UMDF. Considering the small effort involved and how satisfied I am with the result, I think it was a good choice.
The code is based on the UMDFSkeleton sample from WDK 7.1. I will first comment on the important points of the code, then explain the installation procedure.
In addition, the virtual card reader will communicate with a desktop application that provides the virtual smart card behavior, so we'll also see some IPC between a UMDF driver and a user-mode process.

A look at an UMDF driver structure

As I said, UMDF greatly simplifies the development of a driver. You just need to write some COM (actually, COM-like) objects implementing a few core interfaces, and that's it. Let's take a look at how it all works.
A user-mode driver is like a COM object. So, as with a COM object, we are building a DLL that exposes a DllGetClassObject function, which the UMDF framework calls to obtain a class factory for creating the actual driver object.
With ATL it is very easy to create COM objects, so we'll use it to further simplify our job. The only function exported by the DLL is:

STDAPI DllGetClassObject(__in REFCLSID rclsid, __in REFIID riid, __deref_out LPVOID* ppv)
{
   return _AtlModule.DllGetClassObject(rclsid, riid, ppv);
}

Nothing strange here. The object we are creating (CMyDriver) must implement the IDriverEntry interface, which defines the main entry points of our driver. We could use the OnInitialize method to do any initialization before the actual work begins, but it is not needed in our case.
The OnDeviceAdd method is called by the framework whenever a device managed by our driver is connected to the system. In our case we create a CMyDevice object (through the CMyDevice::CreateInstance method) that will hold a reference to an IWDFDevice object created by the CreateDevice function. This is the initialization of CMyDriver:


Read more: Codeproject

Posted via email from .NET Info

HTML and Javascript injection

|
Introduction

This article is about HTML and JavaScript injection techniques used to exploit web site vulnerabilities. Nowadays it's not common to find a site completely vulnerable to this type of attack, but it only takes one weak spot to exploit it.
I'll compile these techniques together in order to make the reading easier and more entertaining.
HTML injection is a type of attack focused on the way HTML content is generated and interpreted by browsers on the client side.
JavaScript, meanwhile, is a widely used technology in dynamic web sites, so techniques based on it, like injection, fall under the same umbrella of 'code injection'.

Code injection

This type of attack is possible because the client browser's ability to interpret scripts embedded within HTML content is enabled by default, so if an attacker embeds script tags such as <SCRIPT>, <OBJECT>, <APPLET>, or <EMBED> into a web site, the browser's JavaScript engine will execute them.
Typical targets of this type of injection are forums, guestbooks, or any section where the administrator allows the insertion of text comments; if the web site doesn't sanitize the inserted comments and treats '<' or '>' as literal characters, a malicious user could type:

I like this site because <script>alert('Injected!');</script> teaches me a lot
If it works and you can see the message box, the door is open and the only limit is the attacker's imagination! A common code insertion used to redirect navigation to another web site is something like this:

<H1> Vulnerability test </H1>

<META HTTP-EQUIV="refresh" CONTENT="1;url=http://www.test.com">

The same can be done within an <FK> or <LI> tag:

<FK STYLE="behavior: url(http://<<Other website>>;">

Other tags used to execute malicious Javascript code are, for example, <BR>, <DIV>, even background-image:

<BR SIZE="&{alert('Injected')}">
<DIV STYLE="background-image: url(javascript:alert('Injected'))">
The <TITLE> tag is a common weak point if it's generated dynamically. For example, consider this situation:

<HTML>
<HEAD>
<TITLE><?php echo $_GET['titulo']; ?>
</TITLE>
</HEAD>
<BODY>
...
</BODY>
</HTML>
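
Since this blog is .NET-focused, here is the equivalent fix sketched in ASP.NET: never echo request input back to the response without encoding it. HttpUtility.HtmlEncode ships in System.Web; the handler below is a minimal illustration, not part of the original article:

using System.Web;

// A safe version of the vulnerable PHP page above: whatever arrives in the
// query string is HTML-encoded before being written into the markup, so
// <script> comes out as &lt;script&gt; and is never executed.
public class TitleHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string title = context.Request.QueryString["titulo"] ?? string.Empty;

        context.Response.ContentType = "text/html";
        context.Response.Write("<html><head><title>");
        context.Response.Write(HttpUtility.HtmlEncode(title));
        context.Response.Write("</title></head><body></body></html>");
    }
}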

Read more: Codeproject

Posted via email from .NET Info

WCF 4.0 routing

|
In a previous post, I created the top layer of my application architecture using Unity 2.0 and WCF.

In this post I would like to add the final building block of the application server: the WCF router service.

WCF routing is a very cool feature of WCF 4.0 that provides a way to isolate or encapsulate your services from your clients, exposing only a router service that is responsible for routing messages to the right service.

Here's how it works:

First we've got our services web application (I've mapped it to port 9000 on my local machine).

<system.serviceModel>
   <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
   <behaviors>
     <serviceBehaviors>
       <behavior name="Service1Behavior">
         <serviceMetadata httpGetEnabled="true" />
         <serviceDebug includeExceptionDetailInFaults="true" />
       </behavior>
       <behavior name="Service2Behavior">
         <serviceMetadata httpGetEnabled="true" />
         <serviceDebug includeExceptionDetailInFaults="true" />
       </behavior>
     </serviceBehaviors>
   </behaviors>
   <services>
     <service name="Server.WCFServices.Service1">
       <host>
         <baseAddresses>
           <add baseAddress="http://localhost:9000/service1"/>
         </baseAddresses>

Step 1 – Create an end point

Just as you would create an end point in your original service, an end point should be declared here.

<endpoint address="service1"
                 binding="basicHttpBinding"
                 name="service1EndPoint"
                 contract="System.ServiceModel.Routing.IRequestReplyRouter" />

<endpoint address="service2"
                 binding="basicHttpBinding"
                 name="service2EndPoint"
                 contract="System.ServiceModel.Routing.IRequestReplyRouter" />
Step 2 – Create a client end point

You create a client end point here in the same way you would in a client application:

<client>
     <endpoint name="service1ClientEndPoint"
               address="http://localhost:9000/service1.svc"
               binding="basicHttpBinding"
               contract="*" />

     <endpoint name="service2ClientEndPoint"
               address="http://localhost:9000/service2.svc"
               binding="basicHttpBinding"
               contract="*" />
</client>

Step 3 – Add a filter

Filters are where the magic happens. They are how the router service knows how to handle an end point and how to filter messages for it.

There are several filter types you can use; you can read more about routing filters here.

For this example, I've chosen to use an end point name filter.

<filters>
       <filter name="Service1Filter" filterType="EndpointName" filterData="service1EndPoint"/>
       <filter name="Service2Filter" filterType="EndpointName" filterData="service2EndPoint"/>
</filters>

Step 4 – Add the filter to a filter table

Entries in the filter table map filters to client end points, and therefore create the final connection to the service itself.

<filterTables>
       <filterTable name="filterTable1">
         <!--add the filters to the message filter table-->
         <!--we determine this through the endpoint name, or through the address prefix-->
         <add filterName="Service1Filter" endpointName="service1ClientEndPoint" priority="1"/>
         <add filterName="Service2Filter" endpointName="service2ClientEndPoint" priority="1"/>
         <!--if none of the other filters have matched, this message showed up on the default router endpoint, with no custom header-->
       </filterTable>
</filterTables>
You can sum up the routing process like this:

End point -> filter -> filter table -> Client end point
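
The post configures everything in web.config, but the same routing table can also be built in code. Here is a rough self-hosted sketch; the addresses and endpoint names mirror the config above, and it is meant as an illustration rather than the author's actual setup:

using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Routing;

class RouterHost
{
    static void Main()
    {
        // Destination ("client") endpoints, mirroring the <client> section.
        ContractDescription routerContract =
            ContractDescription.GetContract(typeof(IRequestReplyRouter));
        var service1 = new ServiceEndpoint(routerContract, new BasicHttpBinding(),
            new EndpointAddress("http://localhost:9000/service1.svc"));
        var service2 = new ServiceEndpoint(routerContract, new BasicHttpBinding(),
            new EndpointAddress("http://localhost:9000/service2.svc"));

        // Filters and filter table, mirroring <filters> and <filterTables>.
        var config = new RoutingConfiguration();
        config.FilterTable.Add(new EndpointNameMessageFilter("service1EndPoint"),
                               new List<ServiceEndpoint> { service1 });
        config.FilterTable.Add(new EndpointNameMessageFilter("service2EndPoint"),
                               new List<ServiceEndpoint> { service2 });

        // The router itself, with one named inbound endpoint per backend service.
        var host = new ServiceHost(typeof(RoutingService),
            new Uri("http://localhost:8080/router"));
        host.AddServiceEndpoint(typeof(IRequestReplyRouter),
            new BasicHttpBinding(), "service1").Name = "service1EndPoint";
        host.AddServiceEndpoint(typeof(IRequestReplyRouter),
            new BasicHttpBinding(), "service2").Name = "service2EndPoint";
        host.Description.Behaviors.Add(new RoutingBehavior(config));

        host.Open();
        Console.WriteLine("Router listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}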

Read more: Gadi Berqowitz's Blog

Posted via email from .NET Info

Hash for the holidays [Managed implementation of CRC32 and MD5 algorithms updated; new release of ComputeFileHashes for Silverlight, WPF, and the command-line!]

|
It feels like a long time since I last wrote about hash functions (though certain curmudgeonly coworkers would say not long enough!), and there were a few loose ends I've been meaning to deal with...

Aside: If my hashing efforts are new to you, more information can be found in my introduction to the ComputeFileHashes command-line tool and the subsequent release of ComputeFileHashes versions for the WPF and Silverlight platforms.


When I first needed a managed implementation of the CRC-32 algorithm a while back, I ended up creating one from the reference implementation. Thanks to the strong similarities between C and C#, the algorithm itself required only minimal tweaks and the majority of my effort was packaging it up as a .NET HashAlgorithm. Because HashAlgorithm is the base class of all .NET hash functions, the CRC32 class ends up being trivial to drop into any .NET application that already deals with hashing.
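
Because it derives from HashAlgorithm, using it looks exactly like using MD5 or SHA1. A minimal sketch, assuming the CRC32 class from the ComputeFileHashes source is referenced:

using System;
using System.IO;
using System.Security.Cryptography;

class CrcSketch
{
    static void Main(string[] args)
    {
        // CRC32 is the HashAlgorithm-derived class described in the post;
        // swap in MD5.Create() or SHA1.Create() and nothing else changes.
        using (HashAlgorithm hasher = new CRC32())
        using (Stream stream = File.OpenRead(args[0]))
        {
            byte[] hash = hasher.ComputeHash(stream);
            Console.WriteLine(BitConverter.ToString(hash).Replace("-", ""));
        }
    }
}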

Read more: Delay's Blog

Posted via email from .NET Info

Mono Introspect: Binding GObject-based APIs for use in Mono

|
Alan McGovern, the hacker behind the amazing Moonlight GC tracking device, has started work on a tool to bind the new Gtk+ 3.0-based APIs that use GObject introspection for Mono consumption.

Check out his project hosted in Github's mono-introspect module.

Read more: Miguel de Icaza's web log

Posted via email from .NET Info

Contained Database Authentication: How to control which databases are allowed to authenticate users using logon triggers

|
With the release of Microsoft SQL Server code-name “Denali” Community Technology Preview 1 (CTP1) and the introduction of Contained Database (CDB) (http://msdn.microsoft.com/en-us/library/ff929071(SQL.110).aspx ), we also introduced the capability of  database authentication (http://msdn.microsoft.com/en-us/library/ms173463(v=SQL.110).aspx , http://blogs.msdn.com/b/sqlsecurity/archive/2010/12/03/contained-database-authentication-introduction.aspx, http://blogs.msdn.com/b/sqlsecurity/archive/2010/12/04/contained-database-authentication-monitoring-and-controlling-contained-users.aspx).

Since the configuration setting that governs CDB and database authentication is server-scoped while the option to modify the containment property of a database is database-scoped, some DBAs may be wondering how to control which databases are allowed to authenticate users.
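
For context, a contained database user authenticates by naming the contained database directly as the initial catalog of the connection. A minimal client-side sketch from ADO.NET, with placeholder server, database and credentials:

using System;
using System.Data.SqlClient;

class ContainedLoginSketch
{
    static void Main()
    {
        // All names below are placeholders; the key point is that Initial Catalog
        // points at the contained database, so authentication happens against the
        // database itself rather than against a server-level login in master.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "myServer",
            InitialCatalog = "MyContainedDb",
            UserID = "containedUser",
            Password = "placeholder-password"
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
            Console.WriteLine(connection.ServerVersion);
        }
    }
}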

Database authentication still fires logon triggers, therefore providing server-scoped access control: the DBA can specify a policy based on the authentication information available. Below are a few of the tools you may find useful when creating logon triggers that are CDB-authentication ready.

The information provided by sys.dm_exec_sessions has changed slightly to reflect this new authentication option.

A new column, authenticating_database_id, has been added to sys.dm_exec_sessions; it displays the database that authenticated the session:
  • When the session is an internal task, the value of this new column is null
  • When the session uses server-scoped authentication (i.e. a T-SQL login, or Windows authentication with full server access), the value is 1 (i.e. the ID of the master database)
  • When the session is a CDB-authenticated session, the value is the DB_ID of the authenticating database at the time of authentication

Since the database-authenticated token doesn't carry any server-token information (i.e. there is no login), suser_sname() and any error message referencing the login name (for example, when trying to access another database) will display the SID in string format, for example:
1> use db_test3
2> go
Msg 916, Level 14, State 1, Server RAULGA-VM03, Line 1
The server principal "S-1-9-3-3323865656-1154615280-1570172340-4238753615." is not able to access the database "db_test3" under the current security context.

In order to find the user name used in the connection string, you can make use of another column in sys.dm_exec_sessions: original_login_name, which returns the user name used in the connection string.
It is very important to note that all of these values are set at the time the session was established and may not reflect the current state of the server. For example, the user name of the principal may have changed, but the original_login_name column would still reflect the name used during authentication (the SID would remain the same in this case).
Now, putting it all together, here is a simple example of a trigger that would restrict authentication based on the authenticating DB_ID.
/***************************************************************
*
* Sample code for CDB authentication-aware logon trigger
*
* Author:   Raul Garcia
* Date:           11/12/2010
*
* This code is provided as-is and confers no rights or warranties.
* This code is based on a CTP version of SQL Server, which is considered a work in progress.
*
* Microsoft SQL Server code-name “Denali” Community Technology Preview 1 (CTP1)
* © 2010 Microsoft Corporation.
*
****************************************************************/

-- Since logon triggers are server-scoped objects,
-- we will create any necessary additional objects in master.
-- This would give the DBA better control over these objects, since
-- only privileged principals should have permission to alter them

Posted via email from .NET Info

How to Generate a Container ID for a USB Device

|
Hi, I’m Kristina Hotz, a Program Manager on the USB team.  In this post, I’ll explain how you can create a container ID for a USB device by using the same mechanism as Windows 7.

You will find the information useful if you are developing a USB driver stack that replaces the Microsoft-provided USB driver stack or if you are a USB device manufacturer and would like to know how your device is recognized by the Windows 7 version of the operating system.

A container ID is an identification string that is generated by the USB driver stack. The string is unique to a physical device. To view all physical devices connected to your computer, from Start, select Devices and Printers. A physical device can expose one or more functional devices. For a single function device, the icon in Devices and Printers represents the physical device and its functional device. If you have a multi-function device (for example, a printer/scanner/fax machine), you will notice an icon that represents the physical device (for example, the printer/scanner/fax machine appears as a printer). That is because Windows uses container IDs to group all functional devices associated with the physical device.

After a USB device is connected to the computer, the USB driver stack (specifically, the bus driver) starts enumerating device nodes (devnodes) for each functional device associated with the physical device. The bus driver then assigns a container ID to each devnode. The container ID is a property of a devnode and is specified through a globally unique identifier (GUID). That GUID is set as a string property on the devnode. All devnodes originating from a physical device must have the same container ID.

For an external device’s devnode, the bus driver obtains the container ID in one of the following ways:

  • Reading the Microsoft OS ContainerID descriptor supplied by the device. For more information, see Using Microsoft OS ContainerID Descriptors.
  • Generating a container ID by hashing certain device information. (See How to Generate a Container ID String)
  • Generating a random GUID for the container ID.
  • Inheriting the container ID of the parent devnode.

Note: Windows uses ACPI to determine whether the physical device is an external or internal device. An internal device’s devnode always inherits the container ID of the computer, i.e. its parent devnode.


Read more: Microsoft Windows USB Core Team Blog

Posted via email from .NET Info

I Want to be a Consultant in .NET

|
Introduction

So you've made up your mind to get out of the rat race and work as a consultant.  Or perhaps you just decided to supplement your income with some consulting work.  What are the steps you need to take to get things running?   What are the risks?  This article should put you on the right path.

The Birth of a Consultant

Before you become a consultant, you want to ask yourself a very important question: "Am I a risk taker?"  If you are a very skilled developer and you have good communication skills, the risk factor is lower, but there is still risk involved.  Here are some of the risks:

  • As a consultant you are far more expendable than an employee.
  • Once your contract ends, you need to find another contract.
  • You often need to provide health care for yourself and your family.
  • You are forced to become a businessman, a salesman, a marketer, and an accountant.
  • You have to be careful about the legalities of receiving payment.
  • You need to create a business entity or work for an existing business entity.
  • Employees of a client might treat you differently.
  • You may go a long time before getting another contract.
  • It may be a while before you collect payment from a client.


Those are the risks.   Still want to be a consultant?  Good!  I think you made the right choice and here is why.

  • You control your own salary and set your own rates.
  • You answer to the client and not to a boss who could be unfairly evaluating your performance.
  • You are not subject to the whims of the performance of the company you work for.
  • You don't have to settle for whatever benefits the company throws at you.
  • You are paid for the hours you work.
  • You can work for multiple clients at once.
  • You can sometimes set your own hours and even work just a few days a week.
  • In a virtual world, you can often work from home.
  • You no longer have to sit in politically driven meetings or HR meetings, and if for some reason you do, the client must pay you for the hours devoted to those activities.
  • You are your own boss and can pick and choose your clients.
  • Should your consulting business grow, you can hire people to do the administrative work.
  • You get to work on a variety of different projects and learn about technology in all kinds of fields.

Now that you are fairly sure you want to be a consultant, what are the steps you need to take?

Get a Business Entity

First of all, you will need some sort of business entity, whether it's a Subchapter S corporation, a limited liability company (LLC), a sole proprietorship, or just a registered business.  Why do you need a business entity?  Other businesses are much more likely to deal with you if you have one.  You can form a business entity, such as an LLC, online.  Once you've got your certificate, you're ready for the next big step.


Read more: C# Corner

Posted via email from .NET Info

Attractive HTML Email templates

| Monday, December 6, 2010
Although my primary interests lie in software development, good design is something I cannot ignore. I quite often need to send HTML emails to clients but have to settle for simple designs due to a lack of good templates; getting someone else to design them ends up taking a lot of time, with unsatisfactory results.

Campaignmonitor provides a showcase of 30+ free email templates for download, complete with PSD files. Each template has been tested with various email programs, including the iPhone, although I’ve only used them with Gmail. Designers from around the world have contributed their designs to the template collection. If you use Campaignmonitor, the templates also include HTML files with the appropriate tags for working with the service.

Read more: codedisel
Read more: Campaignmonitor

Posted via email from .NET Info

PHP Interview Questions and Tips

|
So you’ve been slinging resumes for a while and now you have an interview for an awesome PHP job. While part of the interview will be the typical job interview, you should also be prepared for a technical interview. Technical interviews are often given to determine how well you truly know the technologies with which you’ll be working. There are numerous books and articles to help you prepare for the job interview portion but very little has been said on preparing for a PHP technical interview.

General PHP Questions

The first type of question you’ll be asked in a PHP interview will be general questions about PHP itself. Typically, technical interviewers like to start with easy general questions about PHP, such as special output tags, how to pass parameters, how to define constants and how to define variables. Depending on the interviewer’s own technical background, they may progress to questions on more advanced usage, such as the object-oriented features of PHP. Spend some time prior to your interview brushing up on the basics of the language. A quick read of a basic PHP book like PHP and MySQL Web Development (4th Edition) may help you review the high points.

Sample Questions
Q: What’s a PHP Session?

A: A PHP session is an object created by the PHP engine that persists data between HTTP requests

Q: What are <?= ?> tags used for?

A: They allow you to output the result of the expression between the tags directly into the browser response.

Q: How do you define a constant?

A: define("CONSTANT_NAME", "constant value");

Q: How would you get the number of parameters passed into a method?

A: use the func_num_args() call within the method

MySQL
If you are having a PHP interview, it is pretty likely that you are interviewing for some type of web development job. That means it is almost certain that MySQL will be used. You should expect to be asked some questions about accessing MySQL from PHP in your interview. These questions don’t always get very deep. If you can describe how to connect to a database and execute a simple select query, you will likely pass your PHP interview.

Sample Questions
Q: How would you create a MySQL database from PHP?

A: mysql_query("CREATE DATABASE db_name", $connection);

Q: How would you see all tables in a database?

A: mysql> use db_name; show tables;

Q: How do you change a password for a given user via mysqladmin utility?

A: mysqladmin -u root -p password "newpassword"

PHP Frameworks


Read more: Learn computer

Posted via email from .NET Info