This is a mirror of official site: http://jasper-net.blogspot.com/

Animals That Can Live Without Oxygen Discovered, Aliens Basically Guaranteed to Exist Now

| Thursday, April 8, 2010
Scientists have just discovered the first multicellular animals that can survive entirely without oxygen. They live in the L'Atalante Basin in the Mediterranean Sea, a place with salt brine so thick it doesn't mix with oxygen-containing waters above.

This is pretty crazy stuff. Previously, it was thought that only single-celled life could exist in such inhospitable places, but this proves otherwise.

Read more: Gizmodo
Read more: BMC Biology

Posted via email from jasper22's posterous

Top 7 Operating Systems for Serious Server Applications

|
While important, the technical specifications of the hardware inside a computer that will be used as a server are not the only factor that will influence its performance, stability and reliability. Oftentimes, the software you use is just as important.
The heart of the software side of a server is the operating system. It is the single most important thing you will install, and it will be very hard to change without interrupting normal operations (it might even be impossible to replace if you have made significant customizations that run only on that specific configuration).
That is why you should choose it very carefully and weigh all the advantages and disadvantages of the available operating systems. Depending on what you need (increased stability, maximum performance, fast serving of static or dynamic pages, fast database operations, etc.), some OSes will work better (sometimes significantly better) than others.

Read more: HostWisely


SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

|
The snapshot database is one of the most interesting concepts that I have used in several places recently.

Here is a quick definition of the subject from Books Online:

A database snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database, and they always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot’s creation. A snapshot persists until it is explicitly dropped by the database owner.

If you do not know how snapshot databases work, here is a quick note on the subject. However, please refer to the official description in Books Online for accuracy. A snapshot database is a read-only database created from an original database called the “source database”. The snapshot operates at the page level. When a snapshot database is created, it is backed by sparse files; initially it occupies little or no space in the operating system. When a data page is modified in the source database, the original page is copied to the snapshot database, causing the sparse file to grow. When an unmodified data page is read from the snapshot database, the read is actually served from the pages of the original database. In other words, the snapshot database always presents the source database as it was at the moment of the snapshot’s creation.

-- Create Snapshot Database
CREATE DATABASE SnapshotDB ON
(Name ='RegularDB',
FileName='c:\SSDB.ss1')
AS SNAPSHOT OF RegularDB;
GO
-- Select from Regular and Snapshot Database
SELECT * FROM RegularDB.dbo.FirstTable;
SELECT * FROM SnapshotDB.dbo.FirstTable;
GO
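The title also mentions restoring from a snapshot; reverting the source database to the state captured in the snapshot is a single statement (using the names from the example above; any other snapshots on the database must be dropped first):

```sql
-- Revert RegularDB to the state captured in SnapshotDB
USE master;
RESTORE DATABASE RegularDB
FROM DATABASE_SNAPSHOT = 'SnapshotDB';
GO
```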

Read more: Journey to SQL Authority with Pinal Dave


Creating Stored Procedures with Managed Code

|
Introduction

SQL is, conceptually, a language for manipulating sets of data; therefore the Microsoft SQL Server 2005 database system uses T-SQL (Transact-SQL) for writing structured code to control the data flow. Prior to Microsoft SQL Server 2005, the only way to write procedures and functions was T-SQL, but now Microsoft SQL Server 2005 provides integration with the Common Language Runtime (CLR), and consequently procedures and functions can be written using managed code in any .NET language such as C#. This article is intended to illustrate how to implement a stored procedure with managed code.

The business scenario

For illustrative purposes, we're going to develop a stored procedure that returns a list of products by subcategory, using the AdventureWorks database and the Production.Product table shipped with the installation of Microsoft SQL Server 2005 (Listing 1). The main steps are to create a class with the underlying business logic to get a list of products, build the class into an assembly, register the assembly in the SQL Server engine, and then create a stored procedure in the database that serves as an interface to the corresponding method in the class hosted in the assembly.

select *
from Production.Product
where ProductSubcategoryID=@ProductSubcategoryID;

Listing 1
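A sketch of what the managed class might look like (the class and method names here are illustrative, not taken from the article):

```csharp
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    // Runs inside SQL Server; "context connection=true" reuses the caller's connection
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void GetProductsBySubcategory(SqlInt32 subcategoryId)
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "select * from Production.Product where ProductSubcategoryID = @ProductSubcategoryID",
                conn);
            cmd.Parameters.AddWithValue("@ProductSubcategoryID", subcategoryId.Value);
            SqlContext.Pipe.ExecuteAndSend(cmd); // stream the result set back to the client
        }
    }
}
```

After building the assembly, registration is done in T-SQL with CREATE ASSEMBLY and CREATE PROCEDURE ... AS EXTERNAL NAME, which is the "interface" step the paragraph above describes.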

Developing the solution

The first step is to create a SQL Server project by opening Visual Studio .NET 2005 and selecting File | New | Project

Read more: C# Corner


Using AntiXss As The Default Encoder For ASP.NET

|
Scott Guthrie recently wrote about the new <%: %> syntax for HTML encoding output in ASP.NET 4. I also covered the topic of HTML-encoding code nuggets in the past, providing some insight into our design choices for the approach we took.

A commenter to Scott’s blog post asked,

   Will it be possible to extend this so that is uses libraries like AntiXSS instead? See: http://antixss.codeplex.com/

The answer is yes!

ASP.NET 4 includes a new extensibility point which allows you to replace the default encoding logic with your own anywhere ASP.NET does encoding.
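The extensibility point is the HttpEncoder class: derive from it, delegate to the AntiXss library, and register your type in web.config (the class and assembly names below are illustrative):

```csharp
using System.IO;
using System.Web.Util;
using Microsoft.Security.Application;

public class AntiXssEncoder : HttpEncoder
{
    // Called by ASP.NET 4 wherever it HTML-encodes output, including <%: %> nuggets
    protected override void HtmlEncode(string value, TextWriter output)
    {
        output.Write(AntiXss.HtmlEncode(value));
    }
}
```

It is then wired up with <httpRuntime encoderType="AntiXssEncoder, MyAssembly" /> in web.config.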

Read more: haacked


Debugging Delegation and Kerberos Configuration

|
I came across an interesting tool the other day that can be used to debug and diagnose configuration problems with Kerberos. DelegConfig is an ASP.NET application that you install to generate a troubleshooting report about your IIS configuration, Kerberos credential usage, and delegation settings. You configure the DelegConfig installation with the service account you intend to use for your application and visit the report page with the end user account you intend to use. The report also includes a wizard for setting up a simulation of a multi-tier service.

Read more: Nicholas Allen's Indigo Blog
Official site: DelegConfig v2 beta (Delegation / Kerberos Configuration Tool)


Downloading WinDBG

|
I only use WinDBG every once in a while, but when I need it, I really need it and need it now. Kevin Dente pointed out earlier today that, apparently, the latest version of WinDBG is not available as a standalone installer, but only as part of the Windows Driver Kit ISO download.

That’s right. You now need to download a 620MB ISO and find a tool to open it (since Windows still lacks native support for opening ISO files directly), just to extract a 17MB installer for WinDBG. From the WinDBG download page:

   This current version of Debugging Tools for Windows is available as part of the Windows Driver Kit (WDK) release 7.1.0. To download the WDK and manually install Debugging Tools for Windows:

   1. Download the WDK and burn the ISO to a DVD or other installation media. (The WDK ISO file is approximately 620 MB to download.)
   2. Open the ISO from the installation media and open the Debuggers directory.
   3. In the Debuggers directory, run the appropriate MSI or setup EXE for x86 and follow the instructions in the Setup Wizard.
   4. After installation is complete, you can find the debugger shortcuts by clicking Start, pointing to All Programs, and then pointing to Debugging Tools for Windows (x86).

Read more: Winterdom


Assembly-level initialization at design time

|
One sorely missing feature from the Blend 4 Beta is the ability to have a method in which you can perform initialization work specific to design-time.  Such a method is useful for tasks like loading design-time data services into a container, configuring MEF with design-time specific dependencies, and basically anything else that you might otherwise do at startup when the application runs.  Since your App class won’t be started up when in design mode, it would be great if there was a way for Blend to call a method designated for that purpose at design-time.

I have been thinking about this a lot recently.  Initially I thought that Blend could provide an attribute that I could apply to a static method, and it would then invoke that decorated method before loading any Views.  That seemed like the best solution (and still does, in my opinion).  However, since Blend has no such attribute, I figured it was a moot point…until I realized that I could create my own attribute for the same purpose!

If you decorate an assembly with an attribute, that attribute must be instantiated when the assembly is inspected via reflection (which Blend most certainly does).  So, I simply created a custom attribute and applied it to my assembly.  In that attribute’s constructor, I check to see if it was loaded into design-time.  If it was, I then perform my initialization logic.  How simple!
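A sketch of the idea (the attribute name is mine; the design-time check is the standard WPF one):

```csharp
using System;
using System.ComponentModel;
using System.Windows;

[AttributeUsage(AttributeTargets.Assembly)]
public sealed class DesignTimeInitializerAttribute : Attribute
{
    public DesignTimeInitializerAttribute()
    {
        // Blend instantiates assembly-level attributes while reflecting
        // over the assembly, so this constructor runs at design time.
        if (DesignerProperties.GetIsInDesignMode(new DependencyObject()))
        {
            // ...load design-time data services, configure MEF, etc.
        }
    }
}
```

It is applied with [assembly: DesignTimeInitializer] in AssemblyInfo.cs; at runtime the constructor still executes if anything reflects over the assembly's attributes, which is why the design-mode check matters.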

Read more: Josh Smith on WPF


How To Increase Your Chances Of Surviving A Microsoft Interview

|
I've talked with several people recently about how they can increase their chances of making it through their upcoming interviews at Microsoft. After noticing definite patterns in their questions, and in my answers, I decided to record them here where they may help a broader audience.

I'm so nervous! Yep! In my ten-plus years here at Microsoft I've been through forty-some informational interviews and close to ten full interview loops, and I still get nervous, and am sure I am doing horribly, and I am talking really fast, and . . .

Take a breath. Slow down. Be yourself. Yes, you may not have any idea how to solve the problem your interviewer just asked you. It doesn't matter. Unless you have a bad interviewer, what they are most interested in is how you approach the problem, not whether you come up with the best solution. More than once I've gotten stuck on "I know there's a better way to do this!", and so I'm standing there doing nothing rather than working towards a solution - *not* helpful in getting hired!

They keep asking me to write these algorithms I've never heard of! Tell them! I never studied computer science and so run into this all the time. Tell your interviewer that you aren't familiar with the algorithm, or concept, or whatever, and that you'll work through it as best you can. Remember, your interviewer wants to see how you solve problems. They can teach the particular programming language they use or the problem domain they are in as long as you can work through a problem you've never seen before.

Read more: Test Guide


RSA Encryption with .NET 2.0 Cryptography Services and Crypto++ Wrapped as a Managed C++ Class Library

| Wednesday, April 7, 2010
Contents

   * Introduction
   * Disclaimer
   * Using the code
   * Glossary
   * Background
   * The Problem
         o Key Exchange
         o Data Transfer
         o Block Encryption and Decryption
   * Interop classes
         o CLR, Crypto++ and the C++ Standard Library
         o Setting pointers to the new and delete operators
   * Implementation Details
         o class RSAES_PKCS15 (Managed C++)
         o class CryptoPP_RSAES_PKCS15 (Native C++)
   * Tester Applications
         o C# Console Application Tester
   * Things to do
   * Other useful things
         o BER and DER encoding/decoding of Integers
   * Known Issues
         o LoaderLock Exception
         o Compiling Crypto++ as Unicode
         o Base64 Encoding/Decoding with Crypto++
   * In Closing
   * Acknowledgements
   * References
   * History

Introduction

The purpose of this article is to show the interaction between Crypto++ and .NET 2.0 Cryptography services using RSA PKCS#1 encryption, and to show how to wrap Crypto++ as a managed class. Often a client and server use different cryptographic services and need to interact correctly; e.g. the public portions of the encryption key need to be exchanged between them, and data from the client needs to be decrypted by the server and vice versa. Most of the articles I could find were for older versions of .NET, and the documentation on how to use the Cryptography services was a bit sparse, hence the need for this article.

In order to simplify things, I have stripped away the usual communication links between the client and the server. Byte arrays will be passed between them instead. No existing standards will be used to exchange keys either; the public modulus and exponent integers will be sent as byte arrays as well. Putting these components together is a plumbing job; explaining it would be very specific to my problem and would confuse an already complicated article even more. Below is a short description of what will be achieved by this article:

I want to use Crypto++ from C#, so I wrap it using a managed C++ class as follows:

  1. Compile Crypto++ 5.5.1 either as a static lib or as a DLL using Visual Studio 2005 with dynamically linked standard multithreaded libraries (/MD or /MDd)
  2. Create a native wrapper class which provides a simple interface to Crypto++ and encrypts or decrypts a byte array of any size
  3. Create a managed C++ class which encapsulates the native wrapper class and converts .NET managed types to native types (and vice versa) and then calls the Crypto++ wrapper class methods
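On the .NET 2.0 side, exporting the public modulus and exponent as raw byte arrays (as described above) can be sketched like this; the class name is illustrative:

```csharp
using System;
using System.Security.Cryptography;

class PublicKeyExport
{
    static void Main()
    {
        // Generate a key pair and extract only the public portion
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(1024);
        RSAParameters pub = rsa.ExportParameters(false); // false = public key only

        byte[] modulus = pub.Modulus;    // these two byte arrays are what gets
        byte[] exponent = pub.Exponent;  // handed to the Crypto++ side

        Console.WriteLine("Modulus: {0} bytes, exponent: {1} bytes",
            modulus.Length, exponent.Length);
    }
}
```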


Read more: Codeproject


XmlSerializer class for reading and writing XML.

|
There are many ways to read and write XML.

The advantage of the XmlSerializer class is that you can read and/or write XML with very little code. Most of the code required is simply the definition of the data. In other words, if our data is a list of Links consisting of a HREF or URL, a title and a category, then that data could be defined in the following manner:

public class LinkObject
{
     string ThisCategory;
     string ThisHRef;
     string ThisTitle;

     public string Category
     {
           get { return ThisCategory; }
           set { ThisCategory = value; }
     }

     public string HRef
     {
           get { return ThisHRef; }
           set { ThisHRef = value; }
     }

     public string Title
     {
           get { return ThisTitle; }
           set { ThisTitle = value; }
     }
}

Using the XmlSerializer class, we use XmlSerializer.Deserialize to read the data and XmlSerializer.Serialize to write the data. An instance of the XmlSerializer class could be created using:

XmlSerializer Serializer = new XmlSerializer(typeof(LinkObjectsList));

Then the data could be written using:

TextWriter Writer = new StreamWriter(Filename);
Serializer.Serialize(Writer, LinksList);
Writer.Close();

Data could be read using:

TextReader Reader = new StreamReader(Filename);
LinksList = (LinkObjectsList)Serializer.Deserialize(Reader);
Reader.Close();

It is nearly that easy. Note that when the data is as simple as the above data, it is possible to read and write it using a DataTable. If, however, the data is more complicated than what a single DataTable can represent, then the XmlSerializer class can be easier (see below).

Note that the LinkObject class above represents one link. We are writing and reading a list of links, where list could be called an array or a collection or a table or something else. We can create a list of links using:

List<LinkObject> LinksList = new List<LinkObject>();
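The LinkObjectsList type passed to the XmlSerializer constructor is not shown in the excerpt; a minimal definition consistent with the snippets might be simply a named list, which gives the root XML element a stable name:

```csharp
using System.Collections.Generic;

// Hypothetical definition: a serializable list of links
public class LinkObjectsList : List<LinkObject>
{
}
```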

Read more: C# Corner


Basic analysis of an unmanaged memory dump (C++)

|
Properly collecting a User Mode memory dump is only the first step in uncovering the cause of a crash or hang.  The remainder of this post will assume that you have already configured WinDBG correctly and captured a memory dump using the techniques outlined in previous posts.

For the purpose of this posting we will assume the following scenario.

You are a software vendor that has written an automated banking machine application.  Several times a day the kiosk is restarted by the customer because the application has crashed.  In an effort to identify the cause of the crash, which happens when you are not there, you have used ADPLUS to collect a User Mode memory dump.  The memory dump has been copied onto your machine and you are ready to start debugging.

Open the dump file by selecting the “Open Crash Dump…” option found under the “File” menu within WinDBG.  Browse to the appropriate memory dump file and click the “Open” button.  After a few moments WinDBG will return control to you, and a prompt similar to “0:000>” (ProcessId:ThreadId) should be seen, as in the bottom centre of the image below.
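From that prompt, a typical first pass over a crash dump looks something like the following (commands from memory; adjust the symbol setup to your environment):

```
.symfix        $$ point the debugger at the Microsoft public symbol server
.reload        $$ reload symbols for the loaded modules
!analyze -v    $$ automated analysis of the fault, verbose
~*kb           $$ call stacks for every thread in the process
```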

Read more: Practical Development


.NET DiscUtils

|
DiscUtils is a .NET library to read and write ISO files and Virtual Machine disk files (VHD, VDI, XVA, VMDK, etc). DiscUtils is developed in C# with no native code (or P/Invoke).

Project Status

A seventh version has been released which implements ISO, FAT and NTFS file systems. VHD, XVA, VMDK and VDI disk formats are implemented, as well as read/write Registry support. The library also includes a simple iSCSI initiator, for accessing disks via iSCSI.

It is now possible to format, read and modify NTFS volumes.


How to use the Library
Here are a few really simple examples.

How to create a new ISO:

CDBuilder builder = new CDBuilder();
builder.UseJoliet = true;
builder.VolumeIdentifier = "A_SAMPLE_DISK";
builder.AddFile(@"Folder\Hello.txt", Encoding.ASCII.GetBytes("Hello World!"));
builder.Build(@"C:\temp\sample.iso");

You can add files as byte arrays (shown above), as files from the Windows filesystem, or as a Stream. By using a different form of Build, you can get a Stream to the ISO file, rather than writing it to the Windows filesystem.
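Reading the image back is symmetrical; a sketch using the library's CDReader type (verify the details against the current API):

```csharp
using System;
using System.IO;
using DiscUtils.Iso9660;

class ReadIso
{
    static void Main()
    {
        using (FileStream iso = File.Open(@"C:\temp\sample.iso", FileMode.Open))
        {
            CDReader cd = new CDReader(iso, true); // true = use Joliet names
            using (Stream file = cd.OpenFile(@"Folder\Hello.txt", FileMode.Open))
            using (StreamReader reader = new StreamReader(file))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}
```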

Read more: Codeplex


Psscor2 Managed-Code Debugging Extension for WinDbg

|
Psscor2 can help you diagnose high-memory issues, high-CPU issues, crashes, hangs and many other problems that might occur in a .NET application, in scenarios involving live processes or dump files.

If you are familiar with SOS.dll, the managed-debugging extension that ships with the .NET Framework, Psscor2.dll provides a superset of that functionality. Most of the added functionality helps you identify issues in ASP.NET.

For example, Psscor2 provides the ability to view:

   * managed call stacks (with source mappings)
   * managed exception information
   * what types are in the managed heap and their reference chain
   * which ASP.NET pages are running on which thread
   * the contents of the ASP.NET cache
   * and much more.
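Loading the extension in WinDbg follows the usual pattern (a sketch; the DLL path and bitness vary by install, and the commands below are standard SOS-style commands Psscor2 supersets):

```
.load psscor2      $$ load the extension (use the full path if it is not on the extension path)
!clrstack          $$ managed call stack for the current thread
!dumpheap -stat    $$ summary of the types on the managed heap
```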

Read more: Jin's WebLog original, translated
Download: MS Download


Defining appSettings

|
Config files have an appSettings section into which we put a list of key=value pairs:


<appSettings>

 <add key="MyKey" value="MyValue"/>

 <add key="TheKey" value="TheValue"/>

</appSettings>

In code we access them with:


string value = ConfigurationManager.AppSettings["MyKey"];

(You need to add a reference to system.configuration.dll.)


appSettings has two interesting attributes, one called file and the other configSource. Both let you move the appSettings out into a separate file, for example:


<appSettings configSource="mySettings.config"/>

and the referenced file contains all the settings. This is very nice and lets you organize the config file better.
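With configSource, the external mySettings.config must itself contain the whole appSettings section, e.g.:

```xml
<?xml version="1.0" encoding="utf-8"?>
<appSettings>
  <add key="MyKey" value="MyValue"/>
  <add key="TheKey" value="TheValue"/>
</appSettings>
```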

The differences between file and configSource are:
1. configSource requires moving all the appSettings entries to the external file, whereas file lets you keep some of the values in the original file.
2. configSource behaves exactly like the original config file, so any change to it restarts the application, whereas file lets you change the external file without a restart.


Some useful Addins for SQL Server Management Studio 2005 and 2008

|
Here are some useful addins I found for SSMS 2005/2008 on CodePlex and thought of sharing it with my readers.

Internals Viewer for SQL Server - Internals Viewer is a tool for looking into the SQL Server storage engine and seeing how data is physically allocated, organised and stored.

DataScripter - This addin for SQL Server Management Studio 2008 allows you to generate INSERT statements for all values of a table easily. Just select "Script Data as" from the context menu of the table and choose one of the options.

Fulltext Management for SQL Server - This Addin for SQL Server Management Studio allows you to manage your fulltext catalogs easily. It even works for SQL Server Express editions, so now you can use a nice GUI instead of unhandy SQL commands.

SQL Compact data and schema script utility - This console app and SQL Server 2008 Management Studio add-in helps you better manage your SQL Compact development efforts. It allows you to script schema and data to a .sql file, which can be used in any context. It also scripts DML for use in SSMS scripts.

Read more: SQL Server curry


SSH Port Forwarding and Tunneling

|
Sometimes it's useful to forward a port from a source to a target machine. For instance, port forwarding is used by a router that creates a sub-network; in that case the router acts as a port delegator.

Port forwarding has a security function as well. For example, if you want to hide your web server from your public network, you can tunnel its HTTP port from the secured machine to the public-network machine.

The following examples describe how to forward a remote or a local port via SSH. By default, SSH binds forwarded ports to localhost (127.0.0.1); to change this you have to set the GatewayPorts parameter to yes (/etc/ssh/sshd_config).

Remote Port Forwarding example:

#you want to forward a local port to a remote machine
ssh -v -g -R remoteport:localhost:localport root@remotehost

#e.g. forwarding my local webserver on port 8080 to http://developers-blog.org:80
ssh -v -g -R 80:localhost:8080 root@developers-blog.org

#to bypass the ClientAliveInterval you can append a while loop to hold up the SSH connection
ssh -v -g -R 80:localhost:8080 root@developers-blog.org "while true; do sleep 10; echo 'loop step'; done"

Local Port Forwarding example:

#you want to forward a remote port to your local machine
ssh -v -g -L localport:remotehost:remoteport root@remotehost

#e.g. reach http://developers-blog.org:80 through local port 8080
ssh -v -g -L 8080:developers-blog.org:80 root@developers-blog.org

#to bypass the ClientAliveInterval you can append a while loop as well
ssh -v -g -L 8080:developers-blog.org:80 root@developers-blog.org "while true; do sleep 10; echo 'loop step'; done"

Read more: Developers Blog - Programming Languages, Technologies and Visions


How to setup a local web server on your computer using XAMPP

|
Web development work should always be done locally. When developing a website, all the work should be done in a local LAMP-stack environment installed on your computer. That way, production time is greatly reduced and you can fully test your work before launching.

When you are completely done developing your project, the migration to the live server is seamless. Here are the simple steps to install a local server on your PC to easily develop websites.

This article applies to the installation on Windows 98, NT, 2000, 2003, XP and Vista, of Apache, MySQL, PHP + PEAR, Perl, mod_php, mod_perl, mod_ssl, OpenSSL, phpMyAdmin, Webalizer, Mercury Mail Transport System for Win32 and NetWare Systems v3.32, Ming, JpGraph, FileZilla FTP Server, mcrypt, eAccelerator, SQLite, and WEB-DAV + mod_auth_mysql.
Installing XAMPP on your computer

  1. First, download XAMPP for Windows Installer
  2. Then run the installer on your computer and make sure that your Windows firewall unblocks Apache.
  3. Run the Apache administrator.
  4. Open your browser and go to http://127.0.0.1 – If all went well, a screen will appear where you can choose your language.
  5. Go to http://127.0.0.1/security/xamppsecurity.php and set up a password (it will be used for your databases), and click on “Password Changing”.

Congratulations! You’re done! Now put your website’s files in a new directory under C:\xampp\htdocs\ (if you installed XAMPP in C:\xampp), for example C:\xampp\htdocs\myproject\, and set up your databases using phpMyAdmin at http://127.0.0.1/phpmyadmin/.

Configuring Mod Rewrite

Read more: Richard Castera


VC++ Tip: Get detailed build throughput diagnostics using MSBuild, compiler and linker

|
We know that build times for large applications put a crunch on developer productivity. We have spent some time on improving linker throughput and other areas in VS2010, and will continue to investigate improving overall build throughput in future releases.

In this blog post, we will describe a couple of options to get diagnostics for your projects using MSBuild and then taking a deeper dive into the compiler and the linker.

Using MSBuild

Using the IDE, you can enable timing logging by setting “Tools/Options/Projects and Solutions/VC++ Project Settings/Build Timings” = “Yes”, or raise the verbosity of the build to “Diagnostic” under “Tools/Options/Projects and Solutions/Build and Run/MSBuild project build output verbosity”.

Using these options, you can get performance summaries per project and also get details on where time is spent on targets and tasks. This sort of information is useful, say, when you are trying to figure out how long that copy task is taking to copy your files across folders.

1>------ Rebuild All started: Project: mfc-app, Configuration: Debug Win32 ------
1>Build started 1/12/2010 5:31:58 PM.
1>_PrepareForClean:
1>  Deleting file "Debug\mfc-app.lastbuildstate".
1>InitializeBuildStatus:
1>  Creating "Debug\mfc-app.unsuccessfulbuild" because "AlwaysCreate" was specified.
1>ClCompile:
1>  stdafx.cpp
...............
1>  ChildFrm.cpp
1>  Generating Code...
1>Manifest:
1>  Deleting file "Debug\mfc-app.exe.embed.manifest".
1>LinkEmbedManifest:
1>  mfc-app.vcxproj -> C:\Users\user\documents\visual studio 2010\Projects\mfc-app\Debug\mfc-app.exe
1>FinalizeBuildStatus:
1>  Deleting file "Debug\mfc-app.unsuccessfulbuild".
1>  Touching "Debug\mfc-app.lastbuildstate".
1>
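To take the deeper dive into the compiler and linker themselves, each has a timing switch of its own; a sketch from memory (verify against your toolset): cl's /Bt+ reports time spent in the compiler front end and back end, and link's /time reports per-phase linker timings.

```
cl /Bt+ /c foo.cpp
link /time foo.obj
```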

Read more: Visual C++ Team Blog


Moving data between 32-bit and 64-bit SQL Server instances

|
    Yes, you can move SQL Server data back and forth between x64, x86, and IA64 architectures. The data and log files themselves do not store anything that indicates the architecture and work the same on either 32-bit or 64-bit. The same applies to the backup files. Given those facts it becomes clear that we can easily move data between architectures. You can backup on x86 and restore to x64. Detach/attach works fine. Log shipping works because it is basically backup/restore with some scheduling. Mirroring and transactional replication take data from the transaction log and push the data to another system so again they work across architectures. Merge replication is basically just another application sitting on top of SQL Server, it moves data by reading tables in one location and modifying data in another location. Again, this can all be done across architectures.

   Hopefully you are not installing new x86 boxes, 64-bit handles memory so much better. If you have legacy x86 boxes you can easily do a backup or detach from that old system and restore or attach on the new x64 instance. You can also reverse the process and copy data from x64 back to x86. The same logic applies to the other technologies listed above.
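A cross-architecture move via backup/restore is the plain T-SQL you already know; the database, logical file names and paths below are illustrative:

```sql
-- On the 32-bit (x86) instance
BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb.bak';

-- On the 64-bit (x64) instance: same file, same syntax
RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb.bak'
WITH MOVE 'MyDb'     TO 'D:\Data\MyDb.mdf',
     MOVE 'MyDb_log' TO 'E:\Logs\MyDb_log.ldf';
```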

Per BOL (I used the SQL 2008 R2 version):

·         The SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments. Therefore, a database mirroring session can combine server instances that run in a 32-bit environment and server instances that run in a 64-bit environment.

·         Because the SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments, a replication topology can combine server instances that run in a 32-bit environment and server instances that run in a 64-bit environment.

·         The SQL Server on-disk storage format is the same in the 64-bit and 32-bit environments. Therefore, a log shipping configuration can combine server instances that run in a 32-bit environment and server instances that run in a 64-bit environment.

If you're doing SAN level replication you'll need to talk to your SAN vendor about their support across platforms.

Read more: Cindy Gross - Troubleshooting, tips, and general advice about SQL Server


Understanding RAID for SQL Server

|
This is a continuation of our series on designing a SQL Server file subsystem. Our post on March 30 discussed software RAID (redundant array of independent disks) and RAID level 0. Today, we discuss the RAID levels that provide data redundancy—the ones that you really care about if you are smart about running Teamcenter on SQL Server.

RAID Level 1

SQL Server 2008 Books Online says, "This level is also known as disk mirroring because it uses a disk file system called a mirror set. Disk mirroring provides a redundant, identical copy of a selected disk. All data written to the primary disk is written to the mirror disk. RAID 1 provides fault tolerance and generally improves read performance but may degrade write performance" ("RAID Levels and SQL Server," SQL Server 2008 Books Online, MSDN).
Level 1 is one of our favorite ways to set up SQL Server. It is fast and provides data protection like a superhero straddling the speed and safety worlds. All superheroes have weaknesses, and RAID level 1 is no exception—it uses two hard disks of identical sizes, and this can cause several drawbacks. The first drawback is that the size of your logical disk is the same as the size of one of the physical disks. In other words, you pay for two disks, and you only get to use the storage size of one.
The other, more important, drawback is that you can only store files of up to the size of one of the disks on the logical disk. For example, if you have two 185 gigabyte (GB) drives in the RAID level 1 configuration, you end up with a single logical drive of 185 GB. In this example, the maximum file you could hold on the disk would be less than 185 GB in size. Since your largest database files are the .mdf file (which holds the data) and the .ldf file (which holds the transaction logs), you need to make sure that they do not exceed the size of the level 1 logical drive.
One way to prevent the data from exceeding the size of the level 1 drive is to create a secondary database file (.ndf) of an equal size on another level 1 logical drive set. This divides the data in the database between the two files.
You can keep your transaction log file (.ldf) from getting too big by frequently backing up the transaction log, which allows it to be stored on a RAID level 1 array.
As we continue this series of posts about the disk subsystem, we will talk more about which database files go on which kinds of logical drives and about the arrangement of physical disks.
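Creating that secondary data file on another logical drive is a single ALTER DATABASE statement; the database name, logical name and path below are illustrative:

```sql
-- Add a secondary data file (.ndf) on a second RAID level 1 logical drive
ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_Data2,
    FILENAME = 'F:\Data\MyDb_Data2.ndf',
    SIZE = 100GB
);
```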

RAID Level 5

SQL Server 2008 Books Online says that level 5 is also known as striping with parity. "Data redundancy is provided by the parity information. The data and parity information are arranged on the disk array so that the two types of information are always on different disks" ("RAID Levels and SQL Server," SQL Server 2008 Books Online, MSDN).

A RAID level 5 configuration lets you have more than two physical drives in the RAID configuration. In fact, there is really no benefit until you have three or more drives. The data is written to all three drives at the same time. However, the data is read from any of the drives without involving the other physical drives in the array.

Read more: Understanding RAID for SQL Server Part 1, Part 2

Posted via email from jasper22's posterous

Creating a Web Service in Cloud

|
Let's create our first web service in the cloud. The service will return the factorial of a number. Just like our Hello World ASP.NET application, we will create a new Cloud Service project but this time with no Role. Once the project is created, create a new ASP.NET Web Service Application project and add it as a Web Role in the Cloud Service project. Just add the following methods to it.

[WebMethod]
public double GetFactorial(int x)
{
       double factorial = 0;
       if (x >= 0)
       {
               factorial = CalcFactorial(x);
       }

       return factorial;
}

private static double CalcFactorial(int x)
{
       // base case
       if (x <= 1)
       {
               return 1;
       }

       return x * CalcFactorial(x - 1);
}

Now, press F5 and you will get the following.

Read more: Sajid's TechnoTips

Posted via email from jasper22's posterous

Use the Cassia .NET Library to Detect Users Connected to Windows Server

|
To programmatically detect the users connected to Windows Server, you have to use PInvoke to call the Windows Terminal Services API. Alternatively, you can use a .NET library called Cassia, which was created exclusively for this purpose. Cassia provides a wrapper around the Windows Terminal Services API.

The following code snippet gives you an idea of how to use Cassia (extracted from the Cassia Project Home):

ITerminalServicesManager manager = new TerminalServicesManager();
using (ITerminalServer server = manager.GetRemoteServer("your-server-name"))
{
   server.Open();
   foreach (ITerminalServicesSession session in server.GetSessions())
   {
       Console.WriteLine("Session ID: " + session.SessionId);
       Console.WriteLine("User: " + session.UserAccount);
       Console.WriteLine("State: " + session.ConnectionState);
       Console.WriteLine("Logon Time: " + session.LoginTime);
   }
}

Read more: DevX.com
Official site: Cassia Project Home

Posted via email from jasper22's posterous

C++0x Core Language Features In VC10: The Table

|

When we announced that the Visual Studio 2010 Release Candidate Is Now Available For Download, a reader, Igor, asked us to provide a table summarizing which C++0x Core Language features are implemented in VC10.  So, here it is!  It's derived from GCC's tables, with slight modifications.  For example, I added "Rvalue references v2".

 

C++0x Core Language Features                  | Proposal | VC9     | VC10
----------------------------------------------|----------|---------|--------
Rvalue references                             | N2118    | No      | v2
    Rvalue references v2                      | N2844    | No      | v2
    Rvalue references for *this               | N2439    | No      | No
    Initialization of class objects by rvalues| N1610    | Yes     | Yes
static_assert                                 | N1720    | No      | Yes
auto                                          | N1984    | No      | Yes
    Multi-declarator auto                     | N1737    | No      | Yes
    Removing old auto                         | N2546    | No      | Yes
    Trailing return types                     | N2541    | No      | Yes
Lambdas                                       | N2927    | No      | v1.0
decltype                                      | N2343    | No      | Yes
Right angle brackets                          | N1757    | Yes     | Yes
Extern templates                              | N1987    | Yes     | Yes
nullptr                                       | N2431    | No      | Yes
Strongly typed enums                          | N2347    | Partial | Partial
Forward declared enums                        | N2764    | Partial | Partial
Extended friend declarations                  | N1791    | Partial | Partial
Local and unnamed types as template arguments | N2657    | Yes     | Yes

C++0x Core Language Features: Concurrency     | Proposal | VC9     | VC10
----------------------------------------------|----------|---------|--------
exception_ptr                                 | N2179    | No      | Yes
Thread-local storage                          | N2659    | Partial | Partial
(more...)


Read more: Visual C++ Team Home

Posted via email from jasper22's posterous

Unable to create a login in SQL Server 2005: "The server principal '[\$]' already exists."

|
SYMPTOM
=======

·         When you try to create a login using the command "Create login [<Domain>\<Machine account>$] from windows" we might get the following error

o    Msg 15025, Level 16, State 2, Line 1
o    The server principal '<DOMAIN NAME>\<MACHINE NAME>$' already exists.
·         The following error might be returned when we try creating the SQL Server login in Management Studio

o    Create failed for Login '<DOMAIN NAME>\<MACHINE NAME>$'.  (Microsoft.SqlServer.Smo)
o    The server principal ‘<DOMAIN NAME>\<MACHINE NAME>$’ already exists. (Microsoft SQL Server, Error: 15025)
·         The error message says that the server principal already exists. However, if you look for any such principal under Security in Management Studio, you will not find the login <DOMAIN NAME>\<MACHINE NAME>$
·         Further, when you run the following query in a new query window, you will not find the login <DOMAIN NAME>\<MACHINE NAME>$

o    Select LOGINNAME from sys.SYSLOGINS

CAUSE
=====

This problem occurs if there is already a login which is registered under the same SID as that of the Login which you are trying to add.

RESOLUTION
=========

·         To determine whether the SID already exists for a different login, please follow these steps:
·         In the new query window, run the following command:

SELECT SUSER_SID('<DOMAIN NAME>\<MACHINE NAME>$');
GO

·         Once the raw hex SID is retrieved, run the following query against that SID to fetch the server principal name, as shown in the example below:
Select * from sys.server_principals where SID=<SID found using the previous command>

o    This should give you the server principal that is already using the above SID.

·         Technically, it is not possible to have more than one login with the same SID unless the logins were created manually.

Read more: Microsoft SQL Server Tips & Tricks

Posted via email from jasper22's posterous

Operator Overload - Part 2 (explicit, implicit)

|
Following the explanation of how to override the regular operators, here we will see how to implement casting operators. What does that mean? Suppose we have the following class:

class Dolar
{
   public int Value { get; set; }

   public Dolar(int value)
   {
      Value = value;
   }
}


Every time we want to create an instance, we would have to write:

Dolar d1 = new Dolar(50);

In fact, there is a way that lets us write this instead:

Dolar d1 = 50;

and that is by implementing an implicit operator. We add the following code to the class:

public static implicit operator Dolar(int value)
{
   return new Dolar(value);
}

Now, assigning a number to a Dolar instance will call this function.

We can also define an explicit operator for cases where we are worried about losing data:


long l = 50;

Dolar d2 = (Dolar)l;

In this case the variable may hold a number larger than an int, and we want to make sure the user knows what he is doing, so we write

Posted via email from jasper22's posterous

Cassandra Jump Start For The Windows Developer

|
Recently I have been exploring the NoSQL options for .NET and specifically a database called Cassandra.  In case you haven’t heard of Cassandra before, it is a decentralized, fault-tolerant, elastic database designed by Facebook for high availability.  As Wikipedia describes it:

   Cassandra is an open source distributed database management system. It is an Apache Software Foundation top-level project, as of February 17, 2010, designed to handle very large amounts of data spread out across many commodity servers while providing a highly available service with no single point of failure. It is a NoSQL solution that was initially developed by Facebook and powers their Inbox Search feature. Jeff Hammerbacher, who led the Facebook Data team at the time, has described Cassandra as a BigTable data model running on an Amazon Dynamo-like infrastructure.

I bet you have used data served by Cassandra without even realizing it. Here are some prominent users of Cassandra:

   * Facebook
   * Digg
   * Twitter
   * Reddit

Sounds interesting, or at least worth a look, right?  Well, I thought so. However, during my journey of getting the database set up I came to realize there is almost no documentation on installation for Linux, and even less for Windows.  So I am going to provide you with a jump start to installing Cassandra on your machine.  I am doing this so you don’t have to spend days jumping around the web, going down false paths, and pulling your hair out like I did, and can get straight to what you really care about … development.
First Things First

The first thing you need to understand about Cassandra is that it is developed in Java, so you can run it on any machine that supports Java 6 or better. Before you go any further, make sure your Java JRE is updated to the latest version.

The next thing you need is a copy of Cassandra, which can be found here.  My setup is going to be based on the latest stable release.
Running From Windows

As I said before, you can run Cassandra on any operating system that Java has a runtime for. So the first, and probably most obvious, option for a Windows developer is running Cassandra on Windows.

Read more: Nick Berardi's Coder Journal

Posted via email from jasper22's posterous

The day I understood TDD

|
I’ve been practicing and advocating TDD (Test Driven Development) since before I started working at Typemock, but I can point to a specific moment in time when I actually “got it”.

At that time I had a great mentor, and I was sure I had the whole “Red-Green-Refactor” routine down. In fact, I knew it so well that I allowed myself to “speed up development” by writing the code before the actual test.

One day while happily coding with a fellow developer we came across an interesting problem: we needed to create a value generator – a class that will return a unique value each time a method (GetNextValue) is called.

Of course, being two bright and talented developers, we started by debating how this class should be implemented so that it would support every conceivable type. Needless to say, after a few minutes we were still “designing”, and every design we came up with was flawed: each had a bunch of corner cases that forced us to search for yet another better-stronger-faster design.

Luckily for us, someone in the same room saw our plight and decided to put a stop to it. What he did was remind us how TDD should be done: one test at a time.

“Write a test that checks two unique integers” – he said.

“But it won’t work for strings or even doubles” – we said.

“Do it anyway” - And we did:

[TestMethod]

public void ValueGenerator_GenerateValuesForInt_TwoDifferentValuesReturned()
{
   var val1 = ValueGenerator.GetNextValue(typeof(int));
   var val2 = ValueGenerator.GetNextValue(typeof(int));

   Assert.AreNotEqual(val1, val2);
}

Read more: Helper Code

Posted via email from jasper22's posterous

Microsoft release Feature Builder Power Tool

|
Feature Builder is a Power Tool for Visual Studio 2010 (preview) which helps you easily create rich Visual Studio extensions. These extensions include tools (Visual Studio automation), code (your sample code or binaries you wish to share with others) and a map (a set of steps your users will want to follow to get the best experience with your extension). You can use this power tool to quickly package up sample code with custom menus, or take the time to create complete automated guidance experiences targeted toward a specific technology. You can share your extension with users by distributing a .vsix file, or posting to the Visual Studio Gallery.

You can create two different kinds of extensions using Feature Builder. A standard Feature Extension can contain tools, code, and a simple map - it will run on the Visual Studio Premium and Visual Studio Professional editions (in the final version of this tool). A more advanced extension, called an Ultimate Feature Extension, can contain everything a feature extension can contain, as well as rich modeling and visualization tools that can take advantage of the modeling platform inside the Visual Studio 2010 Ultimate edition (required). These tools can be used to provide a logical view of your target solution, and to visualize your existing code. This is the preferred type of extension to use if you intend to provide architectural guidance or share specific refactoring or pattern knowledge.

This preview requires Windows 7 or Windows Server 2008 R2, Visual Studio 2010 Ultimate Edition, and the installation of the Visual Studio SDK (RC1 Version) to build Feature Extensions. The Feature Extensions you create have the same requirements except for the SDK. The RTM version of this tool will require Visual Studio 2010 Ultimate Edition to create Feature Extensions, but will allow you to create Feature Extensions which do not require the Ultimate Edition to run.

Read more: Visual Studio, VSIP Partners and more ......

Posted via email from jasper22's posterous

Redirecting functions in shared ELF libraries

|
TABLE OF CONTENTS
1. The problem
1.1 What does redirecting mean?
1.2 Why redirecting?
2. Brief ELF explanation
2.1 Which parts does ELF file consist of?
2.2 How do shared ELF libraries link?
2.3 Some useful conclusions
3. The solution
3.1 What is the algorithm of redirection?
3.2 How to get the address, which a library has been loaded to?
3.3 How to write and restore a new function address?
4. Instead of conclusion
5. Useful links

1. The problem

We all use Dynamic Link Libraries (DLLs), and they have excellent properties. First, such a library is loaded into physical address space only once for all processes. Second, you can expand the functionality of a program by loading an additional library that provides the new functionality, without restarting the program. The problem of updating is also solved: you can define a standard interface for the DLL and then influence the functionality and quality of the base program by changing the version of the library. Such methods of code reuse came to be called a “plug-in architecture”. But let’s move on.

Of course, not every dynamic link library relies only on itself in its implementation, namely, on the computational power of the processor and the memory. Libraries use other libraries, or at least the standard libraries. For example, programs in C/C++ use the standard C/C++ libraries, which are themselves organized in dynamic link form (libc.so and libstdc++.so) and stored in files of a specific format. My research was done on Linux, where the main format for dynamic link libraries is ELF (Executable and Linkable Format).

Recently I faced the necessity of intercepting function calls from one library into another, in order to process them specially. This is called call redirection.

Read more: Codeproject

Posted via email from jasper22's posterous

Connect Microsoft Excel To SQL Azure Database

|
A number of people have found my post about getting started with SQL Azure pretty useful. But it's all worthless if it doesn't add up to user value. Databases are like potential energy in physics: a promise that something could be put in motion. Users actually making decisions based on analysis is the kinetic energy: the fulfillment of that promise.

So what does this have to do with Office 2010? In Excel 2010 we made it truly easy to connect to a SQL Azure database and pull down data. Here I explain how to do it.

By following these steps you will be able to:

1. Create an Excel data connection to a SQL Azure database

2. Select the data to import into Excel

3. Perform the data import

All mistakes herein, if any, are my own. Please alert me to potential errors.
Import SQL Azure Data Into Excel

You need to be running Excel 2010 (post-Beta 2 builds) for these steps to work properly.

Read more: John R. Durant's WebLog

Posted via email from jasper22's posterous

Lockless memory allocator

| Tuesday, April 6, 2010
Optimize your Software

     It is a simple step to speed up your software. No source code changes are required. The Lockless memory allocator seamlessly replaces your system allocator, and you reap the performance benefits.

Spend Less on Hardware

     You can save money on hardware by spending less on expensive processors and memory. The Lockless memory allocator can speed up your software, offering a less expensive way to meet your performance targets.

Fully Utilize Modern Multicore Machines.

     The Lockless memory allocator is designed for 64bit multicore machines whilst still supporting 32bit applications. Allocations are 16 byte aligned to optimize SSE2 usage. 64 byte allocations are cache-line aligned to prevent speed loss from cache-line bouncing in multithreaded applications.

Multithread Optimized

     The Lockless memory allocator uses lock-free techniques to minimize latency and memory contention. This provides optimal scalability as the number of threads in your application increases. Per-thread data is used to reduce bus communication overhead. This results in thread-local allocations and frees not requiring any synchronization overhead in most cases.

Read more: Lockless

Posted via email from jasper22's posterous

Canadian researchers reveal online spy ring based in China

|
Canadian researchers have uncovered a vast “Shadow Network” of online espionage based in China that used seemingly harmless means such as e-mail and Twitter to extract highly sensitive data from computers around the world.

Stolen documents recovered in a year-long investigation show the hackers have breached the servers of dozens of countries and organizations, taking everything from top-secret files on missile systems in India to confidential visa applications, including those of Canadians travelling abroad.

The findings, which are part of a report that will be made public today in Toronto, will expose one of the biggest online spy rings ever cracked. Written by researchers at the University of Toronto’s Munk Centre for International Studies, the Ottawa-based security firm SecDev Group and a U.S. cyber sleuthing organization known as the Shadowserver Foundation, the report is expected to be controversial.

The researchers have found a global network of “botnets,” computers controlled remotely and made to report to servers in China. Along with those servers, the investigators located where the hackers stashed their stolen files, allowing a glimpse into what the spy ring is looking for.

“Essentially we went behind the backs of the attackers and picked their pockets,” said Ron Deibert, director of the Citizen Lab at the Munk School of Global Affairs, which investigated the spy ring.

Read more: The globe and mail

Posted via email from jasper22's posterous

Common Setup Issues and Their Resolutions, When Publishing WCF Service to Local IIS

|
If you create a WCF Service Website hosted in local IIS, or publish your WCF Service Application project to local IIS on your machine, you may encounter IIS setup issues that prevent your service from being hosted. Here is an example: HTTP Error 500.21 – Internal Server Error. Handler “svc-Integrated” has a bad module “ManagedPipelineHandler” in its module list.


Read more: WCF Tools team's blog

Posted via email from jasper22's posterous

Speeding Up NHibernate Startup Time

|
One technique I use, which I posted on the NHUsers mailing list, consists of serializing a previously-configured Configuration to the filesystem and deserializing it on all subsequent starts of the application:

Configuration cfg = null;
IFormatter serializer = new BinaryFormatter();

//first time
cfg = new Configuration().Configure();

using (Stream stream = File.OpenWrite("Configuration.serialized"))
{
    serializer.Serialize(stream, cfg);
}

//other times
using (Stream stream = File.OpenRead("Configuration.serialized"))
{
    cfg = serializer.Deserialize(stream) as Configuration;
}

Check it out for yourselves.

Read more: Development With A Dot

Posted via email from jasper22's posterous

!address -summary explained

|
In order to debug any high memory issue we rely heavily on the output of the !address -summary command
(you'll find !address -summary as part of the Ext.dll extension).

We can interpret quite a few things from it which can help us in further debugging. Here’s how

For example (for a 32-bit app):

0:027> !address -summary
-------------------- Usage SUMMARY --------------------------
   TotSize (      KB)   Pct(Tots) Pct(Busy)   Usage
  29b32000 (  683208) : 32.58%    41.98%    : RegionUsageIsVAD
  1cab1000 (  469700) : 22.40%    00.00%    : RegionUsageFree
   d3b4000 (  216784) : 10.34%    13.32%    : RegionUsageImage
   3bfc000 (   61424) : 02.93%    03.77%    : RegionUsageStack
     f0000 (     960) : 00.05%    00.06%    : RegionUsageTeb
  2896a000 (  665000) : 31.71%    40.86%    : RegionUsageHeap
         0 (       0) : 00.00%    00.00%    : RegionUsagePageHeap
      1000 (       4) : 00.00%    00.00%    : RegionUsagePeb
      1000 (       4) : 00.00%    00.00%    : RegionUsageProcessParametrs
      1000 (       4) : 00.00%    00.00%    : RegionUsageEnvironmentBlock
      Tot: 7fff0000 (2097088 KB) Busy: 6353f000 (1627388 KB)

-------------------- Type SUMMARY --------------------------
   TotSize (      KB)   Pct(Tots)  Usage
  1cab1000 (  469700) : 22.40%   : <free>
  119a8000 (  288416) : 13.75%   : MEM_IMAGE
   10b5000 (   17108) : 00.82%   : MEM_MAPPED
  50ae2000 ( 1321864) : 63.03%   : MEM_PRIVATE

------------------- State SUMMARY --------------------------
   TotSize (      KB)   Pct(Tots)  Usage
  3152f000 (  808124) : 38.54%   : MEM_COMMIT
  1cab1000 (  469700) : 22.40%   : MEM_FREE
  32010000 (  819264) : 39.07%   : MEM_RESERVE
Largest free region: Base 6b0b2000 - Size 0203e000 (33016 KB) *

Read more: WebTopics

Posted via email from jasper22's posterous