This is a mirror of the official site: http://jasper-net.blogspot.com/

Kinect + Javascript hack shows potential for web interfaces

| Friday, November 26, 2010
Developed by the clever folks at the MIT Media Lab's Fluid Interfaces Group, DepthJS is a web browser extension that allows any web page to interact with Microsoft Kinect via Javascript. Released last night, DepthJS is open source and available to download at GitHub; this video shows the potential it holds for developers.

DepthJS has a plethora of possibilities for online interfaces, including full-screen presentations or even Flash games. With no need for any pointer device to navigate an interface, it may even evolve into a multitouch remote control plugin for our home theater PCs.

Read more: webdistortion

Posted via email from .NET Info

Privilege escalation 0-day in almost all Windows versions

|
Today proof-of-concept code (source code, with a compiled binary) of a 0-day privilege escalation vulnerability in almost all Windows operating system versions (Windows XP, Vista, 7, Server 2008 ...) was posted on a popular programming web site.
The vulnerability is a buffer overflow in the kernel (win32k.sys) and, due to its nature, allows an attacker to bypass User Account Control (UAC) on Windows Vista and 7.
What's interesting is that the vulnerability exists in a function that queries the registry, so in order to exploit it the attacker has to be able to create a special (malicious) registry key. The author of the PoC managed to find such a key that can be created by a normal user on Windows Vista and 7 (that is, a user without any administrative privileges).
The PoC code creates such a registry key and calls another library which tries to read the key; during that process it ends up calling the vulnerable code in win32k.sys.
Since this is a critical area of the operating system (the kernel allows no mistakes), the published PoC only works on certain kernel versions, while on others it can cause a nice BSOD. That being said, the code can probably be modified relatively easily to work on other kernel versions.
We are not aware of any exploitation of this vulnerability at the moment and, since it can be exploited only locally, it obviously depends on another attack vector; but knowing how easily users click on unknown files, this is definitely something we will keep our eye on, and we will post updates if we see exploitation.
The PoC has in the meantime been removed from the original site, but now that it has been published I'm sure that everyone who wants to get it can do so easily.

Read more: Sans

Posted via email from .NET Info

Grab Up to 768 MB Free Dropbox Space Through Social Media Connections

|
Need a little more space in your free Dropbox file syncing account? Already run through the "Getting Started," .EDU referrals, and all your friends? If you hook Dropbox up to Twitter and Facebook, you can snag up to 768 MB more storage.

You don't even have to actually post anything about Dropbox to your Twitter or Facebook accounts to activate the majority of the 128 MB upgrades listed at Dropbox's quietly available "Free" page. Most of the tweaks involve simply registering Dropbox through Facebook and Twitter for future posts you might want to make about your shared files, following Dropbox on Twitter, and telling the Dropbox team how their product is useful. If you do want to shout out about Dropbox on your accounts, you get all 768 MB free.

Read more: Lifehacker

Posted via email from .NET Info

Improving Application Startup Time

|
Visual Studio is a wonderful development environment, whose IntelliSense®, integrated debugging, online help, and code snippets help boost your performance as a developer. But just because you're writing code fast doesn't mean you're writing fast code.
Over the past few months, the CLR performance team met with several customers to investigate performance issues in some of their applications. One recurring problem was client application startup time. So in this column, I'll present lessons we learned analyzing these applications.

Planning for Performance
Your success in reaching your performance goals depends on the process you will be using. A good process can help you achieve the level of performance you need. These four simple rules will help:
Think in Terms of Scenarios Scenarios can help you focus on what is really important. For instance, if you are designing a component that will be used at startup, it is likely that the component will be called only once (when the app starts). From a performance point of view you want to minimize the use of external resources, such as network or disk, because they are likely to be a bottleneck. If you don't take into account that the component will be used at startup, you could spend time optimizing code paths without seeing any significant improvement. The reason is that most of the startup time will be spent loading DLLs or reading configuration files.
For startup scenarios you should analyze how many modules are loaded and how your app is going to access configuration data (files on disk, the registry, and so on). Refactoring your code by removing some dependencies or by delay-loading modules (which I'll cover later) could result in big performance improvements.
For code that is called repeatedly (such as a hash or parse function), speed is key. To optimize, you need to focus on the algorithms and minimize the cost per instruction. Data locality is also important. For example, if the algorithm touches large regions of memory, it is likely that L2 cache misses will prevent your algorithm from running at the fastest speed. Two metrics that you can use in this scenario are CPU cost per iteration and allocations per iteration. Ideally you want them both to be low. These examples should illustrate that performance is very context-dependent, and playing out scenarios can help you to tease out important variables.
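As an illustration, here is a minimal sketch (mine, not the column's) of capturing those two metrics with Stopwatch and GC.GetTotalMemory. DoWork is a hypothetical stand-in for the hash or parse function being measured, and the allocation figure is approximate, since a collection during the loop would skew it:

using System;
using System.Diagnostics;

class IterationMetrics
{
    static void Main()
    {
        const int iterations = 100000;

        long bytesBefore = GC.GetTotalMemory(true);   // force a full collection first
        Stopwatch watch = Stopwatch.StartNew();

        for (int i = 0; i < iterations; i++)
        {
            DoWork();                                 // hypothetical method under test
        }

        watch.Stop();
        long bytesAfter = GC.GetTotalMemory(false);

        Console.WriteLine("Cost per iteration: {0:F6} ms",
            (double)watch.ElapsedMilliseconds / iterations);
        Console.WriteLine("Allocations per iteration: ~{0} bytes",
            (bytesAfter - bytesBefore) / iterations);
    }

    static void DoWork()
    {
        // Placeholder: any repeatedly called code path, e.g. a hash or parse function.
        "sample".ToUpper();
    }
}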
Next time, before you start writing code, spend some time thinking about the scenarios in which the code will run, and identify which are the metrics and what are the factors that will impact performance. If you apply these simple recommendations, your code will perform well by design.
Set Goals It's a trivial concept, but sometimes people forget that, in order to decide if an application is fast or slow, you need to have goals to measure against. All performance goals you define (for instance, that the main window of your application should be fully painted within three seconds of application launch) should be based on what you think is the customer expectation. Sometimes it is not easy to think in terms of hard numbers early in the product development cycle (when you are supposed to set your performance goals), but it is better to set a goal and revise it later than not to have a goal at all.
Make Performance Tuning Iterative The process should consist of measuring, investigating, refining/correcting. From the beginning to the end of the product cycle, you need to measure your app's performance in a reliable, stable environment. You should avoid variability that's due to external factors (for instance, you should disable anti-virus or any automatic update such as SMS, so they don't interfere with performance test execution). Once you have measured your application's performance, you need to identify the changes that will result in the biggest improvements. Then change the code and start the cycle again.
Know Your Platform Well Before you start writing code, you should know the cost of each feature you will use. You need to know, for instance, that reflection is generally expensive so you'll need to be careful using it. (This doesn't mean that reflection should be avoided, just that it has specific performance requirements.)
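For example, here is a minimal sketch (again mine, not the column's) comparing a direct call with the same call made through reflection; the absolute numbers will vary by machine, but the gap illustrates why reflection deserves care on hot paths:

using System;
using System.Diagnostics;
using System.Reflection;

class ReflectionCost
{
    static void Main()
    {
        const int iterations = 1000000;
        string text = "hello";
        MethodInfo toUpper = typeof(string).GetMethod("ToUpper", Type.EmptyTypes);

        Stopwatch direct = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            text.ToUpper();                  // ordinary, statically bound call
        }
        direct.Stop();

        Stopwatch reflected = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            toUpper.Invoke(text, null);      // late-bound call through reflection
        }
        reflected.Stop();

        Console.WriteLine("Direct:     {0} ms", direct.ElapsedMilliseconds);
        Console.WriteLine("Reflection: {0} ms", reflected.ElapsedMilliseconds);
    }
}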
Now let's move past the planning stage and tackle some coding problems. Startup time can be a problem for client applications with complex UI and connections to multiple data sources. End users expect the main window to appear as soon as they double-click on the app's icon, so startup time has a big impact on how customers view your application. Knowing the two types of startup scenarios you will be dealing with, cold and warm startup, will help you focus your efforts.

Read more: MSDN Magazine

Posted via email from .NET Info

Future of Personal Computing: Post-iPad Concepts

|

Note, we did not say that iPad is the future of personal computing - it is too early to say, for example, how well it performs with various professional applications - but iPad is certainly a step toward more popular use of computing (even your grandma might get enticed to touch its screen and shiny silver back). However, the question remains: what's beyond iPad, and is there anything out there beyond the simple touchscreen/keyboard configuration?

Read more: Dark Roasted Blend

Posted via email from .NET Info

Four Facebook Alternative Alternatives

|
Now that Diaspora, which is building an open-source distributed social network, has launched in private alpha, I figured it’d be a good idea to remind you that there are several alternatives to that particular Facebook alternative, some of which have been around longer and in more advanced stages of development.

Note that there may be more initiatives that I haven’t heard of or simply didn’t or forgot to mention, so this is by no means an exhaustive list. Also, all of these deserve a full review, so I refrained from making quick-and-dirty comparisons between all of them.

OneSocialWeb
The Appleseed Project
Elgg


Read more: Techcrunch

Posted via email from .NET Info

The Seven Principles You Need to Know to Build a Great Social Product

|
Social products are an interesting bird. For even the most experienced product designer, social products prove an elusive lover. While there are many obvious truths in social products, there are also a lot of ways to design them poorly. Especially when you are deep in the moment, making pixel-level decisions and trying to remember what's important, things may not be so clear.

The only magic I’ve found in designing compelling social products that have the best shot at breaking through the noise and capturing people’s time and money is in being extremely clear on how your social product meets a few key design principles.

1. Design your product to matter in a world of infinite supply. In 2010, people are inundated with an overwhelming number of people, applications, requests, alerts, relationships, and demands on their time. You love your product. The benefits of it are totally obvious to you. However, if you and every member of your team can't crisply articulate what emotional benefit someone will get from spending 15 minutes on your social product that they can't get on Facebook, LinkedIn, or Twitter, you've got work to do.

This isn’t touchy feely stuff. Neither I nor the prospective people who may use your social product care about your features, your game mechanics, or how amazing your application will be when there are millions of people on it. I’m selfish with my time and you’ve got seconds to hook me in with something new. And I’m not alone.

To successfully use the fleeting moments you have, you need to orchestrate everything under your control to work together seamlessly under a single brand with a single reason for existence. Make it emotional. If your team can’t tie back every decision they are making to the emotion you want people to feel when they are using your social product, then your reason for existence isn’t strong enough to serve its role, which is to guide your team and the product decisions you are making.

2. Be the best in the world at one thing. To put an even finer point on the focus required of any social upstart, you need to be best in the world at one thing. For Lululemon, they’ve built a $450 million annual revenue business by focusing on the black yoga pant. For Twitter, it’s the 140 character message. For Facebook, it is connecting you to the people you already know. Everything these companies do ties back to a specific thing they are going to be best in the world at doing.

It’s not always obvious upfront what should be your best in the world focus and enshrining the wrong thing can be a problem. However, it is much worse to build a social product without guiding principles. When you are focused on the one thing your social product is going to do better than everyone else, all you need to launch is your one thing and no more.

Read more: Techcrunch

Posted via email from .NET Info

How to Triple Boot Your Hackintosh with Windows and Linux

|
We've walked through how to triple-boot your Mac with Windows and Linux, but if you're using a shiny new Hackintosh, the process is a bit more complicated. Here's how to get all three operating systems up and running on your new PC.

While the Chameleon bootloader (the default boot screen for your Hackintosh) is a great friend to Hackintosh builders, Windows and Linux try to muck everything up by attempting to take over your computer with their own bootloaders, resetting the active partition, and throwing your partition tables out of sync. There are two ways to triple boot your Hackintosh. The first is very straightforward and allows you a lot of flexibility, while the second is much more complicated but offers other advantages depending on how many hard drives you have. This guide assumes you've already installed Mac OS X as described in our most recent Hackintosh guide, and, if you're using the second method, that you still have the iBoot disc handy. You'll also obviously need the Windows 7 and Linux installation discs as well. If you've got everything ready, follow the instructions below to get Windows 7 and Linux living harmoniously on the same PC.

Read more: Lifehacker

Posted via email from .NET Info

Kinect hacks let you control a web browser and Windows 7 using only The Force (updated)

|
Hacking the Xbox 360 Kinect is all about baby steps on the way to what could ultimately amount to some pretty useful homebrew. Here's a good example cooked up by some kids at the MIT Media Lab Fluid Interfaces Group attempting to redefine the human-machine interactive experience. DepthJS is a system that makes Javascript talk to Microsoft's Kinect in order to navigate web pages, among other things. Remember, it's not that making wild, arm-waving gestures is the best way to navigate a web site, it's just a demonstration that you can. Let's hope that the hacking community picks up the work and evolves it into a multitouch remote control plugin for our home theater PCs. Boxee, maybe you can lend a hand?

Update: If you're willing to step outside of the developer-friendly borders of open-source software then you'll want to check out Evoluce's gesture solution based on the company's Multitouch Input Management (MIM) driver for Kinect. The most impressive part is its support for simultaneous multitouch and multiuser control of applications (including those using Flash and Java) running on a Windows 7 PC. Evoluce promises to release software "soon" to bridge Kinect and Windows 7. Until then be sure to check both of the impressive videos after the break.

Read more: Engadget

Posted via email from .NET Info

Infinitec Infinite USB Memory Drive review

|
The idea behind Infinitec's Infinite USB Memory Drive is actually quite straightforward, but we've found that when we tell friends and acquaintances about the unit, it often boggles their minds. So, we'll try to keep it real simple: This red plastic stick is an 802.11b/g/n WiFi radio disguised as a USB flash drive. And when we say "disguised", we're not just talking about the stick's size, but its functionality as well -- it lets you wirelessly transfer files directly from your WiFi-equipped laptop's hard drive to just about anything with a USB port. Stick it into an Xbox 360 or set-top box, for instance, and it pretends to be your average thumbdrive, but with access to theoretically anything you choose. Sounds like a fantastic idea, but does it really work? Find out after the break in our full review.

Read more: Engadget

Posted via email from .NET Info

OpenVizsla hopes to bring USB sniffing to the everyhacker

|
Remember that Kinect hack how-to? A key figure in the story was the use of a USB analyzer that was plugged in-between the Kinect and the Xbox to pick up on USB traffic and pull out a log that could be used for hacking. Well, there's a new 'OpenVizsla' project on KickStarter that's aiming to build open source hardware that can put this typically expensive tech ($1,400+) in the hands of more hackers, who use the hardware for anything from jailbreaking locked-down devices to building Linux drivers for hardware. The project was actually started by hackers "bushing" and "pytey," who have worked on hacking the Wii and the iPhone, respectively. They've already raised a good chunk of change for the project in pledges, with backing from folks like Stephen Fry and DVD Jon helping out the momentum, and hopefully we'll be seeing the next generation of hacks enabled by OpenVizsla and its brood before too long.

Read more: Engadget

Read more: OpenVizsla

Posted via email from .NET Info

Streaming over HTTP with WCF

|
Recently I had a customer email me looking for information on how to send and receive large files with a WCF HTTP service. WCF supports streaming specifically for these types of scenarios. Basically, with streaming support you can create a service operation which receives a stream as its incoming parameter and returns a stream as its return value (a few other types, like Message and anything implementing IXmlSerializable, are also supported). MSDN describes how streaming in WCF works here, and how to implement it here. There are a few gotchas, however, if you are dealing with sending large content with a service that is hosted in ASP.NET. If you scour the web you can find the answers, such as in the comments here.

In this post I’ll bring everything together and walk you through building a service exposed over HTTP and which uses WCF streaming. I’ll also touch on supporting file uploads with ASP.NET MVC, something I am sure many are familiar with. The sample which we will discuss requires .NET 4.0 and ASP.NET MVC 3 RC. If you don’t have MVC you can skip right to the section “Enabling streaming in WCF”. Also it’s very easy to adopt the code to work for web forms.

The scenario
For the sample we’re going to use a document store. To keep things simple and stay focused on the streaming, the store allows you to do two things, post documents and retrieve them through over HTTP. Exposing over HTTP means I can use multiple clients/devices to talk to the repo.  Here are more detailed requirements.

1. A user can POST documents to the repository with the uri indicating the location where the document will be stored using the uri format “http://localhost:8000/documents/{file}”. File in this case can include folder information, for example the following is a valid uri, “ ”.

Below is what the full request looks like in Fiddler. Note: You’ll notice that the uri (and several below) has a “.” in it after localhost, this is a trick to get Fiddler to pick up the request as I am running in Cassini (Visual Studio’s web server).
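To make that concrete, here is a minimal sketch of a streamed contract matching the uri format above. This is my illustration rather than the post's code: the service name, the C:\DocumentStore path, and the {*file} wildcard template are assumptions.

using System;
using System.IO;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public class DocumentService
{
    // POST http://localhost:8000/documents/some/folder/file.txt
    // The single Stream parameter receives the raw request body.
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "documents/{*file}")]
    public void Upload(string file, Stream document)
    {
        string path = Path.Combine(@"C:\DocumentStore", file);   // hypothetical store location
        Directory.CreateDirectory(Path.GetDirectoryName(path));
        using (FileStream target = File.Create(path))
        {
            document.CopyTo(target);   // copies in chunks; never buffers the whole file
        }
    }

    // GET streams the stored document back to the client.
    [OperationContract]
    [WebGet(UriTemplate = "documents/{*file}")]
    public Stream Download(string file)
    {
        return File.OpenRead(Path.Combine(@"C:\DocumentStore", file));
    }
}

For streaming to actually kick in, the endpoint's binding (webHttpBinding here) must set transferMode to Streamed, StreamedRequest, or StreamedResponse and raise maxReceivedMessageSize; when hosting in ASP.NET, the separate httpRuntime maxRequestLength limit is among the gotchas the post alludes to.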

Read more: My Technobabble

Posted via email from .NET Info

Building a Go Compiler in Go

| Thursday, November 25, 2010
One of the biggest design decisions we made with erGo™ was the decision to write the compiler itself in the Go language. It’s had a huge impact on every level of our development process. In the end, we believe that we absolutely made the right decision. On the other hand, we completely understand why the original Go team didn’t do it that way. It would’ve been close to impossible. The only reason we were able to do it is because their compiler tools were already so stable and solid. Even so, it was a difficult process. We were facing the dual challenge of developing a major software product in a language we had no experience in and doing it for a platform for which a compiler didn’t exist. We began this project literally within weeks of Go’s launch (Go was announced on November 10th; we were coding by Thanksgiving). At the time, the MinGW port hadn’t even been started, much less become stable enough for us to use.

So why did we decide to do it that way? A few things pushed us over the edge. First of all, we knew it would be a really important thing for the Go language. It's a bold statement: this language is strong enough to write a full-fledged compiler in. Since no compiler existed for Windows, we knew that the only way a compiler for Go could get on Windows is if it compiled itself, so it would make a bold statement about erGo™ as well. Second, it would demonstrate our own faith in the language. If we believe the language is that good (and we do), it would be much easier to convince users of that. Finally, it would give us a very large code base of a "real" project to use as a test case for the compiler. erGo™ itself exercises a pretty broad portion of the language, and the standard packages we rely on (which erGo™ had to compile before it could compile itself) only add to that. It gave us a very useful validation tool.

How did we do it? With a very unusual (and at times irritating) development setup. We run on Windows machines with Ubuntu running inside a VM. We use shared directories (via networking) so that we can access files from both sides. We run a two-step makefile process. First, we use 6g to compile a version of erGo™ that runs as a Linux binary. That compiler is then used to compile a Windows binary on the Windows side of the box, and we build our test suites using that.

Read more: The erGo™ Blog

Posted via email from .NET Info

Lucandra.NET

|
Project Description
Lucandra.NET is a Lucene + Cassandra implementation written in C# which is based on the Lucandra (Java) project by Jake Luciani. Apache's Lucene is a high-performance full-text search engine, and Apache's Cassandra is a promising NoSQL database, originally developed by Facebook.

Note: This page is currently under construction and I'm working on the documentation as time permits. Not all source code is fully documented, but it will be.
About Lucandra.NET
Lucandra.NET originally started off as a direct port of Jake Luciani's "Lucandra" project (https://github.com/tjake/Lucandra), driven by curiosity and the desire to learn a little bit more about the inner workings of Lucene and Cassandra. After completing the port from Java, I realized that this truly is a valid and promising replacement for the traditional file-based segment stores used by Lucene. We decided to use it in one of our production products, so since then I've gone through and rewritten/refactored a lot of the code and tried to squeeze the most out of it that I can, in hopes that some other users of Lucene.NET will find it useful. Note that the code no longer directly mirrors the original Lucandra code, as I've kind of gone my own way with things; so there's a little learning curve if you're hoping to walk into this code with familiarity with Lucandra's code (though some is still similar).

Some things to keep in mind:

  • Lucandra.NET is currently written for Cassandra 0.7-beta3 and Thrift-0.5.0 and will not work (without modification) with versions previous to Cassandra 0.7, and is untested with earlier 0.7 beta releases.
  • Lucandra.NET is built against Lucene.NET 2.9.2.
  • Lucandra.NET is not compatible with Lucandra (Java). This is due to the fact that Lucandra.NET uses a different data model within Cassandra and also that Lucandra uses Java object serialization, which is for obvious reasons not compatible with .NET's serialization.
  • In the current version, it is expected that you will use a ByteOrderedPartitioner as your partitioner in Cassandra. This facilitates wildcard & range queries, sorting, etc.
  • Lucandra (Java) performs hashing on keys stored in Cassandra; Lucandra.NET does not currently do this. This is primarily because I do not fully understand the inner workings of the partitioners in Cassandra, and until I have some time to really play with them and see how the partitioning works across a cluster, this won't be implemented.
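    For readers who haven't used Lucene.NET itself, a minimal sketch of ordinary Lucene.NET 2.9 indexing follows (my own example, against an in-memory RAMDirectory); Lucandra.NET's aim is to back exactly this kind of index with Cassandra instead of the file-based segment store, through its own entry points described in the project documentation:

    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Store;

    class LuceneSketch
    {
        static void Main()
        {
            // Plain Lucene.NET 2.9: index a single document into an in-memory directory.
            var directory = new RAMDirectory();
            var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
            var writer = new IndexWriter(directory, analyzer,
                true, IndexWriter.MaxFieldLength.UNLIMITED);

            var doc = new Document();
            doc.Add(new Field("id", "doc-1", Field.Store.YES, Field.Index.NOT_ANALYZED));
            doc.Add(new Field("body", "full-text search backed by Cassandra",
                Field.Store.YES, Field.Index.ANALYZED));
            writer.AddDocument(doc);
            writer.Close();   // Lucene.NET 2.9.x uses Close() to commit and release the index
        }
    }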

    Read more: Codeplex

    Posted via email from .NET Info

    The Small Things: Converting an Existing Web Application to an MVC Web Application

    |
    I have an existing web application that contains a couple of WCF services, but today I also wanted to add some pages, and decided to create them with ASP.NET MVC2. These are the steps I took to get the MVC pages to run just fine in my existing application.

    1. First of all I added references to System.Web.Mvc and System.Web.Routing
    2. I created a dummy MVC project so that I could see what was missing in my existing project.
    3. Then I created the convention based folder structure. Except for the minimum required folders of Controllers and Views, I also added Scripts, Content and Model folders.
    4. I copied all the JavaScript files from the dummy project to my Scripts folder.
    5. I added a Global.asax file to my project and copied the content from Global.asax in the dummy project, but of course corrected the default route.
    6. I opened up the dummy project file in Notepad, and the same with my existing project file. In the project files I looked for the <ProjectTypeGuids> and compared them. For ASP.NET MVC2 the {F85E285D-A4E0-4152-9332-AB1D724D3325} GUID was missing so I changed from:
    <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>
    to:
    <ProjectTypeGuids>{F85E285D-A4E0-4152-9332-AB1D724D3325};{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

    Please note that the exact GUID will be different for ASP.NET MVC3 (I assume this anyway).
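    For reference, here is roughly what the Global.asax code-behind copied in step 5 contains for MVC2. This is a sketch based on the standard MVC2 template, not the author's actual file; the extra IgnoreRoute for .svc files is my own assumption so the existing WCF services keep working, and the pattern may need adjusting for services in subfolders.

    using System.Web.Mvc;
    using System.Web.Routing;

    public class MvcApplication : System.Web.HttpApplication
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // Assumption: keep MVC routing away from the existing WCF endpoints.
            routes.IgnoreRoute("{service}.svc/{*pathInfo}");

            routes.MapRoute(
                "Default",                                    // route name
                "{controller}/{action}/{id}",                 // URL with parameters
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
        }
    }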

    Read more: Eric's Blog

    Posted via email from .NET Info

    Efficiently Generating SHA256 Checksum For Files Using C#

    |
    I’m building a file synchronization with the cloud application and I want to store a checksum with the file so that I can verify later that the file is what I think it is. I’m not a crypto guy so I searched around the internet for a solution.   I found lots of examples and settled on this one that uses SHA256 for the job.  I also found some comments saying that it would be more efficient to wrap it in a BufferedStream rather than processing the entire file at once.

    Example Links: http://efreedom.com/Question/1-1345851/MD5-File-Processing ; http://stackoverflow.com/questions/1177607/what-is-the-fastest-way-to-create-a-checksum-for-large-files-in-c/1177744#1177744

    My intention for this post was to show how much more efficient it would be to use BufferedStream; however, my results don't show that. I'm guessing that somehow the buffering is already happening under the covers in a place I don't see.

    If anyone knows this space well, please feel free to comment and suggest a better method.  I’m publishing my source below for the test and my surprisingly similar results whether I used buffering or not.

    Looking forward to the responses.

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Security.Cryptography;

    class Program
    {
        static void Main(string[] args)
        {
            // Hypothetical test file path; point this at any large file.
            string file = @"C:\Temp\bigfile.bin";

            var stopwatch1 = Stopwatch.StartNew();
            string str1 = GetChecksum(file);
            stopwatch1.Stop();

            var stopwatch2 = Stopwatch.StartNew();
            string str2;
            using (FileStream stream = File.OpenRead(file))
            {
                str2 = GetChecksumBuffered(stream);
            }
            stopwatch2.Stop();

            Console.WriteLine(str1 + " " + stopwatch1.ElapsedMilliseconds);
            Console.WriteLine(str2 + " " + stopwatch2.ElapsedMilliseconds);

            Console.ReadLine();
        }

        // Hashes the file by handing the raw FileStream straight to SHA256.
        private static string GetChecksum(string file)
        {
            using (FileStream stream = File.OpenRead(file))
            {
                var sha = new SHA256Managed();
                byte[] checksum = sha.ComputeHash(stream);
                return BitConverter.ToString(checksum).Replace("-", String.Empty);
            }
        }

        // Same hash, but reads through a 32 KB BufferedStream first.
        private static string GetChecksumBuffered(Stream stream)
        {
            using (var bufferedStream = new BufferedStream(stream, 1024 * 32))
            {
                var sha = new SHA256Managed();
                byte[] checksum = sha.ComputeHash(bufferedStream);
                return BitConverter.ToString(checksum).Replace("-", String.Empty);
            }
        }
    }


    Read more: PeterKellner.net

    Posted via email from .NET Info

    Visual Studio Sharing Files Between Projects

    |
    VSAddLink.png

    Normally when an existing file is added to a project within Visual Studio, the file is copied into the project folder. There is an option to link to the file, rather than duplicate it, allowing files to be shared between multiple projects.


    Adding Existing Files to a Project
    When you are creating a project using Visual Studio you can create and add new files or include files that already exist. When you add an existing file using the default options, the file is copied from the original location into the target directory within the project folder. All further editing is of the new copy and is not reflected in the original file.
    Another option is to link to the existing file instead of duplicating it. Using this option you can share files between multiple projects, either in the same solution or separate solutions. As only one file exists, editing it is reflected in all projects that use it. This can be particularly useful if you have utility classes that are not complex enough to warrant compiling into an assembly, or if you have a configuration file that you wish to share amongst all projects in a solution.

    Read more: Black Wasp

    Posted via email from .NET Info

    Enabling email notifications in the SQL Agent for Alerting & Job monitoring

    |
    To enable email notifications in SQL Agent you will need to follow these steps:

    1. Configure Database Mail: create an email SMTP account and a corresponding Email Profile. To do so please follow the directions outlined in the blog post: "SQL Database Mail - Send Emails from SQL Server".

    2. In SSMS (SQL Server Management Studio) select the SQL Server Agent node and right-click to open the properties (see image below). If the properties menu item is disabled, it is because your SQL Agent service is not running. Please start the Agent before accessing the properties.

    Note: We are considering allowing access to the Agent properties when the instance is not running as well (in one of our future releases).

    Read more: SQL Server Agent Team Blog

    Posted via email from .NET Info

    10 Secret Reasons Why You Lose Clients

    |
    Have you ever lost a client and then wondered why?

    Of course, you can always ask your client what went wrong, but many will not tell you. They may be afraid of hurting your feelings or they may just want to avoid a potentially unpleasant confrontation with you.
    Since I use freelancers once in a while, I'm in the unique position of being a freelancer and also an occasional client of freelancers. So, today I put my client hat on for a change and listed all the things I could think of that would keep me from rehiring a freelancer.

    Things That You Did Wrong


    Sometimes, when you lose a client it IS because of something that you did. Here are five things that would cause me not to rehire a freelancer:
    1. You didn’t listen. Clients like it when freelancers pay attention to what they need. They expect that freelancers will follow their instructions and ask questions if they don’t understand or have a problem with what they are being asked to do.
    2. You were late. Admittedly, some deadlines are arbitrary, but others are not. While everyone can have an emergency once in a while, continually missing a due date is a major problem. If the project is tied to a launch or deliverable, being late might even cost your client money.
    3. You were rude/disrespectful. There’s really no excuse for being rude or disrespectful to a client. Even if you understand what the client needs better than they do themselves, you should still be courteous to your client. No one wants to be treated badly.
    4. You have a bad reputation. It’s really important to monitor your online reputation. Believe me, most of your clients know enough to search on your name or your business name to find out what others are saying about you. Do you know your online reputation?

    Read more: Freelance folder

    Posted via email from .NET Info

    Inside Native Applications

    |
    Introduction

    If you have some familiarity with NT's architecture you are probably aware that the API that Win32 applications use isn't the "real" NT API. NT's operating environments, which include POSIX, OS/2 and Win32, talk to their client applications via their own APIs, but talk to NT using the NT "native" API. The native API is mostly undocumented, with only about 25 of its 250 functions described in the Windows NT Device Driver Kit.

    What most people don't know, however, is that "native" applications exist on NT that are not clients of any of the operating environments. These programs speak the native NT API and can't use operating environment APIs like Win32. Why would such programs be needed? Any program that must run before the Win32 subsystem is started (around the time the logon box appears) must be a native application. The most visible example of a native application is the "autochk" program that runs chkdsk during the initialization Blue Screen (it's the program that prints the "."s on the screen). Naturally, the Win32 operating environment server, CSRSS.EXE (Client-Server Runtime Subsystem), must also be a native application.

    In this article I'm going to describe how native applications are built and how they work.

    How Does Autochk Get Executed

    Autochk runs in between the time that NT's boot and system start drivers are loaded and when paging is turned on. At this point in the boot sequence Session Manager (smss.exe) is getting NT's user-mode environment off the ground and no other programs are active. The HKLM\System\CurrentControlSet\Control\Session Manager\BootExecute value, a MULTI_SZ, contains the names and arguments of programs that are executed by Session Manager, and is where Autochk is specified. Here is what you'll typically find if you look at this value, where "Autochk" is passed "*" as an argument:

    Autocheck Autochk *

    Session Manager looks in the <winnt>\system32 directory for the executables listed in this value. When Autochk runs there are no files open, so Autochk can open any volume in raw mode, including the boot drive, and manipulate its on-disk data structures. This wouldn't be possible at any later point.
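    As an aside (my own sketch, not from the article), the BootExecute value is easy to inspect from managed code, since a REG_MULTI_SZ comes back as a string array:

    using System;
    using Microsoft.Win32;

    class BootExecuteDump
    {
        static void Main()
        {
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
                @"System\CurrentControlSet\Control\Session Manager"))
            {
                // BootExecute is a REG_MULTI_SZ, so GetValue returns string[].
                foreach (string entry in (string[])key.GetValue("BootExecute"))
                {
                    Console.WriteLine(entry);   // typically: Autocheck Autochk *
                }
            }
        }
    }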

    Read more: Windows Sysinternals

    Posted via email from .NET Info

    RawWrite for Windows

    | Wednesday, November 24, 2010
    rawwrite-thumb.png

    RawWrite (or rawrite) is an essential tool for creating boot disks and other floppy disk images. Traditional rawwrite programs do not run under modern versions of Windows, so here is the Win32 version, which does.

    Read more: chrysocome.net

    Posted via email from .NET Info

    Rootkit In a Network Card Demonstrated

    |
    rootkit-detection.jpg

    Guillaume Delugré, a reverse engineer at French security firm Sogeti ESEC, was able to develop proof-of-concept code after studying the firmware from Broadcom Ethernet NetExtreme PCI Ethernet cards... Using the knowledge gained from this process, Delugré was able to develop custom firmware code and flash the device so that his proof-of-concept code ran on the CPU of the network card.

    Read more: Slashdot

    Posted via email from .NET Info

    One Giant Cargo Ship Pollutes As Much As 50M Cars

    |
    One giant container ship pollutes the air as much as 50 million cars. That means just 15 of the huge ships emit as much as today's entire global 'car park' of roughly 750 million vehicles. Among the bad stuff: sulfur, soot, and other particulate matter that embeds itself in human lungs to cause a variety of cardiopulmonary illnesses. Since the mid-1970s, developed countries have imposed increasingly stringent regulations on auto emissions. In three decades, precise electronic engine controls, new high-pressure injectors, and sophisticated catalytic converters have cut emissions of nitrous oxide, carbon dioxide, and hydrocarbons by more than 98 percent. New regulations will further reduce these already minute limits. But ships today are where cars were in 1965: utterly uncontrolled, free to emit whatever they like.

    Read more: Slashdot

    Posted via email from .NET Info

    Seagate To Pay Former Worker $1.9M For Phantom Job

    |
    The jury in a Minnesota-based wrongful employment case delivered a verdict ordering disk-drive manufacturer Seagate to pay $1.9 million to a former employee who uprooted his family and career at Texas Instruments in Dallas to move to Minnesota for a job that did not exist. The man was supposed to be developing solid state drive technology for Seagate but was laid off months later. 'The reason that was given is that he was hired to be a yield engineer but the project never came to fruition,' the former employee's attorney said. 'They didn't care what effect it had on his career.'

    Read more: Slashdot

    Posted via email from .NET Info

    $1.3 Billion Oracle-SAP Verdict Is Biggest Ever For Software Piracy

    |
    After an 11-day trial whose highlights included the hilarious “Where In The World Is HP CEO Leo Apotheker?”, the Oracle vs. SAP intellectual property case finally ended today in a whopping $1.3 billion verdict, “the largest amount ever awarded for software piracy,” according to Oracle co-president Safra Catz.

    Before the trial, SAP admitted that its 2005 acquisition TomorrowNow pirated Oracle’s intellectual property and used it in order to pilfer customers from Oracle. Evidence presented during the trial showed that key SAP executives were aware of what was happening. “For more than three years, SAP stole thousands of copies of Oracle software and then resold that software and related services to Oracle’s own customers,” said Catz.

    The amount of the verdict was the biggest point of contention, as Oracle lawyers pushed for $1.7 billion in damages while SAP legal thought that the number was more in the $40 million range.

    Read more: Techcrunch

    Posted via email from .NET Info

    AJAX 2: What is coming with XMLHttpRequest Level 2? - JS Classes blog

    |
    XMLHttpRequest, often referred to as AJAX, is going to get a new and improved specification version named Level 2. This article gives an overview of what is planned for XMLHttpRequest Level 2 and how it can be used to improve Web applications' usability.

    Contents

    Introduction
    Brief history of XMLHttpRequest
    XMLHttpRequest limitations
    AJAX 2: What is XMLHttpRequest Level 2?

    Conclusions

    Introduction

    For those that may still not be familiar, XMLHttpRequest is a JavaScript object that can send arbitrary HTTP requests from the user browser to a Web server in order to submit or retrieve information in the background.

    It is often used to develop the so called AJAX Web applications, i.e. applications that interact with Web servers in a faster way, as they usually exchange information with the Web server without having to load a new page in the browser.

    Brief history of XMLHttpRequest

    The XMLHttpRequest object was introduced by Microsoft in 1999 as part of the Microsoft Exchange Server 2000 product. The intention was to provide a highly interactive user interface similar to the Microsoft Outlook program, but based solely on a Web browser. That Web-based interface was named OWA - Outlook Web Access.

    The idea was a great success, and the XMLHttpRequest object started being bundled with Internet Explorer 5 as part of an ActiveX library named MSXML.

    Read more:  JS Classes blog

    Posted via email from .NET Info

    Open-Source Social Network Diaspora Goes Live

    |
    Diaspora, a widely anticipated social network site built on open-source code, has cracked open its doors for business, at least for a handful of invited participants. 'Every week, we'll invite more people,' stated the developers behind the project in a blog item posted Tuesday announcing the alpha release of the service. 'By taking these baby steps, we'll be able to quickly identify performance problems and iterate on features as quickly as possible.' Such a cautious rollout may be necessary, given how fresh the code is. In September, when the first version of the working code behind the service was posted, it was promptly criticized for being riddled with security errors. While Facebook creator Mark Zuckerberg may not be worried about Diaspora quite yet, the service is one of a growing number of efforts to build out open-source-based social-networking software and services.

    Read more: Slashdot

    Posted via email from .NET Info

    Visual Studio 2010 Slowdown: VMDebugger is the Culprit

    |
    I recently wrote about how Visual Studio 2010 is very slow on my fast PC, taking 25-30 seconds to start up. Thanks to a Microsoft employee who helped me but wishes to remain anonymous, my problem is solved.

    The VMWare add-in, VMDebugger, causes Visual Studio 2010 to load very slowly on my fast PC.


    Note that your mileage may vary, and VMDebugger may not ultimately be responsible. Software is so complex these days it's amazing anything works. But I learned a couple of lessons from this experience:

    It’s not enough to turn off the VMDebugger add-in with the Visual Studio Add-In Manager.  It’s also not enough to run Visual Studio from the command line (devenv.exe) with the /ResetAddin flag to prevent the add-in from starting.  The only way to truly remove the VMDebugger add-in and stop its effect on Visual Studio is to uninstall the entire VMWare Workstation product from your PC.  You can then reinstall VMWare but be sure to NOT install its Visual Studio add-in.

    To see the true effect of running devenv.exe in /SafeMode, you must run it multiple times.  The first few times I ran it in safe mode, Visual Studio started relatively slow.  But when I ran it the fourth and subsequent times, VS2010 started in just a few seconds.  This led me to conclude that the Visual Studio slowdown was indeed caused by an add-in.

    Read more: DevTopics

    Posted via email from .NET Info

    11 Things to Do When a Client Files Bankruptcy

    |
    Bankruptcy filings are up considerably. So, don't be surprised if you open your mail and find a letter from an attorney telling you that one of your clients or customers is seeking relief from the courts to solve his or her financial troubles.

    The bankruptcy process is full of rules that the debtor and creditor must follow. However, bankruptcy is not as formal as, say, civil court, says Victoria Ring, a debtor bankruptcy specialist and CEO of Colorado Bankruptcy Training, which provides instruction and support to attorneys nationwide. Bankruptcy is a big "Let's Make a Deal." You can negotiate a resolution, hopefully one that is in your favor, in cases where the debtor is trying to save the business and pay back creditors.

    With a Chapter 11 or Chapter 13 filing, reorganization is the goal. Debtors are required to pay debts according to a repayment plan the court sets up. Chapter 7 bankruptcy filing is quite different; the business is shutting its doors permanently and individuals are given a "fresh start" by liquidating assets and discharging debts.

    Of course, the problem is that the vast majority of the filings are Chapter 7. More than 1.5 million consumer bankruptcy filings were processed over a 12-month period ending September 30, a 14 percent increase from the previous year, according to data released by the Administrative Office of the U.S. Courts. Chapter 7 filings were up 16 percent to over 1.1 million. Chapter 13 filings were up 9 percent to 434,839, while Chapter 11 filings were down nearly 4 percent to 14,191. Business bankruptcy filings fell 1 percent to 58,322.

    1. Stop Contact Completely

    Once a person or business files for bankruptcy, you have to stop any and all collection activity. If you make contact to try to get your money back, you will violate the bankruptcy code and you can actually be sued. Even if you filed a lawsuit against the client, it gets stayed until the bankruptcy is completed. You can, however, contact the attorney or court-appointed trustee to work out an arrangement on how your debt is handled in the bankruptcy, says Ring, who is the author of 102 Things You Need to Know Before You File Bankruptcy. If for some reason you are not listed in the bankruptcy petition as a creditor who is owed money, then you will have the right to keep collecting on the debt even after the bankruptcy is over, says Ring.

    2. Do a Cost-Benefit Analysis

    Assess whether it is even worth your time, or whether you should simply take the loss, says Daniel Gershburg, a Brooklyn, New York bankruptcy attorney. Meaning, "in a practical sense, can you really get any money back from this consumer or client?" For instance, say the business grosses over $500,000 but has over $1 million in debts and a long string of 15 creditors or more. There is very little chance you are going to receive any money back, Gershburg says. In most cases, he adds, small companies or consumers filing bankruptcy aren't going to have tangible assets that the trustee can sell and then distribute to creditors. Ring suggests reviewing Schedule I and Schedule J, included in every petition, which show the filer's income and expenses.

    Dig Deeper: Report: Businesses Going Bankrupt


    3. Pay Attention to the Type of Bankruptcy

    Read more: Inc.

    Posted via email from .NET Info

    Grid, Cloud, HPC … What’s the Diff?

    |
    It’s always nice when another piece of the puzzle comes into focus.  In this case, my time speaking at the first ever International Super Computer (ISC) Cloud Conference the week before last was well spent.  The conference was heavily attended by those out of the grid computing space and I learned a lot about both cloud and grid.  In particular, I think I finally understand what causes some to view grid as a pre-cursor to cloud while others view it as a different beast only tangentially related.

    This really comes down to a particular TLA used to describe grid: High Performance Computing, or HPC. HPC and grid are commonly used interchangeably. Cloud is not HPC, although it can now certainly support some HPC workloads, e.g. Amazon's EC2 HPC offering. No, cloud is something a little bit different: High Scalability Computing, or simply HSC here.

    Let me explain in some depth …

    Scalability vs. Performance
    First it’s critical for readers to understand the fundamental difference between scalability and performance.  While the two are frequently conflated, they are quite different.  Performance is the capability of particular component to provide a certain amount of capacity, throughput, or ‘yield’.  Scalability, in contrast, is about the ability of a system to expand to meet demand.  This is quite frequently measured by looking at the aggregate performance of the individual components of a particular system and how they function over time.

    Put more simply, performance measures the capability of a single part of a large system while scalability measures the ability of a large system to grow to meet growing demand.
    Scalable systems may have individual parts that are relatively low performing.  I have heard that the Amazon.com retail website’s web servers went from 300 transactions per second (TPS) to a mere 3 TPS each after moving to a more scalable architecture.  The upside is that while every web server might have lower individual performance, the overall system became significantly more scalable and new web servers could be added ad infinitum.

    High performing systems on the other hand focus on eking out every ounce of resource from a particular component, rather than focusing on the big picture.  One might have high performance systems in a very scalable system or not.

    For most purposes, scalability and performance are orthogonal, but many either equate them or believe that one breeds the other.

    Grid & High Performance Computing
    The origins of HPC/grid lie within the academic community, where the need to crunch large data sets arose very early on. Think satellite data, genomics, nuclear physics, etc. Grid, effectively, has been around since the beginning of the enterprise computing era, when it became easier for academic research institutions to move away from large mainframe-style supercomputers (e.g. Cray, Sequent) toward a more scale-out model using lots of relatively inexpensive x86 hardware in large clusters. The emphasis here is on *relatively*.

    Most x86 clusters today are built out for very high performance *and* scalability, but with a particular focus on performance of individual components (servers) and the interconnect network for reasons that I will explain below.  The price/performance of the overall system is not as important as aggregate throughput of the entire system.  Most academic institutions build out a grid to the full budget they have attempting to eke out every ounce of performance in each component.

    Read more: cloud scaling

    Posted via email from .NET Info

    HTTP Post Denial Of Service: more dangerous than initially thought

    |
    What’s special about this denial of service attack is that it’s very hard to fix, because it relies on a generic problem in the way the HTTP protocol works. Therefore, to properly fix it would mean breaking the protocol, and that’s certainly not desirable. The authors list some possible workarounds, but in my opinion none of them really fixes the problem.

    The attack explained

    An attacker establishes a number of connections with the web server. Each of these connections contains a Content-Length header with a large number (e.g. Content-Length: 10000000). Therefore, the web server will expect 10000000 bytes from each of these connections. The trick is not to send all this data at once but to send it character by character over a long period of time (e.g. one character every 10-100 seconds). The web server will keep these connections open for a very long time, until it receives all the data. During this time, other clients will have a hard time connecting to the server, or even worse, will not be able to connect at all because all the available connections are taken/busy.

    In this blog post, I would like to expand on the effect of this denial of service attack against Apache.

    First, I would like to start with one of their affirmations:

    “Hence, any website which has forms, i.e. accepts HTTP POST requests, is susceptible to such attacks.”

    At least in the case of Apache, this is not correct. It doesn’t matter if the website has forms or not.
    Any Apache web server is vulnerable to this attack. The web server doesn’t decide if the resource can accept POST data before receiving the full request.

    Read more: acunetix

    Posted via email from .NET Info

    10 Essential Tools for building ASP.NET Websites

    |
    I recently put together a simple public website created with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put together the website. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC.

    Performance Tools

    After reading Steve Souders' two (very excellent) books on front-end website performance, High Performance Web Sites and Even Faster Web Sites, I have become super sensitive to front-end website performance. According to Souders' Performance Golden Rule:

    “Optimize front-end performance first, that's where 80% or more of the end-user response time is spent”

    You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application.

    1. Sprite and Image Optimization Framework

    CSS sprites were first described in an article written for A List Apart entitled CSS sprites: Image Slicing’s Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage.

    The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources – images, JavaScript files, CSS files – that must be moved across the wire, the slower your website.

    However, most people avoid using sprites because they require a lot of work. You need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that makes it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869.

    The Sprite and Image Optimization Framework was written by Morgan McClean who worked in the office next to mine at Microsoft. Morgan was a scary smart Intern from Canada and we discussed the Framework while he was building it (I was really excited to learn that he was working on it).


    Read more:  Stephen Walther

    Posted via email from .NET Info

    Passing Configuration Data to the Master Page in ASP.NET MVC

    |
    One of the common activities I have found myself doing lately whenever I create a new project is identifying a way to load configuration information and customize the behavior of the Master Page. The way I initially went about solving this problem was to create a base class and have all of my page ViewModels inherit from it. This created two things I had to constantly maintain: making sure all of my ViewModels inherited from this MasterViewModel, and remembering to set the data every time a ViewResult is returned in an Action.

    public class MasterViewModel : IMasterViewModel
    {
       public string SiteName { get; set; }
    }

    I was able to take care of the second concern by simply creating a base controller that all of my controllers inherit from and then overriding View(string viewName, string masterName, object model) { … } so that the settings are automatically injected into the model.

    protected override ViewResult View(string viewName, string masterName, object model)
    {
       ((MasterViewModel)model).SiteName = "Example";
       return base.View(viewName, masterName, model);
    }

    In order to get around the smell of forcing my ViewModels to inherit from a base class, I put the MasterViewModel into the ViewData dictionary that is returned in the ViewResult.

    protected override ViewResult View(string viewName, string masterName, object model)
    {
       ViewResult result = base.View(viewName, masterName, model);
       result.ViewData[Data.Site] = _blogConfiguration.Configuration;
       return result;
    }
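
    On the view side, the master page can then pull the settings back out of ViewData. A minimal sketch, assuming the object stored under the Data.Site key exposes the SiteName property from IMasterViewModel:

    <%-- In Site.Master: read the injected configuration back out of ViewData --%>
    <title><%= ((IMasterViewModel)ViewData[Data.Site]).SiteName %></title>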


    Read more: about:thoughts

    Posted via email from .NET Info

    Ice Meteorite Found with Extraterrestrial Life-Forms

    |

    SOUTH HAVEN, Mich., Nov. 21, 2010 /PRNewswire/ -- Duane P. Snyder will announce the discovery of the first and only known ICE METEORITE containing EXTRATERRESTRIAL LIFE-FORMS on November 30, 2010 at 10:00am at the Ramada Inn Conference Center, 1555 Phoenix Road, South Haven, MI 49090.
    Also to be announced: the ICE METEORITE's particle analysis, its gas analysis, where it likely came from, and PHOTOS of EXTRATERRESTRIAL LIFE-FORMS found in the melt-water of the ICE METEORITE.
    Dr. Albert Schnieders of Tascon USA Inc., Chestnut Ridge, New York 10977, has commented, "We basically found nearly all elements up to 90u in the sample spherical particles."


    Read more: PRNewswire

    Posted via email from .NET Info

    Common WinDbg Commands (Thematically Grouped)

    |

    Setting up kernel mode debugger in windows

    |
    Introduction

    Whenever there is a bug in your program, you usually open a debugger (GDB, the Visual Studio debugger, etc.) to fix it. But how do you debug a bug in the operating system itself? Do you load the running OS into a debugger? Is that even possible? The simple answer is no, because even the debugger works with the help of the OS. It's a catch-22 situation.
    History

    Earlier, developers used two machines: one running the defective OS (the slave) and another containing the debugger software (the master). The slave is connected to the master with a cable, both machines are started, and with the help of the debugger the execution of the slave machine is paused. But this solution has some drawbacks:

    • The connection between the machines is too slow, because all data and commands have to pass back and forth between master and slave.
    • It requires extra hardware: a cable and two separate machines.
    Current Process

    Fortunately, we now have much better options for beginners who want to study the internals of the OS by debugging it. With virtual machines we no longer require two separate physical machines: the slave machine becomes a guest VM, and the master machine is the host computer (your real physical machine). The connection between host and guest is made even simpler by a tool called VirtualKD (Virtual Kernel Debugger); without it, we would have to manually set up a named pipe in the guest and modify boot.ini to enable some special options, which is a little time consuming. So in this tutorial I will help you set up a kernel mode debugger.
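    For context, the manual setup that VirtualKD automates looks roughly like this (a sketch based on general WinDbg documentation, not on the original article; the pipe name kd_pipe is made up). In the guest's boot.ini, add the kernel-debugging switches to a boot entry:

    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows (debug)" /fastdetect /debug /debugport=COM1 /baudrate=115200

    Then map the guest's COM1 to a host-side named pipe in the VM's serial port settings, and attach WinDbg on the host to that pipe:

    windbg.exe -k com:pipe,port=\\.\pipe\kd_pipe,resets=0,reconnect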

    I will be using following tools.

    • WinDbg (the Windows kernel debugger)
    • VirtualBox (a virtual machine manager)
    • VirtualKD (a tool that enables very high speed kernel debugging between the host and guest machines)

    Hereafter, whenever I refer to the OS, I mean some version of Windows.

    Read more: Codeproject

    Posted via email from .NET Info

    Installing QT 4.7.x for VS2010

    |
    I had some issues installing Qt on Windows with VS2010, so I thought I'd share the solution that worked for me.

    I'm using Win7 64-bit, VS2010, and Qt 4.7.1.
    What do I need?
    the Qt add-in for Visual Studio (qt-vs-addin-1.1.7.exe)
    the Qt open source distribution (qt-everywhere-opensource-src-4.7.1.zip)
    Installation Process


    download and install the Qt add-in for Visual Studio
    download the Qt open source distribution
    create a Qt directory (I used C:\Qt\4.7.1x32\)
    extract the zip contents into the previously created folder (extract the files & folders, not the root directory named qt-everywhere-opensource)
    open the Visual Studio command prompt named...
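    The original list is truncated at this point. For reference, the remaining steps for building Qt 4.7 with VS2010 typically look like the following (a sketch based on general Qt documentation, not on the original post):

    cd C:\Qt\4.7.1x32
    configure -platform win32-msvc2010
    nmake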


    Read more: Code in Vain

    Posted via email from .NET Info

    Build a Pandora clone with Silverlight 4

    |
    For the uninitiated, Pandora is a popular Flash-based internet radio service: users create their own radio stations by seeding an artist or song, and by giving a thumbs-up or thumbs-down to played songs. The software responds by playing more of what the user likes based on these inputs.

    In this article, I'm going to show you how to build a Pandora-like music service using Silverlight 4, WCF, and Entity Framework. The end result is Chavah (pronounced "HA-vah"), a fun Silverlight clone of Pandora which can be seen live at judahhimango.com/chavah.

    Introduction

    Why build a Pandora clone, you ask?

    I have a selfish motivation: Pandora doesn’t play the music I’m most interested in. As a Jewish follower of Yeshua (Jesus), I enjoy a tiny genre of religious music called Messianic Jewish music, a genre so small Pandora doesn’t know about any of our music.

    I thought, “Why not build a Pandora clone to play some great Messianic Jewish tunes?” It would serve myself primarily, but also others in our community. And it'd raise the awareness of some of the great Messianic music out there to boot.

    But a more general motivation is, "Why not?" If anything, it's a great learning experience to expose oneself to a full client/server application using Silverlight, WCF, and Entity Framework. These technologies are currently popular and in demand by employers, so it's great résumé fodder to boot.

    Out of these motivations, the Chavah project was born. Chavah is my attempt to build a Pandora-like clone that plays Messianic Jewish music, and is the subject of this article.

    Why Silverlight?

    During the 2010 Professional Developers Conference, Microsoft hyped up IE9 and its HTML 5 support without commenting much on Silverlight. Because of this, there has been speculation as to whether Silverlight will fade out in favor of HTML 5, with folks questioning why anyone should use Silverlight over HTML 5.

    While Microsoft has since reiterated its long-term support for Silverlight, the question of why to use Silverlight was a very real one when starting a web app like Chavah.

    My reasons for ultimately choosing Silverlight are pragmatic:

    • Silverlight is more cross-platform than HTML 5. Most of my target audience is still running browsers with limited or no HTML 5 support. And even for the few that do have HTML 5-compatible browsers, the support is neither consistent across browsers nor stable. My target audience is Windows and Mac, and Silverlight runs great on both, right now, in most any web browser.
    • The tooling kicks ass. Visual Studio + Blend is a hard combo to beat. C# is a great language with great tool and refactoring support. Animations, paths, soft UI edges, glow effects, drop shadows, and all things UI sexy are easy with Blend. 
    • Silverlight will always be a step ahead of the HTML spec. By the time HTML 5 has broad reach, Silverlight will have features that only HTML 6 will bring; that is the nature of the design-by-committee HTML spec. Silverlight will innovate and bring new features plain old HTML is missing, and will deliver them faster and more broadly than the HTML spec ever will.
    • Silverlight is an app development platform; HTML was intended for documents. Chavah is an application, not a document. HTML + Javascript + CSS hacks can turn a document into an app, but ultimately, a platform built for apps is more compelling for an application like Chavah.
    • Since Chavah is an application, I want users to be able to install it locally onto their desktops. With Silverlight's out-of-browser capabilities, users can optionally install Chavah as a native application (see the sketch after this list). This simply would not be possible with HTML + CSS + Javascript, barring browser-specific hacks like IE9's "pinned sites" or Google Chrome's "application shortcuts".
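    That optional install takes only a few lines against the Silverlight 4 out-of-browser API (a minimal sketch of my own, not code from the article; the class and handler names are made up):

    using System.Windows;

    public partial class MainPage
    {
        private void InstallButton_Click(object sender, RoutedEventArgs e)
        {
            // Install() must be called from a user-initiated event such as a
            // button click, and only succeeds while the app is not yet installed.
            if (Application.Current.InstallState == InstallState.NotInstalled)
            {
                Application.Current.Install();
            }
        }
    }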

    Points of Interest

    In showing how Chavah is built, this article will show you how you can simulate the look and feel of apps like Pandora – smooth UI layout, animations, fluid feel – all in Silverlight 4. I’ll show you how a real-world application can make use of some of the new features in Silverlight, like out-of-browser functionality, easing animations, and GPU acceleration.
    Additionally, we’ll cover communication with the backend WCF web service where the “pick a song based off the user’s likes and dislikes” logic is located.

    I’ll show you how to add some fun social features to your application using WCF, Entity Framework, and SQLite.

    Read more: Codeproject

    Posted via email from .NET Info

    Why does machine.config contain invalid Xml content after installing .Net 3.5 patches?

    |
    Quite a few times I have heard of customers hitting this issue after installing .Net 3.5 patches or repairing .Net 3.5 on Windows Vista or Windows Server 2008: the machine.config file ends up containing invalid Xml content, and applications that use configuration stop working, especially IIS-hosted applications. The main problem is that the WCF 3.5 installer (WFServiceReg.exe) did not handle the different cases very well.
    Problem Statement
    There are three different cases that I have heard:

    Issue 1: .Net 3.0 is removed but .Net 3.5 is on the box

    On Windows Vista and Windows Server 2008, .Net 3.0 is installed through Component-Based Servicing (CBS), while .Net 3.5 is installed through Windows Installer (MSI). Thus .Net 3.5 does not have a strong dependency on .Net 3.0, and people can accidentally uninstall .Net 3.0 from the box. Doing so removes the WCF section handlers (for <system.serviceModel> etc.) from machine.config. However, any further .Net 3.5 patch causes the WCF installer to run, and it installs the following dangling elements into machine.config:
    <system.serviceModel>
     <extensions>
       ...
     </extensions>
     <client>
       ...
     </client>
    </system.serviceModel>
    This would cause the application to fail with the following error:
    System.Configuration.ConfigurationErrorsException: Unrecognized configuration section system.serviceModel. (c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Config\machine.config line 146)
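    The elements above are "dangling" because the matching declaration in <configSections> is gone. In a healthy machine.config, that declaration looks roughly like this (abbreviated by me; the exact contents vary by machine):

    <configuration>
      <configSections>
        <sectionGroup name="system.serviceModel"
                      type="System.ServiceModel.Configuration.ServiceModelSectionGroup, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
          <!-- <section name="bindings" .../>, <section name="behaviors" .../>, etc. -->
        </sectionGroup>
      </configSections>
    </configuration>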

    Issue 2: .Net 3.0 is on the box but the WCF section handlers are removed

    For some unknown reason, .Net 3.0 is not uninstalled from the machine, yet the WCF section handlers were accidentally removed when install/uninstall operations happened in a particular order. The application fails with the same error as in Issue 1 above.

    Issue 3: Redundant Xml elements when configSource is used


    Read more: Wenlong Dong's Blog

    Posted via email from .NET Info