This is a mirror of official site: http://jasper-net.blogspot.com/

Announcing Silverlight 5

| Friday, December 3, 2010
Today at the Silverlight FireStarter event we unveiled the next release of Silverlight.

Silverlight 5 adds significant new features and capabilities, and enables developers to create premium media experiences and deliver rich applications across browsers, desktops and devices.  In my keynote this morning we demonstrated a number of them, and highlighted both the developer productivity Silverlight 5 provides and the great new user experiences it enables.  You can watch my keynote here.

A Silverlight 5 beta will be available in the first half of next year, and the final release will ship in the second half of 2011.


Premium Media Experiences


We are seeing great adoption of Silverlight for premium media solutions. In the last few months we’ve seen companies like Canal+, TV2, and Maximum TV launch both live and on-demand Silverlight solutions.

Silverlight 5 will enable media experiences to go even further by adding:

  • Hardware video decode: Silverlight 5 now supports GPU-accelerated video decode, which significantly reduces CPU load for HD video.  Using Silverlight 5, even low-powered netbooks will be able to play back 1080p HD content.
  • Trickplay: Silverlight 5 now enables variable-speed playback of media content on the client with automatic audio pitch correction. This is great for training videos where you want to speed up the trainer while still understanding what he’s saying.
  • Improved power awareness will prevent screensavers from kicking in while you’re watching movies, while still allowing the computer to sleep when video is not playing.
  • Remote-control support is now built into Silverlight 5, allowing users to control media playback with remote-control devices.

Application Development

Silverlight provides a rich application development environment that enables you to build great web delivered applications.

Silverlight 5 delivers significant improvements for application development including:

Databinding and MVVM: Silverlight 5 delivers significant data-binding improvements that improve developer productivity and provide better Silverlight/WPF feature convergence.  Developers can now debug data-binding expressions, set breakpoints on bindings, and more easily determine errors.  Implicit DataTemplates now allow templates to be created across an application to support a particular type by default.  Ancestor RelativeSource bindings make it easier for a DataTemplate to bind to a property on a container control. Binding in style setters allows bindings to be used within styles to reference other properties.  And a new DataContextChanged event is being introduced to make handling changes easier. Markup extensions are also now supported, allowing code to be run at XAML parse time for both properties and event handlers, enabling cutting-edge MVVM support.
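
To ground this, here is a minimal, hedged C# sketch of the kind of view model these binding improvements operate on (the class and property names are illustrative, not from the keynote): an ordinary INotifyPropertyChanged implementation whose bindings you can now debug, breakpoint, and react to via DataContextChanged.

using System.ComponentModel;

// Illustrative MVVM-style view model: nothing Silverlight 5 specific here,
// but this is the shape of object the new binding-debugging features target.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            // Raising PropertyChanged is what drives {Binding Name} updates;
            // Silverlight 5 lets you set breakpoints on such bindings in XAML.
            OnPropertyChanged("Name");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}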

WCF and RIA Services: Silverlight 5 now includes WS-Trust support.  WCF RIA Services improvements include complex type support, better MVVM support, and improved customization of code generation.  Silverlight 5’s networking stack also now supports low-latency network scenarios that enable more responsive applications.

Text and Printing: Silverlight 5 delivers improved text clarity that enables crisper and cleaner text rendering, multi-column text flow and linked text containers, character and leading support, and full OpenType font support.  Silverlight 5 also includes a new Postscript Vector Printing API that provides programmatic control over what you print, and enables printing richer reports and documents.  Pivot functionality – which enables developers to build amazing information visualization experiences – will also be provided built-into the Silverlight 5 SDK.
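
For context, printing in Silverlight today centers on the PrintDocument class; here is a minimal, hedged sketch using the existing Silverlight 4 API, which the new vector printing support is expected to extend (the element name is illustrative):

using System.Windows;
using System.Windows.Printing;

// Minimal printing sketch against the Silverlight 4 PrintDocument API.
// "reportRoot" is any UIElement you want rendered onto the printed page.
public void PrintReport(UIElement reportRoot)
{
    var doc = new PrintDocument();
    doc.PrintPage += (sender, e) =>
    {
        e.PageVisual = reportRoot;   // the visual tree to print on this page
        e.HasMorePages = false;      // single page for this sketch
    };
    doc.Print("Sales Report");       // shows the print dialog and prints
}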

Graphics: Silverlight 5 includes immediate mode graphics support that enables developers to take full advantage of the GPU (graphics processing unit) and enables accelerated 3-D graphics support.  This new support facilitates much richer data visualization scenarios (make sure to watch the keynote to see some really eye-popping ones).

Read more: ScottGu's Blog

Read more: Microsoft

Posted via email from .NET Info

PSA: Botched AVG 2011 update might be why your PC won't start today

|
Did you update your free copy of AVG 2011 today, in the hopes of evading a nasty bug? In a set of mildly familiar circumstances, the antivirus company has inadvertently unleashed an even nastier one. Users running 64-bit editions of Windows 7 and AVG 2011 are reporting a STOP error after a mandatory antivirus update this morning, which is keeping some from booting their machines into Windows at all. The buggy update has since been pulled and there are a couple ways to preemptively keep it from happening if you're staring at the message above, but if you've already been stung, you're looking at some quality time with a recovery disc or repair partition to fix your Windows boot files. Find all the solutions, including the preemptive ones, at our source link below.

Read more: Engadget

Posted via email from .NET Info

Flaw in Microsoft Windows SAM Processing Allows Continued Administrative Access Using Hidden Regular User Masquerading After Compromise

|
TITLE:
Flaw in Microsoft Windows SAM Processing Allows Continued Administrative Access Using Hidden Regular User Masquerading After Compromise

SUMMARY AND IMPACT:
All versions of Microsoft Windows allow real-time modifications to the Security Accounts Manager (SAM) that enable an attacker to create a hidden administrative backdoor account for continued access once a system has been compromised. Once an attacker has compromised a Microsoft Windows computer system using any method, they can either leave behind a regular user or hijack a known user account (such as ASPNET). This user account will now have all of the rights of the built-in local administrator account from local or remote connections. The user will also share the Administrator's desktop and profile. When inspected by system administrators, the regular user always looks like it is just part of the built-in user's group. The attacker can also make the regular user account hard to detect by creating a user whose username is the character typed as ALT-0160, which displays as a blank space. Events in the audit log pertaining to the hidden account will be created if the system administrator has enabled auditing, but the user name fields are all blank. Once a system has been compromised, the attacker would need to ensure the Task Scheduler service is enabled only when starting the method. This method can be used to masquerade as any user account on the computer system.

DETAILS:
Use the following steps to exploit this vulnerability.

Read more: ExploitDevelopment.com

Posted via email from .NET Info

Web Server and ASP.NET Application life Cycle in Depth

| Thursday, December 2, 2010
Introduction

In this article we will understand what happens when the user submits a request to an ASP.NET web app. There are lots of articles that explain this topic, but none of them shows in a clear way what really happens in depth during the request. After reading this article you will be able to understand:

What is a web server
HTTP - TCP/IP Protocol
IIS
Web communication
Application Manager
Hosting Environment
Application Domain
Application Pool
How many app domains are created for a client request
How many HttpApplication instances are created for a request and how you can affect this behaviour
What the worker process is and how many of them run for a request
What happens in depth between a request and a response

Start from scratch

All the articles I have read usually begin with “The user sends a request to IIS... bla bla bla”. Everyone knows that IIS is a web server where we host our web applications (and much more), but what is a web server?

Let’s start from the very beginning :)

A web server (like Internet Information Server, Apache, etc.) is a piece of software that enables a website to be viewed using HTTP. We can generalize this by saying that a web server is a piece of software that allows resources (web pages, images, etc.) to be requested over the HTTP protocol. I’m sure many of you thought that a web server is just a special super computer, but it is the software that runs on it that makes the difference between a normal computer and a web server.
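
To make that definition concrete, here is a deliberately tiny, hedged C# sketch (using the .NET HttpListener class rather than IIS, with an illustrative port) of what a web server does at its core: accept an HTTP request and hand back a resource.

using System;
using System.Net;
using System.Text;

// A toy "web server": listens for HTTP requests and returns an HTML resource.
class TinyWebServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");   // illustrative address
        listener.Start();
        Console.WriteLine("Listening on http://localhost:8080/ ...");

        while (true)
        {
            // Blocks until a browser sends an HTTP request
            HttpListenerContext context = listener.GetContext();

            byte[] body = Encoding.UTF8.GetBytes("<html><body>Hello from a web server</body></html>");
            context.Response.ContentType = "text/html";
            context.Response.ContentLength64 = body.Length;
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.OutputStream.Close();
        }
    }
}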


Read more: Codeproject

Posted via email from .NET Info

Death from the mailroom – iPhone hacks your company from the inside

|
Las Vegas (NV) – The Apple iPhone is great for phone calls and viewing YouTube videos, but it can also be turned into one heck of a wireless hacking tool capable of wreaking havoc on almost any company or government organization from the inside.  In a talk at the Defcon security convention, Robert Graham and David Maynor of Errata Security explained how they could defeat firewalls, intrusion detection systems and even armed security guards by FedExing a modified iPhone to a fictitious employee.  The phone calls home every hour and can then be instructed to sniff network traffic, discover nearby wireless devices and even download information.

Read more: TG Daily

Posted via email from .NET Info

AutoTest.NET

|
I just want to quickly point out a tool that I’ve been playing with for a couple of days now, named AutoTest.NET. It’s an open-source tool that originates from a popular tool in the Ruby community called ZenTest, which basically runs all your valuable unit tests when you save your source files or when you build your code. It enables you to get feedback about your changes as soon as possible.

The project started out a couple of years ago on Google Code and was first initiated by James Avery. Contribution stopped at some point until recently, when Svein Arne Ackenhausen forked the source code and put it on GitHub. Now it runs on both .NET and Mono, with NUnit, MSTest and xUnit as the currently supported unit test frameworks.

Here’s a screenshot from the feedback window when all tests pass:

Success_thumb.png

Read more: <elegantc*ode>

Posted via email from .NET Info

NHibernate Code Base Analysis

|
Patrick Smacchia writing. I am not a NH developer but the creator of a static analysis tool for .NET developers: NDepend. I recently analyzed NH v3.0.0 Candidate Release 1 with NDepend and had a chance to discuss some results with NH developer Fabio Maulo. Fabio suggested that I show some results on the NH blog, so here they are.

NDepend generated a report by analyzing the NH v3.0.0 CR1 code base. See the report here. NDepend also has the ability to show static analysis results live, inside Visual Studio. The live results are richer than the static report results. Here, I will mostly focus on results extracted from the report, but a few additional results will be obtained from the richer NDepend live capabilities.

Code Size

The NH code base weighs almost 63K Lines of Code (LoC as defined here). Developers hate LoC as a productivity yardstick, but that doesn't mean that the LoC code metric is useless. LoC represents a great way to compare code base sizes and get an idea of the overall development effort. In the report namespace metrics section, we can see that the namespace NHibernate.Hql.Ast.ANTLR.* generated by ANTLR weighs around 18K LoC. So we can consider that NH handcrafted code weighs around 45K LoC. Now we have a number to compare to the 19K LoC of NUnit, the 28K LoC of CC.NET, the 32K LoC of Db4o, the 110K LoC of NDepend, the roughly 130 KLoC of Llblgen, the roughly 500K LoC (or so) of R# (which certainly contains a significant portion of generated code) and the roughly 2M LoC of the .NET Fx 4.

So not only is NH one of the most successful OSS initiatives, it is also one of the biggest OSS code bases. To quote one NH contributor, NH is a big beast!

Assembly Partitioning

NH is packaged in a single NHibernate.dll assembly. I am a big advocate of reducing the number of assemblies and one assembly seems an ideal number. This way:

Projects that consume NH just need to link and maintain a reference to one assembly. This is a very good thing compared to many other OSS frameworks that force you to reference and maintain many assemblies.
Compilation time is much (much) faster. Compilation time of one single VS project can easily be 10 times faster than the compilation time of the same code base partitioned into many VS projects.
Startup time of an application using NH is a bit faster. Indeed, the CLR comes with a slight overhead for each extra assembly to load at runtime.
On the dependency graph or dependency matrix diagrams of the report, I can see that the NH assembly links 3 extra assemblies that need to be redistributed as well: Antlr3.Runtime, Remotion.Data.Linq, and Iesi.Collections.

Code Coverage and NH Code Correctness

Read more: NHibernate Forge

Posted via email from .NET Info

Combine and compress javascript and css files in ASP.Net MVC

|
Goal:
When loading js or css files, combine all the js files into one file and all the css files into another when rendering, to improve performance. Also compress on the fly if need be.

In this example we use many css files and even more js files to organize the ASP.Net Mvc web app into manageable pieces. The reason for the separation is mainly because it gives the team the ability to work on different parts of the web app by working on the affected css or js files. It also helps to decide at a very granular level which css or js files to load and cache in the browser and which ones are very unique and/or specific and/or large, so as to load them only when really needed. For example I have an extremely large contract page with about 4000 lines of jQuery to handle spreadsheet-like functionality. It is not used often and is used only by certain sales reps. I do not want to load this file as part of the generic/global js file since it would be wasted space for the most part. This file is loaded on the fly when needed. How that is done is another story :-) Since my app is a CRM app, it has many screens that have unique css styling requirements, and so in some situations the css file should only be loaded when needed. The reasons are really not that important in the context of this blog; I just want to show you how to do this in ASP.Net Mvc if and when the need arises.

System Requirements:
.Net 2+

ASP.Net Mvc v1+

Solution:

In your Master file head section
<link rel="stylesheet" type="text/css" href="<%=Url.RouteUrl(new {controller = "Scripts", Action = "GetAllCss"})%>" />
<script type="text/javascript" src="<%=Url.RouteUrl(new {controller = "Scripts", Action = "GetAllScripts"})%>"></script>

In the Mvc ScriptController controller:
       public static IPathMapper ServerPathMapper { get; set; }

       protected static bool Enable;
       protected static bool EnableHtmlCompression;
       protected static bool EnableHtmlMinification;
       protected static bool EnableProfiler;
       protected static bool EnableScriptCompression;
       protected static bool EnableScriptMinification;


Read more: Renso Hollhumer

Posted via email from .NET Info

GUID Vs Int data type as primary key

|
Recently one of my friends asked me when to go for a GUID and when to go for an INT as the primary key in a table, so I decided to write a blog post about it. Here are the advantages and disadvantages of GUID and INT.

INT Data Type:

Advantages:

  • It requires little storage space: only 4 bytes are allocated to store the data.
  • Insert and update performance will be faster than with a GUID, which increases the performance of the application.
  • Easy to index, and joins give their best performance with integers.
  • Easy to understand and remember.
  • Functions such as SCOPE_IDENTITY() return the last value generated (see the short ADO.NET sketch after this list).
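
As a small illustration of that last point (a hedged sketch; the table and column names are made up), here is an ADO.NET snippet that inserts a row and reads back the generated INT key via SCOPE_IDENTITY():

using System.Data.SqlClient;

// Illustrative only: insert a row and return the identity value generated
// for an INT IDENTITY primary key, using SCOPE_IDENTITY() in the same batch.
public static int InsertCustomer(string connectionString, string name)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "INSERT INTO Customers (Name) VALUES (@Name); SELECT CAST(SCOPE_IDENTITY() AS int);",
        connection))
    {
        command.Parameters.AddWithValue("@Name", name);
        connection.Open();
        // ExecuteScalar returns the first column of the first row: the new key
        return (int)command.ExecuteScalar();
    }
}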

Disadvantages:
  • If you are going to merge tables frequently, there is a chance of duplicate primary keys.
  • Limited range of uniqueness: if you are going to store lots of data, you may run out of range for the INT data type.
  • Hard to work with distributed tables.
GUID Data Type:

Advantages:

  • It is unique across domains; as a primary key it uniquely identifies the row.
  • Less chance of duplication.
  • Suitable for inserting and updating large amounts of data.
  • Easy for merging data across servers.

Disadvantages:


Read more: DotNetJaps

Posted via email from .NET Info

How to dispose multiple submit buttons on a single form

|
Introduction

Sometimes you may need to send the same data from a single form to different URLs, or you may want to use multiple buttons to handle different requests. Using multiple submit buttons (with the same name and the same type) on a single form is perfectly possible. However, how do you check which button was pressed on the server and on the client? This article shows you how to do it.

Background  

On the server, I take VBScript as an example. I use Request.Form("buttonname") and check the different button values to distinguish which button was pressed.

On the client, no doubt, I use javascript. Through the value of document.pressed, I can tell which button was pressed.

Sample  

At first, my HTML code looks like this:

<form method="post" >
   <input id="Text1" name="Text1" type="text" />
   <br />
   <input name="mybutton" type="submit" value="button1" />
   <input name="mybutton" type="submit" value="button2" />
</form>  
This is a very simple page, with only one form which contains one text input and two submit buttons that have the same name ("mybutton"), the same type ("submit") and different values ("button1" and "button2"). When we write the script, these unique values will help us check which button was pressed.

Now, I add some VBScript to respond to the different button requests.

<%
         if request.form("mybutton") ="button1" then
             response.Write "Oh, you pressed button 1"
         elseif request.form("mybutton") = "button2" then
             response.Write "Aha, you pressed button 2"
         end if
%>

Above is just the code that runs on the SERVER. The idea is not complicated, and the code is simple.

So far, my text input does nothing. So I add some code to check whether my input is null, and change my SERVER script to display its value only when button1 is pressed (this means I want button2 to stay the same).

    <%
         if request.form("mybutton") ="button1" then
             response.Write "Oh, you pressed button 1, now Text1 text show:"&request.Form("Text1")
         elseif request.form("mybutton") = "button2" then
             response.Write "Aha, you pressed button 2"
         end if
    %>
   <form method="post" onsubmit="return form_check();" >
       <input id="Text1" name="Text1" type="text" />
       <br />
       <input name="mybutton" type="submit" value="button1" />

Read more: Codeproject

Posted via email from .NET Info

Selectively Filtering Content in Web Browsers

|
Typically the job of a web browser is to download and display content-- establishing a network connection, sending HTTP requests, retrieving the web page, and downloading and running all of its content. These operations pose non-trivial challenges, and as such, web-browsers are among the most complicated software that most of us routinely use. However, there’s a whole separate (higher level!) challenge around selectively not running (filtering) content.

Today, different browsers offer many different mechanisms for selectively filtering content. This post is a survey of how these mechanisms work, and the subtle and sometimes not so subtle differences between them.

Examples and Motivations

Different users have shown an interest in myriad different types of Content Blocking, and not all users have similar goals.

Certain types of blockers are over a decade old and extremely commonly used (e.g. popup-blockers) while others are less often used or only of interest to a small niche audience. Just reading the comments on this blog, it’s clear that some users want to be able to block cookies, plugins or ActiveX controls, certain types of content (e.g. malware, adult content), privacy-impactful “trackers” (e.g. “web beacons”), advertisements, file downloads, or content they consider “annoying” (e.g. popups, flashing content). Individual consumers may have many different reasons for wanting to block particular content: faster performance, improved security, increased reliability and stability, enhanced privacy, increased battery life, preference about user-experience, legal or supervisory requirements (e.g. parental controls), lower bandwidth charges, as well as many others.

However, on the other end of the internet connection, a website provider may or may not want content blocked, for any of a number of reasons: revenue (direct or indirect), site analytics and understanding customers and markets, predictability and reliability of the user experience, malicious intent, and many others.

In some scenarios, site publishers and developers are just fine with content blocking and modification. For instance, a site owner whose legitimate site was compromised to serve malware probably wants that malware content blocked to keep his visitors safe until the site can be cleaned. Accessibility tools are crucial for some people to use the web and websites. Some sites and networks may offer users a way to opt out of analytics or other tracking.

Blocking at the Network Level

There are several common ways to block content at the network level—the most common are by using the HOSTS file, or by filtering content with a proxy. There are a number of other, less-common network-level approaches, including using a router to block particular content (most Linksys routers can be configured to block Java, ActiveX installers, and cookies, for example). Large organizations or networks with restricted bandwidth, for instance, may block content at the gateway:


Read more: IE Blog

Posted via email from .NET Info

Creating and Using a Macro

|
Keyboard:  CTRL + SHIFT + R (record/stop recording); CTRL + SHIFT + P (run)
Menu:  Tools -> Macros -> Record Temporary Macro; Tools -> Macros -> Run Temporary Macro; Tools -> Macros -> Save Temporary Macro
Command:  Tools.RecordTemporaryMacro; Tools.RunTemporaryMacro; Tools.SaveTemporaryMacro

You can record macros to do just about anything in Visual Studio.  In this example, we will create a macro that adds a new class to our project.  First, create a new project.  For this example create a Console Application:

Now we are going to add a class to the project and give the class a name.  When we do this, we will record the actions into a temporary macro.  Press CTRL + SHIFT + R to begin recording our macro.  You will know you are recording if the status bar indicates it in the lower left-hand corner:

Add a new class (CTRL + SHIFT + A) called "Bubba.cs":

DANGER:  There are lots of little "gotchas" that you will run into doing macros.  One that took me  a little while to figure out while doing this example was leaving off the ".cs" at the end of the file name.  For some reason it  doesn't like that at all.  Keep an eye out for little things like that as you use this feature.

Read more: Visual Studio Tips and Tricks

Posted via email from .NET Info

Understanding and Using .NET Partial Classes

|
Introduction

One of the language enhancements in .NET 2.0—available in both VB.NET 2005 and C# 2.0—is support for partial classes. In a nutshell, partial classes mean that your class definition can be split into multiple physical files. Logically, partial classes do not make any difference to the compiler. During compile time, it simply groups all the various partial classes and treats them as a single entity.

One of the greatest benefits of partial classes is that it allows a clean separation of business logic and the user interface (in particular the code that is generated by the visual designer). Using partial classes, the UI code can be hidden from the developer, who usually has no need to access it anyway. Partial classes will also make debugging easier, as the code is partitioned into separate files.

In this article, I will examine the use of partial classes in more detail and discuss how Visual Studio 2005 makes use of  partial classes.  

Using the Partial Classes

Listing 1 contains two class definitions written in VB.NET, with the second class definition starting with the partial keyword. Both class definitions may reside in two different physical files. Functionally, Listing 1 is equivalent to Listing 2.

'—MyClass1.Properties.vb
'—one of the classes need not have the Partial keyword
 Public Class MyClass1
        Private pX As Integer
        Private py As Integer

        Property x() As Integer
                 Get
                    Return pX
                 End Get
                 Set(ByVal value As Integer)
                    pX = value
                 End Set
        End Property
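
For comparison, the same split can be sketched in C# (this listing is not part of the original article; note that in C#, unlike the VB example above, every part must carry the partial keyword):

// MyClass1.Properties.cs - first part of the class
public partial class MyClass1
{
    private int pX;
    private int pY;

    public int X
    {
        get { return pX; }
        set { pX = value; }
    }
}

// MyClass1.Methods.cs - second part; the compiler merges both parts into one type
public partial class MyClass1
{
    public int Sum()
    {
        return pX + pY;   // members declared in the other part are visible here
    }
}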

Read more: Codeproject

Posted via email from .NET Info

Objective-C for C# Developers

|
A few months ago I started developing applications for the iPhone. Switching from the .NET platform and C# to Cocoa and Objective-C was not without its adventures, but it was quite interesting and educational. Soon I will be helping other developers at our company get up to speed on the new platform, so I decided to write a series of introductory notes that will, I hope, make this transition smoother.

This note presents a small set of facts about Objective-C from a C# developer's point of view.

  • Objective-C is an object-oriented language and an "honest" extension of C (a program written in C is a valid Objective-C program, which is not always true of, say, C++).
  • The language was covered in some detail in the recent article "Objective-C from scratch".
  • To describe an object you create two files: a header file with the .h extension and an implementation file with the .m extension.
  • Objective-C classes are themselves objects; roughly speaking, an Objective-C class can be thought of as an implementation of the factory method pattern with a set of static methods.
  • Multiple inheritance, just as in C#, is not supported.
  • NSObject is the "analogue" of the System.Object base class in .NET.
  • When we write interface in Objective-C, we mean what C# calls a class.
  • And what C# calls an interface is called a protocol in Objective-C.
  • Objective-C has two kinds of methods: class methods (their declarations start with a "+") and instance methods (declarations start with a "-"). Class methods are, as you might guess, the same thing as C# static methods.
  • If we want to call a method on an object, we send it a message (Objective-C is a message-oriented language, unlike function-oriented C#).
  • In Objective-C all methods are public (more precisely, there are no access levels for methods at all).
  • In Objective-C all methods are virtual (that is, any method can be overridden in a derived class).
  • Unfortunately, there is no garbage collector in Objective-C (when programming for the iPhone). A reference-counting mechanism is used instead.

Read more: Habrahabr.ru

Posted via email from .NET Info

Mono.Cecil: Building Your Own "Compiler"

|
One of the most indulgent topics for programmers who like reinventing the wheel is writing their own languages, interpreters and compilers. Indeed, a program capable of creating or executing other programs instinctively inspires awe in the hearts of coders: it is complex and large, but insanely fascinating.

Most people start with their own interpreters, which in general amount to a huge switch over instructions inside a loop. Interesting and liberating, but tedious and rather slow. You want something snappier, something that knows how to JIT and, ideally, looks after memory for you.

An excellent solution to this problem is to choose .NET as the target platform. We will leave lexical analysis for next time; today let's try to build the simplest possible program that produces a working executable:

greeter1.png

The program will ask for a name and print Hello, %username% to the console.

There are many ways to create an executable, for example:
Translating into C# code and invoking csc.exe: simple, but unsporting
Generating IL code in text form and compiling it with ilasm.exe: inconvenient, because you have to hand-write a huge manifest
Generating the assembly directly with Reflection or Cecil

I chose the last option. Unfortunately, I do not know in what way Cecil beats Reflection for this particular task, but the example I came across happened to use Cecil, so that is the one I will walk through.

Mono.Cecil is a library that lets you work with an assembly as an array of bytes. With it you can create your own assemblies as well as poke around in and modify existing ones. It provides a wide range of classes that are (usually) convenient to use.

The subject at hand

Here is the finished code (without the class declaration, the form and everything else apart from the generator method itself):

using Mono.Cecil;
using Mono.Cecil.Cil;

public void Compile(string str)
{
 // create the assembly and set its name, version and kind: a console application
 var name = new AssemblyNameDefinition("SuperGreeterBinary", new Version(1, 0, 0, 0));
 var asm = AssemblyDefinition.CreateAssembly(name, "greeter.exe", ModuleKind.Console);

 // import the string and void types into the assembly
 asm.MainModule.Import(typeof(String));
 var void_import = asm.MainModule.Import(typeof(void));

 // create the Main method: static, private, returning void
 var method = new MethodDefinition("Main", MethodAttributes.Static | MethodAttributes.Private | MethodAttributes.HideBySig, void_import);
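
The listing is cut off at this point. Purely as a hedged sketch (based on the public Mono.Cecil 0.9 API, not taken from the rest of the original article), the remaining steps would look roughly like this: create a type to hold Main, emit its IL, set the entry point and write the exe.

 // --- hedged continuation sketch, not from the original post ---
 // create a class to hold Main ("SuperGreeter.Program" is an illustrative name)
 var objectRef = asm.MainModule.Import(typeof(object));
 var programType = new TypeDefinition("SuperGreeter", "Program",
     TypeAttributes.Class | TypeAttributes.Public, objectRef);
 asm.MainModule.Types.Add(programType);
 programType.Methods.Add(method);

 // emit the body of Main: Console.WriteLine("Hello, " + str)
 var writeLine = asm.MainModule.Import(
     typeof(System.Console).GetMethod("WriteLine", new[] { typeof(string) }));
 var il = method.Body.GetILProcessor();
 il.Emit(OpCodes.Ldstr, "Hello, " + str);
 il.Emit(OpCodes.Call, writeLine);
 il.Emit(OpCodes.Ret);

 // mark Main as the entry point and write the assembly to disk
 asm.MainModule.EntryPoint = method;
 asm.Write("greeter.exe");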


Read more: Habrahabr.ru

Posted via email from .NET Info

ProFTPD.org Compromised, Backdoor Distributed

|
A warning has been issued by the developers of ProFTPD, the popular FTP server software, about a compromise of the main distribution server of the software project that resulted in attackers exchanging the offered source files for ProFTPD 1.3.3c with a version containing a backdoor. It is thought that the attackers took advantage of an unpatched security flaw in the FTP daemon in order to gain access to the server.

Read more: Slashdot
Read more: Sourceforge ProFTPD

Posted via email from .NET Info

Linux.fm is an online radio station that broadcasts the Linux kernel

|
linux-radio---broadcasting-the-linux-kernel1.jpg

Linux Radio (found at Linux.fm) is an online station that randomly broadcasts source files from the Linux kernel. Each time you visit the site, a new file is read to you by a virtual speaker. As if this project wasn't geeky enough, the virtual speaker that you hear is materialized through the open source (of course) speech synthesizer known as eSpeak. Oh, and Linux Radio is dedicated to Dr. Sheldon Cooper.

Naturally, this gets a whole load of bonus points for sheer geeky awesomeness, though whether it can be deemed useful for, well, anything, is another matter.

Read more: DownloadSquad

Posted via email from .NET Info

Ge.tt Is a Brilliant, Real-Time File-Sharing Pipe, No Add-ons Required

|
500x_ge.tt.jpg

Ge.tt is a clever, instant file-sharing webapp that makes sharing files simple and fast. You can share a link to your file(s) immediately, without waiting for the upload to complete, and it doesn't use Flash, Java applets, or any other plug-ins.

Ge.tt couldn't be easier to use. You don't even need an account to use it (though you get extra benefits from registering). Visit Ge.tt, click the Select files button, choose one or more files you want to share (or you can simply drag and drop in supported browsers), and that's it. Ge.tt will begin uploading your files and instantly generate a unique URL for you to share.

You can share the link before the upload is finished—files will update on the download page in real-time, as they're uploaded, and the user on the other end can start downloading a file while you're still uploading it. In fact, you can also add files to the share after you've shared the link. Any new files you upload will automatically show up on their end without reloading the page.

If you're sharing large files, we still think Opera Unite is probably your best option, but for the rest of your quick file-sharing needs, Ge.tt looks like an excellent tool for the job.


Read more: Lifehacker

Posted via email from .NET Info

The Pirate Bay Co-Founder Starting P2P-DNS

|
The Pirate Bay Co-Founder, Peter Sunde, has started a new project which will provide a decentralized p2p based DNS system. This is a direct result of the increasing control which the US government has over ICANN. The project is called P2P-DNS and according to the project's wiki, this is how the project is described: 'P2P-DNS is a community project that will free internet users from imperial control of DNS by ICANN. In order to prevent unjust prosecution or denial of service, P2P-DNS will operate as a distributed and less centralized service hosted by the users of DNS. Temporary substitutes, (as Alpha and Beta developments), are being made ready for deployment. A network with no centralized points of failure, (per the original design of the internet), remains our goal. P2P-DNS is developing rapidly

Read more: Slashdot
Read more: dot-P2P

Posted via email from .NET Info

The computer, monitor and desk merge in BendDesk

|
benddesk.jpg

   Researchers from Aachen University's Media Computing Group have created a computer workstation where the desk and screen are transformed into one multi-touch display. The display is curved at the middle and uses infrared emitters and cameras to track user movement over the whole of the surface, which has its graphical user interface beamed onto it by a couple of short throw projectors hidden within its wooden frame.

Those who spend much of their working lives at a computer workstation will be familiar with the usual setup of one or two (or more) vertical displays set somewhere towards the back and input peripherals laid out on a horizontal area at the front. Users generally place a number of other objects on the flat surface in front of them too, such as paper documents (despite numerous moves towards a paperless office), pens and mugs of coffee.

Read more: gizmag

Posted via email from .NET Info

Advent Calendar For Geeks

|
Well, as children and adults all over the world begin their day with chocolate, with the traditional Advent calendar, I'd like to remind you that there's an alternative for geeks. The Perl Advent calendar will give you a new Perl tip every day right up to Christmas

Read more: Slashdot

Posted via email from .NET Info

.NET OpCodes

|
OpCodes Class

Provides field representations of the Microsoft Intermediate Language (MSIL) instructions for emission by the ILGenerator class members (such as Emit).

For a detailed description of the member opcodes, see the Common Language Infrastructure (CLI) documentation, especially "Partition III: CIL Instruction Set" and "Partition II: Metadata Definition and Semantics". The documentation is available online; see ECMA C# and Common Language Infrastructure Standards on MSDN and Standard ECMA-335 - Common Language Infrastructure (CLI) on the Ecma International Web site.
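
As a small, hedged illustration of how these opcode fields are consumed, here is a DynamicMethod that emits IL by hand through ILGenerator.Emit; the method built at runtime simply adds two ints:

using System;
using System.Reflection.Emit;

class OpCodesDemo
{
    static void Main()
    {
        // Build int Add(int a, int b) at runtime from raw MSIL opcodes.
        var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
        ILGenerator il = add.GetILGenerator();

        il.Emit(OpCodes.Ldarg_0);   // push the first argument
        il.Emit(OpCodes.Ldarg_1);   // push the second argument
        il.Emit(OpCodes.Add);       // add the two values on the stack
        il.Emit(OpCodes.Ret);       // return the result

        var addDelegate = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(addDelegate(2, 3));   // prints 5
    }
}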

Read more: MSDN

Posted via email from .NET Info

sharpdx

|
SharpDX is intended to be used as an alternative managed DirectX framework. The API is generated automatically from DirectX SDK headers, with AnyCpu target, meaning that you can run your application on x86 and x64 platform, without recompiling your project.

News
30 November 2010, SharpDX 1.0 final is released. Full support for Direct3D10, Direct3D10.1, Direct3D11, Direct2D1, DirectWrite, D3DCompiler, DXGI 1.0, DXGI 1.1, DirectSound, XAudio2, XAPO.
Features


The key features and benefits of this API are:
  • API is generated from DirectX SDK headers : meaning a complete and reliable API and an easy support for future API.
  • Full support for the following DirectX API:
  • Direct3D10
  • Direct3D10.1
  • Direct3D11
  • Direct2D1 (including custom rendering, tessellation callbacks)
  • DirectWrite (including custom client callbacks)
  • D3DCompiler
  • DXGI
  • DXGI 1.1
  • DirectSound
  • XAudio2
  • XAPO
  • An integrated math API directly ported from SlimMath
  • Pure managed .NET API, platform independent : assemblies are compiled with AnyCpu target. You can run your code on a x64 or a x86 machine with the same assemblies, without recompiling your project.
  • Lightweight individual assemblies : a core assembly - SharpDX - containing common classes and an assembly for each subgroup API (Direct3D10, Direct3D11, DXGI, D3DCompiler...etc.). Assemblies are also lightweight.
  • C++/CLI Speed : the framework is using a genuine way to avoid any C++/CLI while still achieving comparable performance.
  • API naming convention mostly compatible with SlimDX API.
  • Raw DirectX object life management : No overhead of ObjectTable or RCW mechanism, the API is using direct native management with classic COM method "Release".

    Read more: SharpDX

    Posted via email from .NET Info

    Antechinus(R) C# Editor

    |
    csharpsc.gif

    Easily create console and Windows programs, libraries and add-on modules

    Read more: C-Point

    Posted via email from .NET Info

    Making your own ViewEngine with Markdown

    |
    Recently I was thinking about integrating the new Razor Templating Engine into MVC so that I could learn how to create my own ViewEngine for MVC. However, I couldn’t quite figure out how to use it in any way that would make it different from Razor itself. Instead I decided to use Markdown, as I frequent Stackoverflow.com quite a bit to try and help with questions related to Razor (and get some answers for myself). It seemed like a good direction to go. Markdown is probably not a good idea, though, to use as a view in general.

    Markdown

    First things first, I needed a Markdown parser for C#. Luckily Wumpus1 already created a markdownsharp library, available on Google Code. Markdown somewhat makes sense as a good sample as it’s intended to take text, translate it to html, and display it to the end user. It’s also meant to be read without any translation… so this might work for some sort of user input where the end user doesn’t need to know some complex markup language such as wikicode. I’ve added a list of locations to search by default to the MarkdownViewEngine.

    The ViewEngine

    public class MarkdownViewEngine : IViewEngine
    {
        #region IViewEngine Members

        string[] SearchLocations;

        ViewEngineResult FindPartialView(ControllerContext controllerContext, string partialViewName, bool useCache)
        {
        }

        ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache)
        {
        }

        void ReleaseView(ControllerContext controllerContext, IView view)
        {
        }

    Read more: BuildStarted.com

    Posted via email from .NET Info

    The Really Cool NTILE() Window Function

    |
    If you regularly code queries and have never been introduced to the windowing functions, then you are in for a treat. I've been meaning to write about these for over a year, and now it's time to get down to it.

    Support in Major Servers
    SQL Server calls these functions Ranking Functions.

     PostgreSQL supports a wider range of functions than MS SQL Server, having added them in 8.4, and calls them Window Functions.

    Oracle's support is broader (by a reading of the docs) than SQL Server or PostgreSQL, and they call them Analytic Functions.

    I try to stay away from MySQL, but I did a quick Google on all three terms and came up with a few forum posts asking when and if they will be supported.

    The NTILE() Function
    In this post we are going to look at NTILE, a cool function that allows you to segment query results into groups and put numbers onto them. The name is easy to remember because it can create any -tile, a percentile, a decile, or anything else. In short, an n-tile. But it is much easier to understand with an example, so let's go right to it.

    Finding percentiles
    Consider a table of completed sales, perhaps on an eCommerce site. The Sales Manager would like them divided up into quartiles, four equally divided groups, and she wants the average and maximum sale in each quartile. Let's say the company is not exactly hopping, and there are only twelve sales, which is good because we can list them all for the example.

    Read more: The Database Programmer

    Posted via email from .NET Info

    CryptoLicensing Tips and Tricks To Eliminate Piracy And Cracking Of Your Software

    |
    CryptoLicensing - a popular licensing and copy-protection scheme from LogicNP Software - uses the latest military strength, state-of-the-art cryptographic technology to generate secure and unbreakable license codes to ensure that your software and intellectual property is protected. Cryptographic licenses are unbreakable even when using brute force computing power. Furthermore, since the licenses can only be generated using the private key (which only you possess), this means that it is impossible for a hacker to develop a 'keygen' (key generator) for your software.

    On top of this cryptographic validation, CryptoLicensing provides various locks and limits like machine-locking, activated licenses, trial licenses, floating licenses, leased licenses, domain-locked licenses, beta licenses, licenses with user-data, licenses with flags/features and more.

    Read more: dot net curry

    Posted via email from .NET Info

    QUICK INTRO TO R AND PL/R - PART 1

    |
    In this article we'll provide a summary of what PL/R is and how to get running with it. Since we don't like repeating ourselves, we'll refer you to an article we wrote a while ago which is still fairly relevant today called Up and Running with PL/R (PLR) in PostgreSQL: An almost Idiot's Guide and just fill in the parts that have changed. We should note that particular series was more geared toward the spatial database programmer (PostGIS in particular). There is a lot of overlap between the PL/R, R, and PostGIS user-base which is comprised of many environmental scientists and researchers in need of powerful charting and stats tools to analyse their data who are high on the smart but low on the money human spectrum.

    This series will be more of a general PL/R user perspective. We'll follow more of the same style we did with Quick Intro to PL/Python. We'll end our series with a PL/R cheatsheet similar to what we had for PL/Python.

     As stated in our State of PostGIS article, we'll be using log files we generated from our PostGIS stress tests. These stress tests were auto-generated from the PostGIS official documentation. The raster tests are comprised of 2,095 query executions exercising all the pixel types supported. The geometry/geography tests are comprised of 65,892 spatial SQL queries exercising every PostGIS geometry/geography supported in PostGIS 2.0 -- yes this includes TINS, Triangles, Polyhedral Surfaces, Curved geometries and all dimensions of them. Most queries are unique. If you are curious to see what these log tables look like or want to follow along with these exercises, you can download the tables from here.

    What is R and PL/R and why should you care?
     R is both a language and an environment for doing statistics and generating graphs and plots. It is GNU-licensed and a common favorite of Universities and Research institutions. PL/R is a procedural language for PostgreSQL that allows you to write database stored functions in R. R is a set-based and domain specific language similar to SQL, except that, unlike the way relational databases treat data, it thinks of data as matrices, lists and vectors. I tend to think of it as a cross between LISP and SQL, though more experienced Lisp and R users will probably disagree with me on that. This makes it easier in many cases to tabulate data both across columns as well as across rows. The examples we will show in these exercises could be done in SQL, but they are much more succinct to write in R. In addition to the language itself, there is a whole wealth of statistical and graphing functions available in R that you will not find in any relational database. These functions are growing as more people contribute packages. Its packaging system, the Comprehensive R Archive Network (CRAN), is similar in concept to Perl's CPAN and the in-the-works PGXN for PostgreSQL.

    What do you need before you can use PL/R?
     PostgreSQL and the latest version of PL/R, at this time plr-8.3.0.11, which works on PostgreSQL 8.3-9.0.

    Read more: Postgres OnLine Journal

    Posted via email from .NET Info

    How to Install and Configure Monit on Linux for Process Monitoring

    |
     Monit is an open source utility that provides several system monitoring capabilities that are extremely helpful to sysadmins. This article provides a jumpstart guide to monit installation and configuration. We also discuss a specific example related to process monitoring.

    1. Install monit

    On Fedora, openSUSE, Debian install monit as a package from the distribution repository. For example, on Debian (and Ubuntu), install monit using apt-get as shown below.

    # apt-get install monit

    If your distribution don’t have the monit package, download monit source and install it.

    2. Configure monit

    A sample process monitoring entry in the monit configuration file /etc/monit/monitrc looks like the following.

    check process PROCESSNAME
           with pidfile PIDFILENAME-WITHABSOLUTE-PATH
           start = STARTUP-SCRIPT
           stop = STOP-SCRIPT

    For example, to monitor the cron daemon, append the following lines to the monitrc file.

    # vim /etc/monit/monitrc
    check process crond
               with pidfile /var/run/crond.pid
               start = "/etc/init.d/cron start"
               stop  = "/etc/init.d/cron stop"

    Read more: The Geek Stuff

    Posted via email from .NET Info

    October’s Free Professional PSD Web Templates

    |
    This is a list of 39 hand-picked, fully customizable templates which contain the graphic source files in .PSD format, fresh and creative resources from design community.
    These web PSD templates are easy to use and customize and with some coding knowledge can be turned into ready websites. Please check the terms of use for each template before downloading and using it.

    1. Mash-Up II: Free PSD Business Template

    psd-business-template.jpg

    2. Colorful Free Coming Soon Page PSD
    free-coming-soon-page-psd.jpg

    18. Struct News – Free PSD Site template
    struct-news-psd.jpg

    Read more: net-kit

    Posted via email from .NET Info

    How to have NHibernate use a SQL MERGE INTO statement

    |
    On our project we were noticing a massive performance penalty when sending a number of records to the database to either be inserted or updated via the standard NHibernate framework syntax, so we decided to see if we could do anything about this. Turns out that we were able to alter the query that NHibernate normally generated, and change it into a MERGE INTO type statement, which meant a huge performance gain, as SQL was now doing the de-duping process itself on the server, which was always going to be more efficient.

    The purpose of this blog is to outline the high level steps that we needed to implement in order to allow the MERGE INTO statement to execute.

    The changes were really concentrated into a SQL statement constructor utility class as follows…

    /// <summary>
    /// This class holds definitions for common SQL statements to be executed
    /// against the DB directly
    /// </summary>
    public static class SqlStatementConstructor
    {
       /// <summary>
       /// Creates the TSQL statement to adds the items to custom list via a MERGE operation.
       /// </summary>
       /// <param name="customListId">The custom list id.</param>
       /// <param name="selectionQuery">The selection query.</param>
       /// <param name="customListType">Type of the custom list.</param>
       /// <returns>
       /// A TSQL statement that can be executed against the DB.
       /// </returns>
       public static string AddItemsToCustomList(int customListId, DetachedCriteria selectionQuery, ListType customListType)
       {
           // First, add the ID projection to the selection query, there's no need to pull any other fields.
           selectionQuery.SetProjection(Projections.Id());

           // Prepare the selectionQuery and extract the NH name of the ID column.
           // The format of the initial selected field (which is the one we're after) will be:
           // SELECT this_.ID as XXXX FROM... its very likely that NH will always use the variable y0_ as the ID
           // but we always extract rather than assume.

           string itemSelectionQuery = selectionQuery.ToSql();

           int endOfFirstAs = itemSelectionQuery.IndexOf(" as ") + 4;
           int lengthOfIdFieldName = itemSelectionQuery.IndexOf(" FROM ") - endOfFirstAs;
           string selectionQueryIdName = itemSelectionQuery.Substring(endOfFirstAs, lengthOfIdFieldName);
           string customListField = GetCustomListFieldName(customListType);

           // Now add our custom list id into the select in the correct place very simply by inserting after the initial SELECT.
           itemSelectionQuery = itemSelectionQuery.Insert(6, string.Format(" {0} as ListID, ", customListId));

           StringBuilder sb = new StringBuilder();

           sb.Append("MERGE dbo.CustomListValue AS Target");
           sb.Append(" USING (");
           sb.Append(itemSelectionQuery);
           sb.Append(") AS Source");
           sb.Append(string.Format(" ON (Target.CustomListID = Source.ListID AND Target.{0} = Source.{1})", customListField, selectionQueryIdName));
           sb.Append(" WHEN MATCHED THEN");


    Read more: EMC Consulting

    Posted via email from .NET Info

    How to deploy web applications using Mercurial

    |
    Does deploying changes to your site take too long? Are you tired of manually sorting out the update? Here is how to deploy your projects using Mercurial.

    Why?

    • Ease of updating. Mercurial keeps track of the changes and only sends the necessary changes -  you don’t need to worry about transferring files.
    • Make it possible to roll back changes on the deployed site. You can use hg to roll back from a bad update if necessary.
    • Once set up, it’s beautiful. “hg deploy”. How can you not like that?
    How is this different from the other guide you wrote about setting up private repo hosting?
    • While the differences aren’t that big, this setup is better for deployment rather than code distribution via repositories:
    • Minimal dependencies. This approach only uses the hg-ssh script from the Mercurial core contrib.
    • Manual configuration. You can set different directories for each repository which allows you to work with your existing webroot setup. However, you will need to manually add new repositories, since hg-ssh does not support adding new repositories remotely.
    If you want a private version of Bitbucket (without any additional features, of course), e.g.  to be able to remotely init/clone new repositories, check out my other tutorial about setting up private repo hosting.

    1. Make sure that the .hg directories are never served to the public

    Read more: Mixu's tech blog

    Posted via email from .NET Info

    App-V Tool Suite

    |
    Project Description
    A collection of tools for Microsoft Application Virtualization (App-V). These tools were developed at Sinclair Community College in the process of setting up and then supporting its App-V implementation.

    In the process of setting up and supporting its App-V implementation, which now consists of over 450 applications used by over 80 academic departments, the IT staff at Sinclair Community College in Dayton, OH(http://www.sinclair.edu) developed a suite of tools to make their lives easier.

    Currently, this "suite" contains only one tool. The rest will be posted as time allows.

    Read more: Codeplex

    Posted via email from .NET Info

    An Introduction to the Windows System State Analyzer

    |
    There often arises a need to figure out what may have changed on a system, either due to a specific issue or even to compare the difference between two systems. Today I would like to introduce you to the Windows System State Analyzer utility. Unless you are a developer or tester, you probably have never heard of this tool, as it is part of the Windows 2008 R2 Logo Software Certification and Windows 2008 R2 Logo Program Software Certification toolkits.

    The basic functionality of the System State Analyzer tool is to allow you to compare two snapshots taken at different points in time. This allows you to compare the state of a machine both before and after an application install for instance. Today I will give you a run-through of what the tool looks like while doing a compare of a system both before and after installing a software package, in this case Virtual PC 2007. The initial UI will look something like this:

    image_thumb.png

    As you can see, the interface is divided into two panes, each of which is for a separate snapshot that you wish to compare. You start by naming the first snapshot. By default, you are given several default name instances such as Post Install, Pre Configuration or Custom.

    The Tools – Options menu is where you can choose what you wish to include in the snapshot for comparison. You can compare drives, registry keys, services or drivers.

    Read more: Ask the Performance Team

    Posted via email from .NET Info

    Mercurial.Net

    |
    Project Description
    .NET wrapper class library for the Mercurial Distributed Version Control System (DVCS) - (http://mercurial.selenic.com/), written in C# 3.0 for the .NET 3.5 Client Profile runtime.

    This class library intends to implement a full wrapper for the Mercurial command line client, hg.

    Features

    • Written in C# 3.0 for the .NET 3.5 Client Profile runtime
    • All execution of the command line client handled by the library, including reading of standard output/error and handling of exit codes
    • Structured output (like changeset logs) will be parsed into .NET objects, but raw output in its original form will also be available
    • Observable execution, all output from commands available during execution
    • All source code available, fully documented (that is, all public types and methods will have full XML documentation, source itself hopefully won't need comments)

    Read more: Codeplex

    Posted via email from .NET Info

    Useful SQL Server System Stored Procedures You Should Know

    |
    System Stored Procedures are useful in performing administrative and informational activities in SQL Server. Here’s a bunch of System Stored Procedures that are used on a frequent basis (in no particular order):


     System Stored Procedure - Description

     sp_help - Reports information about a database object, a user-defined data type, or a data type
     sp_helpdb - Reports information about a specified database or all databases
     sp_helptext - Displays the definition of a user-defined rule, default, unencrypted Transact-SQL stored procedure, user-defined Transact-SQL function, trigger, computed column, CHECK constraint, view, or system object such as a system stored procedure
     sp_helpfile - Returns the physical names and attributes of files associated with the current database. Use this stored procedure to determine the names of files to attach to or detach from the server
     sp_spaceused - Displays the number of rows, disk space reserved, and disk space used by a table, indexed view, or Service Broker queue in the current database, or displays the disk space reserved and used by the whole database
     sp_who - Provides information about current users, sessions, and processes in an instance of the Microsoft SQL Server Database Engine. The information can be filtered to return only those processes that are not idle, that belong to a specific user, or that belong to a specific session
     sp_lock - Reports information about locks. This stored procedure will be removed in a future version of Microsoft SQL Server. Use the sys.dm_tran_locks dynamic management view instead.
     sp_configure - Displays or changes global configuration settings for the current server
     sp_tables - Returns a list of objects that can be queried in the current environment. This means any object that can appear in a FROM clause, except synonym objects.
     sp_columns - Returns column information for the specified tables or views that can be queried in the current environment

    Read more: SQL Server Curry

    Posted via email from .NET Info

    NKinect

    |
    Project Description
    A .NET 4.0 (C++/CLI) based open source library for the Microsoft Kinect. It currently supports the CodeLaboratories NUI SDK, but will be moved to OpenKinect/libfreenect once a Windows version of that library is stable.

    Current features

    • Accelerometer reading
    • Motor serial number property
    • Realtime image update
    • Realtime depth calculation
    • Export to PLY (On demand)
    • Control motor LED
    • Control Kinect tilt
    Planned
    • Realtime point cloud image generation
    • Use 2+ Kinects to create stereo 3D (if the underlying library supports it)
    • TUIO interaction
    • Satellite "mouse" assemblies
    • Include OpenKinect/libfreenect transparent implementation

    Read more: Codeplex

    Posted via email from .NET Info

    Introduction to Reverse Engineering Software

    |
    Abstract

    This book is an attempt to provide an introduction to reverse engineering software under both Linux and Microsoft Windows®. The goal of this book is not to cover how to reproduce an entire program from a binary, but instead how to use the Scientific Method to deduce specific behavior and to target, analyze, extract and modify specific operations of a program, usually for interoperability purposes. As such, the book takes a top-down approach, starting at the highest level (program behavior) and drilling down to assembly when it is needed.

    Table of Contents

    1. Introduction
    2. The Compilation Process
    3. Gathering Info
    4. Determining Program Behavior
    5. Determining Interesting Functions
    6. Understanding Assembly
    7. Debugging
    8. Executable formats
    9. Code Modification
    10. Network Application Interception
    11. Contribut(e|ions)!
    12. Extra Resources
    A. Tools
    B. Documentation resources
    C. Web links and resources

    Read more: Introduction to Reverse Engineering Software

    Posted via email from .NET Info

    mindbg

    |
    Project Description
    Mindbg is a simple debugger engine written in .NET 4.0 for learning purposes. If you want to learn some CLR internals, and especially the debugging API, read my blog (http://lowleveldesign.wordpress.com) where I describe all the steps of the mindbg implementation.

    Part 3 - symbol and source files
    The third part explains how to bind binary code with the source file lines using symbols API. The whole post may be found at http://lowleveldesign.wordpress.com/2010/11/08/writing-a-net-debugger-part-3-symbol-and-source-files/

    Part 2 - handling events and creating wrappers
    The second part describes the process of handling debuggee events and introduces the concept of COM wrappers for debug interfaces. The whole post may be found at http://lowleveldesign.wordpress.com/2010/10/22/writing-a-net-debugger-part-2-handling-events-and-creating-wrappers/

    Part 1 - starting the debugging session
    The first part of writing the debugger is ready. In it I describe the process of starting the debugging session (either by creating a new process or by attaching to a running one). The whole post may be found on my blog: http://lowleveldesign.wordpress.com/2010/10/11/writing-a-net-debugger-part-1-starting-the-debugging-session/. The part 1 source code may be found in the downloads section.

    Read more: Codeplex

    Posted via email from .NET Info

    Writing a .net debugger (part 4) – breakpoints

    |
    After the last part the mindbg debugger stops at the application entry point, has module symbols loaded and displays source code that is being executed. Today we will gain some more control over the debugging process by using breakpoints. By the end of this post we will be able to stop the debugger on either a function execution or at any source line.

    Setting a breakpoint on a function is quite straightforward: you only need to call the CreateBreakpoint method on the ICorDebugFunction instance you want to stop on, and then activate the newly created breakpoint (with the ICorDebugBreakpoint.Activate(1) function). The tricky part is how to find the ICorDebugFunction instance based on the string provided by the user. For this purpose we will write a few helper methods that use the ICorMetadataImport interface.

    Let's assume that we would like to set a breakpoint on the Test method of the TestCmd class in the testcmd.exe assembly. We will then use the command "set-break mcmdtest.exe!TestCmd.Test". After splitting the command string we receive the module path, the class name and the method name. We can easily find a module with a given path (we iterate through the modules collection; for now it won't be possible to create a breakpoint in a module that has not yet been loaded). Having found the module, we may try to identify the type which "owns" the method. I really like the way this is done in the mdbg source code, so we will borrow that idea.
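
    As a rough sketch of the command handling described above (the helper below is illustrative, not mindbg's actual code), splitting the "set-break" argument into its module, type and method parts could look like this; the resolved ICorDebugFunction is then all that CreateBreakpoint and Activate(1) need:

    // Illustrative helper -- not taken from mindbg's source.
    static void ParseBreakpointTarget(string target, out string modulePath,
                                      out string typeName, out string methodName)
    {
        // expected form: "module.exe!Namespace.Type.Method"
        string[] parts = target.Split('!');
        modulePath = parts[0];

        int lastDot = parts[1].LastIndexOf('.');
        typeName = parts[1].Substring(0, lastDot);
        methodName = parts[1].Substring(lastDot + 1);
    }

    // Once the module, type and method are resolved to an ICorDebugFunction,
    // the breakpoint itself is just: function.CreateBreakpoint(out breakpoint);
    // breakpoint.Activate(1);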

    Read more: Low Level Design

    Posted via email from .NET Info

    Certificate Request (PKCS#10) Generator

    |
    Project Description
    A .NET 2.0 application that can create PKCS#10 Certificate Requests, either by generating a new key or reusing a preexisting one (taken from the "MY" certificate store).

    Minimum requirements: Windows Vista or above, and .NET 2.0.
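
    The project targets .NET 2.0, where there is no managed PKCS#10 API (hence the tool uses Windows APIs). For comparison only, on modern .NET the built-in CertificateRequest class can produce a similar request; the subject name below is a placeholder:

    using System;
    using System.Security.Cryptography;
    using System.Security.Cryptography.X509Certificates;

    class Pkcs10Sample
    {
        static void Main()
        {
            using (RSA key = RSA.Create(2048))
            {
                // Placeholder subject name.
                var request = new CertificateRequest(
                    "CN=example.test", key, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

                byte[] pkcs10 = request.CreateSigningRequest();

                Console.WriteLine("-----BEGIN CERTIFICATE REQUEST-----");
                Console.WriteLine(Convert.ToBase64String(pkcs10, Base64FormattingOptions.InsertLineBreaks));
                Console.WriteLine("-----END CERTIFICATE REQUEST-----");
            }
        }
    }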

    Read more: Codeplex

    Posted via email from .NET Info

    Support Debugging Tool for Microsoft Dynamics GP Build 14 released

    |
    It has been a while since we had a new build of the Support Debugging Tool for Microsoft Dynamics GP. So we have decided to release Build 14.  This is primarily a maintenance release with bug fixes, minor enhancements and a couple of new features.  This build can be installed over the top of any existing installed build without needing to remove the old build first.

    Please note that this is the final release for version 9.00.

    Below is a summary of the changes made for releases 9.00.0014, 10.00.0014 and 11.00.0014; I have divided them into logical sections:

    Fixes

    • Changes to Reject Script functionality in Advanced Debugging Mode to restore fields to their previous values.
    • Changes to Reject Script and Reject Record functionality so it can be controlled by using the OUT_Condition variable.
    • Fixed Non-logging triggers disabling Manual Logging Mode and Automatic Debugger Mode.
    • Fixed Security Information Show Resources Window showing Resource ID and Dictionary ID in incorrect columns.

    Enhancements
    • Added Ctrl-R as Shortcut Accelerator Key for Raise All Windows (v10.0 or later).

    Read more: Developing for Dynamics GP

    Posted via email from .NET Info

    CRL checking by IIS

    |
    When a client certificate is presented to an IIS website, IIS performs CRL verification to determine the validity of the certificate, in much the same way a browser does CRL checking for an SSL-enabled website. When IIS receives the client certificate it looks at the CDP (CRL Distribution Point) listed under the Details tab of the certificate, and uses one of the HTTP/LDAP links listed there to download the CRL to the server. The link points to one of the CDP servers hosted by the CA, and the downloaded CRL is kept for future verification. That is the overall flow; internally IIS calls into the Crypto subsystem for all of these activities.

    When does IIS kick off a new download of a CRL? Does it look at the Next Update field within the CRL and then keep a record (somewhere in IIS or in the registry) of when it needs to download the next CRL from the CA?

    ==>

    The answer depends on various settings and scenarios, as described below.

    By default, IIS looks at the Next Update field of the downloaded CRL. The CRL is stored in IIS's own memory cache and also physically on the server under either

    %windir%\System32\config\systemprofile\Application Data\Microsoft\CryptnetUrlCache\MetaData (on Win2k3 server), or
    %windir%\System32\config\systemprofile\AppData\LocalLow\Microsoft\CryptnetUrlCache\MetaData (on Win2k8)

    You can also list the cached CRLs by running certutil -urlcache CRL at a command prompt.

    If the current date is still before the ‘Next Update’ field value, IIS will use the cached CRL to validate client certificates.

    CRL verification also depends on IIS 6.0 metabase properties such as CertCheckMode, RevocationFreshnessTime and RevocationURLRetrievalTimeout.

    1. If CertCheckMode is set to 0, IIS performs CRL verification against the cached CRL on the server (using properties such as the current date and the ‘Next Update’ field).
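
    IIS performs these checks through CryptoAPI rather than managed code, but as a point of reference, the same kind of online CRL check can be exercised from .NET with X509Chain. This is a hedged sketch, not what IIS runs internally:

    using System;
    using System.Security.Cryptography.X509Certificates;

    class RevocationCheckSample
    {
        // Builds the chain for a client certificate with online CRL retrieval enabled.
        static bool IsNotRevoked(X509Certificate2 clientCertificate)
        {
            var chain = new X509Chain();
            chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;      // download CRLs if the cached copy has expired
            chain.ChainPolicy.RevocationFlag = X509RevocationFlag.EntireChain;
            chain.ChainPolicy.UrlRetrievalTimeout = TimeSpan.FromSeconds(30);

            bool valid = chain.Build(clientCertificate);

            foreach (X509ChainStatus status in chain.ChainStatus)
                Console.WriteLine("{0}: {1}", status.Status, status.StatusInformation);

            return valid;
        }
    }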

    Read more: Care, Share and Grow!

    Posted via email from .NET Info

    Lazy Loading Video To Speed Up Your Web Page

    |
    When you have a web page containing a video that will not be played until it is clicked, you can speed up page loading by applying this quick tip, which you may have seen on Facebook. We wrap the video embed code in an HTML comment and place it inside an anchor element that uses a video screenshot as its background image; when the anchor is clicked, we remove the wrapping comment markup to load the video.

    jQuery Approach

    After taking a screenshot of the video, set it as the background of the anchor element. We use “video-in-link” as the class name that will serve as the jQuery selector. With inline CSS we make the anchor a block element and give it the same dimensions as the Flash video player. For graceful degradation, the anchor links to the video page for browsers with JavaScript disabled. Finally, wrap the video embed code in an HTML comment and place it inside the anchor.


    <a href="http://www.youtube.com/watch?v=UmN5JJkXPiE" class="video-in-link"
      style="background:url(video1.jpg); width:425px; height:344px; display:block">
      <!--
     
         
         
         
         
         
            type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true"
            width="425" height="344" wmode="transparent">
     

      -->
    </a>

    Next, we include the jQuery JavaScript file from the Google CDN. On document ready we use the one() method to attach a click event handler (one time only) to anchors with the class name “video-in-link”. In the click handler we simply remove the comment markup from the inner HTML of the anchor and remove its “href” attribute.

    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" ></script>
    <script type="text/javascript">
    //<![CDATA[
    $(document).ready(function(){
      $("a.video-in-link").one('click',function(){
         var anchor = $(this);
         anchor.html(anchor.html().replace('<!--','').replace('-->',''));
         anchor.removeAttr('href');
         return false;
      })
    })
    //]]>
    </script>

    Non-jQuery Approach

    For the JavaScript-only version we use the onclick attribute to call a JavaScript function, passing the clicked anchor using the “this” keyword.

    Read more: aext.net

    Posted via email from .NET Info

    Backing up ASP.NET configuration files

    |
    ASP.NET applications running on Microsoft Internet Information Services (IIS) use web.config files to store their configuration settings. After updating your ASP.NET web application you may find that the application fails and you need to revert to the previous version of the configuration file. For this reason it is important to back up the web.config files correctly and regularly so that backups are available to revert to.

    The following section describes how to locate and back up ASP.NET configuration files for applications running under IIS 6.0, and under IIS 7.0 and above.

    Internet Information Services 6.0

    In IIS 6.0 running on Windows Server 2003, web.config files are used only by ASP.NET web applications. It is therefore recommended to back up the entire hierarchy of config files from all of the websites, virtual directories, and web applications within IIS. In IIS Manager, right-click the website, directory, or application, open it, and go to the physical folder where its web.config file is located. Each website/directory/application will have a maximum of one web.config file. Make sure to back up the config files and to keep their folder hierarchy intact while backing up (a small sketch of one way to do this follows below).

    In the above scenario, the root of the Default Web Site may have its own web.config file, Application1 under the Default Web Site may also have its own web.config file, and so on. It is a good practice to back up the metabase.xml file (for IIS-related configuration settings) and all the web.config files for ASP.NET applications on a regular basis.
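
    As a small, hedged sketch of the backup step (the paths are placeholders), the following copies every web.config under a site root while preserving the folder hierarchy:

    using System;
    using System.IO;

    class WebConfigBackup
    {
        static void Main()
        {
            string siteRoot = @"C:\inetpub\wwwroot";            // placeholder
            string backupRoot = @"D:\Backups\wwwroot-config";   // placeholder

            foreach (string source in Directory.GetFiles(siteRoot, "web.config", SearchOption.AllDirectories))
            {
                // Mirror the relative path so the hierarchy stays intact.
                string relative = source.Substring(siteRoot.Length).TrimStart('\\');
                string destination = Path.Combine(backupRoot, relative);

                Directory.CreateDirectory(Path.GetDirectoryName(destination));
                File.Copy(source, destination, true);
            }

            Console.WriteLine("web.config backup complete.");
        }
    }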

    Internet Information Services 7.0 and above

    Read more: MS Support

    Posted via email from .NET Info

    What is difference between HTTP Handler and HTTP Module

    |
    Here are the differences between HTTP handlers and HTTP modules.

    HTTP Handlers:

    HTTP handlers are components that act as endpoints of the ASP.NET request pipeline. A handler implements the System.Web.IHttpHandler interface and serves as the target of an ASP.NET request. Handlers play much the same role as ISAPI DLLs; the main difference is that an HTTP handler can be invoked directly by URL, whereas an ISAPI DLL cannot.

    HTTP Modules:

    HTTP modules are objects that also participate in the ASP.NET request pipeline, but they do their work before and after an HTTP handler runs. For example, a module can associate session state and the cache with an ASP.NET request. A module implements the System.Web.IHttpModule interface. (A minimal sketch of both interfaces appears after the lists below.)

    An HTTP handler implements the following method and property:

    • ProcessRequest: this method is called to process an ASP.NET request; here you perform all the work related to handling the request.
    • IsReusable: this property determines whether the same instance of the HTTP handler can be used to fulfill another request of the same type.

    An HTTP module implements the following methods:
    • Init: this method is used to wire the module's event handlers up to the HttpApplication object.
    • Dispose: this method is used to perform cleanup before the garbage collector destroys everything.

    An HTTP module can handle the following events exposed by the HttpApplication object:
    1. AcquireRequestState: raised when the ASP.NET request is ready to acquire session state.
    2. AuthenticateRequest: raised when the ASP.NET runtime is ready to authenticate the user.
    3. AuthorizeRequest: raised when the ASP.NET request tries to authorize resources against the current user identity.
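
    A minimal sketch of both interfaces (the class names and the timing example are invented for illustration, not taken from the article):

    using System;
    using System.Web;

    // Endpoint of the pipeline: produces the response for a matching request.
    public class HelloHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("Hello from an HTTP handler");
        }

        // True lets ASP.NET reuse this instance for further requests of the same type.
        public bool IsReusable
        {
            get { return true; }
        }
    }

    // Participates before/after handlers by hooking HttpApplication events.
    public class TimingModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            application.BeginRequest += delegate(object sender, EventArgs e)
            {
                ((HttpApplication)sender).Context.Items["requestStart"] = DateTime.UtcNow;
            };
        }

        public void Dispose() { }
    }

    The handler is then mapped to a URL (and the module registered) in web.config.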


    Read more: DOTNETJAPS

    Posted via email from .NET Info

    Steps to Call WCF Service using jQuery

    |
    Introduction

    This post covers the steps you should follow to call a WCF service from your client code, and the things you need to take care of when calling the service.
    Before you start, read about how to create a WCF service: Create, Host (Self Hosting, IIS hosting) and Consume a WCF service.

    Step 1
    Once you are done creating the WCF service, you need to apply the ASP.NET compatibility mode attribute to the service class, so that the WCF service works like a normal ASMX service and supports all existing ASP.NET features. With compatibility mode set, the WCF service must be hosted in IIS and communicates with the client application over the HTTP protocol. More about this in detail: WCF Web HTTP Programming Object Model.
    The following line of code sets ASP.NET compatibility mode:

    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class Service : IService
    {
       //  .....your code
    }

    Service.Cs

    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class Service : IService
    {
       public string GetData(int value)
       {
           return string.Format("You entered: {0}", value);
       }


       public string[] GetUser(string Id)
       { return new User().GetUser(Convert.ToInt32(Id)); }


    Step 2
    You need to specify an attribute at the operation level in the service contract file for each method/operation. To do this, decorate the method with WebInvoke, which marks a service operation as one that responds to HTTP requests other than GET. The operation-level code in the contract file is:

    [OperationContract]
       [WebInvoke(Method = "POST",  BodyStyle = WebMessageBodyStyle.Wrapped,  ResponseFormat = WebMessageFormat.Json)]
       string[] GetUser(string Id);

    As you can see in the code, the attribute's parameters are set to support calls from jQuery: Method = "POST" means the data is posted to the service with the HTTP POST method, and ResponseFormat = WebMessageFormat.Json indicates that the data is returned in JSON format.

    IService.cs

    [ServiceContract]
    public interface IService
    {
       [OperationContract]
       [WebInvoke(Method = "GET", ResponseFormat = WebMessageFormat.Json)]
       string GetData(int value);


    Read more: Codeproject

    Posted via email from .NET Info

    OpenCube

    | Wednesday, December 1, 2010

    Design and publish advanced pure CSS based web menus in a full visual environment.

    Visual design for Windows, Mac, and Linux.
    Industry leading cross browser support.
    Visually create pure <UL><LI> menus!
    Fully functional in JavaScript disabled browsers!
    Hover Tree, Content Tab, and Scroller add-ons.
    Use with Expression Web, CS3+4, PHP, ASP...
    Create search engine friendly menus in minutes!
    Compact lightweight script starts at around 5K!
    The industry leading professional menu design tool!

    Read more: OpenCube

    Posted via email from .NET Info

    JaCIL: A CLI to JVM Compiler

    |
    JaCIL (pronounced "jackal") is a graduate capstone project to create a byte-code compiler to translate CLI binaries to binaries suitable for consumption by the JVM.

    Both the .NET Framework via the Common Language Infrastructure (CLI) and the Java Virtual Machine (JVM) provide a managed, virtual execution environment for running high-level virtual machine code. As these two platforms become ubiquitous, it is of pragmatic and academic interest to have the capability of running CLI code on the JVM and vice versa.

    JaCIL stands for Java Common Intermediate Language. The Common Intermediate Language (CIL) is the byte-code language of the CLI.

    JaCIL leverages the Mono Cecil library to read CLI assemblies and the ObjectWeb ASM framework to generate Java class files. In order to use the ObjectWeb ASM Java API, JaCIL utilizes the IKVM.NET JVM implementation for the CLI.
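
    As a small, hedged illustration of the reading side only (the assembly path is a placeholder, and the ASM-based class-file emission is omitted), Mono.Cecil can enumerate the types of a CLI assembly like this:

    using System;
    using Mono.Cecil;

    class CecilReaderSample
    {
        static void Main()
        {
            // Placeholder path to any CLI assembly.
            AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(@"C:\temp\Sample.dll");

            foreach (ModuleDefinition module in assembly.Modules)
                foreach (TypeDefinition type in module.Types)
                    Console.WriteLine("{0} ({1} methods)", type.FullName, type.Methods.Count);
        }
    }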

    Read more: JaCIL

    Posted via email from .NET Info

    Code2Code.net: translates your C++ code into C#, VB.NET, ...

    |
    Migration from C++ to C#

    When migrating code to a more modern language, you sometimes have the luxury to clean up and overhaul the structure. Other times, you may have little time, or need to interact with legacy libraries, and thus need to keep sections of the original code relatively unchanged. This site may help you with a rough translation:  paste your code below to translate it.
    Status

    December 2006: this translation service is being occasionally used by my employer, a financial institution. New features (not available here) include preliminary translation of C++ templates. The public is allowed to use this pre-release web interface. Please review the conditions and then simply paste your C++ code below. Conditions:

    1. Your code must be non-commercial.

    Code submitted below will be posted here with results for everyone to see.
    For commercial code, please contact us.

    2. You accept that this page does only half the work.

    Further work on your part is required. In most cases, the translated code will not even compile.
    Also, the translation works on a line-by-line basis; it will not create high-level .NET constructs (e.g. 'new Form()') out of C++ statements.

    Read more: Code2Code.net

    Posted via email from .NET Info

    Code converter by SharpDevelop

    |
    Can convert from C# to various languages

    Read more: SharpDevelop

    Posted via email from .NET Info

    Amazon finally reveals itself as the Matrix

    |
    Amazon’s new Mechanical Turk product is brilliant because it will help application developers overcome certain types of problems (resulting in the possibility for new kinds of applications) and somewhat scary because I can’t get the Matrix-we-are-all-plugged-into-a-machine vision out of my head.

    The “machine” is a web service that Amazon is calling “artificial artificial intelligence.” If you need a process completed that only humans can do given current technology (judgment calls, text drafting or editing, etc.), you can simply make a request to the service to complete the process. The machine will then complete the task with volunteers, and return the results to your software.

    Volunteers are paid different amounts for each task, and money earned is deposited into their Amazon accounts. Amazon keeps a 10% margin on what the requester pays.

    Read more: Techcrunch

    Posted via email from .NET Info

    Linus On Branching Practices

    |
    Not long ago Linus Torvalds made some comments about issues the kernel maintainers were facing while applying the "feature branch" pattern. They are using Git (which is very strong at branching and merging) but still need to take care of the branching basics to avoid ending up with unstable branches due to unstable starting points. While your team most likely doesn't face the same issues that kernel development does, and even if you're using a different DVCS like Mercurial, it is worth taking a look at the description of the problem and the clear solution to follow in order to avoid repeating the same mistakes. The same basics can be applied to any version control system with good merge tracking, so let's avoid religious wars and focus on the technical details.

    Read more: Slashdot
    Read more: Plastic SCM

    Posted via email from .NET Info